Bayesian Sets
Zoubin Ghahramani∗ and Katherine A. Heller
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR, U.K.
{zoubin,heller}@gatsby.ucl.ac.uk
Abstract
Inspired by "Google Sets", we consider the problem of retrieving items
from a concept or cluster, given a query consisting of a few items from
that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a model-based concept of a cluster and ranks items using a score which evaluates
the marginal probability that each item belongs to a cluster containing
the query items. For exponential family models with conjugate priors
this marginal probability is a simple function of sufficient statistics. We
focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to
apply our algorithm to very large datasets. We evaluate our algorithm on
three datasets: retrieving movies from EachMovie, finding completions
of author sets from the NIPS dataset, and finding completions of sets of
words appearing in the Grolier encyclopedia. We compare to Google
Sets and show that Bayesian Sets gives very reasonable set completions.
1
Introduction
What do Jesus and Darwin have in common? Other than being associated with two
different views on the origin of man, they also have colleges at Cambridge University named after them. If these two names are entered as a query into Google Sets
(http://labs.google.com/sets), it returns a list of other colleges at Cambridge.
Google Sets is a remarkably useful tool which encapsulates a very practical and interesting problem in machine learning and information retrieval.¹ Consider a universe of items
D. Depending on the application, the set D may consist of web pages, movies, people,
words, proteins, images, or any other object we may wish to form queries on. The user
provides a query in the form of a very small subset of items Dc ⊂ D. The assumption
is that the elements in Dc are examples of some concept/class/cluster in the data. The
algorithm then has to provide a completion to the set Dc, that is, some set D′c ⊆ D which
presumably includes all the elements in Dc and other elements in D which are also in this
concept/class/cluster.²
∗ ZG is also at CALD, Carnegie Mellon University, Pittsburgh PA 15213.
¹ Google Sets is a large-scale clustering algorithm that uses many millions of data instances extracted from web data (Simon Tong, personal communication). We are unable to describe any details of how the algorithm works due to its proprietary nature.
² From here on, we will use the term "cluster" to refer to the target concept.
We can view this problem from several perspectives. First, the query can be interpreted
as elements of some unknown cluster, and the output of the algorithm is the completion
of that cluster. Whereas most clustering algorithms are completely unsupervised, here the
query provides supervised hints or constraints as to the membership of a particular cluster.
We call this view clustering on demand, since it involves forming a cluster once some
elements of that cluster have been revealed. An important advantage of this approach over
traditional clustering is that the few elements in the query can give useful information as
to the features which are relevant for forming the cluster. For example, the query "Bush",
"Nixon", "Reagan" suggests that the features republican and US President are relevant to
the cluster, while the query "Bush", "Putin", "Blair" suggests that current and world leader
are relevant. Given the huge number of features in many real world data sets, such hints as
to feature relevance can produce much more sensible clusters.
Second, we can think of the goal of the algorithm as solving a particular information retrieval problem [2, 3, 4]. As in other retrieval problems, the output should be relevant to the
query, and it makes sense to limit the output to the top few items ranked by relevance to the
query. In our experiments, we take this approach and report items ranked by relevance. Our
relevance criterion is closely related to a Bayesian framework for understanding patterns of
generalization in human cognition [5].
2
Bayesian Sets
Let D be a data set of items, and x ∈ D be an item from this set. Assume the user provides
a query set Dc which is a small subset of D. Our goal is to rank the elements of D by how
well they would "fit into" a set which includes Dc. Intuitively, the task is clear: if the set
D is the set of all movies, and the query set consists of two animated Disney movies, we
expect other animated Disney movies to be ranked highly.
We use a model-based probabilistic criterion to measure how well items fit into Dc . Having
observed Dc as belonging to some concept, we want to know how probable it is that x also
belongs with Dc . This is measured by p(x|Dc ). Ranking items simply by this probability
is not sensible since some items may be more probable than others, regardless of Dc . For
example, under most sensible models, the probability of a string decreases with the number
of characters, the probability of an image decreases with the number of pixels, and the
probability of any continuous variable decreases with the precision to which it is measured.
We want to remove these effects, so we compute the ratio:

    score(x) = p(x|Dc) / p(x)                                    (1)

where the denominator is the prior probability of x and under most sensible models will
scale exactly correctly with number of pixels, characters, discretization level, etc. Using
Bayes rule, this score can be re-written as:

    score(x) = p(x, Dc) / (p(x) p(Dc))                           (2)
which can be interpreted as the ratio of the joint probability of observing x and Dc, to the
probability of independently observing x and Dc. Intuitively, this ratio compares the probability that x and Dc were generated by the same model with the same, though unknown,
parameters θ, to the probability that x and Dc came from models with different parameters
θ and θ′ (see figure 1). Finally, up to a multiplicative constant independent of x, the score
can be written as: score(x) = p(Dc |x), which is the probability of observing the query set
given x (i.e. the likelihood of x).
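The step from the ratio form to the likelihood interpretation is a one-line application of Bayes' rule; a sketch of the derivation in LaTeX:

```latex
\mathrm{score}(x)
  = \frac{p(x, D_c)}{p(x)\, p(D_c)}
  = \frac{p(D_c \mid x)\, p(x)}{p(x)\, p(D_c)}
  = \frac{p(D_c \mid x)}{p(D_c)}
  \;\propto\; p(D_c \mid x),
```

where the final proportionality holds because p(Dc) does not depend on the candidate item x.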
[Figure 1: Our Bayesian score compares the hypotheses that the data was generated by each of the above graphical models.]

From the above discussion, it is still not clear how one would compute quantities such
as p(x|Dc) and p(x). A natural model-based way of defining a cluster is to assume that
the data points in the cluster all come independently and identically distributed from some
simple parameterized statistical model. Assume that the parameterized model is p(x|θ)
where θ are the parameters. If the data points in Dc all belong to one cluster, then under
this definition they were generated from the same setting of the parameters; however, that
setting is unknown, so we need to average over possible parameter values weighted by
some prior density on parameter values, p(θ). Using these considerations and the basic
rules of probability we arrive at:
    p(x) = ∫ p(x|θ) p(θ) dθ                                      (3)

    p(Dc) = ∫ [ ∏_{xi ∈ Dc} p(xi|θ) ] p(θ) dθ                    (4)

    p(x|Dc) = ∫ p(x|θ) p(θ|Dc) dθ                                (5)

    p(θ|Dc) = p(Dc|θ) p(θ) / p(Dc)                               (6)
We are now fully equipped to describe the "Bayesian Sets" algorithm:
Bayesian Sets Algorithm

background: a set of items D, a probabilistic model p(x|θ) where x ∈ D, a prior on the model parameters p(θ)
input: a query Dc = {xi} ⊂ D
for all x ∈ D do
    compute score(x) = p(x|Dc) / p(x)
end for
output: return elements of D sorted by decreasing score
We mention two properties of this algorithm to assuage two common worries with Bayesian
methods: tractability and sensitivity to priors:
1. For the simple models we will consider, the integrals (3)-(5) are analytical. In fact,
for the model we consider in section 3 computing all the scores can be reduced to
a single sparse matrix multiplication.
2. Although it clearly makes sense to put some thought into choosing sensible models p(x|θ) and priors p(θ), we will show in section 5 that even with very simple models
and almost no tuning of the prior one can get very competitive retrieval results. In
practice, we use a simple empirical heuristic which sets the prior to be vague but
centered on the mean of the data in D.
3
Sparse Binary Data
We now derive in more detail the application of the Bayesian Sets algorithm to sparse
binary data. This type of data is a very natural representation for the large datasets we used
in our evaluations (section 5). Applications of Bayesian Sets to other forms of data (real-valued, discrete, ordinal, strings) are also possible, and especially practical if the statistical
model is a member of the exponential family (section 4).
Assume each item xi ∈ Dc is a binary vector xi = (xi1, . . . , xiJ) where xij ∈ {0, 1}, and
that each element of xi has an independent Bernoulli distribution:

    p(xi|θ) = ∏_{j=1}^{J} θj^{xij} (1 - θj)^{1-xij}              (7)
The conjugate prior for the parameters of a Bernoulli distribution is the Beta distribution:

    p(θ|α, β) = ∏_{j=1}^{J} [ Γ(αj + βj) / (Γ(αj) Γ(βj)) ] θj^{αj-1} (1 - θj)^{βj-1}    (8)

where α and β are hyperparameters, and the Gamma function is a generalization of the
factorial function. For a query Dc = {xi} consisting of N vectors it is easy to show that:

    p(Dc|α, β) = ∏_j [ Γ(αj + βj) / (Γ(αj) Γ(βj)) ] · [ Γ(α̃j) Γ(β̃j) / Γ(α̃j + β̃j) ]    (9)
where α̃j = αj + Σ_{i=1}^{N} xij and β̃j = βj + N - Σ_{i=1}^{N} xij. For an item
x = (x·1, . . . , x·J), the score, written with the hyperparameters explicit, can be computed
as follows:

    score(x) = p(x|Dc, α, β) / p(x|α, β)
             = ∏_j [ Γ(αj+βj+N) / Γ(αj+βj+N+1) · Γ(α̃j+x·j) Γ(β̃j+1-x·j) / (Γ(α̃j) Γ(β̃j)) ]
                 / [ Γ(αj+βj) / Γ(αj+βj+1) · Γ(αj+x·j) Γ(βj+1-x·j) / (Γ(αj) Γ(βj)) ]      (10)
This daunting expression can be dramatically simplified. We use the fact that Γ(x) =
(x - 1) Γ(x - 1) for x > 1. For each j we can consider the two cases x·j = 1 and x·j = 0
separately. For x·j = 1 we have a contribution [(αj + βj) / (αj + βj + N)] · (α̃j / αj). For
x·j = 0 we have a contribution [(αj + βj) / (αj + βj + N)] · (β̃j / βj).
Putting these together we get:

    score(x) = ∏_j [ (αj + βj) / (αj + βj + N) ] (α̃j / αj)^{x·j} (β̃j / βj)^{1-x·j}    (11)

The log of the score is linear in x:

    log score(x) = c + Σ_j qj x·j                                (12)
where

    c = Σ_j [ log(αj + βj) - log(αj + βj + N) + log β̃j - log βj ]    (13)

and

    qj = log α̃j - log αj - log β̃j + log βj                      (14)
If we put the entire data set D into one large matrix X with J columns, we can compute
the vector s of log scores for all points using a single matrix-vector multiplication:

    s = c + Xq                                                   (15)

For sparse data sets this linear operation can be implemented very efficiently. Each query
Dc corresponds to computing the vector q and scalar c. This can also be done efficiently if
the query is also sparse, since most elements of q will equal log βj - log(βj + N), which
is independent of the query.
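As an illustration, here is a minimal sketch of the full query pipeline for sparse binary data: compute q and c from the query set, then score every item with one sparse matrix-vector product as in (15). The function name, the small jitter constant, and the use of the broad empirical prior α = c·m, β = c·(1 - m) with c = 2 (described in section 5) are our choices for the sketch, not code from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

def bayesian_sets_query(X, query_rows, c=2.0):
    """Score all items of a binary matrix X (items x features) against
    the query given by the row indices `query_rows`; returns the vector
    of log scores s = c0 + X q, one entry per item (equation (15))."""
    X = csr_matrix(X, dtype=np.float64)
    m = np.asarray(X.mean(axis=0)).ravel()     # feature means over D
    alpha = c * m + 1e-12                      # broad empirical prior
    beta = c * (1.0 - m) + 1e-12               # (jitter avoids log 0)
    Q = X[query_rows]                          # the query set D_c
    N = Q.shape[0]
    s_j = np.asarray(Q.sum(axis=0)).ravel()    # sum of x_ij over query
    at = alpha + s_j                           # alpha-tilde
    bt = beta + N - s_j                        # beta-tilde
    c0 = np.sum(np.log(alpha + beta) - np.log(alpha + beta + N)
                + np.log(bt) - np.log(beta))   # equation (13)
    q = np.log(at) - np.log(alpha) - np.log(bt) + np.log(beta)  # (14)
    return c0 + X @ q                          # one sparse mat-vec

# Toy usage: item 1 shares features with the query, item 2 does not.
X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1]])
scores = bayesian_sets_query(X, query_rows=[0])
assert scores[1] > scores[2]
```

All per-query work is the O(J) computation of q and c0 plus one sparse product, which matches the sub-second query times reported in section 5.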
4
Exponential Families
We generalize the above result to models in the exponential family. The distribution for
such models can be written in the form p(x|θ) = f(x) g(θ) exp{θᵀ u(x)}, where u(x) is a
K-dimensional vector of sufficient statistics, θ are the natural parameters, and f and g are
non-negative functions. The conjugate prior is p(θ|η, ν) = h(η, ν) g(θ)^η exp{θᵀ ν}, where
η and ν are hyperparameters, and h normalizes the distribution.
Given a query Dc = {xi } with N items, and a candidate x, it is not hard to show that the
score for the candidate is:
P
h(? + 1, ? + u(x)) h(? + N, ? + i u(xi ))
P
(16)
score(x) =
h(?, ?) h(? + N + 1, ? + u(x) + i u(xi ))
This expression helps us understand when the score can be computed efficiently. First of
all, the score only depends on the size of the query (N ), the sufficient statistics computed
from each candidate, and from the whole query. It therefore makes sense to precompute U,
a matrix of sufficient statistics corresponding to X. Second, whether the score is a linear
operation on U depends on whether log h is linear in the second argument. This is the case
for the Bernoulli distribution, but not for all exponential family distributions. However,
for many distributions, such as diagonal covariance Gaussians, even though the score is
nonlinear in U, it can be computed by applying the nonlinearity elementwise to U. For
sparse matrices, the score can therefore still be computed in time linear in the number of
non-zero elements of U.
5
Results
We ran our Bayesian Sets algorithm on three different datasets: the Grolier Encyclopedia dataset, consisting of the text of the articles in the encyclopedia, the EachMovie
dataset, consisting of movie ratings by users of the EachMovie service, and the NIPS authors dataset, consisting of the text of articles published in NIPS volumes 0-12 (spanning
the 1987-1999 conferences). The Grolier dataset is 30991 articles by 15276 words, where
the entries are the number of times each word appears in each document. We preprocess
(binarize) the data by column-normalizing each word, and then thresholding so that an (article, word) entry is 1 if that word has a frequency of more than twice the article mean.
We do essentially no tuning of the hyperparameters. We use broad empirical priors, where
α = c·m, β = c·(1 - m), m is the mean vector over all articles, and c = 2. The
analogous priors are used for both other datasets.
The EachMovie dataset was preprocessed, first by removing movies rated by fewer than 15
people, and people who rated fewer than 200 movies. Then the dataset was binarized so that a
(person, movie) entry had value 1 if the person gave the movie a rating above 3 stars (from
a possible 0-5 stars). The data was then column normalized to account for overall movie
popularity. The size of the dataset after preprocessing was 1813 people by 1532 movies.
Finally, the NIPS author dataset (13649 words by 2037 authors) was preprocessed very
similarly to the Grolier dataset. It was binarized by column-normalizing each author, and
then thresholding so that a (word, author) entry is 1 if the author uses that word more frequently than twice the word mean across all authors.
The results of our experiments, and comparisons with Google Sets for word and movie
queries, are given in tables 2 and 3. Unfortunately, NIPS authors have not yet achieved the
kind of popularity on the web necessary for Google Sets to work effectively. Instead we
list the top words associated with the cluster of authors given by our algorithm (table 4).
The running times of our algorithm on all three datasets are given in table 1. All experiments were run in Matlab on a 2GHz Pentium 4 Toshiba laptop. Our algorithm is very fast
both at pre-processing the data, and answering queries (about 1 sec per query).
                       GROLIER          EACHMOVIE      NIPS
SIZE                   30991 x 15276    1813 x 1532    13649 x 2037
NON-ZERO ELEMENTS      2,363,514        517,709        933,295
PREPROCESS TIME        6.1 s            0.56 s         3.22 s
QUERY TIME             1.1 s            0.34 s         0.47 s

Table 1: For each dataset we give the size of that dataset along with the time taken to do the (one-time) preprocessing and the time taken to make a query (both in seconds).
Query: WARRIOR, SOLDIER
  Google Sets: WARRIOR, SOLDIER, SPY, ENGINEER, MEDIC, SNIPER, DEMOMAN, PYRO, SCOUT, PYROMANIAC, HWGUY
  Bayes Sets: SOLDIER, WARRIOR, MERCENARY, CAVALRY, BRIGADE, COMMANDING, SAMURAI, BRIGADIER, INFANTRY, COLONEL, SHOGUNATE

Query: ANIMAL
  Google Sets: ANIMAL, PLANT, FREE, LEGAL, FUNGAL, HUMAN, HYSTERIA, VEGETABLE, MINERAL, INDETERMINATE, FOZZIE BEAR
  Bayes Sets: ANIMAL, ANIMALS, PLANT, HUMANS, FOOD, SPECIES, MAMMALS, AGO, ORGANISMS, VEGETATION, PLANTS

Query: FISH, WATER, CORAL
  Google Sets: FISH, WATER, CORAL, AGRICULTURE, FOREST, RICE, SILK ROAD, RELIGION, HISTORY POLITICS, DESERT, ARTS
  Bayes Sets: WATER, FISH, SURFACE, SPECIES, WATERS, MARINE, FOOD, TEMPERATURE, OCEAN, SHALLOW, FT

Table 2: Clusters of words found by Google Sets and Bayesian Sets based on the given queries. The top few are shown for each query and each algorithm. Bayesian Sets was run using Grolier Encyclopedia data.
It is very difficult to objectively evaluate our results since there is no ground truth for this
task. One person's idea of a good query cluster may differ drastically from another person's.
We chose to compare our algorithm to Google Sets since it was our main inspiration and it
is currently the most public and commonly used algorithm for performing this task.
Since we do not have access to the Google Sets algorithm it was impossible for us to run
their method on our datasets. Moreover, Google Sets relies on vast amounts of web data,
which we do not have. Despite those two important caveats, Google Sets clearly "knows"
a lot about movies³ and words, and the comparison to Bayesian Sets is informative.
We found that Google Sets performed very well when the query consisted of items which
can be found listed on the web (e.g. Cambridge colleges). On the other hand, for more
abstract concepts (e.g. "soldier" and "warrior", see table 2) our algorithm returned more
sensible completions.
While we believe that most of our results are self-explanatory, there are a few details that we
would like to elaborate on. The top query in table 3 consists of two classic romantic movies,
³ In fact, one of the example queries on the Google Sets website is a query of movie titles.
Query: GONE WITH THE WIND, CASABLANCA
  Google Sets: CASABLANCA (1942), GONE WITH THE WIND (1939), ERNEST SAVES CHRISTMAS (1988), CITIZEN KANE (1941), PET DETECTIVE (1994), VACATION (1983), WIZARD OF OZ (1939), THE GODFATHER (1972), LAWRENCE OF ARABIA (1962), ON THE WATERFRONT (1954)
  Bayes Sets: GONE WITH THE WIND (1939), CASABLANCA (1942), THE AFRICAN QUEEN (1951), THE PHILADELPHIA STORY (1940), MY FAIR LADY (1964), THE ADVENTURES OF ROBIN HOOD (1938), THE MALTESE FALCON (1941), REBECCA (1940), SINGING IN THE RAIN (1952), IT HAPPENED ONE NIGHT (1934)

Query: MARY POPPINS, TOY STORY
  Google Sets: TOY STORY, MARY POPPINS, TOY STORY 2, MOULIN ROUGE, THE FAST AND THE FURIOUS, PRESQUE RIEN, SPACED, BUT I'M A CHEERLEADER, MULAN, WHO FRAMED ROGER RABBIT
  Bayes Sets: MARY POPPINS, TOY STORY, WINNIE THE POOH, CINDERELLA, THE LOVE BUG, BEDKNOBS AND BROOMSTICKS, DAVY CROCKETT, THE PARENT TRAP, DUMBO, THE SOUND OF MUSIC

Query: CUTTHROAT ISLAND, LAST ACTION HERO
  Google Sets: LAST ACTION HERO, CUTTHROAT ISLAND, GIRL, END OF DAYS, HOOK, THE COLOR OF NIGHT, CONEHEADS, ADDAMS FAMILY I, ADDAMS FAMILY II, SINGLES
  Bayes Sets: CUTTHROAT ISLAND, LAST ACTION HERO, KULL THE CONQUEROR, VAMPIRE IN BROOKLYN, SPRUNG, JUDGE DREDD, WILD BILL, HIGHLANDER III, VILLAGE OF THE DAMNED, FAIR GAME

Table 3: Clusters of movies found by Google Sets and Bayesian Sets based on the given queries. The top 10 are shown for each query and each algorithm. Bayesian Sets was run using the EachMovie dataset.
and while most of the movies returned by Bayesian Sets are also classic romances, hardly
any of the movies returned by Google Sets are romances, and it would be difficult to call
"Ernest Saves Christmas" either a romance or a classic. Both "Cutthroat Island" and "Last
Action Hero" are action movie flops, as are many of the movies given by our algorithm
for that query. All the Bayes Sets movies associated with the query "Mary Poppins" and
"Toy Story" are children's movies, while 5 of Google Sets' movies are not. "But I'm
a Cheerleader", while appearing to be a children's movie, is actually an R-rated movie
involving lesbian and gay teens.
Query: A. SMOLA, B. SCHOLKOPF
  Top members: A. SMOLA, B. SCHOLKOPF, S. MIKA, G. RATSCH, R. WILLIAMSON, K. MULLER, J. WESTON, J. SHAWE-TAYLOR, V. VAPNIK, T. ONODA
  Top words: VECTOR, SUPPORT, KERNEL, PAGES, MACHINES, QUADRATIC, SOLVE, REGULARIZATION, MINIMIZING, MIN

Query: L. SAUL, T. JAAKKOLA
  Top members: L. SAUL, T. JAAKKOLA, M. RAHIM, M. JORDAN, N. LAWRENCE, T. JEBARA, W. WIEGERINCK, M. MEILA, S. IKEDA, D. HAUSSLER
  Top words: LOG, LIKELIHOOD, MODELS, MIXTURE, CONDITIONAL, PROBABILISTIC, EXPECTATION, PARAMETERS, DISTRIBUTION, ESTIMATION

Query: A. NG, R. SUTTON
  Top members: R. SUTTON, A. NG, Y. MANSOUR, B. RAVINDRAN, D. KOLLER, D. PRECUP, C. WATKINS, R. MOLL, T. PERKINS, D. MCALLESTER
  Top words: DECISION, REINFORCEMENT, ACTIONS, REWARDS, REWARD, START, RETURN, RECEIVED, MDP, SELECTS

Table 4: NIPS authors found by Bayesian Sets based on the given queries. The top 10 are shown for each query along with the top 10 words associated with that cluster of authors. Bayesian Sets was run using NIPS data from volumes 0-12 (1987-1999 conferences).
The NIPS author dataset is rather small, and co-authors of NIPS papers appear very similar
to each other. Therefore, many of the authors found by our algorithm are co-authors of a
NIPS paper with one or more of the query authors. An example where this is not the case is
Wim Wiegerinck, who we do not believe ever published a NIPS paper with Lawrence Saul
or Tommi Jaakkola, though he did have a NIPS paper on variational learning and graphical
models.
As part of the evaluation of our algorithm, we showed 30 naïve subjects the unlabeled
results of Bayesian Sets and Google Sets for the queries shown from the EachMovie and
Grolier Encyclopedia datasets, and asked them to choose which they preferred. The results
of this study are given in table 5.
of this study are given in table 5.
QUERY                   % BAYES SETS    P-VALUE
WARRIOR                 96.7            < 0.0001
ANIMAL                  93.3            < 0.0001
FISH                    90.0            < 0.0001
GONE WITH THE WIND      86.7            < 0.0001
MARY POPPINS            96.7            < 0.0001
CUTTHROAT ISLAND        81.5            0.0008

Table 5: For each evaluated query (listed by first query item), we give the percentage of respondents who preferred the results given by Bayesian Sets and the p-value rejecting the null hypothesis that Google Sets is preferable to Bayesian Sets on that particular query.
Since, in the case of binary data, our method reduces to a matrix-vector multiplication, we
also came up with ten heuristic matrix-vector methods which we ran on the same queries,
using the same datasets. Descriptions and results can be found in supplemental material on
the authors' websites.
6
Conclusions
We have described an algorithm which takes a query consisting of a small set of items,
and returns additional items which belong in this set. Our algorithm computes a score
for each item by comparing the posterior probability of that item given the set, to the prior
probability of that item. These probabilities are computed with respect to a statistical model
for the data, and since the parameters of this model are unknown they are marginalized out.
For exponential family models with conjugate priors, our score can be computed exactly
and efficiently. In fact, we show that for sparse binary data, scoring all items in a large
data set can be accomplished using a single sparse matrix-vector multiplication. Thus, we
get a very fast and practical Bayesian algorithm without needing to resort to approximate
inference. For example, a sparse data set with over 2 million nonzero entries (Grolier) can
be queried in just over 1 second.
Our method does well when compared to Google Sets in terms of set completions, demonstrating that this Bayesian criterion can be useful in realistic problem domains. One of the
problems we have not yet addressed is deciding on the size of the response set. Since the
scores have a probabilistic interpretation, it should be possible to find a suitable threshold
on these probabilities. In the future, we will incorporate such a threshold into our algorithm.
The problem of retrieving sets of items is clearly relevant to many application domains.
Our algorithm is very flexible in that it can be combined with a wide variety of types of
data (e.g. sequences, images, etc.) and probabilistic models. We plan to explore efficient
implementations of some of these extensions. We believe that with even larger datasets the
Bayesian Sets algorithm will be a very useful tool for many application areas.
Acknowledgements: Thanks to Avrim Blum and Simon Tong for useful discussions, and to Sam
Roweis for some of the data. ZG was partially supported at CMU by the DARPA CALO project.
References
[1] Google Sets. http://labs.google.com/sets
[2] Lafferty, J. and Zhai, C. (2002) Probabilistic relevance models based on document and query generation. In Language
modeling and information retrieval.
[3] Ponte, J. and Croft, W. (1998) A language modeling approach to information retrieval. SIGIR.
[4] Robertson, S. and Sparck Jones, K. (1976). Relevance weighting of search terms. J Am Soc Info Sci.
[5] Tenenbaum, J. B. and Griffiths, T. L. (2001). Generalization, similarity, and Bayesian inference. Behavioral and Brain
Sciences, 24:629-641.
[6] Tong, S. (2005). Personal communication.
A Domain Decomposition Method for
Fast Manifold Learning
Hongyuan Zha
Department of Computer Science
Pennsylvania State University
University Park, PA 16802
[email protected]
Zhenyue Zhang
Department of Mathematics
Zhejiang University, Yuquan Campus,
Hangzhou, 310027, P. R. China
[email protected]
Abstract
We propose a fast manifold learning algorithm based on the methodology of domain decomposition. Starting with the set of sample points
partitioned into two subdomains, we develop the solution of the interface problem that can glue the embeddings on the two subdomains into
an embedding on the whole domain. We provide a detailed analysis to
assess the errors produced by the gluing process using matrix perturbation theory. Numerical examples are given to illustrate the efficiency and
effectiveness of the proposed methods.
1
Introduction
The setting of manifold learning we consider is the following. We are given a parameterized manifold of dimension d defined by a mapping f : Ω → R^m, where d < m, and Ω is open and connected in R^d. We assume the manifold is well-behaved, i.e., it is smooth and contains no self-intersections, etc. Suppose we have a set of points x1, . . . , xN, sampled possibly with noise from the manifold, i.e.,

xi = f(τi) + εi,   i = 1, . . . , N,   (1.1)

where the εi's represent noise. The goal of manifold learning is to recover the parameters τi's and/or the mapping f(·) from the sample points xi's [2, 6, 9, 12]. The general framework of
manifold learning methods involves imposing a connectivity structure such as a k-nearest-neighbor graph on the set of sample points and then turning the embedding problem into the
solution of an eigenvalue problem. Usually constructing the graph dominates the computational cost of a manifold learning algorithm, but for large data sets, the computational cost
of the eigenvalue problem can be substantial as well.
The focus of this paper is to explore the methodology of domain decomposition for developing fast algorithms for manifold learning. Domain decomposition is by now a well-established field in scientific computing and has been successfully applied in many science
and engineering fields in connection with numerical solutions of partial differential equations. One class of domain decomposition methods partitions the solution domain into
subdomains, solves the problem on each subdomain, and glues the partial solutions on the
subdomains by solving an interface problem [7, 10]. This is the general approach we will
follow in this paper. In particular, in section 3, we consider the case where the given set
of sample points x1, . . . , xN is partitioned into two subdomains. On each of the subdomains, we can use a manifold learning method such as LLE [6], LTSA [12], or any other manifold learning method to construct an embedding for the subdomain in question. We
will then formulate the interface problem, the solution of which will allow us to combine
the embeddings on the two subdomains together to obtain an embedding over the whole
domain. However, it is not always feasible to carry out the procedure described above. In
section 2, we give necessary and sufficient conditions under which the embedding on the
whole domain can be constructed from the embeddings on the subdomains. In section 4,
we analyze the errors produced by the gluing process using matrix perturbation theory. In
section 5, we briefly mention how the partitioning of the set of sample points into subdomains can be accomplished by some graph partitioning algorithms. Section 6 is devoted to
numerical experiments.
NOTATION. We use e to denote a column vector of all 1's, the dimension of which should be clear from the context. N(·) and R(·) denote the null space and range space of a matrix, respectively. For an index set I = [i1, . . . , ik], A(:, I) denotes the submatrix of A consisting of the columns of A with indices in I, with a similar definition for the rows of a matrix. We use ‖ · ‖ to denote the spectral norm of a matrix.
2
A Basic Theorem
Let X = [x1, · · · , xN] with xi = f(τi) + εi, i = 1, . . . , N. Assume that the whole sample domain X is divided into two subdomains X1 = {xi | i ∈ I1} and X2 = {xi | i ∈ I2}. Here I1 and I2 denote index sets such that I1 ∪ I2 = {1, . . . , N} and I1 ∩ I2 is not empty. Suppose we have obtained the two low-dimensional embeddings T1 and T2 of the subdomains X1 and X2, respectively. The domain decomposition method attempts to recover the overall embedding T = {τ1, . . . , τN} from the embeddings T1 and T2 on the subdomains.
In general, the recovered sub-embedding Tj, j = 1, 2, may not be exactly the subset {τi | i ∈ Ij} of T. For example, it is often the case that the recovered embeddings Tj are approximately affinely equal to {τi | i ∈ Ij}, i.e., up to certain approximation errors, there is an affine transformation such that

Tj = {Fj τi + cj | i ∈ Ij},

where Fj is a nonsingular matrix and cj a column vector. Thus a domain decomposition method for manifold learning should be invariant to affine transformations of the embeddings Tj obtained from the subdomains. In that case, we can assume that Tj is just the subset of T, i.e., Tj = {τi | i ∈ Ij}. With an abuse of notation, we also denote by T and Tj the matrices of the column vectors in the sets T and Tj; for example, we write T = [τ1, . . . , τN].
Let Φj be an orthogonal projection with N(Φj) = span([e, Tj^T]). Then Tj can be recovered by computing the eigenvectors of Φj corresponding to its zero eigenvalues. To recover the whole T we need to construct a matrix Φ with N(Φ) = span([e, T^T]) [11].

To this end, for each Tj, let Φj = Qj Qj^T ∈ R^(Nj×Nj), where Qj is an orthonormal basis matrix of N([e, Tj^T]^T) and Nj is the column-size of Tj. To construct a Φ matrix, let Sj ∈ R^(N×Nj) be the 0-1 selection matrix defined as Sj = IN(:, Ij), where IN is the identity matrix of order N, and let Φ̃j = Sj Φj Sj^T. We then simply take Φ = Φ̃1 + Φ̃2, or more flexibly, Φ = w1 Φ̃1 + w2 Φ̃2, where w1 and w2 are weights with wi > 0 and w1 + w2 = 1. Obviously ‖Φ‖ ≤ 1 since ‖Φj‖ = 1. The following theorem gives the necessary and sufficient conditions under which the null space of Φ is just span([e, T^T]). (In the theorem, we only require the Φj to be positive semidefinite.)
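As a concrete sketch of this construction (our own illustration, not code from the paper), the following NumPy snippet builds Φ = Φ̃1 + Φ̃2 for a toy embedding with two overlapping index sets and checks that the null space has dimension d + 1:

```python
import numpy as np

def projector_complement(T_j):
    """Orthogonal projector Phi_j whose null space is span([e, T_j^T])."""
    N_j = T_j.shape[1]
    B = np.column_stack([np.ones(N_j), T_j.T])        # [e, T_j^T], N_j x (d+1)
    Q, _ = np.linalg.qr(B)                            # orthonormal basis of span(B)
    return np.eye(N_j) - Q @ Q.T

# Toy global embedding T (d = 2) and two overlapping index sets.
rng = np.random.default_rng(0)
N, d = 12, 2
T = rng.standard_normal((d, N))
I1, I2 = np.arange(0, 8), np.arange(5, 12)            # overlap I0 = {5, 6, 7}

Phi = np.zeros((N, N))
for I in (I1, I2):
    S = np.eye(N)[:, I]                               # 0-1 selection matrix S_j
    Phi += S @ projector_complement(T[:, I]) @ S.T    # Phi-tilde_j = S_j Phi_j S_j^T

# N(Phi) should be span([e, T^T]): dimension d + 1 = 3.
null_dim = int(np.sum(np.linalg.eigvalsh(Phi) < 1e-8))
print(null_dim)
```

Since the overlap here contains d + 1 = 3 generic points, the full column-rank condition of the theorem below holds and the computed null-space dimension is exactly d + 1.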
Theorem 2.1 Let Φi be two positive semidefinite matrices such that N(Φi) = span([e, Ti^T]), i = 1, 2, and T0 = T1 ∩ T2. Assume that [e, T1^T] and [e, T2^T] are of full column-rank. Then N(Φ) = span([e, T^T]) if and only if [e, T0^T] is of full column-rank.
Proof. We first prove the necessity by contradiction. Assume that N([e, T0^T]) ≠ N([e, T2^T]); then there is y ≠ 0 such that [e, T0^T]y = 0 and [e, T(:, I2)^T]y ≠ 0. Denote by I1^c the complement of I1, i.e., the index set of the i's which do not belong to I1. Then [e, T(:, I1^c)^T]y ≠ 0. Now we construct a vector x as

x(I1) = [e, T1^T]y,   x(I1^c) = 0.

Clearly x(I2) = 0 and hence x ∈ N(Φ). By the condition N(Φ) = span([e, T^T]), we can write x in the form x = [e, T^T]z for a column vector z. In particular, x(I1) = [e, T1^T]z. Note that we also have x(I1) = [e, T1^T]y by definition. This implies that z = y because [e, T1^T] is of full rank. Therefore,

[e, T(:, I1^c)^T]y = [e, T(:, I1^c)^T]z = x(I1^c) = 0.

Using this together with [e, T0^T]y = 0, we have [e, T(:, I2)^T]y = 0, a contradiction.
Now we prove the sufficiency. Let Q be a basis matrix of N(Φ). We have

w1 Q^T Φ̃1 Q + w2 Q^T Φ̃2 Q = Q^T Φ Q = 0,

which implies Φi Q(Ii, :) = 0, i = 1, 2, because each Φ̃i is positive semidefinite. So

Q(Ii, :) = [e, Ti^T]Gi,   i = 1, 2,   (2.2)

for some matrices Gi. Taking the overlapping part Q(I0, :) of Q with the two different representations,

Q(I0, :) = [e, Ti(:, I0)^T]Gi = [e, T0^T]Gi,

we obtain [e, T0^T](G1 − G2) = 0. So G1 = G2 because [e, T0^T] is of full column rank, giving rise to Q = [e, T^T]G1, i.e., N(Φ) ⊆ span([e, T^T]). Together with the obvious inclusion span([e, T^T]) ⊆ N(Φ), it follows that N(Φ) = span([e, T^T]).
The above result states that when the overlap is large enough that [e, T0^T] is of full column-rank (which is generically true when T0 contains d + 1 points or more), the embedding over the whole domain can be recovered from the embeddings over the two subdomains. However, to follow Theorem 2.1 directly, it seems that we would need to compute the null space of Φ. In the next section, we show that this can be done much more cheaply by considering an interface problem of much smaller dimension.
3
Computing the Null Space of Φ
In this section, we formulate the interface problem and show how to solve it to glue the embeddings from the two subdomains into an embedding over the whole domain. To simplify notation, we re-denote by T* the actual embedding over the whole domain and by Tj* the subsets of T* corresponding to the subdomains. We then use Tj to denote affinely transformed versions of Tj* obtained by LTSA, for example, i.e., Tj* = cj e^T + Fj Tj. Here cj is a constant column vector in R^d and Fj is a nonsingular matrix. Denote by T0j the overlapping part of Tj corresponding to I0 = I1 ∩ I2, as in the proof of Theorem 2.1. We consider the overlapping parts T0j* of Tj*,

c1 e^T + F1 T01 = T01* = T02* = c2 e^T + F2 T02.

Or equivalently,

[ [e, T01^T], −[e, T02^T] ] · [ (c1, F1)^T ; (c2, F2)^T ] = 0.   (3.3)
Therefore, if we take an orthonormal basis G of the null space of [ [e, T01^T], −[e, T02^T] ] and partition G = [G1^T, G2^T]^T conformally, then [e, T01^T]G1 = [e, T02^T]G2. Let Aj = Gj^T [e, Tj^T]^T, j = 1, 2. Define the matrix A such that A(:, Ij) = Aj. Then, since Φi Ai^T = 0, the well-defined matrix A^T is a basis of N(Φ):

Φ A^T = S1 Φ1 S1^T A^T + S2 Φ2 S2^T A^T = S1 Φ1 A1^T + S2 Φ2 A2^T = 0.

Therefore, we can use A^T to recover the global embedding T.
A simpler alternative is to use a one-sided affine transformation, i.e., fix one of the Ti and affinely transform the other; the affine map is obtained by fixing one of the T0i and transforming the other. For example, we can determine c and F such that

T01 = ce^T + F T02,   (3.4)

and transform T2 to T̂2 = ce^T + F T2. Clearly, for the overlapping part, T̂02 = T01. Then we can construct a larger matrix T by T(:, I1) = T1, T(:, I2) = ce^T + F T2. One can also readily verify that T^T is a basis matrix of N(Φ).
In the noisy case, a least squares formulation is needed. For example, for the simultaneous affine transformation, we take G = [G1^T, G2^T]^T to be an orthonormal matrix in R^(2(d+1)×(d+1)) such that

‖[e, T01^T]G1 − [e, T02^T]G2‖ = min.

It is known that the minimizing G is given by the right singular vector matrix corresponding to the d + 1 smallest singular values of W = [ [e, T01^T], −[e, T02^T] ], and the residual is ‖[e, T01^T]G1 − [e, T02^T]G2‖ = σd+2(W). For the one-sided approach (3.4), [c, F] can be a solution to the least squares problem

min_{c,F} ‖T01 − (ce^T + F T02)‖ = min_F ‖(T01 − t01 e^T) − F(T02 − t02 e^T)‖,

where t0j is the column mean of T0j. The minimum is achieved at F = (T01 − t01 e^T)(T02 − t02 e^T)^+, c = t01 − F t02. Clearly, the residual now reads

min_{c,F} ‖T01 − (ce^T + F T02)‖ = ‖(T01 − t01 e^T)(I − (T02 − t02 e^T)^+ (T02 − t02 e^T))‖.
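The closed-form minimizer can be checked numerically; the sketch below (NumPy; the helper name `affine_fit` is ours) recovers an exact affine map with zero residual:

```python
import numpy as np

def affine_fit(T01, T02):
    """c, F minimizing ||T01 - (c e^T + F T02)||_F, via the closed form above."""
    t01 = T01.mean(axis=1, keepdims=True)   # column mean t01
    t02 = T02.mean(axis=1, keepdims=True)   # column mean t02
    F = (T01 - t01) @ np.linalg.pinv(T02 - t02)
    c = t01 - F @ t02
    return c, F

# Sanity check: if T01 is exactly an affine image of T02, the residual vanishes.
rng = np.random.default_rng(1)
T02 = rng.standard_normal((2, 10))
F_true, c_true = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))
T01 = c_true + F_true @ T02
c, F = affine_fit(T01, T02)
resid = np.linalg.norm(T01 - (c + F @ T02))
print(resid)
```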
Notice that the overlapping parts in the two affinely transformed subsets are not exactly equal to each other in the noisy case. There are several possible choices for setting A(:, I0) or T(:, I0). For example, one choice is to set T(:, I0) as a convex combination of the T0j's,

T(:, I0) = αT01 + (1 − α)T̂02,

with α = 1/2 for example.
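Putting the pieces together, the one-sided gluing step can be written as a short routine (NumPy; all names are ours, and the final check assumes noise-free sub-embeddings):

```python
import numpy as np

def glue_one_sided(T1, T2, I1, I2, alpha=0.5):
    """Glue sub-embeddings T1 (columns indexed by I1) and T2 (by I2)."""
    I0 = np.intersect1d(I1, I2)                        # overlap index set
    pos1 = {i: k for k, i in enumerate(I1)}
    pos2 = {i: k for k, i in enumerate(I2)}
    T01 = T1[:, [pos1[i] for i in I0]]
    T02 = T2[:, [pos2[i] for i in I0]]
    # Solve min_W ||T01 - W [e, T02^T]^T||_F, i.e. W = [c, F].
    B = np.vstack([np.ones(T02.shape[1]), T02])
    W = T01 @ np.linalg.pinv(B)
    T2h = W @ np.vstack([np.ones(T2.shape[1]), T2])    # affinely transformed T2
    N = max(I1.max(), I2.max()) + 1
    T = np.zeros((T1.shape[0], N))
    T[:, I1] = T1
    T[:, I2] = T2h
    # Convex combination on the overlap (alpha = 1/2 by default).
    T[:, I0] = alpha * T01 + (1 - alpha) * T2h[:, [pos2[i] for i in I0]]
    return T

# Noise-free check: two affinely distorted views of one embedding glue back
# into a single affine image of the truth.
rng = np.random.default_rng(2)
Tstar = rng.standard_normal((2, 12))
I1, I2 = np.arange(0, 8), np.arange(5, 12)
A1, b1 = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))
T1 = b1 + A1 @ Tstar[:, I1]
T2 = Tstar[:, I2]
T = glue_one_sided(T1, T2, I1, I2)
err = np.linalg.norm(T - (b1 + A1 @ Tstar))
print(err)
```

Because the noise-free overlap admits an exact affine map, the glued embedding equals an affine image of the true coordinates up to rounding error.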
We summarize the discussions above in the following two algorithms for gluing the two subdomains T1 and T2.

Algorithm I. [Simultaneous affine transformation]
1. Compute the right singular vector matrix G corresponding to the d + 1 smallest singular values of [ [e, T01^T], −[e, T02^T] ].
2. Partition G = [G1^T, G2^T]^T and set Ai = Gi^T [e, Ti^T]^T, i = 1, 2, and
   A(:, I1\I0) = A11,   A(:, I0) = αA01 + (1 − α)A02,   A(:, I2\I0) = A12,
   where A0j is the overlap part of Aj and A1j is Aj with A0j deleted.
3. Compute the column mean a of A, and an orthogonal basis U of N(a^T).
4. Set T = U^T A.
Algorithm II. [One-sided affine transformation]
1. Solve the least squares problem min_W ‖T01 − W [e, T02^T]^T‖_F.
2. Affinely transform T2 to T̂2 = W [e, T2^T]^T.
3. Set the global coordinate matrix T by
   T(:, I1\I0) = T11,   T(:, I0) = αT01 + (1 − α)T̂02,   T(:, I2\I0) = T̂12.

4
Error Analysis
As we mentioned before, the computation of Tj, j = 1, 2, using a manifold learning algorithm such as LTSA involves errors. In this section, we assess the impact of those errors on the accuracy of the gluing process. Two issues are considered for the error analysis. One is the perturbation analysis of N(Φ*) when the computation of the Φ̃i is subject to error. In this case, N(Φ*) will be approximated by the smallest (d + 1)-dimensional eigenspace V of an approximation Φ ≈ Φ* (Theorem 4.1). The other issue is the error estimation of V when a basis matrix of V is approximately constructed by affinely transformed local embeddings as described in section 3 (Theorem 4.2). Because of space limitations, we will not present the details of the proofs of the results.
The distance of two linear subspaces X and Y is defined by dist(X, Y) = ‖PX − PY‖, where PX and PY are the orthogonal projections onto X and Y, respectively. Let εi = ‖Φi − Φi*‖, where Φi* and Φi are the orthogonal projectors onto the range spaces span([e, (Ti*)^T]) and span([e, Ti^T]), respectively. Clearly, if Φ* = w1 Φ̃1* + w2 Φ̃2* and Φ = w1 Φ̃1 + w2 Φ̃2, then

dist(span([e, (T*)^T]), span([e, T^T])) = ‖Φ − Φ*‖ ≤ w1 ε1 + w2 ε2 ≡ ε.
Theorem 4.1 Let λ be the smallest nonzero eigenvalue of Φ* and V the subspace spanned by the eigenvectors of Φ corresponding to the d + 1 smallest eigenvalues. If ε < λ/4 and 4ε²(‖Φ*‖ − λ + 2ε) < (λ − 2ε)³, then

dist(V, N(Φ*)) ≤ ε / √((λ/2 − ε)² + ε²).
Theorem 4.2 Let λ and ε be defined as in Theorem 4.1, and let A be the matrix computed by the simultaneous affine transformation (Algorithm I in section 3). Let σi(·) denote the i-th smallest singular value of a matrix. Denote

δ = σd+2([ [e, T01^T], −[e, T02^T] ]),   η = 1/(2σmin(A)).

If ε < λ/4, then

dist(V, span(A)) ≤ (1/σd+2(Φ)) (δ + (ελ/2)/(λ/2 − ε)²).
From Theorems 4.1 and 4.2 we conclude directly that

dist(span(A), N(Φ*)) ≤ (1/σd+2(Φ)) (δ + (ελ/2)/(λ/2 − ε)²) + 2ε / √((λ − 2ε)² + 4ε²).
5
Partitioning the Domains
To apply the domain decomposition methods, we need to partition the given set of data points into several domains, making use of the k-nearest-neighbor graph imposed on the data points. This reduces the problem to a graph partitioning problem, and many techniques such as spectral graph partitioning and METIS [3, 5] can be used. In our experiments, we have used a particularly simple approach: we use the reverse Cuthill-McKee method [4] to order the vertices of the k-NN graph and then partition the vertices into domains (for details see Test 2 in the next section).
Once we have partitioned the whole domain into multiple overlapping subdomains, we can use the following two approaches to glue them together.
Successive gluing. Here we glue the subdomains one by one as follows. Initially set T^(1) = T1 and I^(1) = I1; then glue the patch Tk to T^(k−1) to obtain the larger one T^(k), for k = 2, . . . , K. The index set of T^(k) is given by I^(k) = I^(k−1) ∪ Ik. Clearly the overlapping set of T^(k−1) and Tk is I0^(k) = I^(k−1) ∩ Ik.
Recursive gluing. Here, at the leaf level, we divide the subdomains into several pairs, say (T2i−1^(0), T2i^(0)), i = 1, 2, . . .. Then we glue each pair into a larger subdomain Ti^(1) and continue. The recursive gluing method is obviously parallelizable.
6
Numerical Experiments
In this section we report numerical experiments for the proposed domain decomposition
methods for manifold learning. The efficiency and effectiveness of the methods clearly
depend on the accuracy of the computed embeddings for subdomains, the sizes of the
subdomains, and the sizes of the overlaps of the subdomains.
Test 1. Our first test data set is sampled from a Swiss-roll as follows:

xi = [ti cos(ti), hi, ti sin(ti)]^T,   i = 1, . . . , N = 2000,   (6.5)

where ti and hi are uniformly randomly chosen in the intervals [3π/2, 9π/2] and [0, 21], respectively. Let τi be the arc length of the corresponding spiral curve [t cos(t), t sin(t)]^T from t0 = 3π/2 to ti, and τmax = maxi τi. To compare the CPU time of the domain decomposition methods, we simply partition the τ-interval [0, τmax] into kτ subintervals (ai−1, ai] of equal length and also partition the h-interval into kh subintervals (bj−1, bj]. Let Dij = (ai−1, ai] × (bj−1, bj] and let Sij(r) be the ball centered at (ai, bj) with radius r. We set the subdomains as

Xij = {xk | (τk, hk) ∈ Dij ∪ Sij(r)}.

Clearly r determines the size of the overlapping parts of Xij with Xi+1,j, Xi,j+1, and Xi+1,j+1.
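The sampling scheme (6.5) and the overlapping subdomains are straightforward to reproduce; the sketch below uses the closed-form arc length of the spiral, the integral of √(1 + s²) (standard calculus, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k_tau, k_h, r = 2000, 4, 3, 5.0
t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, N)
h = rng.uniform(0.0, 21.0, N)
X = np.stack([t * np.cos(t), h, t * np.sin(t)])       # samples per (6.5)

def arclen(t, t0=1.5 * np.pi):
    """Arc length of [s cos s, s sin s] from t0: integral of sqrt(1 + s^2)."""
    F = lambda s: 0.5 * (s * np.sqrt(1.0 + s**2) + np.arcsinh(s))
    return F(t) - F(t0)

tau = arclen(t)
a = np.linspace(0.0, tau.max(), k_tau + 1)            # tau-subintervals (a_{i-1}, a_i]
b = np.linspace(0.0, 21.0, k_h + 1)                   # h-subintervals (b_{j-1}, b_j]
subdomains = []
for i in range(1, k_tau + 1):
    for j in range(1, k_h + 1):
        in_rect = (a[i-1] < tau) & (tau <= a[i]) & (b[j-1] < h) & (h <= b[j])
        in_ball = (tau - a[i])**2 + (h - b[j])**2 <= r**2   # overlap ball S_ij(r)
        subdomains.append(np.flatnonzero(in_rect | in_ball))

print(len(subdomains), X.shape)
```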
The submatrices Xij are ordered as X1,1, X1,2, . . . , X1,kh, X2,1, . . . and denoted as Xk, k = 1, . . . , K = kτ kh. We first compute the K local 2-D embeddings T1, . . . , TK by applying LTSA to the sample data sets Xk for the subdomains. Then these local coordinate embeddings Tk are aligned by the successive one-sided affine transformation algorithm, adding the subdomains Tk one by one.

Table 1 lists the total CPU time for the successive domain decomposition algorithm, including the time for computing the embeddings {Tk} for the subdomains, for different parameters kτ and kh with the parameter r = 5. In Table 2, we list the CPU time for the recursive gluing approach, taking into account the parallel procedure. As a comparison, the CPU time of LTSA applied to the whole data set is 6.23 seconds.
Table 1: CPU Time (seconds) of the successive domain decomposition algorithm.

          kh = 2     3     4     5     6
kτ = 3      1.89  1.70  1.64  1.61  1.64
     4      1.67  1.67  1.61  1.70  1.77
     5      1.66  1.59  1.67  1.78  1.86
     6      1.63  1.66  1.75  1.89  2.09
     7      1.59  1.70  1.84  2.02  2.23
     8      1.58  1.80  1.94  2.22  2.44
     9      1.63  1.83  2.06  2.31  2.66
    10      1.63  1.86  2.38  2.56  2.94
Table 2: CPU Time (seconds) of the parallel recursive domain decomposition.

          kh = 2     3     4     5     6
kτ = 3      0.52  0.34  0.27  0.19  0.17
     4      0.53  0.23  0.20  0.17  0.13
     5      0.31  0.17  0.19  0.17  0.14
     6      0.25  0.19  0.16  0.13  0.14
     7      0.20  0.16  0.14  0.14  0.11
     8      0.20  0.17  0.16  0.14  0.14
     9      0.19  0.16  0.14  0.14  0.14
    10      0.19  0.16  0.17  0.19  0.13
Test 2. The symmetric reverse Cuthill-McKee permutation (symrcm) is an algorithm for ordering the rows and columns of a symmetric sparse matrix [4]. It tends to move the nonzero elements of the sparse matrix towards the main diagonal of the matrix. We apply Matlab's symrcm to the adjacency matrix of the k-nearest-neighbor graph of the data points to reorder them. Denote by X the reordered data set. We then partition the whole sample set into K = 16 subsets Xi = X(:, si : ei) with si = max{1, (i − 1)m − 20}, ei = min{im + 20, N}, and m = N/K = 125.
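With SciPy, this reordering-and-slicing partition reads as follows; `reverse_cuthill_mckee` is SciPy's analogue of Matlab's symrcm, and the random data here merely stands in for an actual sample set (the sketch is ours):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(0)
N, K, k = 2000, 16, 8
X = rng.standard_normal((3, N))              # stand-in for the actual data set

# Symmetric k-nearest-neighbor adjacency matrix.
Gram = X.T @ X
sq = np.diag(Gram)
D = sq[:, None] + sq[None, :] - 2.0 * Gram   # squared pairwise distances
nbrs = np.argsort(D, axis=1)[:, 1:k + 1]     # k nearest neighbors (self excluded)
A = np.zeros((N, N), dtype=bool)
A[np.repeat(np.arange(N), k), nbrs.ravel()] = True
A = csr_matrix((A | A.T).astype(np.int8))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # analogue of symrcm
Xr = X[:, perm]                                        # reordered data set

m = N // K                                   # m = 125
blocks = [np.arange(max(0, i * m - 20), min((i + 1) * m + 20, N))
          for i in range(K)]
print(len(blocks), blocks[0].size, blocks[1].size)
```

The 20-column padding on each side of every slice supplies the overlap that the gluing step needs.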
It is known that the (τ, h) parameters in (6.5) represent an isometric parametrization of the Swiss-roll surface. We have shown that, within the errors made in computing the local embeddings, LTSA can recover the isometric parametrization up to an affine transformation [11]. We denote by T̂^(k) = ce^T + F T^(k) the optimal approximation to T*(:, I^(k)) within affine transformations,

‖T*(:, I^(k)) − T̂^(k)‖_F = min_{c,F} ‖T*(:, I^(k)) − (ce^T + F T^(k))‖_F.

We denote by ηk the average of the relative errors,

ηk = (1/|I^(k)|) Σ_{i∈I^(k)} ‖T*(:, i) − T̂^(k)(:, i)‖₂ / ‖T*(:, i)‖₂.
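This error measure combines the closed-form affine fit with per-point relative errors; a sketch (NumPy, function name ours):

```python
import numpy as np

def avg_relative_error(Tstar, T):
    """Average per-point relative error of T against Tstar, over the best
    affine approximation That = c e^T + F T (closed form via centering)."""
    ts = Tstar.mean(axis=1, keepdims=True)
    tm = T.mean(axis=1, keepdims=True)
    F = (Tstar - ts) @ np.linalg.pinv(T - tm)
    That = (ts - F @ tm) + F @ T
    num = np.linalg.norm(Tstar - That, axis=0)   # per-column 2-norm errors
    den = np.linalg.norm(Tstar, axis=0)          # per-column 2-norms of Tstar
    return float(np.mean(num / den))

rng = np.random.default_rng(3)
Tstar = rng.standard_normal((2, 50)) + 5.0       # keep columns away from the origin
T = 0.7 * Tstar + 1.0                            # an exact affine image of Tstar
err = avg_relative_error(Tstar, T)
print(err)
```

An exact affine image of the reference yields a zero error, so the measure reports only the non-affine part of the distortion.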
In the left panel of Figure 1 we plot the initial embedding errors for the subdomains (blue bars), the error of LTSA applied to the whole data set (red bar), and the errors ηk of the successive gluing (red line). The successive gluing method gives an embedding with an acceptable accuracy compared with the accuracy obtained by applying LTSA to the whole data set. As shown in the error analysis, the errors in successive gluing increase when the initial errors for the subdomains increase. To show this more clearly, we also plot the ηk for the recursive gluing method in the right panel of Figure 1.
[Figure 1: two panels plotting the relative errors (vertical axis, scale ×10⁻³, range 0 to 4.5) against k (horizontal axis, 0 to 18). Left panel legend: successive alignment, subdomains, whole domain. Right panel legend: root 1, root 2, root 3, root 4, subdomains, whole domain.]

Figure 1: Relative errors for the successive (left) and recursive (right) approaches.

Acknowledgment. The work of the first author was supported in part by NSFC (project 60372033), the Special Funds for Major State Basic Research Projects (project G19990328), and NSF grant CCF-0305879. The work of the second author was supported in part by NSF grants DMS-0311800 and CCF-0430349.
References
[1] M. Brand. Charting a manifold. Advances in Neural Information Processing Systems 15, MIT Press, 2003.
[2] D. Donoho and C. Grimes. Hessian eigenmaps: new tools for nonlinear dimensionality reduction. Proceedings of the National Academy of Sciences, 5591-5596, 2003.
[3] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czech. Math. J., 25:619-637, 1975.
[4] A. George and J. W. Liu. Computer Solution of Large Sparse Positive Definite Systems. Prentice Hall, 1981.
[5] METIS. http://www-users.cs.umn.edu/~karypis/metis/.
[6] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[7] B. Smith, P. Bjorstad and W. Gropp. Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, 1996.
[8] G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, New York, 1990.
[9] J. Tenenbaum, V. de Silva and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[10] A. Toselli and O. Widlund. Domain Decomposition Methods - Algorithms and Theory. Springer, 2004.
[11] H. Zha and Z. Zhang. Spectral analysis of alignment in manifold learning. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005.
[12] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Scientific Computing, 26:313-338, 2005.
A Bayesian Spatial Scan Statistic
Daniel B. Neill
Andrew W. Moore
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{neill,awm}@cs.cmu.edu
Gregory F. Cooper
Center for Biomedical Informatics
University of Pittsburgh
Pittsburgh, PA 15213
[email protected]
Abstract
We propose a new Bayesian method for spatial cluster detection, the
"Bayesian spatial scan statistic," and compare this method to the standard
(frequentist) scan statistic approach. We demonstrate that the Bayesian
statistic has several advantages over the frequentist approach, including
increased power to detect clusters and (since randomization testing is
unnecessary) much faster runtime. We evaluate the Bayesian and frequentist methods on the task of prospective disease surveillance: detecting spatial clusters of disease cases resulting from emerging disease outbreaks. We demonstrate that our Bayesian methods are successful in
rapidly detecting outbreaks while keeping the number of false positives low.
1
Introduction
Here we focus on the task of spatial cluster detection: finding spatial regions where some
quantity is significantly higher than expected. For example, our goal may be to detect
clusters of disease cases, which may be indicative of a naturally occurring epidemic (e.g.
influenza), a bioterrorist attack (e.g. anthrax release), or an environmental hazard (e.g. radiation leak). [1] discusses many other applications of cluster detection, including mining
astronomical data, medical imaging, and military surveillance. In all of these applications,
we have two main goals: to identify the locations, shapes, and sizes of potential clusters,
and to determine whether each potential cluster is more likely to be a "true" cluster or simply a chance occurrence. Thus we compare the null hypothesis H0 of no clusters against
some set of alternative hypotheses H1 (S), each representing a cluster in some region or
regions S. In the standard frequentist setting, we do this by significance testing, computing
the p-values of potential clusters by randomization; here we propose a Bayesian framework, in which we compute posterior probabilities of each potential cluster.
Our primary motivating application is prospective disease surveillance: detecting spatial
clusters of disease cases resulting from a disease outbreak. In this application, we perform
surveillance on a daily basis, with the goal of finding emerging epidemics as quickly as
possible. For this task, we are given the number of cases of some given syndrome type
(e.g. respiratory) in each spatial location (e.g. zip code) on each day. More precisely, we
typically cannot measure the actual number of cases, and instead rely on related observable
quantities such as the number of Emergency Department visits or over-the-counter drug
sales. We must then detect those increases which are indicative of emerging outbreaks,
as close to the start of the outbreak as possible, while keeping the number of false positives low. In biosurveillance of disease, every hour of earlier detection can translate into
thousands of lives saved by more timely administration of antibiotics, and this has led to
widespread interest in systems for the rapid and automatic detection of outbreaks.
In this spatial surveillance setting, each day we have data collected for a set of discrete
spatial locations si . For each location si , we have a count ci (e.g. number of disease cases),
and an underlying baseline bi . The baseline may correspond to the underlying population
at risk, or may be an estimate of the expected value of the count (e.g. derived from the
time series of previous count data). Our goal, then, is to find if there is any spatial region
S (set of locations si ) for which the counts are significantly higher than expected, given the
baselines. For simplicity, we assume here (as in [2]) that the locations si are aggregated to a
uniform, two-dimensional, N ? N grid G, and we search over the set of rectangular regions
S ? G. This allows us to search both compact and elongated regions, allowing detection of
elongated disease clusters resulting from dispersal of pathogens by wind or water.
1.1 The frequentist scan statistic
One of the most important statistical tools for cluster detection is Kulldorff?s spatial scan
statistic [3-4]. This method searches over a given set of spatial regions, finding those regions which maximize a likelihood ratio statistic and thus are most likely to be generated
under the alternative hypothesis of clustering rather than the null hypothesis of no clustering. Randomization testing is used to compute the p-value of each detected region,
correctly adjusting for multiple hypothesis testing, and thus we can both identify potential
clusters and determine whether they are significant. Kulldorff's framework assumes that counts ci are Poisson distributed with ci ~ Po(q bi), where bi represents the (known) census population of cell si and q is the (unknown) underlying disease rate. Then the goal of the scan statistic is to find regions where the disease rate is higher inside the region than outside. The statistic used for this is the likelihood ratio F(S) = P(Data | H1(S)) / P(Data | H0), where the null hypothesis H0 assumes a uniform disease rate q = qall. Under H1(S), we assume that q = qin for all si ∈ S, and q = qout for all si ∈ G − S, for some constants qin > qout. From this, we can derive an expression for F(S) using maximum likelihood estimates of qin, qout, and qall:

F(S) = (Cin/Bin)^Cin (Cout/Bout)^Cout (Call/Ball)^(−Call),   if Cin/Bin > Cout/Bout,

and F(S) = 1 otherwise. In this expression, we have Cin = Σ_S ci, Cout = Σ_{G−S} ci, and Call = Σ_G ci, and similarly for the baselines Bin = Σ_S bi, Bout = Σ_{G−S} bi, and Ball = Σ_G bi.
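The score and the exhaustive rectangle search are easy to sketch; the NumPy illustration below (our own, not the authors' implementation) evaluates F(S) in log form for numerical stability:

```python
import numpy as np

def kulldorff_score(C_in, B_in, C_all, B_all):
    """Likelihood ratio F(S) for a region with count C_in and baseline B_in."""
    C_out, B_out = C_all - C_in, B_all - B_in
    if B_out == 0 or C_in / B_in <= C_out / B_out:
        return 1.0
    xlogx = lambda c, b: c * np.log(c / b) if c > 0 else 0.0
    # log F(S) = C_in log(C_in/B_in) + C_out log(C_out/B_out) - C_all log(C_all/B_all)
    return float(np.exp(xlogx(C_in, B_in) + xlogx(C_out, B_out)
                        - xlogx(C_all, B_all)))

def best_rectangle(counts, baselines):
    """Exhaustive search over the O(N^4) axis-aligned rectangles of the grid."""
    n = counts.shape[0]
    C_all, B_all = counts.sum(), baselines.sum()
    best, best_rect = 0.0, None
    for x1 in range(n):
        for x2 in range(x1 + 1, n + 1):
            for y1 in range(n):
                for y2 in range(y1 + 1, n + 1):
                    F = kulldorff_score(counts[x1:x2, y1:y2].sum(),
                                        baselines[x1:x2, y1:y2].sum(),
                                        C_all, B_all)
                    if F > best:
                        best, best_rect = F, (x1, x2, y1, y2)
    return best, best_rect

rng = np.random.default_rng(4)
b = np.full((8, 8), 10.0)
c = rng.poisson(b).astype(float)
c[2:4, 2:5] += 15.0                          # injected cluster
F_star, S_star = best_rectangle(c, b)
print(F_star, S_star)
```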
Once we have found the highest scoring region S* = arg max_S F(S) of grid G, and its score
F* = F(S*), we must still determine the statistical significance of this region by randomization testing. To do so, we randomly create a large number R of replica grids by sampling
under the null hypothesis ci ∼ Po(qall bi), and find the highest scoring region and its score
for each replica grid. Then the p-value of S* is (Rbeat + 1)/(R + 1), where Rbeat is the number of replica grids with maximum score F* higher than the original grid. If this p-value is less than some threshold (e.g.
0.05), we can conclude that the discovered region is unlikely to have occurred by chance,
and is thus a significant spatial cluster; otherwise, no significant clusters exist.
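The randomization test itself reduces to comparing the original grid's top score against the top scores of the replicas. A sketch (Poisson sampling of the replica grids is elided; the names are ours):

```python
def randomization_p_value(f_star, replica_best_scores):
    """p-value of the best region S*: (R_beat + 1) / (R + 1), where R_beat
    counts replica grids whose best score exceeds the original F*."""
    r_beat = sum(1 for f in replica_best_scores if f > f_star)
    return (r_beat + 1) / (len(replica_best_scores) + 1)

# 2 of 4 replicas beat F* = 5.0, so p = (2 + 1) / (4 + 1) = 0.6
print(randomization_p_value(5.0, [1.2, 6.1, 2.3, 7.0]))
```

The "+1" terms count the original grid as one of its own replicas, which keeps the p-value strictly positive.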
The frequentist scan statistic is a useful tool for cluster detection, and is commonly used in
the public health community for detection of disease outbreaks. However, there are three
main disadvantages to this approach. First, it is difficult to make use of any prior information that we may have, for example, our prior beliefs about the size of a potential outbreak
and its impact on disease rate. Second, the accuracy of this technique is highly dependent
on the correctness of our maximum likelihood parameter estimates. As a result, the model
is prone to parameter overfitting, and may lose detection power in practice because of
model misspecification. Finally, the frequentist scan statistic is very time consuming, and
may be computationally infeasible for large datasets. A naive approach requires searching
over all rectangular regions, both for the original grid and for each replica grid. Since there
are O(N^4) rectangles to search for an N × N grid, the total computation time is O(R N^4),
where R = 1000 is a typical number of replications. In past work [5, 2, 6], we have shown
how to reduce this computation time by a factor of 20-2000x through use of the "fast spatial
scan" algorithm; nevertheless, we must still perform this faster search both for the original
grid and for each replica.
We propose to remedy these problems through the use of a Bayesian spatial scan statistic.
First, our Bayesian model makes use of prior information about the likelihood, size, and
impact of an outbreak. If these priors are chosen well, we should achieve better detection power than the frequentist approach. Second, the Bayesian method uses a marginal
likelihood approach, averaging over possible values of the model parameters qin , qout , and
qall , rather than relying on maximum likelihood estimates of these parameters. This makes
the model more flexible and less prone to overfitting, and reduces the potential impact of
model misspecification. Finally, under the Bayesian model there is no need for randomization testing, and (since we need only to search the original grid) even a naive search can be
performed relatively quickly. We now present the Bayesian spatial scan statistic, and then
compare it to the frequentist approach on the task of detecting simulated disease epidemics.
2 The Bayesian scan statistic
Here we consider the natural Bayesian extension of Kulldorff's scan statistic, moving from
a Poisson to a conjugate Gamma-Poisson model. Bayesian Gamma-Poisson models are
a common representation for count data in epidemiology, and have been used in disease
mapping by Clayton and Kaldor [7], Mollié [8], and others. In disease mapping, the effect
of the Gamma prior is to produce a spatially smoothed map of disease rates; here we instead
focus on computing the posterior probabilities, allowing us to determine the likelihood that
an outbreak has occurred, and to estimate the location and size of potential outbreaks.
For the Bayesian spatial scan, as in the frequentist approach, we wish to compare the null
hypothesis H0 of no clusters to the set of alternative hypotheses H1(S), each representing
a cluster in some region S. As before, we assume Poisson likelihoods, ci ∼ Po(q bi). The
difference is that we assume a hierarchical Bayesian model where the disease rates qin, qout,
and qall are themselves drawn from Gamma distributions. Thus, under the null hypothesis
H0, we have q = qall for all si ∈ G, where qall ∼ Ga(αall, βall). Under the alternative hypothesis H1(S), we have q = qin for all si ∈ S and q = qout for all si ∈ G − S, where we independently draw qin ∼ Ga(αin, βin) and qout ∼ Ga(αout, βout). We discuss how the α and β priors
are chosen below. From this model, we can compute the posterior probabilities P(H1(S) | D)
of an outbreak in each region S, and the probability P(H0 | D) that no outbreak has occurred, given dataset D:

P(H0 | D) = P(D | H0) P(H0) / P(D)   and   P(H1(S) | D) = P(D | H1(S)) P(H1(S)) / P(D),

where P(D) = P(D | H0) P(H0) + Σ_S P(D | H1(S)) P(H1(S)). We discuss the choice of prior
probabilities P(H0) and P(H1(S)) below. To compute the marginal likelihood of the data
given each hypothesis, we must integrate over all possible values of the parameters (qin,
qout, qall) weighted by their respective probabilities. Since we have chosen a conjugate
prior, we can easily obtain a closed-form solution for these likelihoods:
P(D | H0) = ∫ P(qall ∼ Ga(αall, βall)) ∏_{si ∈ G} P(ci ∼ Po(qall bi)) dqall

P(D | H1(S)) = ∫ P(qin ∼ Ga(αin, βin)) ∏_{si ∈ S} P(ci ∼ Po(qin bi)) dqin
               × ∫ P(qout ∼ Ga(αout, βout)) ∏_{si ∈ G−S} P(ci ∼ Po(qout bi)) dqout
Now, computing the integral, and letting C = Σ ci and B = Σ bi, we obtain:

∫ P(q ∼ Ga(α, β)) ∏_si P(ci ∼ Po(q bi)) dq
  = ∫ (β^α q^(α−1) e^(−βq) / Γ(α)) ∏_si ((q bi)^(ci) e^(−q bi) / ci!) dq
  ∝ (β^α / Γ(α)) ∫ q^(α−1) e^(−βq) q^(Σ ci) e^(−q Σ bi) dq
  = (β^α / Γ(α)) ∫ q^(α+C−1) e^(−(β+B)q) dq
  = β^α Γ(α + C) / ((β + B)^(α+C) Γ(α))
Thus we have the following expressions for the marginal likelihoods:

P(D | H0) ∝ (βall)^(αall) Γ(αall + Call) / ((βall + Ball)^(αall + Call) Γ(αall)),  and

P(D | H1(S)) ∝ (βin)^(αin) Γ(αin + Cin) / ((βin + Bin)^(αin + Cin) Γ(αin))
              × (βout)^(αout) Γ(αout + Cout) / ((βout + Bout)^(αout + Cout) Γ(αout)).
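In log space this closed form is a one-liner; using the log-gamma function avoids overflow for realistic counts. A sketch of the Gamma-Poisson marginal likelihood (the constant ∏ bi^ci / ci!, shared by all hypotheses, is dropped; names are ours):

```python
from math import lgamma, log

def log_marginal_likelihood(count, baseline, alpha, beta):
    """log of beta^alpha * Gamma(alpha + C) / ((beta + B)^(alpha + C) * Gamma(alpha)),
    i.e. the Gamma-Poisson marginal likelihood up to a hypothesis-independent constant."""
    return (alpha * log(beta)
            - (alpha + count) * log(beta + baseline)
            + lgamma(alpha + count)
            - lgamma(alpha))

# With alpha = beta = 1 and zero cases over baseline B = 1,
# the expression reduces to 1/2, i.e. log(0.5)
print(log_marginal_likelihood(0, 1, 1.0, 1.0))
```

Each hypothesis score is then a sum of such terms (one for H0; one "in" plus one "out" term for each H1(S)).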
The Bayesian spatial scan statistic can be computed simply by first calculating the score
P(D | H1(S)) P(H1(S)) for each spatial region S, maintaining a list of regions ordered by
score. We then calculate P(D | H0) P(H0), and add this to the sum of all region scores, obtaining the probability of the data P(D). Finally, we can compute the posterior probability
for each region, P(H1(S) | D) = P(D | H1(S)) P(H1(S)) / P(D), as well as P(H0 | D) = P(D | H0) P(H0) / P(D). Then
we can return all regions with non-negligible posterior probabilities, the posterior probability of each, and the overall probability of an outbreak. Note that no randomization testing
is necessary, and thus overall complexity is proportional to the number of regions searched,
e.g. O(N^4) for searching over axis-aligned rectangles in an N × N grid.
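Putting the pieces together, the posterior computation normalizes each score P(D | H1(S)) P(H1(S)) against P(D | H0) P(H0). A self-contained sketch with a uniform region prior; the data layout and function names are our own, not the paper's:

```python
from math import lgamma, log, exp

def log_ml(count, baseline, alpha, beta):
    # Gamma-Poisson marginal likelihood, up to a shared constant
    return (alpha * log(beta) - (alpha + count) * log(beta + baseline)
            + lgamma(alpha + count) - lgamma(alpha))

def scan_posteriors(regions, prior_in, prior_out, prior_all, p1):
    """regions: list of (c_in, b_in, c_out, b_out) tuples, one per region S.
    prior_*: (alpha, beta) pairs.  p1: prior probability of an outbreak.
    Returns (list of P(H1(S) | D), P(H0 | D))."""
    n = len(regions)
    c_all = regions[0][0] + regions[0][2]
    b_all = regions[0][1] + regions[0][3]
    # log [ P(D | H1(S)) * P1 / Nreg ] for each region S
    scores = [log_ml(ci, bi, *prior_in) + log_ml(co, bo, *prior_out)
              + log(p1 / n)
              for ci, bi, co, bo in regions]
    # log [ P(D | H0) * (1 - P1) ]
    scores.append(log_ml(c_all, b_all, *prior_all) + log(1.0 - p1))
    m = max(scores)                      # log-sum-exp for numerical stability
    w = [exp(s - m) for s in scores]
    z = sum(w)
    probs = [x / z for x in w]
    return probs[:-1], probs[-1]

regions = [(30, 10, 70, 90), (12, 10, 88, 90)]   # (c_in, b_in, c_out, b_out)
h1, h0 = scan_posteriors(regions, prior_in=(2.0, 1.0),
                         prior_out=(1.0, 1.0), prior_all=(1.0, 1.0), p1=0.1)
print(h1, h0)
```

By construction the region posteriors and P(H0 | D) sum to one, which is a useful sanity check in practice.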
2.1 Choosing priors
One of the most challenging tasks in any Bayesian analysis is the choice of priors. For
any region S that we examine, we must have values of the parameter priors αin(S), βin(S),
αout(S), and βout(S), as well as the region prior probability P(H1(S)). We must also choose
the global parameter priors αall and βall, as well as the "no outbreak" prior P(H0).
Here we consider the simple case of a uniform region prior, with a known prior probability
of an outbreak P1. In other words, if there is an outbreak, it is assumed to be equally
likely to occur in any spatial region. Thus we have P(H0) = 1 − P1, and P(H1(S)) = P1/Nreg,
where Nreg is the total number of regions searched. The parameter P1 can be obtained from
historical data, estimated by human experts, or can simply be used to tune the sensitivity
and specificity of the algorithm. The model can also be easily adapted to a non-uniform
region prior, taking into account our prior beliefs about the size and shape of outbreaks.
For the parameter priors, we assume that we have access to a large number of days of past
data, during which no outbreaks are known to have occurred. We can then obtain estimated
values of the parameter priors under the null hypothesis by matching the moments of each
Gamma distribution to their historical values. In other words, we set the expectation and
variance of the Gamma distribution Ga(αall, βall) to the sample expectation and variance
of Call/Ball observed in past data: αall/βall = E_sample[Call/Ball], and αall/βall² = Var_sample[Call/Ball].
Solving for αall and βall, we obtain

αall = (E_sample[Call/Ball])² / Var_sample[Call/Ball]   and   βall = E_sample[Call/Ball] / Var_sample[Call/Ball].
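Moment matching a Gamma(α, β) prior to historical ratios is mechanical, since the Gamma mean is α/β and its variance is α/β². A sketch (names are ours):

```python
def gamma_moment_match(ratios):
    """Fit Gamma(alpha, beta) so that alpha/beta and alpha/beta^2 equal the
    sample mean and variance of observed count/baseline ratios."""
    n = len(ratios)
    mean = sum(ratios) / n
    var = sum((r - mean) ** 2 for r in ratios) / n
    alpha = mean ** 2 / var   # alpha = E^2 / Var
    beta = mean / var         # beta  = E / Var
    return alpha, beta

# sample mean 2.0, variance 1.0  ->  alpha = 4.0, beta = 2.0
print(gamma_moment_match([1.0, 3.0]))
```

The same routine serves for the "all", "out", and (after scaling α by m) "in" parameters.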
The calculation of priors αin(S), βin(S), αout(S), and βout(S) is identical except for two differences: first, we must condition on the region S, and second, we must assume the alternative hypothesis H1(S) rather than the null hypothesis H0. Repeating the above derivation for
the "out" parameters, we obtain

αout(S) = (E_sample[Cout(S)/Bout(S)])² / Var_sample[Cout(S)/Bout(S)]   and   βout(S) = E_sample[Cout(S)/Bout(S)] / Var_sample[Cout(S)/Bout(S)],

where Cout(S) and Bout(S) are respectively the total count Σ_{G−S} ci and total baseline Σ_{G−S} bi
outside the region. Note that an outbreak in some region S does not affect the disease rate
outside region S. Thus we can use the same values of αout(S) and βout(S) whether we are
assuming the null hypothesis H0 or the alternative hypothesis H1(S).
On the other hand, the effect of an outbreak inside region S must be taken into account when
computing αin(S) and βin(S); since we assume that no outbreak has occurred in the past
data, we cannot just use the sample mean and variance, but must consider what we expect
these quantities to be in the event of an outbreak. We assume that the outbreak will increase
qin by a multiplicative factor m, thus multiplying the mean and variance of Cin/Bin by m. To
account for this in the Gamma distribution Ga(αin, βin), we multiply αin by m while leaving
βin unchanged. Thus we have

αin(S) = m (E_sample[Cin(S)/Bin(S)])² / Var_sample[Cin(S)/Bin(S)]   and   βin(S) = E_sample[Cin(S)/Bin(S)] / Var_sample[Cin(S)/Bin(S)],

where Cin(S) = Σ_S ci and Bin(S) = Σ_S bi. Since we typically do not know the exact value of
m, here we use a discretized uniform distribution for m, ranging from m = 1 … 3 at intervals
of 0.2. Then scores can be calculated by averaging likelihoods over the distribution of m.
Finally, we consider how to deal with the case where the past values of the counts and
baselines are not given. In this "blind Bayesian" (BBayes) case, we assume that counts
are randomly generated under the null hypothesis ci ∼ Po(q0 bi), where q0 is the expected
ratio of count to baseline under the null (for example, q0 = 1 if baselines are obtained
by estimating the expected value of the count). Under this simple assumption, we can
easily compute the expectation and variance of the ratio of count to baseline under the null
hypothesis: E[C/B] = E[Po(q0 B)]/B = q0 B / B = q0, and Var[C/B] = Var[Po(q0 B)]/B² = q0 B / B² = q0/B. Thus
we have α = q0 B and β = B under the null hypothesis. This gives us αall = q0 Ball, βall =
Ball, αout(S) = q0 Bout(S), βout(S) = Bout(S), αin(S) = m q0 Bin(S), and βin(S) = Bin(S). We
can use a uniform distribution for m as before. In our empirical evaluation below, we
consider both the Bayes and BBayes methods of generating parameter priors.
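The blind-Bayesian priors follow directly from these identities. A small sketch collecting them (the function and dictionary keys are ours):

```python
def bbayes_priors(b_in, b_out, b_all, q0=1.0, m=2.0):
    """'Blind Bayesian' priors when no historical data exists: alpha = q0*B and
    beta = B under the null; the inside-region alpha is inflated by the
    assumed outbreak multiplier m."""
    return {
        "all": (q0 * b_all, b_all),    # (alpha_all, beta_all)
        "out": (q0 * b_out, b_out),    # (alpha_out(S), beta_out(S))
        "in":  (m * q0 * b_in, b_in),  # (alpha_in(S),  beta_in(S))
    }

priors = bbayes_priors(b_in=10.0, b_out=90.0, b_all=100.0, q0=1.0, m=2.0)
print(priors["in"])   # alpha_in = m * q0 * B_in = 20.0, beta_in = B_in = 10.0
```

In the paper's evaluation m is not fixed but averaged over a discretized range; here a single value is shown for brevity.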
3 Results: detection power
We evaluated the Bayesian and frequentist methods on two types of simulated respiratory
outbreaks, injected into real Emergency Department and over-the-counter drug sales data
for Allegheny County, Pennsylvania. All data were aggregated to the zip code level to
ensure anonymity, giving the daily counts of respiratory ED cases and sales of OTC cough
and cold medication in each of 88 zip codes for one year. The baseline (expected count)
for each zip code was estimated using the mean count of the previous 28 days. Zip code
centroids were mapped to a 16 × 16 grid, and all rectangles up to 8 × 8 were examined. We
first considered simulated aerosol releases of inhalational anthrax (e.g. from a bioterrorist
attack), generated by the Bayesian Aerosol Release Detector, or BARD [9]. The BARD
simulator uses a Bayesian network model to determine the number of spores inhaled by
individuals in affected areas, the resulting number and severity of anthrax cases, and the
resulting number of respiratory ED cases on each day of the outbreak in each affected zip
code. Our second type of outbreak was a simulated "Fictional Linear Onset Outbreak"
(or "FLOO"), as in [10]. A FLOO(Δ, T) outbreak is a simple simulated outbreak with
duration T, which generates tΔ cases in each affected zip code on day t of the outbreak
(0 < t ≤ T/2), then generates TΔ/2 cases per day for the remainder of the outbreak. Thus
we have an outbreak where the number of cases ramps up linearly and then levels off.
While this is clearly a less realistic outbreak than the BARD-simulated anthrax attack, it
does have several advantages: most importantly, it allows us to precisely control the slope
of the outbreak curve and examine how this affects our methods' detection ability.
To test detection power, a semi-synthetic testing framework similar to [10] was used: we
first run our spatial scan statistic for each day of the last nine months of the year (the first
three months are used only to estimate baselines and priors), and obtain the score F* for
each day. Then for each outbreak we wish to test, we inject that outbreak into the data, and
obtain the score F*(t) for each day t of the outbreak. By finding the proportion of baseline
days with scores higher than F*(t), we can determine the proportion of false positives we
would have to accept to detect the outbreak on day t. This allows us to compute, for any
given level of false positives, what proportion of outbreaks can be detected, and the mean
number of days to detection. We compare three methods of computing the score F*: the frequentist method (F* is the maximum likelihood ratio F(S) over all regions S), the Bayesian
maximum method (F* is the maximum posterior probability P(H1(S) | D) over all regions
S), and the Bayesian total method (F* is the sum of posterior probabilities P(H1(S) | D) over
all regions S, i.e. the total posterior probability of an outbreak). For the two Bayesian methods,
we consider both Bayes and BBayes methods for calculating priors, thus giving us a total
of five methods to compare (frequentist, Bayes max, BBayes max, Bayes tot, BBayes tot).
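The semi-synthetic evaluation above can be sketched in a few lines: an outbreak counts as detected on the first day whose score F*(t) is beaten by at most the allowed fraction of baseline days. This is our own illustrative reading of the protocol, not the authors' code:

```python
def detection_day(baseline_scores, outbreak_scores, max_fp_fraction):
    """First outbreak day t (1-indexed) whose score F*(t) is exceeded by at
    most max_fp_fraction of baseline days; None if never detected."""
    n = len(baseline_scores)
    for day, f in enumerate(outbreak_scores, start=1):
        false_positive_rate = sum(1 for b in baseline_scores if b > f) / n
        if false_positive_rate <= max_fp_fraction:
            return day
    return None

baseline = list(range(1, 11))    # ten baseline-day scores
# day 1 (score 3.5) is beaten by 7/10 baseline days; day 2 (score 9.7) by 1/10
print(detection_day(baseline, [3.5, 9.7], max_fp_fraction=0.1))
```

Averaging this quantity over injected outbreaks, at 1 false positive per month, yields the "days to detect" entries of Table 1.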
In Table 1, we compare these methods with respect to proportion of outbreaks detected and
Table 1: Days to detect and proportion of outbreaks detected, 1 false positive/month

method      | FLOO ED (4,14) | FLOO ED (2,20) | FLOO ED (1,20) | BARD ED (.125) | BARD ED (.016) | FLOO OTC (40,14) | FLOO OTC (25,20)
frequentist | 1.859 (100%)   | 3.324 (100%)   | 6.122 (96%)    | 1.733 (100%)   | 3.925 (88%)    | 3.582 (100%)     | 5.393 (100%)
Bayes max   | 1.740 (100%)   | 2.875 (100%)   | 5.043 (100%)   | 1.600 (100%)   | 3.755 (88%)    | 5.455 (63%)      | 7.588 (79%)
BBayes max  | 1.683 (100%)   | 2.848 (100%)   | 4.984 (100%)   | 1.600 (100%)   | 3.698 (88%)    | 5.164 (65%)      | 7.035 (77%)
Bayes tot   | 1.882 (100%)   | 3.195 (100%)   | 5.777 (100%)   | 1.633 (100%)   | 3.811 (88%)    | 3.475 (100%)     | 5.195 (100%)
BBayes tot  | 1.840 (100%)   | 3.180 (100%)   | 5.672 (100%)   | 1.617 (100%)   | 3.792 (88%)    | 4.380 (100%)     | 6.929 (99%)
mean number of days to detect, at a false positive rate of 1/month. Methods were evaluated
on seven types of simulated outbreaks: three FLOO outbreaks on ED data, two FLOO outbreaks on OTC data, and two BARD outbreaks (with different amounts of anthrax release)
on ED data. For each outbreak type, each method's performance was averaged over 100 or
250 simulated outbreaks for BARD or FLOO respectively.
In Table 1, we observe very different results for the ED and OTC datasets. For the five runs
on ED data, all four Bayesian methods consistently detected outbreaks faster than the frequentist method. This difference was most evident for the more slowly growing (harder to
detect) outbreaks, especially FLOO(1,20). Across all ED outbreaks, the Bayesian methods showed an average improvement of between 0.13 days (Bayes tot) and 0.43 days
(BBayes max) as compared to the frequentist approach; "max" methods performed substantially better than "tot" methods, and "BBayes" methods performed slightly better than
"Bayes" methods. For the two runs on OTC data, on the other hand, most of the Bayesian
methods performed much worse (over 1 day slower) than the frequentist method. The exception was the Bayes tot method, which again outperformed the frequentist method by an
average of 0.15 days. We believe that the main reason for these differing results is that the
OTC data is much noisier than the ED data, and exhibits much stronger seasonal trends.
As a result, our baseline estimates (using mean of the previous 28 days) are reasonably accurate for ED, but for OTC the baseline estimates will lag behind the seasonal trends (and
thus, underestimate the expected counts for increasing trends and overestimate for decreasing trends). The BBayes methods, which assume E[C/B] = 1 and thus rely heavily on the
accuracy of baseline estimates, are not reasonable for OTC. On the other hand, the Bayes
methods (which instead learn the priors from previous counts and baselines) can adjust for
consistent misestimation of baselines and thus more accurately account for these seasonal
trends. The "max" methods perform badly on the OTC data because a large number of
baseline days have posterior probabilities close to 1; in this case, the maximum region posterior varies wildly from day to day, depending on how much of the total probability is
assigned to a single region, and is not a reliable measure of whether an outbreak has occurred. The total posterior probability of an outbreak, on the other hand, will still be higher
for outbreak than non-outbreak days, so the "tot" methods can perform well on OTC as
well as ED data. Thus, our main result is that the Bayes tot method, which infers baselines
from past counts and uses total posterior probability of an outbreak to decide when to sound
the alarm, consistently outperforms the frequentist method for both ED and OTC datasets.
4 Results: computation time
As noted above, the Bayesian spatial scan must search over all rectangular regions for the
original grid only, while the frequentist scan (in order to calculate statistical significance by
randomization) must also search over all rectangular regions for a large number (typically
R = 1000) of replica grids. Thus, as long as the search time per region is comparable for the
Bayesian and frequentist methods, we expect the Bayesian approach to be approximately
1000x faster. In Table 2, we compare the run times of the Bayes, BBayes, and frequen-
Table 2: Comparison of run times for varying grid size N
method
Bayes (naive)
BBayes (naive)
frequentist (naive)
frequentist (fast)
N = 16
0.7 sec
0.6 sec
12 min
20 sec
N = 32
10.8 sec
9.3 sec
2.9 hrs
1.8 min
N = 64
2.8 min
2.4 min
49 hrs
10.7 min
N = 128
44 min
37 min
?31 days
77 min
N = 256
12 hrs
10 hrs
?500 days
10 hrs
tist methods for searching a single grid and calculating significance (p-values or posterior
probabilities for the frequentist and Bayesian methods respectively), as a function of the
grid size N. All rectangles up to size N/2 were searched, and for the frequentist method
R = 1000 replications were performed. The results confirm our intuition: the Bayesian
methods are 900-1200x faster than the frequentist approach, for all values of N tested.
However, the frequentist approach can be accelerated dramatically using our "fast spatial
scan" algorithm [2], a multiresolution search method which can find the highest scoring
region of a grid while searching only a small subset of regions. Comparing the fast spatial
scan to the Bayesian approach, we see that the fast spatial scan is slower than the Bayesian
method for grid sizes up to N = 128, but slightly faster for N = 256. Thus we now have two
options for making the spatial scan statistic computationally feasible for large grid sizes:
to use the fast spatial scan to speed up the frequentist scan statistic, or to use the Bayesian
scan statistics framework (in which case the naive algorithm is typically fast enough). For
even larger grid sizes, it may be possible to extend the fast spatial scan to the Bayesian
approach: this would give us the best of both worlds, searching only one grid, and using a
fast algorithm to do so. We are currently investigating this potentially useful synthesis.
5 Discussion
We have presented a Bayesian spatial scan statistic, and demonstrated several ways in
which this method is preferable to the standard (frequentist) scan statistics approach. In
Section 3, we demonstrated that the Bayesian method, with a relatively non-informative
prior distribution, consistently outperforms the frequentist method with respect to detection power. Since the Bayesian framework allows us to easily incorporate prior information about size, shape, and impact of an outbreak, it is likely that we can achieve even
better detection performance using more informative priors, e.g. obtained from experts in
the domain. In Section 4, we demonstrated that the Bayesian spatial scan can be computed
in much less time than the frequentist method, since randomization testing is unnecessary.
This allows us to search large grid sizes using a naive search algorithm, and even larger
grids might be searched by extending the fast spatial scan to the Bayesian framework.
We now consider three other arguments for use of the Bayesian spatial scan. First, the
Bayesian method has easily interpretable results: it outputs the posterior probability that
an outbreak has occurred, and the distribution of this probability over possible outbreak
regions. This makes it easy for a user (e.g. public health official) to decide whether to
investigate each potential outbreak based on the costs of false positives and false negatives;
this type of decision analysis cannot be done easily in the frequentist framework. Another
useful result of the Bayesian method is that we can compute a "map" of the posterior probabilities of an outbreak in each grid cell, by summing the posterior probabilities P(H1(S) | D)
of all regions containing that cell. This technique allows us to deal with the case where the
posterior probability mass is spread among many regions, by observing cells which are
common to most or all of these regions. We give an example of such a map below:
Figure 1: Output of Bayesian spatial scan on baseline OTC data, 1/30/05.
Cell shading is based on posterior probability of an outbreak in that cell,
ranging from white (0%) to black (100%). The bold rectangle represents
the most likely region (posterior probability 12.27%) and the darkest cell
is the most likely cell (total posterior probability 86.57%). Total posterior
probability of an outbreak is 86.61%.
Second, calibration of the Bayesian statistic is easier than calibration of the frequentist
statistic. As noted above, it is simple to adjust the sensitivity and specificity of the Bayesian
method by setting the prior probability of an outbreak P1, and then we can "sound the
alarm" whenever the posterior probability of an outbreak exceeds some threshold. In the frequentist method, on the other hand, many regions in the baseline data have sufficiently
high likelihood ratios that no replicas beat the original grid; thus we cannot distinguish the
p-values of outbreak and non-outbreak days. While one alternative is to "sound the alarm"
when the likelihood ratio is above some threshold (rather than when the p-value is below some
threshold), this is technically incorrect: because the baselines for each day of data are different, the distribution of region scores under the null hypothesis will also differ from day
to day, and thus days with higher likelihood ratios do not necessarily have lower p-values.
Third, we argue that it is easier to combine evidence from multiple detectors within the
Bayesian framework, i.e. by modeling the joint probability distribution. We are in the process of examining Bayesian detectors which look simultaneously at the day's Emergency
Department records and over-the-counter drug sales in order to detect emerging clusters,
and we believe that combination of detectors is an important area for future research.
In conclusion, we note that, though both Bayesian modeling [7-8] and (frequentist) spatial scanning [3-4] are common in the spatial statistics literature, this is (to the best of our
knowledge) the first model which combines the two techniques into a single framework.
In fact, very little work exists on Bayesian methods for spatial cluster detection. One notable exception is the literature on spatial cluster modeling [11-12], which attempts to infer
the location of cluster centers by inferring parameters of a Bayesian process model. Our
work differs from these methods both in its computational tractability (their models typically have no closed form solution, so computationally expensive MCMC approximations
are used) and its easy interpretability (their models give no indication as to statistical significance or posterior probability of clusters found). Thus we believe that this is the first
Bayesian spatial cluster detection method which is powerful and useful, yet computationally tractable. We are currently running the Bayesian and frequentist scan statistics on
daily OTC sales data from over 10000 stores, searching for emerging disease outbreaks on
a daily basis nationwide. Additionally, we are working to extend the Bayesian statistic to
fMRI data, with the goal of discovering regions of brain activity corresponding to given
cognitive tasks [13, 6]. We believe that the Bayesian approach has the potential to improve
both speed and detection power of the spatial scan in this domain as well.
References
[1] M. Kulldorff. 1999. Spatial scan statistics: models, calculations, and applications. In J. Glaz and M. Balakrishnan, eds., Scan
Statistics and Applications, Birkhauser, 303-322.
[2] D. B. Neill and A. W. Moore. 2004. Rapid detection of significant spatial clusters. In Proc. 10th ACM SIGKDD Intl. Conf.
on Knowledge Discovery and Data Mining, 256-265.
[3] M. Kulldorff and N. Nagarwalla. 1995. Spatial disease clusters: detection and inference. Statistics in Medicine 14, 799-810.
[4] M. Kulldorff. 1997. A spatial scan statistic. Communications in Statistics: Theory and Methods 26(6), 1481-1496.
[5] D. B. Neill and A. W. Moore. 2004. A fast multi-resolution method for detection of significant spatial disease clusters. In
Advances in Neural Information Processing Systems 16, 651-658.
[6] D. B. Neill, A. W. Moore, F. Pereira, and T. Mitchell. 2005. Detecting significant multidimensional spatial clusters. In
Advances in Neural Information Processing Systems 17, 969-976.
[7] D. G. Clayton and J. Kaldor. 1987. Empirical Bayes estimates of age-standardized relative risks for use in disease mapping.
Biometrics 43, 671-681.
[8] A. Mollié. 1999. Bayesian and empirical Bayes approaches to disease mapping. In A. B. Lawson, et al., eds. Disease Mapping
and Risk Assessment for Public Health. Wiley, Chichester.
[9] W. Hogan, G. Cooper, M. Wagner, and G. Wallstrom. 2004. A Bayesian anthrax aerosol release detector. Technical Report,
RODS Laboratory, University of Pittsburgh.
[10] D. B. Neill, A. W. Moore, M. Sabhnani, and K. Daniel. 2005. Detection of emerging space-time clusters. In Proc. 11th ACM
SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining.
[11] R. E. Gangnon and M. K. Clayton. 2000. Bayesian detection and modeling of spatial disease clustering. Biometrics 56,
922-935.
[12] A. B. Lawson and D. G. T. Denison, eds. 2002. Spatial Cluster Modelling. Chapman & Hall/CRC, Boca Raton, FL.
[13] X. Wang, R. Hutchinson, and T. Mitchell. 2004. Training fMRI classifiers to detect cognitive states across multiple human
subjects. In Advances in Neural Information Processing Systems 16, 709-716.
2,003 | 282 | 474
Mel and Koch
Sigma-Pi Learning:
On Radial Basis Functions and Cortical
Associative Learning
Christof Koch
Bartlett W. Mel
Computation and Neural Systems Program
Caltech, 216-76
Pasadena, CA 91125
ABSTRACT
The goal in this work has been to identify the neuronal elements
of the cortical column that are most likely to support the learning
of nonlinear associative maps. We show that a particular style of
network learning algorithm based on locally-tuned receptive fields
maps naturally onto cortical hardware, and gives coherence to a
variety of features of cortical anatomy, physiology, and biophysics
whose relations to learning remain poorly understood.
1 INTRODUCTION
Synaptic modification is widely believed to be the brain's primary mechanism for
long-term information storage. The enormous practical and theoretical importance
of biological synaptic plasticity has stimulated interest among both experimental
neuroscientists and neural network modelers, and has provided strong incentive for
the development of computational models that can both explain and predict.
We present here a model for the synaptic basis of associative learning in cerebral
cortex. The main hypothesis of this work is that the principal output neurons
of a cortical association area learn functions of their inputs as locally-generalizing
lookup tables. As abstractions, locally-generalizing learning methods have a long
history in statistics and approximation theory (see Atkeson, 1989; Barron & Barron, 1988).

Figure 1: A Neural Lookup Table. A nonlinear function of several variables may be decomposed as a weighted sum over a set of localized "receptive field" units.

Radial Basis Function (RBF) methods are essentially similar (see Broomhead
& Lowe, 1988) and have recently been discussed by Poggio and Girosi (1989) in
relation to regularization theory. As is standard for network learning problems,
locally-generalizing methods involve the learning of a map f(x): x → y from example (x, y) pairs. Rather than operate directly on the input space, however,
input vectors are first "decoded" by a population of "receptive field" units with
centers ei that each represents a local, often radially-symmetric, region in the input
space. Thus, an output unit computes its activation level y = Σ_i w_i g(x − e_i), where g defines a "radial basis function", commonly a Gaussian, and w_i is its weight (Fig. 1). The learning problem can then be characterized as one of finding weights w that minimize the mean squared error over the N-element training set. Learning schemes of this type lend themselves directly to very simple Hebb-type rules for synaptic modification since the initially nonlinear learning problem is transformed into a linear one in the unknown parameters w (see Broomhead & Lowe, 1988).
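The RBF decomposition just described can be made concrete with a short sketch (not from the paper); the Gaussian width, the number of centers, and the least-squares solve for w below are illustrative choices:

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian receptive-field activations g(x - e_i), one column per center e_i."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy 1-d problem: represent f(x) = sin(x) as y = sum_i w_i g(x - e_i)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y = np.sin(x[:, 0])
centers = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]

G = rbf_features(x, centers)
w, *_ = np.linalg.lstsq(G, y, rcond=None)   # the remaining problem is linear in w

x_test = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y_hat = rbf_features(x_test, centers) @ w
print(np.max(np.abs(y_hat - np.sin(x_test[:, 0]))))
```

In the paper's setting the weights would be found by a Hebb-type rule rather than a batch solve; the least-squares step here only demonstrates that, once the receptive fields are fixed, the problem is linear in w.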
Locally-generalizing learning algorithms as neurobiological models date at least to
Albus (1971) and Marr (1969, 1970); they have also been explored more recently by
a number of workers with a more pure computational bent (Broomhead & Lowe,
1988; Lapedes & Farber, 1988; Mel, 1988, 1989; Miller, 1988; Moody, 1989; Poggio
& Girosi, 1989).
2 SIGMA-PI LEARNING
Unlike the classic thresholded linear unit that is the mainstay of many current
connectionist models, the output of a sigma-pi unit is computed as a sum of contributions from a set of independent multiplicative clusters of input weights (adapted
from Rumelhart & McClelland, 1986): y = σ(Σ_j w_j C_j), where C_j = Π_i v_i x_i is the product of weighted inputs to cluster j, w_j is the weight on cluster j as a whole, and σ is an optional thresholding nonlinearity applied to the sum of total cluster activity. During learning, the output may also be clamped by an unconditioned teacher input, i.e. such that y = t_i(x). Units of this general type were first proposed
by Feldman & Ballard (1982), and have been used occasionally by other connectionist modelers, most commonly to allow certain inputs to gate others or to allow
the activation of one unit to control the strength of interconnection between two
other units (Rumelhart & McClelland, 1986). The use of sigma-pi units as function
lookup tables was suggested by Feldman & Ballard (1982), who cited a possible
relevance to local dendritic interactions among synaptic inputs (see also Durbin &
Rumelhart, 1989).
In the present work, the specific nonlinear interaction among inputs to a sigma-pi
cluster is not of primary theoretical importance. The crucial property of a cluster
is that its output should be AND-like, i.e. selective for the simultaneous activity of all of its k input lines¹.
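A minimal sigma-pi unit can be sketched as follows (illustrative, not from the paper); the cluster memberships and weights are arbitrary, the input weights v_i are left at 1, and σ is taken as the identity:

```python
import numpy as np

def sigma_pi(x, clusters, w):
    """y = sum_j w_j * C_j, with C_j the product of the inputs in cluster j."""
    c = np.array([np.prod(x[list(idx)]) for idx in clusters])
    return w @ c

# Three multiplicative clusters over four input lines
clusters = [(0, 1), (1, 2, 3), (0, 3)]
w = np.array([1.0, 2.0, 0.5])

y_on = sigma_pi(np.array([1.0, 1.0, 1.0, 1.0]), clusters, w)
y_off = sigma_pi(np.array([1.0, 1.0, 0.0, 1.0]), clusters, w)  # input line 2 silent
print(y_on, y_off)
```

Silencing a single input line zeroes every cluster that contains it, which is exactly the AND-like selectivity the text requires of a cluster.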
2.1 NETWORK ARCHITECTURE
We assume an underlying d-dimensional input space X ∈ R^d over which functions are to be learned. Vectors in X are represented by a population X of N units whose state is denoted by x ∈ R^N. Within X, each of the d dimensions of X is
individually value-coded, i.e. consists of a set of units with gaussian receptive fields
distributed in overlapping fashion along the range of allowable parameter values,
for example, the angle of a joint, or the orientation of a visual stimulus at a specific
retinal location. (A more biologically realistic case would allow for individual units
in X to have multi-dimensional gaussian receptive fields, for example a 4-d visual
receptive field encoding retinal x and y, edge orientation, and binocular disparity.)
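The value-coding just described can be sketched with a 1-d population; the number of units, the value range, and the tuning width below are arbitrary choices for illustration:

```python
import numpy as np

def value_code(theta, centers, width):
    """One Gaussian receptive field per unit, tiling the value range."""
    return np.exp(-((theta - centers) ** 2) / (2.0 * width ** 2))

# Ten units tile a joint angle in [0, 180] degrees
centers = np.linspace(0.0, 180.0, 10)
activity = value_code(45.0, centers, width=15.0)
print(centers[np.argmax(activity)])  # the unit tuned nearest 45 degrees is most active
```

A population like this, one per input dimension, supplies the overlapping input lines x_i that the multiplicative clusters combine.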
We assume a map t(x): x → y is to be learned, where the components of y ∈ R^M are represented by an output population Y of M units. According to the familiar single-layer feedforward network learning paradigm, X projects to Y via an "associational"
pathway with modifiable synapses. We consider the task of a single output unit y_i (hereafter denoted by y), whose job is to estimate the underlying teacher function t_i(x): x → y from examples. Output unit y is assumed to have access to the entire input vector x, and a single unconditioned teacher input t_i. We further assume that
1 A local threshold function can act as an AND in place of a multiplication, and for purposes of
biological modeling, is a more likely dendritic mechanism than pure multiplication. In continuing
work, we are exploring the more detailed interactions between Hebb-type learning rules and various
post-synaptic nonlinearities, specifically the NMDA channel, that could underlie a multiplication
relation among nearby inputs.
all possible clusters C_j of size 1 through k = k_max pre-exist in y's dendritic field,
with cluster weights Wj initially set to 0, and input weights Vi within each cluster set
equal to 1. Following from our assumption that each of the input lines x_i represents a 1-dimensional Gaussian receptive field in X, a multiplicative cluster of k such
inputs can yield a k-dimensional receptive field in X that may then be weighted .
In this way, a sigma-pi unit can directly implement an RBF decomposition over X.
Additionally, since a sigma-pi unit is essentially a massively parallel lookup table
with clusters as stored table entries, it is significant that the sigma-pi function is
inherently modular, such that groups of sigma-pi units that receive the same teacher
signal can, by simply adding their outputs, act as a single much larger virtual sigmapi unit with correspondingly increased table capacity2. A neural architecture that
allows system storage capacity to be multiplied by a factor of k by growing k neurons
in the place of one, is one that should be strongly preferred by biological evolution.
2.2 THE LEARNING RULE
The cluster weights w_j are modified during training according to the following self-normalizing Hebb rule:

ẇ_j = α c_jp t_p − β w_j,

where α and β are small positive constants, and c_jp and t_p are, respectively, the jth cluster response and teacher signal in state p. The steady state of this learning rule occurs when w_j = (α/β)⟨c_j t⟩, which tries to maximize the correlation³ of cluster output and teacher signal over the training set, while minimizing total synaptic weight for all clusters. The input weights v_i are unmodified during learning, representing the degree of cluster membership for each input line.
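The rule and its steady state can be checked numerically; the constants α = β = 0.01 and the random cluster/teacher statistics below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.01, 0.01
P = 500
c = rng.random(P)             # cluster responses c_jp over the training set
t = rng.random(P)             # teacher signals t_p

w = 0.0                       # one cluster weight w_j
for _ in range(200):          # repeated sweeps through the training set
    for p in range(P):
        w += alpha * c[p] * t[p] - beta * w

steady = (alpha / beta) * np.mean(c * t)   # predicted fixed point (alpha/beta)<c t>
print(w, steady)
```

The simulated weight settles near the predicted average: the Hebbian term grows weights on clusters correlated with the teacher, while the decay term keeps total synaptic weight bounded.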
We briefly note that because this Hebb-type learning rule is truly local, i.e. depends
only upon activity levels available directly at a synapse to be modified, it may be
applied transparently to a group of neurons driven by the same global teacher input
(see above discussion of sigma-pi modularity). Error-correcting rules that modify
synapses based on a difference between desired vs. actual neural output do not
share this property.
3 TOWARD A BIOLOGICAL MODEL
In the remainder of this paper we examine the hypothesis that sigma-pi units underlie associative learning in cerebral cortex. To do so, we identify the six essential
elements of the sigma-pi learning scheme and discuss the evidence for each: i) a population of output neurons, ii) a focal teacher input, iii) a diffuse association input,
iv) Hebb-type synaptic plasticity, v) local dendritic multiplication (or thresholding),
and vi) a cluster reservoir.
Following Eccles (1985), we concern ourselves here with the cytoarchitecture of
"generic" association cortex, rather than with the more specialized (and more often
studied) primary sensory and motor areas.

²This assumes the global thresholding nonlinearity σ is weak, i.e. has an extended linear range.
³Strictly speaking, the average product.

Figure 2: Elements of the cortical column in a generic association cortex. [Schematic: association fibers in layer I, layers IV and V/VI, association inputs, and a specific afferent.]

We propose the cortical circuit of fig. 2 to contain all of the basic elements necessary for associative learning, closely
paralleling the accounts of Marr (1970) and Eccles (1985) at this level of description.
We limit our focus to the cortically-projecting "output" pyramids of layers II and III,
which are posited to be sigma-pi units. These cells are a likely locus of associative
learning as they are well situated to receive both teacher and associational input
pathways. With reference to the modularity property of sigma-pi learning (sec. 2.1),
we interpret the aggregates of layer II/III pyramidal cells whose apical dendrites
rise toward the cortical surface in tight clumps (on the order of 100 cells, Peters,
1989), as a single virtual sigma-pi unit.
3.1 THE TEACHER INPUT
We tentatively define the "teacher" input to an association area to be those inputs
that terminate primarily in layer IV onto spiny stellate cells or small pyramidal
cells. Lund et al. (1985) points out that spiny stellate cells are most numerous
in primary sensory areas, but that the morphologically similar class of small pyramidal cells in layer IV seem to mimic the spiny stellates in their local, vertically
oriented excitatory axonal distributions. The layer IV spiny stellates are known
to project primarily up (but also down) a narrow vertical cylinder in which they
sit, probably making powerful "cartridge" synapses onto overlying pyramidal cells.
These excitatory interneurons are presumably capable of strongly deplorarizing entire output cells (Szentagothai, 1977), thus providing the needed unit-wide teacher
signals to the output neurons. We therefore assume this teacher pathway plays a
role analogous to the presumed role of cerebellar climbing fibers (Albus, 1971; Marr, 1969). The inputs to layer IV can be of both thalamic and/or cortical origin.
3.2 THE ASSOCIATIONAL INPUT
A second major form of extrinsic excitatory input with access to layer II/III pyramidal cells is the massive system of horizontal fibers in layer I. The primary source
of these fibers is currently believed to be long range excitatory association fibers
from both other cortical and subcortical areas (Jones, 1981). In accordance with
Marr (1970) and Eccles (1985), we interpret this system of horizontal fibers, which
virtually permeates the dendritic fields of the layer II/III pyramidal cells, as the primary conditioned input pathway at which cortical associative learning takes place.
There is evidence that an individual layer I fiber can make excitatory synapses
on apical dendrites of pyramidal cells across an area of cortex 5-6mm in diameter
(Szentagothai, 1977).
3.3 HEBB RULES, MULTIPLICATION, AND CLUSTERING
The process of cluster formation in sigma-pi learning is driven by a local Hebb-type
rule. Long term Hebb-type synaptic modification has been demonstrated in several
cortical areas, dependent only upon local post-synaptic depolarization (Kelso et al.,
1986), and thought to be mediated by the the voltage-dependent NMDA channel
(see Brown et al., 1988). In addition to the standard tendency for LTP with pre- and
post-synaptic correlation, sigma-pi learning implicitly specifies cooperation among
pre-synaptic units, in the sense that the largest increase in cluster weight Wj occurs
when all inputs Xi to a cluster are simultaneously and strongly active. This type of
cooperation among pre-synaptic inputs should follow directly from the assumption
that local post-synaptic depolarization is the key ingredient in the induction of LTP.
In other words, like-activated synaptic inputs must inevitably contribute to each
other's enhancement during learning to the extent they are clustered on a postsynaptic dendrite. This type of cooperativity in learning gives key importance to
dendritic space in neural learning, and has not until very recently been modelled at
a biophysical level (T. Brown, pers. comm; J. Moody, pers. comm.).
In addition to its possible role in enhancing like-activated synaptic clusters however,
the NMDA channel may be hypothesized to simultaneously underlie the "multiplicative" interaction among neighboring inputs needed for ensuring cluster-selectivity
in sigma-pi learning. Thus, if sufficiently endowed with NMDA channels, cortical
pyramidal cells could respond highly selectively to associative input "vectors" whose
active afferents are spatially clumped, rather than scattered uniformly, across the
dendritic arbor. The possibility that dendritic computations could include local
multiplicative nonlinearities is widely accepted (e.g. Shepherd et al., 1985; Koch et
al., 1983).
3.4 A VIRTUAL CLUSTER RESERVOIR
The abstract definition of sigma-pi learning specifies that all possible clusters Cj of
size 1 < k < k max pre-exist on the "dendrites" of each virtual sigma-pi unit (which
we have previously proposed to consist of a vertically aggregated clump of 100
pyramidal cells that receive the same teacher input from layer 4). During learning,
the weight on each cluster is governed by a simple Hebb rule. Since the number of
possible clusters of size k overwhelms total available dendritic space for even small
k 4 , it must be possible to create a cluster when it is needed. We propose that
the complex 3-d mesh of axonal and dendritic arborizations in layer 1 are ideal for
maximizing the probability that arbitrary (small) subsets of association axons cross
near to each other in space at some point in their collective arborizations. Thus,
we propose that the tangle of axons within a dendrite's receptive field gives rise to
an enormous set of almost-clusters, poised to "latch" onto a post-synaptic dendrite
when called for by a Hebb-type learning rule. This geometry of pre- and postsynaptic interface is to be strongly contrasted with the architecture of cerebellum,
where the afferent "parallel" fibers have no possibility of clustering on post-synaptic
dendrites.
Known biophysical mechamisms for the sprouting and guidance of growth cones
during development, in some cases driven by neural activity seem well suited to the
task of cluster formation over small distances in the adult brain.
4 CONCLUSIONS
The locally-generalizing, table-based sigma-pi learning scheme is a parsimonious
mechanism that can account for the learning of nonlinear associative maps in cerebral cortex. Only a single layer of excitatory synapses is modified, under the control
of a Hebb-type learning rule. Numerous open questions remain however, for example the degree to which clusters of active synapses scattered across a pyramidal
dendritic tree can function independently, providing the necessary AND-like selectivity.
Acknowledgements
Thanks are due to Ojvind Bernander, Rodney Douglas, Richard Durbin, Kamil
Grajski, David Mackay, and John Moody for numerous helpful discussions. We
acknowledge support from the Office of Naval Research, the James S. McDonnell
Foundation, and the Del Webb Foundation.
References
Albus, J.S. A theory of cerebellar function. Math. Biosci., 1971, 10, 25-61.
Atkeson, C.G. Using associative content-addressable memories to control robots, MIT A.I. Memo
1124, September 1989.
Barron, A.R. & Barron, R.L. Statistical learning networks: a unifying view. Presented at the 1988
Symposium on the Interface: Statistics and Computing Science, Reston, Virginia.
Bliss, T.V.P. & Lømo, T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J. Physiol., 1973, 232, 331-356.
⁴For example, assume a 3-d learning problem and clusters of size k = 3; with 100 afferents per input dimension, there are 100³ = 10⁶ possible clusters. If we assume 5,000 available association synapses per pyramidal cell, there is dendritic space for at most 166,000 clusters of size 3.
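The counting in this footnote can be checked directly; treating the ~100-cell clump as one virtual unit is my reading of the text, and the quoted 166,000 appears to be the result rounded down:

```python
afferents_per_dim, k = 100, 3
possible_clusters = afferents_per_dim ** k                  # candidate clusters of size 3
synapses_per_cell, cells_per_virtual_unit = 5_000, 100
storable = synapses_per_cell * cells_per_virtual_unit // k  # clusters that physically fit
print(possible_clusters, storable)
```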
Broomhead, D.S. & Lowe, D. Multivariable functional interpolation and adaptive networks. Complex Systems, 1988, 2, 321-355.
Brown, T.H., Chapman, P.F., Kairiss, E.W., & Keenan, C.L. Long-term synaptic potentiation.
Science, 1988, 242, 724-728.
Durbin, R. & Rumelhart, D.E. Product units: a computationally powerful and biologically plausible extension to backpropagation networks. Complex Systems, 1989, 1, 133.
Eccles, J.C. The cerebral neocortex: a theory of its operation. In Cerebral Cortex, vol. 2, A.
Peters & E.G. Jones, (Eds.), Plenum: New York, 1985.
Feldman, J.A. & Ballard, D.H. Connectionist models and their properties. Cognitive Science, 1982, 6, 205-254.
Giles, C.L. & Maxwell, T. Learning, invariance, and generalization in high-order neural networks.
Applied Optics, 1987, 26(23), 4972-4978.
Hebb, D.O. The organization of behavior. New York: Wiley, 1949.
Jones, E.G. Anatomy of cerebral cortex: columnar input-output relations. In The organization
oj cerebral cortex, F.O. Schmitt, F.G. Worden, G. Adelman, & S.G. Dennis, (Eds.), MIT Press:
Cambridge, MA, 1981.
Kelso, S.R., Ganong, A.H., & Brown, T.H. Hebbian synapses in hippocampus. PNAS USA, 1986,
83, 5326-5330.
Koch, C., Poggio, T., & Torre, V. Nonlinear interactions in a dendritic tree: localization, timing,
and role in information processing. PNAS, 1983, 80, 2799-2802.
Lapedes, A. & Farber, R. How neural nets work. In Neural Information Processing Systems, D.Z.
Anderson, (Ed.), American Institute of Physics: New York, 1988.
Lund, J.S. Spiny stellate neurons. In Cerebral Cortex, vol. 1, A. Peters & E.G. Jones, (Eds.),
Plenum: New York, 1985.
Marr, D. A theory for cerebral neocortex. Proc. Roy. Soc. Lond. B, 1970, 176, 161-234.
Marr, D. A theory of cerebellar cortex. J. Physiol., 1969, 202, 437-470.
Mel, B.W. MURPHY: A robot that learns by doing. In Neural Information Processing Systems,
D.Z. Anderson, (Ed.), American Institute of Physics: New York, 1988.
Mel, B.W. MURPHY: A neurally inspired connectionist approach to learning and performance in
vision-based robot motion planning. Ph.D. thesis, University of Illinois, 1989.
Miller W.T., Hewes, R.P., Glanz, F.H., & Kraft, L.G. Real time dynamic control of an industrial manipulator using a neural network based learning controller. Technical Report, Dept. of
Electrical and Computer Engineering, University of New Hampshire, 1988.
Moody, J. & Darken, C. Learning with localized receptive fields. In Proc. 1988 Connectionist Models Summer School, Morgan-Kaufmann, 1988.
Peters, A. Plenary address, 1989 Soc. Neurosc. Meeting, Phoenix, AZ.
Poggio, T. & Girosi, F. Learning, networks and approximation theory. Science, In press.
Rumelhart, D.E., Hinton, G.E., & McClelland, J.L. A general framework for parallel distributed
processing. In Parallel distributed processing: explorations in the microstructure of cognition, vol.
1, D.E. Rumelhart, J.L. McClelland, (Eds.), Cambridge, MA: Bradford, 1986.
Shepherd, G.M., Brayton, R.K., Miller, J.P., Segev, I., Rinzel, J., & Rall, W. Signal enhancement
in distal cortical dendrites by means of interactions between active dendritic spines. PNAS, 1985,
82, 2192-2195.
Szentagothai, J. The neuron network of the cerebral cortex: a functional interpretation. (1977)
Proc. R. Soc. Lond. B., 201:219-248.
Group and Topic Discovery
from Relations and Their Attributes
Xuerui Wang, Natasha Mohanty, Andrew McCallum
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
{xuerui,nmohanty,mccallum}@cs.umass.edu
Abstract
We present a probabilistic generative model of entity relationships and
their attributes that simultaneously discovers groups among the entities
and topics among the corresponding textual attributes. Block-models of
relationship data have been studied in social network analysis for some
time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships.
Significantly, joint inference allows the discovery of topics to be guided
by the emerging groups, and vice-versa. We present experimental results
on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen
years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model's joint inference discovers
more cohesive groups and improved topics.
1 Introduction
The field of social network analysis (SNA) has developed mathematical models that discover patterns in interactions among entities. One of the objectives of SNA is to detect
salient groups of entities. Group discovery has many applications, such as understanding
the social structure of organizations or native tribes, uncovering criminal organizations,
and modeling large-scale social networks in Internet services such as Friendster.com or
LinkedIn.com. Social scientists have conducted extensive research on group detection,
especially in fields such as anthropology and political science. Recently, statisticians and
computer scientists have begun to develop models that specifically discover group memberships [5, 2, 7]. One such model is the stochastic Blockstructures model [7], which discovers
the latent groups or classes based on pair-wise relation data. A particular relation holds between a pair of entities (people, countries, organizations, etc.) with some probability that
depends only on the class (group) assignments of the entities. This model is extended in
[4] to support an arbitrary number of groups by using a Chinese Restaurant Process prior.
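The core assumption of the Blockstructures model, that a relation between two entities depends only on their group assignments, can be sketched in a few lines; the group labels and link probabilities below are toy values rather than quantities inferred by the model:

```python
import numpy as np

rng = np.random.default_rng(0)

groups = np.array([0, 0, 1, 1, 1])      # latent group label for each of 5 entities
eta = np.array([[0.9, 0.1],             # eta[g, h] = P(relation | groups g and h)
                [0.1, 0.8]])

def link_prob(i, j):
    """Probability of a relation between entities i and j under the blockmodel."""
    return eta[groups[i], groups[j]]

# Sample a relation matrix: dense within groups, sparse between them
adj = np.array([[rng.random() < link_prob(i, j) for j in range(5)] for i in range(5)])
print(link_prob(0, 1), link_prob(0, 2))
```

The GT model described next extends this picture by conditioning the group assignments on a latent topic, so that different divisions of the entities into groups can arise for different topics.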
The aforementioned models discover latent groups by examining only whether one or more
relations exist between a pair of entities. The Group-Topic (GT) model presented in this paper, on the other hand, considers both the relations between entities and also the attributes
of the relations (e.g., the text associated with the relations) when assigning group memberships. The GT model can be viewed as an extension of the stochastic Blockstructures
model [7] with the key addition that group membership is conditioned on a latent variable,
which in turn is also associated with the attributes of the relation. In our experiments, the
attributes of relations are words, and the latent variable represents the topic responsible for
generating those words. Our model captures the (language) attributes associated with interactions, and uses distinctions based on these attributes to better assign group memberships.
Consider a legislative body and imagine its members forming coalitions (groups), and voting accordingly. However, different coalitions arise depending on the topic of the resolution
up for a vote. In the GT model, the discovery of groups is guided by the emerging topics,
and the forming of topics is shaped by emerging groups. Resolutions that would have been
assigned the same topic in a model using words alone may be assigned to different topics if they exhibit distinct voting patterns. Topics may be merged if the entities vote very
similarly on them. Likewise, multiple different divisions of entities into groups are made
possible by conditioning them on the topics.
The importance of modeling the language associated with interactions between people has
recently been demonstrated in the Author-Recipient-Topic (ART) model [6]. It can measure role similarity by comparing the topic distributions for two entities. However, the
ART model does not explicitly discover groups formed by entities. When forming latent groups, the GT model simultaneously discovers salient topics relevant to relationships
between entities, topics which models that examine only words are unable to detect.
We demonstrate the capabilities of the GT model by applying it to two large sets of voting data: one from US Senate and the other from the General Assembly of the UN. The
model clusters voting entities into coalitions and simultaneously discovers topics for word
attributes describing the relations (bills or resolutions) between entities. We find that the
groups obtained from the GT model are significantly more cohesive (p-value < 0.01) than
those obtained from the Blockstructures model. The GT model also discovers new and
more salient topics that help better predict entities' behaviors.
2 Group-Topic Model
The Group-Topic model is a directed graphical model that clusters entities with relations
between them, as well as attributes of those relations. The relations may be either symmetric or asymmetric and have multiple attributes. In this paper, we focus on symmetric
relations and have words as the attributes on relations. The graphical model representation
of the model and our notation are shown in Figure 1.
Without considering the topics of events, or by treating all events in a corpus as reflecting a single
topic, the simplified model becomes equivalent to
the stochastic Blockstructures model [7]. Here, each
event defines a relationship, e.g., whether in the event
two entities' group(s) behave the same way or not.
On the other hand, in our model a relation may also
have multiple attributes. When we consider the complete model, the dataset is dynamically divided into
T sub-blocks each of which corresponds to a topic.
The generative process of the GT model is as follows:

$t_b \sim \text{Uniform}(1/T)$
$w_{it} \mid \phi_t \sim \text{Multinomial}(\phi_t)$
$\phi_t \mid \eta \sim \text{Dirichlet}(\eta)$
$g_{it} \mid \theta_t \sim \text{Multinomial}(\theta_t)$
$\theta_t \mid \alpha \sim \text{Dirichlet}(\alpha)$
$v_{ij}^{(b)} \mid \gamma_{g_i g_j}^{(b)} \sim \text{Binomial}(\gamma_{g_i g_j}^{(b)})$
$\gamma_{gh}^{(b)} \mid \beta \sim \text{Beta}(\beta)$
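The generative process can be sketched as a small forward simulation (a non-authoritative illustration; the dimensions, hyperparameter values, and the eight tokens per event are arbitrary choices of ours, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
T, G, V, S, B = 3, 4, 50, 10, 5           # topics, groups, vocab size, entities, events
alpha = np.ones(G)                         # Dirichlet prior on group proportions
eta = np.ones(V)                           # Dirichlet prior on word proportions
beta = (1.0, 1.0)                          # Beta prior on agreement probabilities

theta = rng.dirichlet(alpha, size=T)       # theta_t: per-topic distribution over groups
phi = rng.dirichlet(eta, size=T)           # phi_t: per-topic distribution over words

for b in range(B):
    t = rng.integers(T)                                # t_b ~ Uniform(1/T)
    words = rng.choice(V, size=8, p=phi[t])            # w ~ Multinomial(phi_t)
    g = rng.choice(G, size=S, p=theta[t])              # g_it ~ Multinomial(theta_t)
    gamma_b = rng.beta(beta[0], beta[1], size=(G, G))  # gamma_gh^(b) ~ Beta(beta)
    for i in range(S):
        for j in range(i + 1, S):
            v = rng.random() < gamma_b[g[i], g[j]]     # v_ij^(b): same (True) or not
```

In inference these multinomial and Beta parameters are integrated out, as described next.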
We want to perform joint inference on (text) attributes and relations to obtain topic-wise
group memberships. We employ Gibbs sampling to conduct inference. Note that we adopt
conjugate priors in our setting, and thus we can easily integrate out $\theta$, $\phi$ and $\gamma$ to decrease
SYMBOL      DESCRIPTION
g_st        entity s's group assignment in topic t
t_b         topic of an event b
w_k^(b)     the kth token in the event b
v_ij^(b)    whether entity i and j's group(s) behaved the same (1) or differently (2) on the event b
S           number of entities
T           number of topics
G           number of groups
B           number of events
V           number of unique words
N_b         number of word tokens in the event b
S_b         number of entities who participated in the event b

Figure 1: The Group-Topic model and notations used in this paper (the graphical-model diagram is not reproduced here).
the uncertainty associated with them. In our case we need to compute the conditional distributions $P(g_{st} \mid \mathbf{w}, \mathbf{v}, \mathbf{g}_{-st}, \mathbf{t}, \alpha, \beta, \eta)$ and $P(t_b \mid \mathbf{w}, \mathbf{v}, \mathbf{g}, \mathbf{t}_{-b}, \alpha, \beta, \eta)$, where $\mathbf{g}_{-st}$ denotes the group assignments for all entities except entity s in topic t, and $\mathbf{t}_{-b}$ represents the topic assignments for all events except event b. Beginning with the joint probability of a dataset, and using the chain rule, we can obtain the conditional probabilities conveniently. In our setting, the relationship we are investigating is always symmetric, so we do not distinguish $R_{ij}$ and $R_{ji}$ in our derivations (only $R_{ij}$ with $i \le j$ remain). Thus
$$P(g_{st} \mid \mathbf{v}, \mathbf{g}_{-st}, \mathbf{w}, \mathbf{t}, \alpha, \beta, \eta) \;\propto\; \frac{\alpha_{g_{st}} + n_{tg_{st}} - 1}{\sum_{g=1}^{G} (\alpha_g + n_{tg}) - 1} \prod_{b=1}^{B} \left[ \prod_{h=1}^{G} \frac{\prod_{k=1}^{2} \prod_{x=1}^{d_{g_{st}hk}^{(b)}} \left( \beta_k + m_{g_{st}hk}^{(b)} - x \right)}{\prod_{x=1}^{\sum_{k=1}^{2} d_{g_{st}hk}^{(b)}} \left( \sum_{k=1}^{2} \left( \beta_k + m_{g_{st}hk}^{(b)} \right) - x \right)} \right]^{I(t_b = t)},$$
where $n_{tg}$ represents how many entities are assigned to group g in topic t, $c_{tv}$ represents how many tokens of word v are assigned to topic t, $m^{(b)}_{ghk}$ represents how many times groups g and h vote the same (k = 1) or differently (k = 2) on event b, $I(t_b = t)$ is an indicator function, and $d^{(b)}_{g_{st}hk}$ is the increase in $m^{(b)}_{g_{st}hk}$ if entity s were assigned to group $g_{st}$ compared with not considering s at all (if $I(t_b = t) = 0$, we ignore the increase in event b).
$$P(t_b \mid \mathbf{v}, \mathbf{g}, \mathbf{w}, \mathbf{t}_{-b}, \alpha, \beta, \eta) \;\propto\; \frac{\prod_{v=1}^{V} \prod_{x=1}^{e_v^{(b)}} \left( \eta_v + c_{t_b v} - x \right)}{\prod_{x=1}^{\sum_{v=1}^{V} e_v^{(b)}} \left( \sum_{v=1}^{V} \left( \eta_v + c_{t_b v} \right) - x \right)} \prod_{g=1}^{G} \prod_{h=g}^{G} \frac{\prod_{k=1}^{2} \Gamma\left( \beta_k + m_{ghk}^{(b)} \right)}{\Gamma\left( \sum_{k=1}^{2} \left( \beta_k + m_{ghk}^{(b)} \right) \right)},$$

where $e_v^{(b)}$ is the number of tokens of word v in event b.
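To make the flavor of these collapsed-Gibbs computations concrete, the sketch below resamples a single entity's group assignment in a simplified, single-topic Blockstructures-style setting: symmetric same/different observations on pairs, with the Dirichlet and Beta parameters integrated out via sequential predictive probabilities. This is a hedged illustration of the kind of computation involved, not the paper's exact multi-topic conditional; the function name and the encoding v[i, j] = 1 for "same" and 2 for "different" are our own choices.

```python
import numpy as np

def resample_group(s, g, v, alpha, beta, G):
    """Collapsed-Gibbs conditional over groups for entity s in a simplified,
    single-topic Blockstructures-style model. v[i, j] = 1 means entities i
    and j behaved the same, 2 means differently."""
    S = len(g)
    others = [j for j in range(S) if j != s]
    n = np.bincount([g[j] for j in others], minlength=G)  # group sizes without s
    # pairwise agreement counts m[a, h, k] without entity s (k=0 same, k=1 diff)
    m = np.zeros((G, G, 2))
    for i in others:
        for j in others:
            if i < j:
                a, h = sorted((g[i], g[j]))
                m[a, h, 0 if v[i, j] == 1 else 1] += 1
    logp = np.log(alpha + n)                   # collapsed Dirichlet count term
    for a in range(G):
        mm = m.copy()
        for j in others:                       # sequential Beta-Bernoulli predictive
            k = 0 if v[s, j] == 1 else 1
            x, h = sorted((a, g[j]))
            logp[a] += np.log((beta[k] + mm[x, h, k]) /
                              (beta.sum() + mm[x, h].sum()))
            mm[x, h, k] += 1
    p = np.exp(logp - logp.max())
    return p / p.sum()

# four entities; 0 and 1 always agree, 2 and 3 always agree, cross-pairs differ
v = np.array([[0, 1, 2, 2],
              [1, 0, 2, 2],
              [2, 2, 0, 1],
              [2, 2, 1, 0]])
p = resample_group(0, [0, 0, 1, 1], v, np.ones(2), np.ones(2), G=2)
```

Since entity 0 agrees only with the member of group 0, the conditional puts most of its mass on group 0.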
The GT model uses information from two different modalities whose likelihoods are generally not directly comparable, since the number of occurrences of each type may vary
greatly. Thus we raise the first term in the above formula to a power, as is common in
speech recognition when the acoustic and language models are combined.
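In log space this weighting is just a multiplicative factor on the text term; a minimal sketch (the function name is ours):

```python
def combined_log_likelihood(loglik_votes, loglik_text, text_weight):
    """Combine the vote and text modalities, raising the text likelihood
    to the power `text_weight` (i.e., scaling its log-likelihood)."""
    return loglik_votes + text_weight * loglik_text

# the paper uses a weight of 5 for the Senate data and 500 for the UN data
score = combined_log_likelihood(-120.0, -30.0, 5)  # -> -270.0
```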
3 Related Work
There has been a surge of interest in models that describe relational data, or relations
between entities viewed as links in a network, including recent work in group discovery
[2, 5]. The GT model is an enhancement of the stochastic Blockstructures model [7] and
Dataset  | Avg. AI for GT | Avg. AI for Baseline | p-value
Senate   | 0.8294         | 0.8198               | < .01
UN       | 0.8664         | 0.8548               | < .01

Table 1: Average AI for GT and the baseline for both the Senate and UN datasets. The group
cohesion in GT is significantly better than in the baseline.
the extended model of Kemp et al. [4] as it takes advantage of information from different
modalities by conditioning group membership on topics. In this sense, the GT model draws
inspiration from the Role-Author-Recipient-Topic (RART) model [6]. As an extension of
the ART model, RART clusters together entities with similar roles. In contrast, the GT model
presented here clusters entities into groups based on their relations to other entities.
There has been a considerable amount of previous work in understanding voting patterns.
Exploring the notion that the behavior of an entity can be explained by its (hidden) group
membership, Jakulin and Buntine [3] develop a discrete PCA model for discovering groups,
where each entity can belong to each of the k groups with a certain probability, and each
group has its own specific pattern of behaviors. They apply this model to voting data in
the 108th US Senate where the behavior of an entity is its vote on a resolution. We apply
our GT model also to voting data. However, unlike [3], since our goal is to cluster entities
based on the similarity of their voting patterns, we are only interested in whether a pair of
entities voted the same or differently, not their actual yes/no votes. This "content-ignorant"
feature is similarly found in work on web log clustering [1].
4 Experimental Results
We present experiments applying the GT model to the voting records of members of two
legislative bodies: the US Senate and the UN General Assembly. For comparison, we
present the results of a baseline method that first uses a mixture of unigrams to discover
topics and associate a topic with each resolution, and then runs the Blockstructures model
[7] separately on the resolutions assigned to each topic. This baseline approach is similar
to the GT model in that it discovers both groups and topics, and has different group assignments on different topics. However, whereas the baseline model performs inference
serially, GT performs joint inference simultaneously.
We are interested in the quality of both the groups and the topics. In the political science
literature, group cohesion is quantified by the Agreement Index (AI) [3], which, based on
the number of group members that vote Yes, No or Abstain, measures the similarity of
votes cast by members of a group during a particular roll call. Higher AI means better
cohesion. The group cohesion using the GT model is found to be significantly greater than
the baseline group cohesion under pairwise t-test, as shown in Table 1 for both datasets,
which indicates that the GT model is better able to capture cohesive groups.
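The text does not reproduce the AI formula. The sketch below uses the definition common in the roll-call literature (the maximum vote share minus half the remainder, normalized by the total); we state it as an assumption, since the paper only cites [3] for it:

```python
def agreement_index(yes, no, abstain):
    """Agreement Index (AI) of one group on a single roll call.
    Assumed formula: AI = (max - (total - max) / 2) / total, where
    max is the largest of the Yes/No/Abstain counts."""
    total = yes + no + abstain
    m = max(yes, no, abstain)
    return (m - (total - m) / 2) / total

print(agreement_index(10, 0, 0))  # perfect cohesion -> 1.0
print(agreement_index(4, 4, 4))   # maximal three-way split -> 0.0
```

Averaging this quantity over roll calls and groups gives the figures reported in Table 1.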
4.1 The US Senate Dataset
Our Senate dataset consists of the voting records of Senators in the 101st-109th US Senate
(1989-2005) obtained from the Library of Congress THOMAS database. During a roll call
for a particular bill, a Senator may respond Yea or Nay to the question that has been put
to a vote; otherwise the vote is recorded as Not Voting. We do not consider Not Voting as a
unique vote since most of the time it is a result of a Senator being absent from the session
of the US Senate. The text associated with each resolution is composed of its index terms
provided in the database. There are 3423 resolutions in our experiments (we excluded
roll calls that were not associated with resolutions). Since there are far fewer words than
Economic:        federal, labor, insurance, aid, tax, business, employee, care
Education:       education, school, aid, children, drug, students, elementary, prevention
Military Misc.:  government, military, foreign, tax, congress, aid, law, policy
Energy:          energy, power, water, nuclear, gas, petrol, research, pollution
Table 2: Top words for topics generated with the mixture of unigrams model on the Senate
dataset. The headers are our own summary of the topics.
Economic:                   labor, insurance, tax, congress, income, minimum, wage, business
Education + Domestic:       education, school, federal, aid, government, tax, energy, research
Foreign:                    foreign, trade, chemicals, tariff, congress, drugs, communicable, diseases
Social Security + Medicare: social, security, insurance, medical, care, medicare, disability, assistance
Table 3: Top words for topics generated with the GT model on the Senate dataset. The
topics are influenced by both the words and votes on the bills.
pairs of votes, we raise the text likelihood to the 5th power (mentioned in Section 2) in the
experiments with this dataset so as to balance its influence during inference.
We cluster the data into 4 topics and 4 groups (cluster sizes are chosen somewhat arbitrarily)
and compare the results of GT with the baseline. The most likely words for each topic from
the traditional mixture of unigrams model are shown in Table 2, whereas the topics obtained
using GT are shown in Table 3. The GT model collapses the topics Education and Energy
together into Education and Domestic, since the voting patterns on those topics are quite
similar. The new topic Social Security + Medicare did not have strong enough word
coherence to appear in the baseline model, but it has a very distinct voting pattern, and thus
is clearly found by the GT model. Thus, importantly, GT discovers topics that help predict
people's behavior and relations, not simply word co-occurrences.
Examining the group distribution across topics in the GT model, we find that on the topic
Economic the Republicans form a single group whereas the Democrats split into 3 groups
indicating that Democrats have been somewhat divided on this topic. On the other hand,
in Education + Domestic and Social Security + Medicare, Democrats are more unified
whereas the Republicans split into 3 groups. The group membership of Senators on Education + Domestic issues is shown in Table 4. We see that the first group of Republicans
includes a Democratic Senator from Texas, a state that usually votes Republican. Group 2
(majority Democrats) includes Sen. Chafee who has been involved in initiatives to improve
education, as well as Sen. Jeffords who left the Republican Party to become an Independent
and has championed legislation to strengthen education and environmental protection.
Nearly all the Republican Senators in Group 4 (in Table 4) are advocates for education and
many of them have been awarded for their efforts. For instance, Sen. Voinovich and Sen.
Symms are strong supporters of early education and vocational education, respectively; and
Group 1: 73 Republicans; Krueger (D-TX)
Group 2: 90 Democrats; Chafee (R-RI); Jeffords (I-VT)
Group 3: Cohen (R-ME), Danforth (R-MO), Durenberger (R-MN), Hatfield (R-OR), Heinz (R-PA),
         Kassebaum (R-KS), Packwood (R-OR), Specter (R-PA), Snowe (R-ME), Collins (R-ME)
Group 4: Armstrong (R-CO), Brown (R-CO), Garn (R-UT), DeWine (R-OH), Humphrey (R-NH),
         Thompson (R-TN), McCain (R-AZ), Fitzgerald (R-IL), McClure (R-ID), Voinovich (R-OH),
         Roth (R-DE), Miller (D-GA), Symms (R-ID), Coleman (R-MN), Wallop (R-WY)
Table 4: Senators in the four groups corresponding to Education + Domestic in Table 3.
Everything Nuclear:       nuclear, weapons, use, implementation, countries
Human Rights:             rights, human, palestine, situation, israel
Security in Middle East:  occupied, israel, syria, security, calls
Table 5: Top words for topics generated from mixture of unigrams model with the UN
dataset. Only text information is utilized to form the topics, as opposed to Table 6 where
our GT model takes advantage of both text and voting information.
Sen. Roth has voted for tax deductions for education. It is also interesting to see that Sen.
Miller (D-GA) appears in a Republican group; although he is in favor of educational reforms, he is a conservative Democrat and frequently criticizes his own party, even backing
Republican George W. Bush over Democrat John Kerry in the 2004 Presidential Election.
Many of the Senators in Group 3 have also focused on education and other domestic issues
such as energy, however, they often have a more liberal stance than those in Group 4, and
come from states that are historically less conservative. For example, Sen. Danforth has
presented bills for a more fair distribution of energy resources. Sen. Kassebaum is known
to be uncomfortable with many Republican views on domestic issues such as education,
and has voted against voluntary prayer in school. Thus, both Groups 3 and 4 differ from
the Republican core (Group 2) on domestic issues, and also differ from each other.
We also inspect the Senators that switch groups the most across topics in the GT model. The
top 5 Senators are Shelby (D-AL), Heflin (D-AL), Voinovich (R-OH), Johnston (D-LA),
and Armstrong (R-CO). Sen. Shelby (D-AL) votes with the Republicans on Economic,
with the Democrats on Education + Domestic, and with a small group of maverick Republicans on Foreign and Social Security + Medicare. Sen. Shelby and Sen. Heflin are both
Democrats from a fairly conservative state (Alabama) and are found to side with
the Republicans on many issues.
4.2 The United Nations Dataset
The second dataset involves the voting record of the UN General Assembly1 . We focus
on the resolutions discussed from 1990-2003, which contain votes of 192 countries on 931
resolutions. If a country is present during the roll call, it may choose to vote Yes, No or
1 http://home.gwu.edu/~voeten/UNVoting.htm
Nuclear Nonproliferation (top words: nuclear, states, united, weapons, nations)
  Group 1: Brazil, Columbia, Chile, Peru, Venezuela...
  Group 2: USA, Japan, Germany, UK..., Russia...
  Group 3: China, India, Mexico, Iran, Pakistan...
  Group 4: Kazakhstan, Belarus, Yugoslavia, Azerbaijan, Cyprus...
  Group 5: Thailand, Philippines, Malaysia, Nigeria, Tunisia...

Nuclear Arms Race (top words: nuclear, arms, prevention, race, space)
  Group 1: UK, France, Spain, Monaco, East-Timor
  Group 2: India, Russia, Micronesia
  Group 3: Japan, Germany, Italy..., Poland, Hungary...
  Group 4: China, Brazil, Mexico, Indonesia, Iran...
  Group 5: USA, Israel, Palau

Human Rights (top words: rights, human, palestine, occupied, israel)
  Group 1: Brazil, Mexico, Columbia, Chile, Peru...
  Group 2: Nicaragua, Papua, Rwanda, Swaziland, Fiji...
  Group 3: USA, Japan, Germany, UK..., Russia...
  Group 4: China, India, Indonesia, Thailand, Philippines...
  Group 5: Belarus, Turkmenistan, Azerbaijan, Uruguay, Kyrgyzstan...
Table 6: Top words for topics generated from the GT model with the UN dataset as well as
the corresponding groups for each topic (column). The countries listed for each group are
ordered by their 2005 GDP (PPP).
Abstain. Unlike the Senate dataset, a country's vote can have one of three possible values
instead of two. Because we parameterize agreement and not the votes themselves, this 3-value setting does not require any change to our model. In experiments with this dataset,
we use a weighting factor 500 for text (adjusting the likelihood of text by a power of 500
so as to make it comparable with the likelihood of pairs of votes for each resolution). We
cluster this dataset into 3 topics and 5 groups (chosen somewhat arbitrarily).
The most probable words in each topic from the mixture of unigrams model are shown in
Table 5. For example, Everything Nuclear constitutes all resolutions that have anything to
do with the use of nuclear technology, including nuclear weapons. Comparing these with
topics generated from the GT model shown in Table 6, we see that the GT model splits the
discussion about nuclear technology into two separate topics, Nuclear Nonproliferation
(generally about countries obtaining nuclear weapons and management of nuclear waste),
and Nuclear Arms Race (focused on the historic arms race between Russia and the US, and
preventing a nuclear arms race in outer space). These two issues had drastically different
voting patterns in the UN, as can be seen in the contrasting group structure for those topics
in Table 6. Thus, again, the GT model is able to discover more salient topics: topics
that reflect the voting patterns and coalitions, not simply word co-occurrence alone. The
countries in Table 6 are ranked by their GDP in 2005.2
As seen in Table 6, groups formed in Nuclear Arms Race are unlike the groups formed
in other topics. These groups map well to the global political situation of that time when,
despite the end of the Cold War, there was mutual distrust between Russia and the US with
regard to the continued manufacture of nuclear weapons. For missions to outer space and
nuclear arms, India was a staunch ally of Russia, while Israel was an ally of the US.
5 Conclusions
We introduce the Group-Topic model that jointly discovers latent groups in a network as
well as clusters of attributes (or topics) of events that influence the interaction between
entities. The model extends prior work on latent group discovery by capturing not only
pair-wise relations between entities but also multiple attributes of the relations (in particular, words describing the relations). In this way the GT model obtains more cohesive groups
as well as salient topics that influence the interaction between groups. This paper demonstrates that the Group-Topic model is able to discover topics capturing the group based
interactions between members of a legislative body. The model can be applied not just to
voting data, but to any data having relations with attributes. We are now using the model to
analyze the citations in academic papers, capturing the topics of research papers and discovering research groups. The model can be suitably altered to consider other categorical,
multi-dimensional, and continuous attributes characterizing relations.
Acknowledgments
This work was supported in part by the CIIR, the Central Intelligence Agency, the National
Security Agency, the National Science Foundation under NSF grant #IIS-0326249, and by
the Defense Advanced Research Projects Agency, through the Department of the Interior,
NBC, Acquisition Services Division, under contract #NBCHD030010. We would also like
to thank Prof. Vincent Moscardelli, Chris Pal and Aron Culotta for helpful discussions.
References
[1] Doug Beeferman and Adam Berger. Agglomerative clustering of a search engine query log. In
The 6th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, 2000.
[2] Indrajit Bhattacharya and Lise Getoor. Deduplication and group detection using links. In The
10th SIGKDD Conference Workshop on Link Analysis and Group Detection (LinkKDD), 2004.
[3] Aleks Jakulin and Wray Buntine. Analyzing the US Senate in 2003: Similarities, networks,
clusters and blocs, 2004. http://kt.ijs.si/aleks/Politics/us_senate.pdf.
[4] Charles Kemp, Thomas L. Griffiths, and Joshua Tenenbaum. Discovering latent classes in relational data. Technical report, AI Memo 2004-019, MIT CSAIL, 2004.
[5] Jeremy Kubica, Andrew Moore, Jeff Schneider, and Yiming Yang. Stochastic link and group
detection. In The 17th National Conference on Artificial Intelligence (AAAI), 2002.
[6] Andrew McCallum, Andres Corrada-Emanuel, and Xuerui Wang. Topic and role discovery in
social networks. In The 19th International Joint Conference on Artificial Intelligence, 2005.
[7] Krzysztof Nowicki and Tom A.B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077-1087, 2001.
2 http://en.wikipedia.org/wiki/List_of_countries_by_GDP_%28PPP%29. In Table 6, we omit some
countries (represented by ...) in order to show other interesting but relatively low-ranked countries
(for example, Russia) in the GDP list.
Generalization to Unseen Cases
Teemu Roos
Helsinki Institute for Information Technology
P.O.Box 68, 00014 Univ. of Helsinki, Finland
Peter Grünwald
CWI, P.O.Box 94079, 1090 GB,
Amsterdam, The Netherlands
[email protected]
[email protected]
Petri Myllymäki
Helsinki Institute for Information Technology
P.O.Box 68, 00014 Univ of Helsinki, Finland
Henry Tirri
Nokia Research Center
P.O.Box 407 Nokia Group, Finland
[email protected]
[email protected]
Abstract
We analyze classification error on unseen cases, i.e. cases that are different from those in the training set. Unlike standard generalization error,
this off-training-set error may differ significantly from the empirical error with high probability even with large sample sizes. We derive a datadependent bound on the difference between off-training-set and standard
generalization error. Our result is based on a new bound on the missing
mass, which for small samples is stronger than existing bounds based
on Good-Turing estimators. As we demonstrate on UCI data-sets, our
bound gives nontrivial generalization guarantees in many practical cases.
In light of these results, we show that certain claims made in the No Free
Lunch literature are overly pessimistic.
1 Introduction
A large part of learning theory deals with methods that bound the generalization error of
hypotheses in terms of their empirical errors. The standard definition of generalization
error allows overlap between the training sample and test cases. When such overlap is
not allowed, i.e., when considering off-training-set error [1]-[5] defined in terms of only
previously unseen cases, usual generalization bounds do not apply. The off-training-set
error and the empirical error sometimes differ significantly with high probability even for
large sample sizes. In this paper, we show that in many practical cases, one can nevertheless
bound this difference. In particular, we show that with high probability, in the realistic
situation where the number of repeated cases, or duplicates, relative to the total sample size
is small, the difference between the off-training-set error and the standard generalization
error is also small. In this case any standard generalization error bound, no matter how it is
arrived at, transforms into a similar bound on the off-training-set error.
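The distinction between the two error measures can be made concrete with a small simulation (the distribution and hypothesis here are arbitrary illustrations of ours, not constructions from the paper): the standard generalization error averages over all inputs, while the off-training-set error conditions on inputs not seen in training.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.arange(20)                        # a small discrete input space
px = np.full(20, 1 / 20)                 # uniform input distribution
true_label = (X % 2 == 0).astype(int)    # deterministic labeling

train_x = rng.choice(X, size=10, p=px)   # training inputs (duplicates possible)
seen = set(int(x) for x in train_x)

def h(x):
    """A hypothesis that memorizes seen cases and guesses 0 elsewhere."""
    return true_label[x] if x in seen else 0

errors = np.array([h(x) != true_label[x] for x in X])
standard_gen_error = float(errors @ px)              # averaged over all of X
unseen = np.array([x not in seen for x in X])
off_training_set_error = float(
    errors[unseen] @ px[unseen] / px[unseen].sum())  # unseen cases only
```

Because this hypothesis errs only on unseen inputs, its off-training-set error is at least its standard generalization error, which in turn exceeds its (zero) empirical error.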
Our Contribution We show that with probability at least $1 - \delta$, if there are r repetitions in
the training sample, then the difference between the off-training-set error and the standard
generalization error is at most of order $O\left(\sqrt{\frac{1}{n}\left(\log\frac{4}{\delta} + r \log n\right)}\right)$ (Thm. 2). Our main
result (Corollary 1 of Thm. 1) gives a stronger non-asymptotic bound that can be evaluated
numerically. The proof of Thms. 1 and 2 is based on Lemma 2, which is of independent
interest, giving a new lower bound on the so-called missing mass, the total probability of as
yet unseen cases. For small samples and few repetitions, this bound is significantly stronger
than existing bounds based on Good-Turing estimators [6]-[8].
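A classical way to estimate the missing mass (the total probability of unseen cases) is the Good-Turing estimator: the fraction of sample points that occur exactly once. The sketch below is standard background, not the paper's new bound:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the missing mass: the fraction of
    observations that occur exactly once in the sample."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

print(good_turing_missing_mass(["a", "b", "b", "c", "d"]))  # -> 0.6
```

Intuitively, many singletons suggest much probability mass still unobserved; it is exactly in the small-sample, few-repetitions regime that the paper's lower bound improves on estimators of this kind.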
Properties of Our Bounds Our bounds hold (1) uniformly, are (2) distribution-free and
(3) data-dependent, yet (4) relevant for data-sets encountered in practice. Let us consider
these properties in turn. Our bounds hold uniformly in that they hold for all hypotheses
(functions from features to labels) at the same time. Thus, unlike many bounds on standard
generalization error, our bounds do not depend in any way on the richness of the hypothesis
class under consideration measured in terms of, for instance, its VC dimension, or the
margin of the selected hypothesis on the training sample, or any other property of the
mechanism with which the hypothesis is chosen. Our bounds are distribution-free in that
they hold no matter what the (unknown) data-generating distribution is. Our bounds depend
on the data: they are useful only if the number of repetitions in the training set is very small
compared to the training set size. However, in machine learning practice this is often the
case as demonstrated in Sec. 3 with several UCI data-sets.
Relevance Why are our results interesting? There are at least three reasons, the first two of
which we discuss extensively in Sec. 4: (1) The use of off-training-set error is an essential
ingredient of the No Free Lunch (NFL) theorems [1]?[5]. Our results counter-balance some
of the overly pessimistic conclusions of this work. This is all the more relevant since the
NFL theorems have been quite influential in shaping the thinking of both theoretical and
practical machine learning researchers (see, e.g., Sec. 9.2 of the well-known textbook [5]).
(2) The off-training-set error is an intuitive measure of generalization performance. Yet in
practice it differs from standard generalization error (even with continuous feature spaces).
Thus, we feel, it is worth studying. (3) Technically, we establish a surprising connection
between off-training-set error (a concept from classification) and missing mass (a concept
mostly applied in language modeling), and give a new lower bound on the missing mass.
The paper is organized as follows: In Sec. 2 we fix notation, including the various error
functionals considered, and state some preliminary results. In Sec. 3 we state our bounds,
and we demonstrate their use on data-sets from the UCI machine learning repository. We
discuss the implications of our results in Sec. 4. Postponed proofs are in Appendix A.
2 Preliminaries and Notation
Let X be an arbitrary space of inputs, and let Y be a discrete space of labels. A learner
observes a random training sample, D, of size n, consisting of the values of a sequence
of input-label pairs ((X1, Y1), ..., (Xn, Yn)), where (Xi, Yi) ∈ X × Y. Based on the
sample, the learner outputs a hypothesis h : X → Y that gives, for each possible input
value, a prediction of the corresponding label. The learner is successful if the produced
hypothesis has high probability of making a correct prediction when applied to a test case
(X_{n+1}, Y_{n+1}). Both the training sample and the test case are independently drawn from a
common generating distribution P*. We use the following error functionals:
Definition 1 (errors). Given a training sample D of size n, the i.i.d., off-training-set, and
empirical error of a hypothesis h are given by

  Eiid(h)    := Pr[Y ≠ h(X)]                          (i.i.d. error),
  Eots(h, D) := Pr[Y ≠ h(X) | X ∉ XD]                 (off-training-set error),
  Eemp(h, D) := (1/n) Σ_{i=1}^{n} I{h(Xi) ≠ Yi}       (empirical error),

where XD is the set of X-values occurring in sample D, and the indicator function I{·}
takes value one if its argument is true and zero otherwise.
The first one of these is just the standard generalization error of learning theory. Following
[2], we call it i.i.d. error. For general input spaces and generating distributions Eots(h, D)
may be undefined for some D. In either case, this is not a problem. First, if XD has measure
one, the off-training-set error is undefined and we need not concern ourselves with it; the
relevant error measure is Eiid(h) and standard results apply¹. If, on the other hand, XD has
measure zero, the off-training-set error and the i.i.d. error are equivalent and our results (in
Sec. 3 below) hold trivially. Thus, if off-training-set error is relevant, our results hold.
Definition 2. Given a training sample D, the sample coverage p(XD) is the probability
that a new X-value appears in D: p(XD) := Pr[X ∈ XD], where XD is as in Def. 1. The
remaining probability, 1 - p(XD), is called the missing mass.
Lemma 1. For any training set D such that Eots(h, D) is defined, we have

  a)  |Eots(h, D) - Eiid(h)| ≤ p(XD),
  b)  Eots(h, D) - Eiid(h) ≤ (p(XD) / (1 - p(XD))) · Eiid(h).
Proof. Both bounds follow essentially from the following inequalities²:

  Eots(h, D) = Pr[Y ≠ h(X), X ∉ XD] / Pr[X ∉ XD]
             ≤ (Pr[Y ≠ h(X)] / Pr[X ∉ XD]) ∧ 1 = (Eiid(h) / (1 - p(XD))) ∧ 1
             = ((Eiid(h) / (1 - p(XD))) ∧ 1)(1 - p(XD)) + ((Eiid(h) / (1 - p(XD))) ∧ 1) p(XD)
             ≤ Eiid(h) + p(XD),

where ∧ denotes the minimum. This gives one direction of Lemma 1.a (an upper bound on
Eots(h, D)); the other direction is obtained by using analogous inequalities for the quantity
1 - Eots(h, D), with Y ≠ h(X) replaced by Y = h(X), which gives the upper bound
1 - Eots(h, D) ≤ 1 - Eiid(h) + p(XD). Lemma 1.b follows from the first line by ignoring
the upper bound 1, and subtracting Eiid(h) from both sides.
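As a concrete illustration of Lemma 1.a, the sketch below (plain Python; the toy domain, label rule, and hypothesis are our own illustrative choices, not from the paper) computes the i.i.d. error, the sample coverage, and the off-training-set error exactly for a small discrete distribution and checks the inequality.

```python
import random

# Toy discrete domain with known distribution; labels are deterministic (y = x mod 2).
xs = list(range(10))
px = [0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.01, 0.01]
true_label = lambda x: x % 2
h = lambda x: 0  # a fixed (poor) hypothesis

# Standard (i.i.d.) generalization error: Pr[Y != h(X)].
e_iid = sum(p for x, p in zip(xs, px) if h(x) != true_label(x))

# Draw a training sample D of size n = 8 and compute the sample coverage p(XD).
random.seed(0)
D = random.choices(xs, weights=px, k=8)
XD = set(D)
p_XD = sum(p for x, p in zip(xs, px) if x in XD)

# Off-training-set error: Pr[Y != h(X) | X not in XD].
missing_mass = 1.0 - p_XD  # strictly positive here: |XD| <= 8 < 10
e_ots = sum(p for x, p in zip(xs, px)
            if x not in XD and h(x) != true_label(x)) / missing_mass

# Lemma 1.a: |Eots - Eiid| <= p(XD).
assert abs(e_ots - e_iid) <= p_XD + 1e-12
```

Because Lemma 1.a is a theorem, the final assertion holds for every sample, not just for this particular seed.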
Given the value of (or an upper bound on) Eiid (h), the upper bound of Lemma 1.b may
be significantly stronger than that of Lemma 1.a. However, in this work we only use
Lemma 1.a for simplicity since it depends on p(XD ) alone. The lemma would be of little
use without a good enough upper bound on the sample coverage p(XD ), or equivalently, a
lower bound on the missing mass. In the next section we obtain such a bound.
3 An Off-training-set Error Bound
Good-Turing estimators [6], named after Irving J. Good and Alan Turing, are widely used
in language modeling to estimate the missing mass. The known small bias of such estimators, together with a rate of convergence, can be used to obtain lower and upper bounds for
the missing mass [7, 8]. Unfortunately, for the sample sizes we are interested in, the lower
bounds are not quite tight enough (see Fig. 1 below). In this section we state a new lower
bound, not based on Good-Turing estimators, that is practically useful in our context. We
compare this bound to the existing ones after Thm. 2.
Let X̂n ⊆ X be the set consisting of the n most probable individual values of X. In case
there are several such subsets any one of them will do. In case X has less than n elements,
X̂n := X. Denote for short p̂n := Pr[X ∈ X̂n]. No assumptions are made regarding the
value of p̂n, it may or may not be zero. The reason for us being interested in p̂n is that
it gives us an upper bound p(XD) ≤ p̂n on the sample coverage that holds for all D. We
prove that when p̂n is large it is likely that a sample of size n will have several repeated X-values so that the number of distinct X-values is less than n. This implies that if a sample
with a small number of repeated X-values is observed, it is safe to assume that p̂n is small
and therefore, the sample coverage p(XD) must also be small.
¹ Note however, that a continuous feature space does not necessarily imply this, see Sec. 4.
² This neat proof is due to Gilles Blanchard (personal communication).
Lemma 2. The probability of obtaining a sample of size n ≥ 1 with at most 0 ≤ r < n
repeated X-values is upper-bounded by Pr["at most r repetitions"] ≤ φ(n, r, p̂n), where

  φ(n, r, p̂n) := Σ_{k=0}^{n} (n choose k) p̂n^k (1 - p̂n)^(n-k) f(n, r, k)        (1)

and f(n, r, k) is given by

  f(n, r, k) := 1,                                                  if k < r;
  f(n, r, k) := min{ (k choose r) · n! / ((n-k+r)! n^(k-r)), 1 },   if k ≥ r.

φ(n, r, p̂n) is a non-increasing function of p̂n.
For a proof, see Appendix A. Given a fixed confidence level 1 - δ we can now define a
data-dependent upper bound on the sample coverage

  B(δ, D) := arg min_p {p : φ(n, r, p) ≤ δ},        (2)

where r is the number of repeated X-values in D, and φ(n, r, p) is given by Eq. (1).
Theorem 1. For any 0 ≤ δ ≤ 1, the upper bound B(δ, D) on the sample coverage given
by Eq. (2) holds with at least probability 1 - δ:

  Pr[p(XD) ≤ B(δ, D)] ≥ 1 - δ.
Proof. Consider fixed values of the confidence level 1 - δ, sample size n, and probability
p̂n. Let R be the largest integer for which φ(n, R, p̂n) ≤ δ. By Lemma 2 the probability of
obtaining at most R repetitions is upper-bounded by δ. Thus, it is sufficient that the bound
holds whenever the number of repetitions is greater than R. For any such r > R, we have
φ(n, r, p̂n) > δ. By Lemma 2 the function φ(n, r, p̂n) is non-increasing in p̂n, and hence
it must be that p̂n < arg min_p {p : φ(n, r, p) ≤ δ} = B(δ, D). Since p(XD) ≤ p̂n, the
bound then holds for all r > R.
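The quantity defined in Eq. (1) and the bound B(δ, D) of Eq. (2) are straightforward to evaluate numerically. The sketch below (plain Python; the function names are ours) computes Eq. (1) in log-space to avoid overflow and recovers Eq. (2) by bisection, relying on the monotonicity in p stated in Lemma 2.

```python
import math

def f(n, r, k):
    """The factor f(n, r, k) from Eq. (1), computed in log-space and capped at 1."""
    if k < r:
        return 1.0
    log_val = (math.lgamma(k + 1) - math.lgamma(r + 1) - math.lgamma(k - r + 1)  # log C(k, r)
               + math.lgamma(n + 1) - math.lgamma(n - k + r + 1)                 # log n!/(n-k+r)!
               - (k - r) * math.log(n))                                          # log n^-(k-r)
    return math.exp(min(log_val, 0.0))

def phi(n, r, p):
    """Right-hand side of Eq. (1): binomial expectation of f(n, r, k)."""
    if p == 0.0:
        return f(n, r, 0)
    if p == 1.0:
        return f(n, r, n)
    total = 0.0
    for k in range(n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                   + k * math.log(p) + (n - k) * math.log(1.0 - p))
        total += math.exp(log_pmf) * f(n, r, k)
    return total

def bound_B(delta, n, r, tol=1e-6):
    """Eq. (2): smallest p with phi(n, r, p) <= delta, found by bisection."""
    if phi(n, r, 1.0) > delta:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(n, r, mid) <= delta:
            hi = mid
        else:
            lo = mid
    return hi

# Sanity checks on a small instance (n = 100, delta = 0.05):
assert abs(phi(100, 0, 0.0) - 1.0) < 1e-9   # vacuous at p = 0
assert phi(100, 0, 0.2) > phi(100, 0, 0.5)  # non-increasing in p (Lemma 2)
assert 0.0 < bound_B(0.05, 100, 0) < bound_B(0.05, 100, 1) <= 1.0
```

The last assertion reflects that the bound grows with the number of repetitions r, matching the behavior visible in Fig. 1.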
Rather than the sample coverage p(XD ), the real interest is often in off-training-set error.
Using the relation between the two quantities, one gets the following corollary that follows
directly from Lemma 1.a and Thm. 1.
Corollary 1 (main result: off-training-set error bound). For any 0 ≤ δ ≤ 1, the difference
between the i.i.d. error and the off-training-set error is bounded by

  Pr[∀h: |Eots(h, D) - Eiid(h)| ≤ B(δ, D)] ≥ 1 - δ.
Corollary 1 implies that the off-training-set error and the i.i.d. error are entangled, thus
transforming all distribution-free bounds on the i.i.d. error into similar bounds on the off-training-set error. Since the probabilistic part of the result (Lemma 1) does not involve a
specific hypothesis, Corollary 1 holds for all hypotheses at the same time, and does not
depend on the richness of the hypothesis class in terms of, for instance, its VC dimension.
Figure 1 illustrates the behavior of the bound (2) as the sample size grows. It can be seen
that for a small number of repetitions the bound is nontrivial already at moderate sample
sizes. Moreover, the effect of repetitions is tolerable, and it diminishes as the number of
repetitions grows. Table 1 lists values of the bound for a number of data-sets from the UCI
machine learning repository [9]. In many cases the bound is about 0.10-0.20 or less.
Theorem 2 gives an upper bound on the rate with which the bound decreases as n grows.
Figure 1: Upper bound B(δ, D) given by Eq. (2) for samples with zero (r = 0) to ten
(r = 10) repeated X-values on the 95% confidence level (δ = 0.05). The dotted curve
is an asymptotic version for r = 0 given by Thm. 2. The curve labeled "G-T" (for r = 0)
is based on Good-Turing estimators (Thm. 3 in [7]). Asymptotically, it exceeds our r = 0
bound by a factor O(log n). Bounds for the UCI data-sets in Table 1 are marked with small
triangles (▽). Note the log-scale for sample size.
Theorem 2 (a weaker bound in closed-form). For all n and all p̂n, all r < n, the function
B(δ, D) has the upper bound

  B(δ, D) ≤ 3 √((1/(2n))(log(4/δ) + 2r log n)).
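The closed-form expression of Thm. 2 can be evaluated directly; a minimal sketch (plain Python; the helper name is ours):

```python
import math

def closed_form_bound(delta, n, r):
    """Thm. 2: B(delta, D) <= 3 * sqrt((1/(2n)) * (log(4/delta) + 2*r*log(n)))."""
    return 3.0 * math.sqrt((math.log(4.0 / delta) + 2.0 * r * math.log(n)) / (2.0 * n))

# The bound shrinks as the sample grows and grows with the number of repetitions:
assert closed_form_bound(0.05, 20000, 0) < closed_form_bound(0.05, 5000, 0)
assert closed_form_bound(0.05, 5000, 1) > closed_form_bound(0.05, 5000, 0)
```

For n = 5000, r = 0, δ = 0.05 this evaluates to about 0.063; the exact bound of Eq. (2) evaluated numerically is smaller, as Fig. 1 and Table 1 show.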
For a proof, see Appendix A. Let us compare Thm. 2 to the existing bounds on B(δ, D)
based on Good-Turing estimators [7, 8]. For fixed δ, Thm. 3 in [7] gives an upper bound
of O(r/n + log n/√n). The exact bound is drawn as the G-T curve in Fig. 1. In contrast,
our bound gives O(√((C + r log n)/n)), for a known constant C > 0. For fixed r and
increasing n, this gives an improvement over the G-T bound of order O(log n) if r = 0,
and O(√(log n)) if r > 0. For r growing faster than O(log n), asymptotically our bound
becomes uncompetitive³. The real advantage of our bound is that, in contrast to G-T, it
gives nontrivial bounds for sample sizes and numbers of repetitions that typically occur in
classification problems. For practical applications in language modeling (large samples,
many repetitions), the existing G-T bound of [7] is probably preferable.
The developments in [8] are also relevant, albeit in a more indirect manner. In Thm. 10
of that paper, it is shown that the probability that the missing mass is larger than its expected value by an amount ε is bounded by e^(-(e/2)nε²). In [7], Sec. 4, some techniques
are developed to bound the expected missing mass in terms of the number of repetitions in
the sample. One might conjecture that, combined with Thm. 10 of [8], these techniques
can be extended to yield an upper bound on B(δ, D) of order O(r/n + 1/√n) that would
be asymptotically stronger than the current bound. We plan to investigate this and other
potential ways to improve the bounds in future work. Any advance in this direction makes
the implications of our bounds even more compelling.
³ If data are i.i.d. according to a fixed P*, then, as follows from the strong law of large numbers,
r, considered as a function of n, will either remain zero for ever or will be larger than cn for some
c > 0, for all n larger than some n₀. In practice, our bound is still relevant because typical data-sets
often have r very small compared to n (see Table 1). This is possible because apparently n ≫ n₀.
Table 1: Bounds on the difference between the i.i.d. error and the off-training-set error
given by Eq. (2) on confidence level 95% (δ = 0.05). A dash (-) indicates no repetitions.
Bounds greater than 0.5 are in parentheses.

DATA                             SAMPLE SIZE   REPETITIONS   BOUND
Abalone                                 4177             -   0.0383
Adult                                  32562            25   0.0959
Annealing                                798             8   0.3149
Artificial Characters                   1000            34   (0.5112)
Breast Cancer (Diagnostic)               569             -   0.1057
Breast Cancer (Original)                 699           236   (1.0)
Credit Approval                          690             -   0.0958
Cylinder Bands                           542             -   0.1084
Housing                                  506             -   0.1123
Internet Advertisement                  2385           441   (0.9865)
Isolated Letter Speech Recogn.          1332             -   0.0685
Letter Recognition                     20000          1332   (0.6503)
Multiple Features                       2000             4   0.1563
Musk                                    6598            17   0.1671
Page Blocks                             5473            80   0.3509
Water Treatment Plant                    527             -   0.1099
Waveform                                5000             -   0.0350
4 Discussion: Implications of Our Results
The use of off-training-set error is an essential ingredient of the influential No Free Lunch
theorems [1]-[5]. Our results imply that, while the NFL theorems themselves are valid,
some of the conclusions drawn from them are overly pessimistic, and should be reconsidered. For instance, it has been suggested that the tools of conventional learning theory
(dealing with standard generalization error) are "ill-suited for investigating off-training-set
error" [3]. With the help of the little add-on we provide in this paper (Corollary 1),
any bound on standard generalization error can be converted to a bound on off-training-set
error. Our empirical results on UCI data-sets show that the resulting bound is often not
essentially weaker than the original one. Thus, the conventional tools turn out not to be so
"ill-suited" after all. Secondly, contrary to what is sometimes suggested⁴, we show that one
can relate performance on the training sample to performance on as yet unseen cases.
On the other side of the debate, it has sometimes been claimed that the off-training-set error
is irrelevant to much of modern learning theory where often the feature space is continuous.
This may seem to imply that off-training-set error coincides with standard generalization
error (see remark after Def. 1). However, this is true only if the associated distribution is
continuous: then the probability of observing the same X-value twice is zero. However,
in practice even when the feature space has continuous components, data-sets sometimes
contain repetitions (e.g., Adult, see Table 1), if only for the reason that continuous features
may be discretized or truncated. In practice repetitions occur in many data-sets, implying
that off-training-set error can be different from the standard i.i.d. error. Thus, off-training-set
error is relevant. Also, it measures a quantity that is in some ways close to the meaning
of "inductive generalization": in dictionaries the words "induction" and "generalization"
frequently refer to "unseen instances". Thus, off-training-set error is not just relevant but
also intuitive. This makes it all the more interesting that standard generalization bounds
transfer to off-training-set error, and that is the central implication of this paper.
⁴ For instance, "if we are interested in the error for [unseen cases], the NFL theorems tell us that
(in the absence of prior assumptions) [empirical error] is meaningless" [2].
Acknowledgments
We thank Gilles Blanchard for useful discussions. Part of this work was carried out while
the first author was visiting CWI. This work was supported in part by the Academy of
Finland (Minos, Prima), Nuffic, and IST Programme of the European Community, under
the PASCAL Network, IST-2002-506778. This publication only reflects the authors' views.
References
[1] Wolpert, D.H.: On the connection between in-sample testing and generalization error. Complex Systems 6 (1992) 47-94
[2] Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8 (1996) 1341-1390
[3] Wolpert, D.H.: The supervised learning no-free-lunch theorems. In: Proc. 6th Online World Conf. on Soft Computing in Industrial Applications (2001)
[4] Schaffer, C.: A conservation law for generalization performance. In: Proc. 11th Int. Conf. on Machine Learning (1994) 259-265
[5] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd Edition. Wiley, 2001
[6] Good, I.J.: The population frequencies of species and the estimation of population parameters. Biometrika 40 (1953) 237-264
[7] McAllester, D.A., Schapire, R.E.: On the convergence rate of Good-Turing estimators. In: Proc. 13th Ann. Conf. on Computational Learning Theory (2000) 1-6
[8] McAllester, D.A., Ortiz, L.: Concentration inequalities for the missing mass and for histogram rule error. Journal of Machine Learning Research 4 (2003) 895-911
[9] Blake, C., and Merz, C.: UCI repository of machine learning databases. Univ. of California, Dept. of Information and Computer Science (1998)
A Postponed Proofs
We first state two propositions that are useful in the proof of Lemma 2.
Proposition 1. Let Xm be a domain of size m, and let PXm be an associated probability
distribution. The probability of getting no repetitions when sampling 1 ≤ k ≤ m items
with replacement from distribution PXm is upper-bounded by

  Pr["no repetitions" | k] ≤ m! / ((m - k)! m^k).

Proof Sketch of Proposition 1. By way of contradiction it is possible to show that the probability of obtaining no repetitions is maximized when PXm is uniform. After this, it is
easily seen that the maximal probability equals the right-hand side of the inequality.
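Proposition 1 can be checked by simulation: for the uniform distribution the right-hand side is attained exactly, and any other distribution can only lower the no-repetition probability. A sketch (plain Python; the parameters are our own toy choices):

```python
import random
from math import prod

m, k, trials = 20, 5, 20000
# Right-hand side of Proposition 1: m!/((m-k)! m^k).
rhs = prod((m - j) / m for j in range(k))

random.seed(1)
# Uniform sampling with replacement: fraction of trials with k distinct values.
hits = sum(len(set(random.choices(range(m), k=k))) == k for _ in range(trials))
mc_uniform = hits / trials
assert abs(mc_uniform - rhs) < 0.02        # the uniform case attains the bound

# A skewed distribution makes collisions more likely, so the bound still holds.
weights = [5] * 5 + [1] * 15
hits_skew = sum(len(set(random.choices(range(m), weights=weights, k=k))) == k
                for _ in range(trials))
assert hits_skew / trials <= rhs + 0.02    # Monte Carlo slack
```

The tolerances absorb Monte Carlo noise; the inequality itself holds exactly by the proposition.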
Proposition 2. Let Xm be a domain of size m, and let PXm be an associated probability
distribution. The probability of getting at most r ≥ 0 repeated values when sampling
1 ≤ k ≤ m items with replacement from distribution PXm is upper-bounded by

  Pr["at most r repetitions" | k] ≤ 1,                                             if k < r;
  Pr["at most r repetitions" | k] ≤ min{ (k choose r) · m! / ((m-k+r)! m^(k-r)), 1 },  if k ≥ r.

Proof of Proposition 2. The case k < r is trivial. For k ≥ r, the event "at most r repetitions in k draws" is equivalent to the event that there is at least one subset of size k - r of
the X-variables {X1, ..., Xk} such that all variables in the subset take distinct values. For
a subset of size k - r, Proposition 1 implies that the probability that all values are distinct
is at most m! / ((m-k+r)! m^(k-r)). Since there are (k choose r) subsets of the X-variables of size k - r,
the union bound implies that multiplying this by (k choose r) gives the required result.
Proof of Lemma 2. The probability of getting at most r repeated X-values can be upper-bounded by considering repetitions in the maximally probable set X̂n only. The probability
of at most r repetitions in X̂n can be broken into n + 1 mutually exclusive cases depending on
how many X-values fall into the set X̂n. Thus we get

  Pr["at most r repetitions in X̂n"] = Σ_{k=0}^{n} Pr["at most r repetitions in X̂n" | k] Pr[k],

where Pr[· | k] denotes probability under the condition that k of the n cases fall into
X̂n, and Pr[k] denotes the probability of the latter occurring. Proposition 2 gives an upper bound on the conditional probability. The probability Pr[k] is given by the binomial
distribution with parameter p̂n: Pr[k] = Bin(k; n, p̂n) = (n choose k) p̂n^k (1 - p̂n)^(n-k). Combining these gives the formula for φ(n, r, p̂n). Showing that φ(n, r, p̂n) is non-increasing
in p̂n is tedious but uninteresting and we only sketch the proof: It can be checked that
the conditional probability given by Proposition 2 is non-increasing in k (the min operator is essential for this). From this the claim follows since for increasing p̂n the binomial
distribution puts more weight on terms with large k, thus not increasing the sum.
Proof of Thm. 2. The first three factors in the definition (1) of φ(n, r, p̂n) are equal to a
binomial probability Bin(k; n, p̂n), and the expectation of k is thus np̂n. By the Hoeffding bound, for all ε > 0, the probability of k < n(p̂n - ε) is bounded by exp(-2nε²).
Applying this bound with ε = p̂n/3 we get that the probability of k < (2/3)np̂n is bounded by
exp(-(2/9)np̂n²). Combined with (1) this gives the following upper bound on φ(n, r, p̂n):

  φ(n, r, p̂n) ≤ exp(-(2/9)np̂n²) · max_{k<(2/3)np̂n} f(n, r, k) + max_{k≥(2/3)np̂n} f(n, r, k)
             ≤ exp(-(2/9)np̂n²) + max_{k≥(2/3)np̂n} f(n, r, k),        (3)
where the maxima are taken over integer-valued k. In the last inequality we used the fact
that for all n, r, k, it holds that f(n, r, k) ≤ 1. Now note that for k ≥ r, we can bound

  f(n, r, k) ≤ (k choose r) Π_{j=0}^{k-r-1} (n-j)/n
            = (k choose r) Π_{j=0}^{k} (n-j)/n · Π_{j=k-r}^{k} n/(n-j)
            ≤ n^{2r} · (n/(n-k)) · Π_{j=1}^{k} (n-j)/n.        (4)
If k < r, f(n, r, k) = 1 so that (4) holds in fact for all k with 1 ≤ k ≤ n. We bound
the last factor Π_{j=1}^{k} (n-j)/n further as follows. The average of the k factors of this product is
less than or equal to (n - k/2)/n = 1 - k/(2n). Since a product of k factors is always less than or
equal to the average of the factors to the power of k, we get the upper bound (1 - k/(2n))^k ≤
exp(-k²/(2n)), where the inequality follows from 1 - x ≤ exp(-x)
for x < 1. Plugging this into (4) gives f(n, r, k) ≤ n^{2r} · (n/(n-k)) · exp(-k²/(2n)). Plugging
this back into (3) gives

  φ(n, r, p̂n) ≤ exp(-(2/9)np̂n²) + max_{k≥(2/3)np̂n} 3n^{2r} exp(-k²/(2n)),

which is less than or equal to

  exp(-(2/9)np̂n²) + 3n^{2r} exp(-(2/9)np̂n²) ≤ 4n^{2r} exp(-(2/9)np̂n²).

Recall that B(δ, D) := arg min_p {p : φ(n, r, p) ≤ δ}. Replacing φ(n, r, p) by the above
upper bound makes the set of p satisfying the inequality smaller. Thus, the minimal member of the reduced set is greater than or equal to the minimal member of the set with
φ(n, r, p) ≤ δ, giving the following bound on B(δ, D):

  B(δ, D) ≤ arg min_p {p : 4n^{2r} exp(-(2/9)np²) ≤ δ} = 3 √((1/(2n))(log(4/δ) + 2r log n)).
Laurent Itti
Department of Computer Science
University of Southern California
Los Angeles, California 90089-2520, USA
[email protected]
Pierre Baldi
Department of Computer Science
University of California, Irvine
Irvine, California 92697-3425, USA
[email protected]
Abstract
The concept of surprise is central to sensory processing, adaptation,
learning, and attention. Yet, no widely-accepted mathematical theory
currently exists to quantitatively characterize surprise elicited by a stimulus or event, for observers that range from single neurons to complex
natural or engineered systems. We describe a formal Bayesian definition of surprise that is the only consistent formulation under minimal axiomatic assumptions. Surprise quantifies how data affects a natural or artificial observer, by measuring the difference between posterior and prior
beliefs of the observer. Using this framework we measure the extent to
which humans direct their gaze towards surprising items while watching
television and video games. We find that subjects are strongly attracted
towards surprising locations, with 72% of all human gaze shifts directed
towards locations more surprising than the average, a figure which rises
to 84% when considering only gaze targets simultaneously selected by
all subjects. The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.
Life is full of surprises, ranging from a great Christmas gift or a new magic trick, to
wardrobe malfunctions, reckless drivers, terrorist attacks, and tsunami waves. Key to survival is our ability to rapidly attend to, identify, and learn from surprising events, to decide
on present and future courses of action [1]. Yet, little theoretical and computational understanding exists of the very essence of surprise, as evidenced by the absence from our
everyday vocabulary of a quantitative unit of surprise: Qualities such as the "wow factor"
have remained vague and elusive to mathematical analysis.
Informal correlates of surprise exist at nearly all stages of neural processing. In sensory
neuroscience, it has been suggested that only the unexpected at one stage is transmitted
to the next stage [2]. Hence, sensory cortex may have evolved to adapt to, to predict,
and to quiet down the expected statistical regularities of the world [3, 4, 5, 6], focusing
instead on events that are unpredictable or surprising. Electrophysiological evidence for
this early sensory emphasis onto surprising stimuli exists from studies of adaptation in
visual [7, 8, 4, 9], olfactory [10, 11], and auditory cortices [12], subcortical structures
like the LGN [13], and even retinal ganglion cells [14, 15] and cochlear hair cells [16]:
neural response greatly attenuates with repeated or prolonged exposure to an initially novel
stimulus. Surprise and novelty are also central to learning and memory formation [1], to
the point that surprise is believed to be a necessary trigger for associative learning [17, 18],
as supported by mounting evidence for a role of the hippocampus as a novelty detector [19,
20, 21]. Finally, seeking novelty is a well-identified human character trait, with possible
association with the dopamine D4 receptor gene [22, 23, 24].
In the Bayesian framework, we develop the only consistent theory of surprise, in terms of
the difference between the posterior and prior distributions of beliefs of an observer over
the available class of models or hypotheses about the world. We show that this definition
derived from first principles presents key advantages over more ad-hoc formulations, typically relying on detecting outlier stimuli. Armed with this new framework, we provide
direct experimental evidence that surprise best characterizes what attracts human gaze in
large amounts of natural video stimuli. We here extend a recent pilot study [25], adding
more comprehensive theory, large-scale human data collection, and additional analysis.
1 Theory
Bayesian Definition of Surprise. We propose that surprise is a general concept, which
can be derived from first principles and formalized across spatio-temporal scales, sensory
modalities, and, more generally, data types and data sources. Two elements are essential
for a principled definition of surprise. First, surprise can exist only in the presence of
uncertainty, which can arise from intrinsic stochasticity, missing information, or limited
computing resources. A world that is purely deterministic and predictable in real-time for
a given observer contains no surprises. Second, surprise can only be defined in a relative,
subjective, manner and is related to the expectations of the observer, be it a single synapse,
neuronal circuit, organism, or computer device. The same data may carry different amounts of surprise for different observers, or even for the same observer taken at different times.
In probability and decision theory it can be shown that the only consistent and optimal
way for modeling and reasoning about uncertainty is provided by the Bayesian theory of
probability [26, 27, 28]. Furthermore, in the Bayesian framework, probabilities correspond
to subjective degrees of belief in hypotheses or models, which are updated, as data is acquired, using Bayes' theorem as the fundamental tool for transforming prior belief distributions into posterior belief distributions. Therefore, within the same optimal framework, the
only consistent definition of surprise must involve: (1) probabilistic concepts to cope with
uncertainty; and (2) prior and posterior distributions to capture subjective expectations.
Consistently with this Bayesian approach, the background information of an observer is captured by his/her/its prior probability distribution {P(M)}_{M∈M} over the hypotheses or models M in a model space M. Given this prior distribution of beliefs, the fundamental effect of a new data observation D on the observer is to change the prior distribution {P(M)}_{M∈M} into the posterior distribution {P(M|D)}_{M∈M} via Bayes' theorem, whereby

    ∀M ∈ M,   P(M|D) = P(D|M) P(M) / P(D).   (1)
In this framework, the new data observation D carries no surprise if it leaves the observer's beliefs unaffected, that is, if the posterior is identical to the prior; conversely, D is surprising if the posterior distribution resulting from observing D significantly differs from the prior distribution. Therefore we formally measure the surprise elicited by data as some distance measure between the posterior and prior distributions. This is best done using the relative entropy or Kullback-Leibler (KL) divergence [29]. Thus, surprise is defined by the average of the log-odds ratio:

    S(D, M) = KL(P(M|D), P(M)) = ∫_M P(M|D) log [P(M|D) / P(M)] dM   (2)
taken with respect to the posterior distribution over the model class M. Note that KL is not
symmetric but has well-known theoretical advantages, including invariance with respect to
Figure 1: Computing surprise in early sensory neurons. (a) Prior data observations, tuning preferences, and top-down influences contribute to shaping a set of 'prior beliefs' a neuron may have over a class of internal models or hypotheses about the world. For instance, M may be a set of Poisson processes parameterized by the rate λ, with {P(M)}_{M∈M} = {P(λ)}_{λ∈R+} the prior distribution of beliefs about which Poisson models well describe the world as sensed by the neuron. New data
D updates the prior into the posterior using Bayes' theorem. Surprise quantifies the difference between the posterior and prior distributions over the model class M. The remaining panels detail how surprise differs from conventional model fitting and outlier-based novelty. (b) In standard iterative Bayesian model fitting, at every iteration N, incoming data D_N is used to update the prior {P(M|D_1, D_2, ..., D_{N-1})}_{M∈M} into the posterior {P(M|D_1, D_2, ..., D_N)}_{M∈M}. Freezing this learning at a given iteration, one then picks the currently best model, usually using either a maximum likelihood criterion or a maximum a posteriori one (yielding M_MAP shown). (c) This best model is used for a number of tasks at the current iteration, including outlier-based novelty detection. New data is then considered novel at that instant if it has low likelihood for the best model (e.g., D_N^b is more novel than D_N^a). This focus onto the single best model presents obvious limitations, especially in situations where other models are nearly as good (e.g., M′ in panel (b) is entirely ignored during standard novelty computation). One palliative solution is to consider mixture models, or simply P(D), but this just amounts to shifting the problem into a different model class. (d) Surprise directly addresses this problem by simultaneously considering all models and by measuring how data changes the observer's distribution of beliefs from {P(M|D_1, D_2, ..., D_{N-1})}_{M∈M} to {P(M|D_1, D_2, ..., D_N)}_{M∈M} over the entire model class M (orange shaded area).
reparameterizations. A unit of surprise, a 'wow', may then be defined for a single model M as the amount of surprise corresponding to a two-fold variation between P(M|D) and P(M), i.e., as log P(M|D)/P(M) (with log taken in base 2), with the total number of wows experienced for all models obtained through the integration in eq. 2.
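As a concrete illustration of eqs. 1 and 2, the sketch below discretizes the Poisson-rate example of Figure 1: a Bayes update turns a prior over candidate firing rates into a posterior, and surprise is the KL divergence between the two, in bits (wows). This is a minimal sketch, not the authors' implementation; the rate grid and spike counts are hypothetical.

```python
import math

def poisson_logpmf(k, lam):
    # log P(k | lam) for a Poisson model with rate lam (lam > 0)
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def bayes_update(lams, prior, counts):
    # Eq. 1 on a discretized model class: posterior ∝ likelihood × prior
    log_w = [sum(poisson_logpmf(k, lam) for k in counts) + math.log(p)
             for lam, p in zip(lams, prior)]
    m = max(log_w)
    w = [math.exp(x - m) for x in log_w]
    z = sum(w)
    return [x / z for x in w]

def surprise_bits(prior, post):
    # Eq. 2: KL(posterior || prior), converted from nats to bits ("wows")
    nats = sum(q * math.log(q / p) for q, p in zip(post, prior) if q > 0)
    return nats / math.log(2)

# Hypothetical flat prior over 40 candidate firing rates (spikes per bin)
lams = [0.5 * i for i in range(1, 41)]
prior = [1.0 / len(lams)] * len(lams)

post1 = bayes_update(lams, prior, [12, 11, 13])  # first exposure
s1 = surprise_bits(prior, post1)
post2 = bayes_update(lams, post1, [12, 11, 13])  # repeated exposure
s2 = surprise_bits(post1, post2)
```

Iterating the update with the same data reproduces the adaptation discussed above: s1 exceeds s2, since repeated observations shift the distribution of beliefs less and less.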
Surprise and outlier detection. Outlier detection based on the likelihood P(D|M_best) of D given a single best model M_best is at best an approximation to surprise and, in some cases, is misleading. Consider, for instance, a case where D has very small probability both for a model or hypothesis M and for a single alternative hypothesis M′. Although D is a strong outlier, it carries very little information regarding whether M or M′ is the better model, and therefore very little surprise. Thus an outlier detection method would strongly focus attentional resources onto D, although D is a false positive, in the sense that it carries no useful information for discriminating between the two alternative hypotheses M and M′.
Figure 1 further illustrates this disconnect between outlier detection and surprise.
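This disconnect can be made concrete with a toy two-hypothesis sketch (all numbers hypothetical): a data point far from both of two nearly identical Gaussian models is a strong outlier yet leaves the posterior equal to the prior, while a mild but discriminative point carries more surprise.

```python
import math

def loglike(point, mu):
    # isotropic unit-variance 2D Gaussian, up to an additive constant
    return -0.5 * ((point[0] - mu[0]) ** 2 + (point[1] - mu[1]) ** 2)

def analyze(point, models, prior):
    ll = [loglike(point, mu) for mu in models]
    m = max(ll)
    w = [math.exp(l - m) * p for l, p in zip(ll, prior)]
    z = sum(w)
    post = [x / z for x in w]
    # surprise (bits): KL between posterior and prior over the model class
    surprise = sum(q * math.log2(q / p) for q, p in zip(post, prior) if q > 0)
    outlier_score = -max(ll)  # low likelihood under the best model
    return surprise, outlier_score

models = [(-0.1, 0.0), (0.1, 0.0)]  # two nearly indistinguishable hypotheses
prior = [0.5, 0.5]
s_out, o_out = analyze((0.0, 8.0), models, prior)  # extreme but uninformative
s_inf, o_inf = analyze((1.5, 0.0), models, prior)  # mild but discriminative
# o_out >> o_inf, yet s_out = 0 < s_inf: the outlier carries no surprise
```

The outlier score flags the distant point, while its equidistance from both hypotheses leaves beliefs, and hence surprise, unchanged.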
2 Human experiments
To test the surprise hypothesis, that surprise attracts human attention and gaze in natural scenes, we recorded eye movements from eight naive observers (three females and five males, ages 23-32, normal or corrected-to-normal vision). Each watched a subset from 50 videoclips totaling over 25 minutes of playtime (46,489 video frames, 640 × 480, 60.27 Hz, mean screen luminance 30 cd/m², room 4 cd/m², viewing distance 80 cm, field of view 28° × 21°). Clips comprised outdoor daytime and nighttime scenes of crowded
environments, video games, and television broadcast including news, sports, and commercials. Right-eye position was tracked with a 240 Hz video-based device (ISCAN RK-464),
with methods as previously [30]. Two hundred calibrated eye movement traces (10,192
saccades) were analyzed, corresponding to four distinct observers for each of the 50 clips.
Figure 2 shows sample scanpaths for one videoclip.
To characterize image regions selected by participants, we process videoclips through computational metrics that output a topographic dynamic master response map, assigning in
real-time a response value to every input location. A good master map would highlight,
more than expected by chance, locations gazed to by observers. To score each metric we
hence sample, at onset of every human saccade, master map activity around the saccade's
future endpoint, and around a uniformly random endpoint (random sampling was repeated
100 times to evaluate variability). We quantify differences between histograms of master
Figure 2: (a) Sample eye movement traces from four observers (squares denote saccade endpoints). (b) Our data exhibits high inter-individual overlap, shown here with the locations where one human saccade endpoint was nearby (within 5°) one (white squares), two (cyan squares), or all three (black squares) other humans. (c) A metric where the master map was created from the three eye movement traces other than that being tested yields an upper-bound KL score, computed by comparing the histograms of metric values at human (narrow blue bars) and random (wider green bars) saccade targets. Indeed, this metric's map was very sparse (many random saccades landing on locations with near-zero response), yet humans preferentially saccaded towards the three active hotspots corresponding to the eye positions of three other humans (many human saccades landing on locations with near-unity responses).
map samples collected from human and random saccades using again the Kullback-Leibler
(KL) distance: metrics which better predict human scanpaths exhibit higher distances from
random as, typically, observers non-uniformly gaze towards a minority of regions with
highest metric responses while avoiding a majority of regions with low metric responses.
This approach presents several advantages over simpler scoring schemes [31, 32], including agnosticity to putative mechanisms for generating saccades and the fact that applying
any continuous nonlinearity to master map values would not affect scoring.
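The scoring procedure above can be sketched as follows; the histogram binning and pseudo-count smoothing are illustrative choices, not the authors' exact implementation, and the sample data are hypothetical.

```python
import math
import random

def kl_score(human_samples, random_samples, bins=10):
    # KL distance between histograms of master-map values sampled at human
    # vs. random saccade endpoints; higher scores mean better prediction.
    lo = min(human_samples + random_samples)
    hi = max(human_samples + random_samples) + 1e-12

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / (hi - lo) * bins), bins - 1)] += 1
        # pseudo-counts keep the divergence finite when a bin is empty
        return [(c + 1) / (len(xs) + bins) for c in counts]

    p, q = hist(human_samples), hist(random_samples)
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
# Hypothetical data: humans favor high map values, random endpoints do not
human = [random.betavariate(5, 1) for _ in range(2000)]
rand = [random.random() for _ in range(2000)]
score = kl_score(human, rand)                  # metric predicts humans
control = kl_score(rand[:1000], rand[1000:])   # no preference, near zero
```

Because the score compares whole histograms, applying any monotone nonlinearity to the map values leaves the binned ranking, and hence the conclusion, essentially unchanged, as noted above.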
Experimental results. We test six computational metrics, encompassing and extending the
state-of-the-art found in previous studies. The first three quantify static image properties
(local intensity variance in 16 × 16 image patches [31]; local oriented edge density as measured with Gabor filters [33]; and local Shannon entropy in 16 × 16 image patches
[34]). The remaining three metrics are more sensitive to dynamic events (local motion
[33]; outlier-based saliency [33]; and surprise [25]).
For all metrics, we find that humans are significantly attracted by image regions with higher
metric responses. However, the static metrics typically respond vigorously at numerous visual locations (Figure 3), hence they are poorly specific and yield relatively low KL scores
between humans and random. The metrics sensitive to motion, outliers, and surprising
events, in comparison, yield sparser maps and higher KL scores.
The surprise metric of interest here quantifies low-level surprise in image patches over
space and time, and at this point does not account for high-level or cognitive beliefs of our
human observers. Rather, it assumes a family of simple models for image patches, each
processed through 72 early feature detectors sensitive to color, orientation, motion, etc.,
and computes surprise from shifts in the distribution of beliefs about which models better
describe the patches (see [25] and [35] for details). We find that the surprise metric significantly outperforms all other computational metrics (p < 10^-100 or better on t-tests for equality of KL scores), scoring nearly 20% better than the second-best metric (saliency)
and 60% better than the best static metric (entropy). Surprising stimuli often substantially
differ from simple feature outliers; for example, a continually blinking light on a static
background elicits sustained flicker due to its locally outlier temporal dynamics but is only
surprising for a moment. Similarly, a shower of randomly-colored pixels continually excites all low-level feature detectors but rapidly becomes unsurprising.
Strongest attractors of human attention. Clearly, in our and previous eye-tracking experiments, in some situations potentially interesting targets were more numerous than in
others. With many possible targets, different observers may orient towards different locations, making it more difficult for a single metric to accurately predict all observers. Hence
we consider (Figure 4) subsets of human saccades where at least two, three, or all four
observers simultaneously agreed on a gaze target. Observers could have agreed based on
bottom-up factors (e.g., only one location had interesting visual appearance at that time),
top-down factors (e.g., only one object was of current cognitive interest), or both (e.g., a
single cognitively interesting object was present which also had distinctive appearance).
Irrespectively of the cause for agreement, it indicates consolidated belief that a location
was attractive. While the KL scores of all metrics improved when progressively focusing
onto only those locations, dynamic metrics improved more steeply, indicating that stimuli
which more reliably attracted all observers carried more motion, saliency, and surprise.
Surprise remained significantly the best metric to characterize these agreed-upon attractors
of human gaze (p < 10^-100 or better on t-tests for equality of KL scores).
Overall, surprise explained the greatest fraction of human saccades, indicating that humans
are significantly attracted towards surprising locations in video displays. Over 72% of all
human saccades were targeted to locations predicted to be more surprising than on average.
When only considering saccades where two, three, or four observers agreed on a common
gaze target, this figure rose to 76%, 80%, and 84%, respectively.
Figure 3: (a) Sample video frames, with corresponding human saccades and predictions from the
entropy, surprise, and human-derived metrics. Entropy maps, like intensity variance and orientation
maps, exhibited many locations with high responses, hence had low specificity and were poorly
discriminative. In contrast, motion, saliency, and surprise maps were much sparser and more specific,
with surprise significantly more often on target. For three example frames (first column), saccades
from one subject are shown (arrows) with corresponding apertures over which master map activity
at the saccade endpoint was sampled (circles). (b) KL scores for these metrics indicate significantly
different performance levels, and a strict ranking of variance < orientation < entropy < motion
< saliency < surprise < human-derived. KL scores were computed by comparing the number of
human saccades landing onto each given range of master map values (narrow blue bars) to the number
of random saccades hitting the same range (wider green bars). A score of zero would indicate equality
between the human and random histograms, i.e., humans did not tend to hit various master map values
any differently from expected by chance, or, the master map could not predict human saccades better
than random saccades. Among the six computational metrics tested in total, surprise performed best,
in that surprising locations were relatively few yet reliably gazed to by humans.
Figure 4: KL scores when considering only saccades where at least one (all 10,192 saccades), two (7,948 saccades), three (5,565 saccades), or all four (2,951 saccades) humans agreed on a common gaze location, for the static (a) and dynamic metrics (b). Static metrics improved substantially when progressively focusing onto saccades with stronger inter-observer agreement (average slope 0.56 ± 0.37 percent KL score units per 1,000 pruned saccades). Hence, when humans agreed on a location, they also tended to be more reliably predicted by the metrics. Furthermore, dynamic metrics improved 4.5 times more steeply (slope 2.44 ± 0.37), suggesting a stronger role of dynamic events in attracting human attention. Surprising events were significantly the strongest (t-tests for equality of KL scores between surprise and other metrics, p < 10^-100).
3 Discussion
While previous research has shown with either static scenes or dynamic synthetic stimuli
that humans preferentially fixate regions of high entropy [34], contrast [31], saliency [32],
flicker [36], or motion [37], our data provides direct experimental evidence that humans
fixate surprising locations even more reliably. These conclusions were made possible by
developing new tools to quantify what attracts human gaze over space and time in dynamic
natural scenes. Surprise explained best where humans look when considering all saccades,
and even more so when restricting the analysis to only those saccades for which human
observers tended to agree. Surprise hence represents an inexpensive, easily computable
approximation to human attentional allocation.
In the absence of quantitative tools to measure surprise, most experimental and modeling
work to date has adopted the approximation that novel events are surprising, and has focused on experimental scenarios which are simple enough to ensure an overlap between
informal notions of novelty and surprise: for example, a stimulus is novel during testing if
it has not been seen during training [9]. Our definition opens new avenues for more sophisticated experiments, where surprise elicited by different stimuli can be precisely compared
and calibrated, yielding predictions at the single-unit as well as behavioral levels.
The definition of surprise, as the distance between the posterior and prior distributions of beliefs over models, is entirely general and readily applicable to the analysis of auditory, olfactory, gustatory, or somatosensory data. While here we have focused on behavior
rather than detailed biophysical implementation, it is worth noting that detecting surprise in
neural spike trains does not require semantic understanding of the data carried by the spike
trains, and thus could provide guiding signals during self-organization and development of
sensory areas. At higher processing levels, top-down cues and task demands are known to
combine with stimulus novelty in capturing attention and triggering learning [1, 38], ideas
which may now be formalized and quantified in terms of priors, posteriors, and surprise.
Surprise, indeed, inherently depends on uncertainty and on prior beliefs. Hence surprise theory can further be tested and utilized in experiments where the prior is biased, for example by top-down instructions or prior exposures to stimuli [38]. In addition, simple surprise-based behavioral measures such as the eye-tracking one used here may prove useful for early diagnosis of human conditions including autism and attention-deficit hyperactivity disorder, as well as for quantitative comparison between humans and animals which
may have lower or different priors, including monkeys, frogs, and flies. Beyond sensory
biology, computable surprise could guide the development of data mining and compression systems (giving more bits to surprising regions of interest), to find surprising agents in
crowds, surprising sentences in books or speeches, surprising sequences in genomes, surprising medical symptoms, surprising odors in airport luggage racks, surprising documents
on the world-wide-web, or to design surprising advertisements.
Acknowledgments: Supported by HFSP, NSF and NGA (L.I.), NIH and NSF (P.B.). We thank UCI's Institute for Genomics and Bioinformatics and USC's Center for High-Performance Computing and Communications (www.usc.edu/hpcc) for access to their computing clusters.
References
[1] Ranganath, C. & Rainer, G. Nat Rev Neurosci 4, 193-202 (2003).
[2] Rao, R. P. & Ballard, D. H. Nat Neurosci 2, 79-87 (1999).
[3] Olshausen, B. A. & Field, D. J. Nature 381, 607-609 (1996).
[4] Müller, J. R., Metha, A. B., Krauskopf, J. & Lennie, P. Science 285, 1405-1408 (1999).
[5] Dragoi, V., Sharma, J., Miller, E. K. & Sur, M. Nat Neurosci 5, 883-891 (2002).
[6] David, S. V., Vinje, W. E. & Gallant, J. L. J Neurosci 24, 6991-7006 (2004).
[7] Maffei, L., Fiorentini, A. & Bisti, S. Science 182, 1036-1038 (1973).
[8] Movshon, J. A. & Lennie, P. Nature 278, 850-852 (1979).
[9] Fecteau, J. H. & Munoz, D. P. Nat Rev Neurosci 4, 435-443 (2003).
[10] Kurahashi, T. & Menini, A. Nature 385, 725-729 (1997).
[11] Bradley, J., Bonigk, W., Yau, K. W. & Frings, S. Nat Neurosci 7, 705-710 (2004).
[12] Ulanovsky, N., Las, L. & Nelken, I. Nat Neurosci 6, 391-398 (2003).
[13] Solomon, S. G., Peirce, J. W., Dhruv, N. T. & Lennie, P. Neuron 42, 155-162 (2004).
[14] Smirnakis, S. M., Berry, M. J. et al. Nature 386, 69-73 (1997).
[15] Brown, S. P. & Masland, R. H. Nat Neurosci 4, 44-51 (2001).
[16] Kennedy, H. J., Evans, M. G. et al. Nat Neurosci 6, 832-836 (2003).
[17] Schultz, W. & Dickinson, A. Annu Rev Neurosci 23, 473-500 (2000).
[18] Fletcher, P. C., Anderson, J. M., Shanks, D. R. et al. Nat Neurosci 4, 1043-1048 (2001).
[19] Knight, R. Nature 383, 256-259 (1996).
[20] Stern, C. E., Corkin, S., Gonzalez, R. G. et al. Proc Natl Acad Sci U S A 93, 8660-8665 (1996).
[21] Li, S., Cullen, W. K., Anwyl, R. & Rowan, M. J. Nat Neurosci 6, 526-531 (2003).
[22] Ebstein, R. P., Novick, O., Umansky, R. et al. Nat Genet 12, 78-80 (1996).
[23] Benjamin, J., Li, L. et al. Nat Genet 12, 81-84 (1996).
[24] Lusher, J. M., Chandler, C. & Ball, D. Mol Psychiatry 6, 497-499 (2001).
[25] Itti, L. & Baldi, P. In Proc. IEEE CVPR. San Diego, CA (2005, in press).
[26] Cox, R. T. Am. J. Phys. 14, 1-13 (1964).
[27] Savage, L. J. The Foundations of Statistics (Dover, New York, 1972). (First edition in 1954).
[28] Jaynes, E. T. Probability Theory: The Logic of Science (Cambridge University Press, 2003).
[29] Kullback, S. Information Theory and Statistics (Wiley, New York, 1959).
[30] Itti, L. Visual Cognition (2005, in press).
[31] Reinagel, P. & Zador, A. M. Network 10, 341-350 (1999).
[32] Parkhurst, D., Law, K. & Niebur, E. Vision Res 42, 107-123 (2002).
[33] Itti, L. & Koch, C. Nat Rev Neurosci 2, 194-203 (2001).
[34] Privitera, C. M. & Stark, L. W. IEEE Trans Patt Anal Mach Intell 22, 970-982 (2000).
[35] All source code for all metrics is freely available at http://iLab.usc.edu/toolkit/.
[36] Theeuwes, J. Percept Psychophys 57, 637-644 (1995).
[37] Abrams, R. A. & Christ, S. E. Psychol Sci 14, 427-432 (2003).
[38] Wolfe, J. M. & Horowitz, T. S. Nat Rev Neurosci 5, 495-501 (2004).
Extracting Dynamical Structure Embedded in
Neural Activity
Byron M. Yu1, Afsheen Afshar1,2, Gopal Santhanam1,
Stephen I. Ryu1,3, Krishna V. Shenoy1,4
1 Department of Electrical Engineering, 2 School of Medicine, 3 Department of
Neurosurgery, 4 Neurosciences Program, Stanford University, Stanford, CA 94305
{byronyu,afsheen,gopals,seoulman,shenoy}@stanford.edu
Maneesh Sahani
Gatsby Computational Neuroscience Unit, UCL
London, WC1N 3AR, UK
[email protected]
Abstract
Spiking activity from neurophysiological experiments often exhibits dynamics beyond that driven by external stimulation, presumably reflecting the extensive recurrence of neural circuitry. Characterizing these
dynamics may reveal important features of neural computation, particularly during internally-driven cognitive operations. For example,
the activity of premotor cortex (PMd) neurons during an instructed delay period separating movement-target specification and a movement-initiation cue is believed to be involved in motor planning. We show
that the dynamics underlying this activity can be captured by a lowdimensional non-linear dynamical systems model, with underlying recurrent structure and stochastic point-process output. We present and
validate latent variable methods that simultaneously estimate the system
parameters and the trial-by-trial dynamical trajectories. These methods are applied to characterize the dynamics in PMd data recorded
from a chronically-implanted 96-electrode array while monkeys perform
delayed-reach tasks.
1
Introduction
At present, the best view of the activity of a neural circuit is provided by multiple-electrode
extracellular recording technologies, which allow us to simultaneously measure spike trains
from up to a few hundred neurons in one or more brain areas during each trial. While the
resulting data provide an extensive picture of neural spiking, their use in characterizing the
fine timescale dynamics of a neural circuit is complicated by at least two factors. First,
extracellularly captured action potentials provide only an occasional view of the process
from which they are generated, forcing us to interpolate the evolution of the circuit between
the spikes. Second, the circuit activity may evolve quite differently on different trials that
are otherwise experimentally identical.
The usual approach to handling both problems is to average responses from different trials,
and study the evolution of the peri-stimulus time histogram (PSTH). There is little alternative to this approach when recordings are made one neuron at a time, even when the
dynamics of the system are the subject of study. Unfortunately, such averaging can obscure
important internal features of the response. In many experiments, stimulus events provide
the trigger for activity, but the resulting time-course of the response is internally regulated
and may not be identical on each trial. This is especially important during cognitive processing such as decision making or motor planning. In this case, the PSTH may not reflect
the true trial-by-trial dynamics. For example, a sharp change in firing rate that occurs with
varying latency might appear as a slow smooth transition in the average response.
An alternative approach is to adopt latent variable methods and to identify a hidden dynamical system that can summarize and explain the simultaneously-recorded spike trains.
The central idea is that the responses of different neurons reflect different views of a common dynamical process in the network, whose effective dimensionality is much smaller
than the total number of neurons in the network. While the underlying state trajectory may
be slightly different on each trial, the commonalities among these trajectories can be captured by the network's parameters, which are shared across trials. These parameters define how the network evolves over time, as well as how the observed spike trains relate to the network's state at each time point.
Dimensionality reduction in a latent dynamical model is crucial and yields benefits beyond
simple noise elimination. Some of these benefits can be illustrated by a simple physical
example. Consider a set of noisy video sequences of a bouncing ball. The trajectory of
the ball may not be identical in each sequence, and so simply averaging the sequences together would provide little information about the dynamics. Independently smoothing the
dynamics of each pixel might identify a dynamical process; however, correctly rejecting
noise might be difficult, and in any case this would yield an inefficient and opaque representation of the underlying physical process. By contrast, a hidden dynamical system
account could capture the video sequence data using a low-dimensional latent variable that
represented only the ball's position and momentum over time, with dynamical rules that
captured the physics of ballistics and elastic collision. This representation would exploit
shared information from all pixels, vastly simplifying the problem of noise rejection, and
would provide a scientifically useful depiction of the process.
The example also serves to illustrate the two broad benefits of this type of model. The first is
to obtain a low dimensional summary of the dynamical trajectory in any one trial. Besides
the obvious benefits of denoising, such a trajectory can provide an invaluable representation for prediction of associated phenomena. In the video sequence example, predicting the
loudness of the sound on impact might be easy given the estimate of the ball's trajectory
(and thus its speed), but would be difficult from the raw pixel trajectories, even if denoised.
In the neural case, behavioral variables such as reaction time might similarly be most easily
predicted from the reconstructed trajectory. The second broad goal is systems identification: learning the rules that govern the dynamics. In the video example this would involve
discovery of various laws of physics, as well as parameters describing the ball such as its
coefficient of elasticity. In the neural case this would involve identifying the structure of
dynamics available to the circuit: the number and relationship of attractors, appearance of
oscillatory limit cycles and so on.
The use of latent variable models with hidden dynamics for neural data has, thus far, been
limited. In [1], [2], small groups of neurons in the frontal cortex were modeled using hidden
Markov models, in which the latent dynamical system is assumed to transition between a
set of discrete states. In [3], a state space model with linear hidden dynamics and pointprocess outputs was applied to simulated data. However, these restricted latent models
cannot capture the richness of dynamics that recurrent networks exhibit. In particular,
systems that converge toward point or line attractors, exhibit limit cycle oscillations, or
even transition into chaotic regimes have long been of interest in neural modeling. If such
systems are relevant to real neural data, we must seek to identify hidden models capable of
reflecting this range of behaviors.
In this work, we consider a latent variable model having (1) hidden underlying recurrent
structure with continuous-valued states, and (2) Poisson-distributed output spike counts
(conditioned on the state), as described in Section 2. Inference and learning for this nonlinear model are detailed in Section 3. The methods developed are applied to a delayed-reach
task described in Section 4. Evidence of motor preparation in PMd is given in Section 5.
In Section 6, we characterize the neural dynamics of motor preparation on a trial-by-trial
basis.
2 Hidden non-linear dynamical system
A useful dynamical system model capable of expressing the rich behavior expected of
neural systems is the recurrent neural network (RNN) with Gaussian perturbations
$x_t \mid x_{t-1} \sim \mathcal{N}(\phi(x_{t-1}), Q)$  (1)
$\phi(x) = (1-k)\,x + k\,W g(x)$,  (2)
where $x_t \in \mathbb{R}^{p \times 1}$ is the vector of the node values in the recurrent network at time $t \in \{1, \ldots, T\}$, $W \in \mathbb{R}^{p \times p}$ is the connection weight matrix, $g$ is a non-linear activation function which acts element-by-element on its vector argument, $k \in \mathbb{R}$ is a parameter related to the time constant of the network, and $Q \in \mathbb{R}^{p \times p}$ is a covariance matrix. The initial state is Gaussian-distributed
$x_0 \sim \mathcal{N}(p_0, V_0)$,  (3)
where $p_0 \in \mathbb{R}^{p \times 1}$ and $V_0 \in \mathbb{R}^{p \times p}$ are the mean vector and covariance matrix, respectively.
Models of this class have long been used, albeit generally without stochastic perturbation,
to describe the dynamics of neuronal responses (e.g., [4]). In this classical view, each node
of the network represents a neuron or a column of neurons. Our use is more abstract. The
RNN is chosen for the range of dynamics it can exhibit, including convergence to point
or surface attractors, oscillatory limit cycles, or chaotic evolution; but each node is simply
an abstract dimension of latent space which may couple to many or all of the observed
neurons.
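To make the latent model concrete, the state equations (1)-(2) can be simulated directly. This is a minimal sketch: the dimension, weight scale, noise level, and seed below are arbitrary illustrative values, not parameters fitted in the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
p, T, k = 3, 100, 0.1                                   # latent dimension, time steps, time-constant parameter
W = rng.normal(scale=1.0 / math.sqrt(p), size=(p, p))   # recurrent weight matrix
Q_chol = 0.05 * np.eye(p)                               # Cholesky factor of the state noise covariance Q

def g(x):
    # element-wise error-function nonlinearity (the choice made in Section 3)
    return np.array([math.erf(v) for v in x])

def phi(x):
    # deterministic part of the state update, eq. (2)
    return (1.0 - k) * x + k * (W @ g(x))

x = rng.normal(size=p)                                  # draw an initial state x_0
traj = [x]
for _ in range(T):
    x = phi(x) + Q_chol @ rng.normal(size=p)            # eq. (1): Gaussian perturbation around phi(x)
    traj.append(x)
traj = np.stack(traj)                                   # (T + 1, p) latent trajectory
```

Depending on the draw of $W$, trajectories of this kind can settle to a fixed point, oscillate, or wander chaotically, which is exactly the range of behaviors the RNN form is chosen to express.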
The output distribution is given by a generalized linear model that describes the relationship between all nodes in the state $x_t$ and the spike count $y_t^i \in \mathbb{R}$ of neuron $i \in \{1, \ldots, q\}$ in the $t$th time bin
$y_t^i \mid x_t \sim \mathrm{Poisson}\big(h(c^i \cdot x_t + d^i)\,\Delta\big)$,  (4)
where $c^i \in \mathbb{R}^{p \times 1}$ and $d^i \in \mathbb{R}$ are constants, $h$ is a link function mapping $\mathbb{R} \to \mathbb{R}^+$, and $\Delta \in \mathbb{R}$ is the time bin width. We collect the spike counts from all $q$ simultaneously-recorded physical neurons into a vector $y_t \in \mathbb{R}^{q \times 1}$, whose $i$th element is $y_t^i$. The choice of the link functions $g$ and $h$ is discussed in Section 3.
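Given a latent trajectory, sampling spike counts from the output model (4) takes only a few lines. The sketch below uses the softplus link of equation (7); the values of $c^i$, $d^i$, and the trajectory itself are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, T = 3, 5, 40              # latent dimension, neurons, time bins
delta = 0.020                   # bin width in seconds (20 ms bins are used in Section 6)
C = rng.normal(size=(q, p))     # row i holds c^i
d = rng.normal(size=q)          # the offsets d^i

def h(z):
    # softplus link of eq. (7): maps R -> R+, roughly linear for large z
    return np.log1p(np.exp(z))

X = rng.normal(size=(T, p))     # stand-in latent trajectory (would come from eqs. (1)-(2))
rates = h(X @ C.T + d)          # (T, q) instantaneous firing rates in spikes/s
Y = rng.poisson(rates * delta)  # spike counts y_t^i, one row per time bin
```

The same code path, with $X$ replaced by an inferred trajectory, yields the imputed firing rates used later in equation (8).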
3 Inference and Learning
The Expectation-Maximization (EM) algorithm [5] was used to iteratively (1) infer the underlying hidden state trajectories (i.e., recover a distribution over the hidden sequence $\{x\}_1^T$ corresponding to the observations $\{y\}_1^T$), and (2) learn the model parameters (i.e., estimate $\theta = \{W, Q, k, p_0, V_0, \{c^i\}, \{d^i\}\}$), given only a set of observation sequences.
Inference (the E-step) involves computing or approximating $P(\{x\}_1^T \mid \{y\}_1^T, \theta_k)$ for each sequence, where $\theta_k$ are the parameter estimates at the $k$th EM iteration. A variant of the Extended Kalman Smoother (EKS) was used to approximate these joint smoothed state posteriors. As in the EKS, the non-linear time-invariant state system (1)-(2) was transformed into a linear time-variant system using local linearization. The difference from EKS arises in the measurement update step of the forward pass
$P(x_t \mid \{y\}_1^t) \propto P(y_t \mid x_t)\, P(x_t \mid \{y\}_1^{t-1})$.  (5)
Because $P(y_t \mid x_t)$ is a product of Poissons rather than a Gaussian, the filtered state posterior $P(x_t \mid \{y\}_1^t)$ cannot be easily computed. Instead, as in [3], we approximated this posterior with a Gaussian centered at the mode of $\log P(x_t \mid \{y\}_1^t)$ and whose covariance is given by the negative inverse Hessian of the log posterior at that mode. Certain choices of $h$, including $e^z$ and $\log(1 + e^z)$, lead to a log posterior that is strictly concave in $x_t$. In these cases, the unique mode can easily be found by Newton's method.
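The Gaussian approximation in this measurement update can be sketched as a Newton ascent on the log posterior. For brevity the sketch uses the exponential link $h(z) = e^z$, for which the gradient and Hessian have simple closed forms (the softplus link used in the paper requires only slightly more algebra); all names and values are illustrative.

```python
import numpy as np

def laplace_measurement_update(y, m, P, C, d, delta, iters=25):
    """Gaussian approximation to P(x_t | y_{1:t}) as in eq. (5):
    Gaussian prior N(m, P) from the time update, Poisson likelihood
    with per-bin rate delta * exp(C x + d). Returns the posterior mode
    and the negative inverse Hessian of the log posterior at the mode."""
    Pinv = np.linalg.inv(P)
    x = m.copy()
    for _ in range(iters):
        lam = delta * np.exp(C @ x + d)            # expected counts per bin
        grad = C.T @ (y - lam) - Pinv @ (x - m)    # gradient of the log posterior
        hess = -C.T @ (lam[:, None] * C) - Pinv    # Hessian, negative definite
        x = x - np.linalg.solve(hess, grad)        # Newton step
    cov = np.linalg.inv(-hess)                     # approximate posterior covariance
    return x, cov
```

Because the log posterior is strictly concave for this link, the iteration converges to the unique mode from any starting point.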
Learning (the M-step) requires finding the $\theta$ that maximizes $E\big[\log P(\{x\}_1^T, \{y\}_1^T \mid \theta)\big]$, where the expectation is taken over the posterior state distributions found in the E-step. Note that, for multiple sequences that are independent conditioned on $\theta$, we use the sum of expectations over all sequences. Because the posterior state distributions are approximated as Gaussians in the E-step, the above expectation is a Gaussian integral that involves the non-linear functions $g$ and $h$ and cannot be computed analytically in general. Fortunately, this high-dimensional integral can be reduced to many one-dimensional Gaussian integrals, which can be accurately and reasonably efficiently approximated using Gaussian quadrature [6], [7].
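A one-dimensional Gaussian expectation $E[f(Z)]$ with $Z \sim \mathcal{N}(\mu, \sigma^2)$ reduces, after the change of variables $z = \mu + \sqrt{2}\,\sigma t$, to a Gauss-Hermite sum. A minimal sketch:

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, n=20):
    """Approximate E[f(Z)] for Z ~ N(mu, sigma^2) with an n-point
    Gauss-Hermite rule: E[f(Z)] ~ pi^(-1/2) * sum_j w_j f(mu + sqrt(2) sigma t_j)."""
    t, w = np.polynomial.hermite.hermgauss(n)   # nodes and weights for weight e^{-t^2}
    return float((w * f(mu + np.sqrt(2.0) * sigma * t)).sum() / np.sqrt(np.pi))
```

For example, with $f(z) = e^z$ the rule recovers the lognormal mean $e^{\mu + \sigma^2/2}$ to high accuracy, and it is exact for polynomials of degree up to $2n - 1$.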
We found that setting $g$ to be the error function
$g(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\, dt$  (6)
made many of the one-dimensional Gaussian integrals involving $g$ analytically tractable. Those that were not analytically tractable were approximated using Gaussian quadrature. The error function is one of a family of sigmoid activation functions that yield similar behavior in a RNN.
If $h$ were chosen to be a simple exponential, all the Gaussian integrals involving $h$ could be computed exactly. Unfortunately, this exponential mapping would distort the relationship between perturbations in the latent state (whose size is set by the covariance matrix $Q$) and the resulting fluctuations in firing rates. In particular, the size of firing-rate fluctuations would grow exponentially with the mean, an effect that would then add to the usual linear increase in spike-count variance that comes from the Poisson output distribution. Since neural firing does not show such a severe scaling in variability, such a model would fit poorly. Therefore, to maintain more even firing-rate fluctuations, we instead take
$h(z) = \log(1 + e^z)$.  (7)
The corresponding Gaussian integrals must then be approximated by quadrature methods.
Regardless of the forms of $g$ and $h$ chosen, numerical Newton methods are needed for maximization with respect to $\{c^i\}$ and $\{d^i\}$.
The main drawback of these various approximations is that the overall observation likelihood is no longer guaranteed to increase after each EM iteration. However, in our simulations, we found that sensible results were often produced. As long as the variances of the
posterior state distribution did not diverge, the output distributions described by the learned
model closely approximated those of the actual model that generated the simulated data.
4 Task and recordings
We trained a rhesus macaque monkey to perform delayed center-out reaches to visual targets presented on a fronto-parallel screen. On a given trial, the peripheral target was presented at one of eight radial locations (30, 70, 110, 150, 190, 230, 310, 350°) 10 cm
away, as shown in Figure 1. After a pseudo-randomly chosen delay period of 200, 750, or
1000 ms, the target increased in size as the go cue and the monkey reached to the target. A
96-channel silicon electrode array (Cyberkinetics, Inc.) was implanted straddling PMd and
motor cortex (M1). Spike sorting was performed offline to isolate 22 single-neuron and
109 multi-neuron units.
[Figure panels: firing rate in spikes/s for Delay Activity and Peri-Movement Activity; 200 ms scale bar.]
Figure 1: Delayed reach task and average action potential (spike) emission rate from one
representative unit. Activity is arranged by target location. Vertical dashed lines indicate
peripheral reach target onset (left) and movement onset (right).
5 Motor preparation in PMd
Motor preparation is often studied using the "instructed delay" behavioral paradigm, as described in Section 4, where a variable-length "planning" period temporally separates an instruction stimulus from a go cue [8]–[13]. Longer delay periods typically lead to shorter reaction times (RT, defined as time between go cue and movement onset), and this has been interpreted as evidence for a motor preparation process that takes time [11], [12], [14], [15].
In this view, the delay period allows for motor preparation to complete prior to the go cue,
thus shortening the RT.
Evidence for motor preparation at the neural level is taken from PMd (and, to a lesser
degree, M1), where neurons show sustained activity during the delay period (Figure 1,
delay activity) [8]–[10]. A number of findings support the hypothesis that such activity
is related to motor preparation. First, delay period activity typically shows tuning for the
instruction (i.e., location of reach target; note that the PMd neuron in Figure 1 has greater
delay activity before leftward than before rightward reaches), consistent with the idea that
something specific is being prepared [8], [9], [11], [13]. Second, in the absence of a delay
period, a brief burst of similarly-tuned activity is observed during the RT interval, consistent
with the idea that motor preparation is taking place at that time [12].
Third, we have recently reported that firing rates across trials to the same reach target
become more consistent as the delay period progresses [16]. The variance of firing rate,
measured across trials, divided by mean firing rate (similar to the Fano factor) was computed for each unit and each time point. Averaged across 14 single- and 33 multi-neuron
units, we found that this Normalized Variance (NV) declined 24% (t-test, $p < 10^{-10}$) from 200 ms before target onset to the median time of the go cue. This decline spanned ~119 ms just after target onset and appears to, at least roughly, track the time-course of motor preparation.
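The Normalized Variance is easy to compute from binned spike counts. The sketch below follows our reading of the quantity described above (across-trial variance of firing rate divided by mean rate, per unit and time point, then averaged over units); the array layout is an invented convention.

```python
import numpy as np

def normalized_variance(counts):
    """counts: array of shape (n_trials, n_units, n_bins) of spike counts
    for repeated trials to the same reach target. Returns the
    Fano-factor-like NV per time bin, averaged across units."""
    mean = counts.mean(axis=0)              # (n_units, n_bins)
    var = counts.var(axis=0, ddof=1)        # across-trial variance
    with np.errstate(invalid="ignore", divide="ignore"):
        nv = var / mean                     # NaN wherever a unit never fired
    return np.nanmean(nv, axis=0)           # (n_bins,) average over units

# sanity check: for Poisson-distributed counts the NV should hover near 1,
# so a decline below 1 reflects across-trial convergence of firing rates
rng = np.random.default_rng(3)
nv = normalized_variance(rng.poisson(5.0, size=(200, 14, 10)).astype(float))
```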
The NV may be interpreted as a signature of the approximate degree of motor preparation yet to be accomplished. Shortly after target onset, firing rates are frequently far from
their mean. If the go cue arrives then, it will take time to correct these "errors" and RTs
will therefore be longer. By the time the NV has completed its decline, firing rates are
consistently near their mean (which we presume is near an "optimal" configuration for the
impending reach), and RTs will be shorter if the go cue arrives then. This interpretation
assumes that there is a limit on how quickly firing rates can converge to their ideal values
(a limit on how quickly the NV can drop) such that a decline during the delay period saves
time later. The NV was found to be lower at the time of the go cue for trials with shorter
RTs than those with longer RTs [16].
The above data strongly suggest that the network underlying motor preparation exhibits
rich dynamics. Activity is initially variable across trials, but appears to settle during the
delay period. Because the RNN (1)-(2) is capable of exhibiting such dynamics and may
underly motor preparation, we sought to identify such a dynamical system in delay activity.
6 Results and discussion
The NV reveals an average process of settling by measuring the convergence of firing
across different trials. However, it provides little insight into the course of motor planning
on a single trial. A gradual fall in trial-to-trial variance might reflect a gradual convergence
on each trial, or might reflect rapid transitions that occur at different times on different
trials. Similarly, all the NV tells us about the dynamic properties of the underlying network
is the basic fact of convergence from uncontrolled initial conditions to a consistent pre-movement preparatory state. The structure of any underlying attractors and corresponding
basins of attraction is unobserved. Furthermore, the NV is first computed per-unit and
averaged across units, thus ignoring any structure that may be present in the correlated
firing of units on a given trial. The methods presented here are well-suited to extending the
characterization of this settling process.
We fit the dynamical system model (1)–(4) with three latent dimensions ($p = 3$) to training data, consisting of delay activity preceding 70 reaches to the same target (30°). Spike counts were taken in non-overlapping $\Delta = 20$ ms bins at 20 ms time steps from 50 ms
after target onset to 50 ms after the go cue. Then, the fitted model parameters were used to
infer the latent space trajectories for 146 test trials, which are plotted in Figure 2. Despite
the trial-to-trial variability in the delay period neural responses, the state evolves along
a characteristic path on each trial. It could have been that the neural variability across
trials would cause the state trajectory to evolve in markedly different ways on different
trials. Even with the characteristic structure, the state trajectories are not all identical,
however. This presumably reflects the fact that the motor planning process is internally regulated, and its timecourse may differ from trial to trial, even when the presented stimulus
(in this case, the reach target) is identical. How these timecourses differ from trial to trial
would have been obscured had we combined the neural data across trials, as with the NV
in Section 5.
Is this low-dimensional description of the system dynamics adequate to describe the firing
of all 131 recorded units? We transformed the inferred latent trajectories into trial-by-trial
[Figure 2: three-dimensional plot; axes are Latent Dimension 1, Latent Dimension 2, and Latent Dimension 3.]
Figure 2: Inferred modal state trajectories in latent (x) space for 146 test trials. Dots indicate 50 ms after target onset (blue) and 50 ms after the go cue (green). The radius of the green dots is logarithmically related to delay period length (200, 750, or 1000 ms).
inhomogeneous firing rates using the output relationship from (4)
$\lambda_t^i = h(c^i \cdot x_t + d^i)$,  (8)
where $\lambda_t^i$ is the imputed firing rate of the $i$th unit at the $t$th time bin. Figure 3 shows the
imputed firing rates for 15 representative units overlaid with empirical firing rates obtained
by directly averaging raw spike counts across the same test trials. If the imputed firing rates
truly reflect the rate functions underlying the observed spikes, then the mean behavior of
the imputed firing rates should track the empirical firing rates. On the other hand, if the
latent system were inadequate to describe the activity, we should expect to see dynamical
features in the empirical firing that could not be captured by the imputed firing rates. The
strong agreement observed in Figure 3 and across all 131 units suggests that this simple
dynamical system is indeed capable of capturing significant components of the dynamics
of this neural circuit. We can view the dyamical system approach adopted in this work as a
form of non-linear dynamical embedding of point-process data. This is in contrast to most
current embedding algorithms that rely on continuous data. Figure 2 effectively represents
a three-dimensional manifold in the space of firing rates along which the dynamics unfold.
Beyond the agreement of imputed means demonstrated by Figure 3, we would like to directly test the fit of the model to the neural spike data. Unfortunately, current goodness-of-fit methods for spike trains, such as those based on time-rescaling [17], cannot be applied directly to latent variable models. The difficulty arises because the average trajectory
obtained from marginalizing over the latent variables in the system (by which we might
hope to rescale the inter-spike intervals) is not designed to provide an accurate estimate of
the trial-by-trial firing rate functions. Instead, each trial must be described by a distinct
trajectory in latent space, which can only be inferred after observing the spike trains themselves. This could lead to overfitting. We are currently exploring extensions to the standard
methods which infer latent trajectories using a subset of recorded neurons, and then test
the quality of firing-rate predictions for the remaining neurons. In addition, we plan to
compare models of different latent dimensionalities; here, the latent space was arbitrarily
chosen to be three-dimensional. To validate the learned latent space and inferred trajectories, we would also like to relate these results to trial-by-trial behavior. In particular, given
the evidence from Section 5, how "settled" the activity is at the time of the go cue should
be predictive of RT.
[Figure 3: grid of panels; vertical axis Firing rate (spikes/s), horizontal axis Time relative to target onset (ms).]
Figure 3: Imputed trial-by-trial firing rates (blue) and empirical firing rates (red). Gray vertical line indicates the time of the go cue. Each panel corresponds to one unit. For clarity, only test trials with delay periods of 1000 ms (44 trials) are plotted for each unit.
Acknowledgments
This work was supported by NIH-NINDS-CRCNS-R01, NSF, NDSEGF, Gatsby, MSTP,
CRPF, BWF, ONR, Sloan, and Whitaker. We would like to thank Dr. Mark Churchland for
valuable discussions and Missy Howard for expert surgical assistance and veterinary care.
References
[1] M. Abeles, H. Bergman, I. Gat, I. Meilijson, E. Seidemann, N. Tishby, and E. Vaadia. Proc Natl Acad Sci USA, 92:8616–8620, 1995.
[2] I. Gat, N. Tishby, and M. Abeles. Network, 8(3):297–322, 1997.
[3] A. Smith and E. Brown. Neural Comput, 15(5):965–991, 2003.
[4] S. Amari. Biol Cybern, 27(2):77–87, 1977.
[5] A. Dempster, N. Laird, and D. Rubin. J R Stat Soc Ser B, 39:1–38, 1977.
[6] S. Julier and J. Uhlmann. In Proc. AeroSense: 11th Int. Symp. Aerospace/Defense Sensing, Simulation and Controls, pp. 182–193, 1997.
[7] U. Lerner. Hybrid Bayesian networks for reasoning about complex systems. PhD thesis, Stanford University, Stanford, CA, 2002.
[8] J. Tanji and E. Evarts. J Neurophysiol, 39:1062–1068, 1976.
[9] M. Weinrich, S. Wise, and K. Mauritz. Brain, 107:385–414, 1984.
[10] M. Godschalk, R. Lemon, H. Kuypers, and J. van der Steen. Behav Brain Res, 18:143–157, 1985.
[11] A. Riehle and J. Requin. J Neurophysiol, 61:534–549, 1989.
[12] D. Crammond and J. Kalaska. J Neurophysiol, 84:986–1005, 2000.
[13] J. Messier and J. Kalaska. J Neurophysiol, 84:152–165, 2000.
[14] D. Rosenbaum. J Exp Psychol Gen, 109:444–474, 1980.
[15] A. Riehle and J. Requin. Behav Brain Res, 53:35–49, 1993.
[16] M. Churchland, B. Yu, S. Ryu, G. Santhanam, and K. Shenoy. Soc. for Neurosci. Abstr., 2004.
[17] E. Brown, R. Barbieri, V. Ventura, R. Kass, and L. Frank. Neural Comput, 14(2):325–346, 2002.
A Criterion for the Convergence of Learning
with Spike Timing Dependent Plasticity
Robert Legenstein and Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universitaet Graz
A-8010 Graz, Austria
{legi,maass}@igi.tugraz.at
Abstract
We investigate under what conditions a neuron can learn by experimentally supported rules for spike timing dependent plasticity (STDP) to predict the arrival times of strong "teacher inputs" to the same neuron. It
turns out that in contrast to the famous Perceptron Convergence Theorem, which predicts convergence of the perceptron learning rule for a
simplified neuron model whenever a stable solution exists, no equally
strong convergence guarantee can be given for spiking neurons with
STDP. But we derive a criterion on the statistical dependency structure of
input spike trains which characterizes exactly when learning with STDP
will converge on average for a simple model of a spiking neuron. This
criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but it applies here to the rows of a correlation
matrix related to the spike inputs. In addition we show through computer
simulations for more realistic neuron models that the resulting analytically predicted positive learning results not only hold for the common
interpretation of STDP where STDP changes the weights of synapses,
but also for a more realistic interpretation suggested by experimental
data where STDP modulates the initial release probability of dynamic
synapses.
1 Introduction
Numerous experimental data show that STDP changes the value wold of a synaptic weight
after pairing of the firing of the presynaptic neuron at time t_pre with a firing of the postsynaptic neuron at time t_post = t_pre + Δt to w_new = w_old + Δw according to the rule

    w_new = min{w_max, w_old + W_+ · e^(−Δt/τ_+)}   if Δt > 0
    w_new = max{0, w_old − W_− · e^(Δt/τ_−)}        if Δt ≤ 0        (1)

with some parameters W_+, W_−, τ_+, τ_− > 0 (see [1]). If during training a teacher induces
firing of the postsynaptic neuron, this rule becomes somewhat analogous to the well-known
perceptron learning rule for McCulloch-Pitts neurons (= "perceptrons"). The Perceptron
Convergence Theorem states that this rule enables a perceptron to learn, starting from any
initial weights, after finitely many errors any transformation that it could possibly implement. However, we have constructed examples of input spike trains and teacher spike trains
(omitted in this abstract) such that although a weight vector exists which produces the desired firing and which is stable under STDP, learning with STDP does not converge to a
stable solution. On the other hand experiments in vivo have shown that neurons can be
taught by suitable teacher input to adopt a given firing response [2, 3] (although the spiketiming dependence is not exploited there). We show in section 2 that such convergence of
learning can be explained by STDP in the average case, provided that a certain criterion is
met for the statistical dependence among Poisson spike inputs. The validity of the proposed
criterion is tested in section 3 for more realistic models for neurons and synapses.
2 An analytical criterion for the convergence of STDP
The average case analysis in this section is based on the linear Poisson neuron model (see
[4, 5]). This neuron model outputs a spike train S_post(t) which is a realization of a Poisson process with the underlying instantaneous firing rate R_post(t). We represent a spike train S(t) as a sum of Dirac delta functions, S(t) = Σ_k δ(t − t_k), where t_k is the k-th spike time of the spike train. The effect of an input spike at input i at time t_0 is modeled by an increase in the instantaneous firing rate of an amount w_i(t_0) ε(t − t_0), where ε is a response kernel and w_i(t_0) is the synaptic efficacy of synapse i at time t_0. We assume ε(s) = 0 for s < 0 (causality), ∫_0^∞ ds ε(s) = 1 (normalization of the response kernel), and ε(s) ≥ 0 for all s as well as w_i ≥ 0 for all i (excitatory inputs). In the linear model, the contributions of all inputs are summed up linearly:

    R_post(t) = Σ_{j=1}^n ∫_0^∞ ds w_j(t − s) ε(s) S_j(t − s),        (2)
where S1 , . . . , Sn are the n presynaptic spike trains. Note that in this spike generation
process, the generation of an output spike is independent of previous output spikes.
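The rate computation of Eq. (2) can be written down directly. The sketch below assumes a specific exponential response kernel ε(s) = e^(−s/τ_ε)/τ_ε for s ≥ 0 (causal, normalized, non-negative) and constant weights; the kernel choice and parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def linear_poisson_rate(t, weights, spike_trains, tau_eps=2e-3):
    """Instantaneous output rate R_post(t) of the linear Poisson neuron,
    Eq. (2), for an assumed exponential response kernel
    eps(s) = exp(-s / tau_eps) / tau_eps for s >= 0.

    spike_trains[j] is a list of spike times of input j; weights[j] is
    the (here constant) efficacy w_j of synapse j.
    """
    rate = 0.0
    for w_j, train in zip(weights, spike_trains):
        for t_k in train:
            s = t - t_k
            if s >= 0.0:  # causality: only past input spikes contribute
                rate += w_j * math.exp(-s / tau_eps) / tau_eps
    return rate
```

Output spikes would then be drawn from an inhomogeneous Poisson process with this rate, independently of previous output spikes, as stated above.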
The STDP-rule (1) avoids the growth of weights beyond bounds 0 and wmax by simple
clipping. Alternatively one can make the weight update dependent on the actual weight
value. In [5] a general rule is suggested where the weight dependence has the form of a power law with a non-negative exponent μ. This weight update rule is defined by

    Δw =  W_+ · (1 − w)^μ · e^(−Δt/τ_+)   if Δt > 0
    Δw = −W_− · w^μ · e^(Δt/τ_−)          if Δt ≤ 0        (3)

where we assumed for simplicity that w_max = 1. Instead of looking at specific input
spike trains, we consider the average behavior of the weight vector for (possibly correlated)
homogeneous Poisson input spike trains. Hence, the change Δw_i is a random variable with a mean drift and fluctuations around it. We will in the following focus on the drift by assuming that individual weight changes are very small and only averaged quantities enter the learning dynamics, see [6]. Let S_i be the spike train of input i and let S* be the output
spike train of the neuron. The mean drift of synapse i at time t can be approximated as

    ẇ_i(t) = W_+ (1 − w_i)^μ ∫_0^∞ ds e^(−s/τ) C_i(s; t) − W_− w_i^μ ∫_{−∞}^0 ds e^(s/τ) C_i(s; t),        (4)
where C_i(s; t) = ⟨S_i(t) S*(t + s)⟩_E is the ensemble averaged correlation function between input i and the output of the neuron (see [5, 6]). For the linear Poisson neuron model, input-output correlations can be described by means of correlations in the inputs. We define the
normalized cross correlation between input spike trains Si and Sj with a common rate
r > 0 as
    C⁰_ij(s) = ⟨S_i(t) S_j(t + s)⟩_E / r² − 1,        (5)
which assumes value 0 for uncorrelated Poisson spike trains. We assume in this article that C⁰_ij is constant over time. In our setup, the output of the neuron during learning is clamped
to the teacher spike train S* which is the output of a neuron with the target weight vector w*. Therefore, the input-output correlations C_i(s; t) are also constant over time and we denote them by C_i(s) in the following. In our neuron model, correlations are shaped by the response kernel ε(s) and they enter the learning equation (4) with respect to the learning window. This motivates the definition of window correlations c⁺_ij and c⁻_ij for the positive and negative learning window respectively:
    c^±_ij = 1 + (1/τ) ∫_0^∞ ds e^(−s/τ) ∫_0^∞ ds′ ε(s′) C⁰_ij(±s − s′).        (6)
We call the matrices C^± = {c^±_ij}_{i,j=1,...,n} the window correlation matrices. Note that window correlations are non-negative and that for homogeneous Poisson input spike trains and for a non-negative response kernel, they are positive. For soft weight bounds and
μ > 0, a synaptic weight can converge to a value arbitrarily close to 0 or 1, but not to one
of these values directly. This motivates the following definition of learnability.
Definition 2.1 We say that a target weight vector w* ∈ {0, 1}^n can approximately be learned in a supervised paradigm by STDP with soft weight bounds on homogeneous Poisson input spike trains (short: "w* can be learned") if and only if there exist W_+, W_− > 0 such that for μ → 0 the ensemble averaged weight vector ⟨w(t)⟩_E with learning dynamics given by Equation 4 converges to w* for any initial weight vector w(0) ∈ [0, 1]^n.
We are now ready to formulate an analytical criterion for learnability:
Theorem 2.1 A weight vector w* can be learned (when being taught with S*) for homogeneous Poisson input spike trains with window correlation matrices C⁺ and C⁻ to a linear Poisson neuron with non-negative response kernel ε if and only if w* ≠ 0 and

    (Σ_{k=1}^n w*_k c⁺_ik) / (Σ_{k=1}^n w*_k c⁻_ik)  >  (Σ_{k=1}^n w*_k c⁺_jk) / (Σ_{k=1}^n w*_k c⁻_jk)

for all pairs ⟨i, j⟩ ∈ {1, ..., n}² with w*_i = 1 and w*_j = 0.
Proof idea: The correlation between an input and the teacher-induced output is (by Eq. 2):

    C_i(s) = ⟨S_i(t) S*(t + s)⟩_E = Σ_{j=1}^n w*_j ∫_0^∞ ds′ ε(s′) ⟨S_i(t) S_j(t + s − s′)⟩_E.
Substitution of this equation into Eq. 4 yields the synaptic drift

    ẇ_i = τ r² [ W_+ (1 − w_i)^μ Σ_{j=1}^n w*_j c⁺_ij − W_− w_i^μ Σ_{j=1}^n w*_j c⁻_ij ].        (7)
We find the equilibrium points w∞_i of synapse i by setting ẇ_i = 0 in Eq. 7. This yields

    w∞_i = [1 + (1/α_i)^(1/μ)]^(−1),  where α_i denotes (W_+/W_−) · (Σ_{j=1}^n w*_j c⁺_ij) / (Σ_{j=1}^n w*_j c⁻_ij).

Note that the drift is zero if w* = 0, which implies that w* = 0 cannot be learned. For w* ≠ 0, one can show that w∞ = (w∞_1, ..., w∞_n) is the only equilibrium point of the system and that it is stable. Since the system decomposes into n independent one-dimensional systems, convergence to w∞ is guaranteed for all initial conditions. Furthermore, one sees that lim_{μ→0} w∞_i = 1 if and only if α_i > 1, and lim_{μ→0} w∞_i = 0 if and only if α_i < 1. Therefore, lim_{μ→0} w∞ = w* holds if and only if α_i > 1 for all i with w*_i = 1 and α_i < 1 for all i with w*_i = 0. The theorem follows from the definition of α_i.
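The quantities α_i, the equilibrium weights, and the criterion of Theorem 2.1 are easy to evaluate numerically once window correlation matrices are given. A minimal sketch (the matrix entries in the usage below are made up for illustration; they are not from the paper's experiments):

```python
def alpha(i, w_star, C_plus, C_minus, W_plus=1.0, W_minus=1.0):
    """alpha_i = (W+/W-) * (sum_k w*_k c+_ik) / (sum_k w*_k c-_ik)."""
    num = sum(w_star[k] * C_plus[i][k] for k in range(len(w_star)))
    den = sum(w_star[k] * C_minus[i][k] for k in range(len(w_star)))
    return (W_plus / W_minus) * num / den

def equilibrium_weight(a, mu):
    """w_inf = (1 + (1/alpha)**(1/mu))**(-1); tends to 1 (resp. 0)
    as mu -> 0 when alpha > 1 (resp. alpha < 1)."""
    return 1.0 / (1.0 + (1.0 / a) ** (1.0 / mu))

def learnable(w_star, C_plus, C_minus):
    """Theorem 2.1: every correlation ratio for a synapse with w*_i = 1
    must strictly exceed every ratio for a synapse with w*_j = 0
    (and w* must be nonzero)."""
    if not any(w_star):
        return False
    n = len(w_star)
    r = [alpha(i, w_star, C_plus, C_minus) for i in range(n)]
    ones = [r[i] for i in range(n) if w_star[i] == 1]
    zeros = [r[j] for j in range(n) if w_star[j] == 0]
    return all(a > b for a in ones for b in zeros)
```

For example, with c⁻_ij = 1 everywhere and C⁺ diagonally dominant (the uncorrelated-input situation of Corollary 2.2 below), every nonzero target vector passes the test, while a C⁺ whose rows for a "1" synapse and a "0" synapse coincide fails it.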
For a wide class of cross-correlation functions, one can establish a relationship between learnability by STDP and the well-known concept of linear separability from linear algebra.1 Because of synaptic delays, the response of a spiking neuron to an input spike is delayed by some time t_0. One can model such a delay in the response kernel by the restriction ε(s) = 0 for all s ≤ t_0. In the following Corollary we consider the case where input correlations C⁰_ij(s) appear only in a time window smaller than the delay:
Corollary 2.1 If there exists a t_0 ≥ 0 such that ε(s) = 0 for all s ≤ t_0 and C⁰_ij(s) = 0 for all s < −t_0, i, j ∈ {1, ..., n}, then the following holds for the case of homogeneous Poisson input spike trains to a linear Poisson neuron with positive response kernel ε:
A weight vector w* can be learned if and only if w* ≠ 0 and w* linearly separates the list L = ⟨⟨c⁺_1, w*_1⟩, ..., ⟨c⁺_n, w*_n⟩⟩, where c⁺_1, ..., c⁺_n are the rows of C⁺.
Proof idea: From the assumptions of the corollary it follows that c⁻_ij = 1. In this case, the condition in Theorem 2.1 is equivalent to the statement that w* linearly separates the list L = ⟨⟨c⁺_1, w*_1⟩, ..., ⟨c⁺_n, w*_n⟩⟩.
Corollary 2.1 can be viewed as an analogon of the Perceptron Convergence Theorem for the average case analysis of STDP. Its formulation is tight in the sense that linear separability of the list L alone (as opposed to linear separability by the target vector w*) is not sufficient to imply learnability. For uncorrelated input spike trains of rate r > 0, the normalized cross correlation functions are given by C⁰_ij(s) = (δ_ij / r) δ(s), where δ_ij is the Kronecker delta function. The positive window correlation matrix C⁺ is therefore essentially a scaled version of the identity matrix. The following corollary then follows from Corollary 2.1:

Corollary 2.2 A target weight vector w* ∈ {0, 1}^n can be learned in the case of uncorrelated Poisson input spike trains to a linear Poisson neuron with positive response kernel ε such that ε(s) = 0 for all s ≤ 0 if and only if w* ≠ 0.
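The linear-separability condition used in Corollary 2.1 (and defined precisely in footnote 1 below) can be checked operationally: w separates the list if some threshold Θ makes y_i = sign(c_i · w − Θ) hold for every pair, with sign(z) = 1 for z ≥ 0 and 0 otherwise. The sketch searches over candidate thresholds derived from the projections; the example vectors are invented for illustration.

```python
def separates_with_threshold(w, pairs, theta):
    """True if y == sign(c . w - theta) for every (c, y) in pairs,
    with sign(z) = 1 for z >= 0 and 0 otherwise (footnote 1)."""
    for c, y in pairs:
        z = sum(ci * wi for ci, wi in zip(c, w)) - theta
        if (1 if z >= 0.0 else 0) != y:
            return False
    return True

def linearly_separates(w, pairs):
    """w linearly separates the list if some threshold theta works.
    For a fixed w the problem is one-dimensional, so it suffices to try
    the projection values themselves, midpoints between sorted
    projections, and values beyond both extremes."""
    proj = sorted(sum(ci * wi for ci, wi in zip(c, w)) for c, _ in pairs)
    candidates = ([proj[0] - 1.0, proj[-1] + 1.0] + proj
                  + [(a + b) / 2.0 for a, b in zip(proj, proj[1:])])
    return any(separates_with_threshold(w, pairs, th) for th in candidates)
```

Under this definition, the classic XOR labeling of the four corners of the unit square is not separated by any single weight vector, which is the kind of obstruction Theorem 2.1 inherits from the perceptron setting.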
3 Computer simulations of supervised learning with STDP
In order to make a theoretical analysis feasible, we needed to make in section 2 a number of
simplifying assumptions on the neuron model and the synapse model. In addition a number
of approximations had to be used in order to simplify the estimates. We consider in this
section the more realistic integrate-and-fire model2 for neurons and a model for synapses
which are subject to paired-pulse depression and paired-pulse facilitation, in addition to the
long term plasticity induced by STDP [7]. This model describes synapses with parameters
U (initial release probability), D (depression time constant), and F (facilitation time constant) in addition to the synaptic weight w. The parameters U , D, and F were randomly
1
Let c_1, ..., c_m ∈ R^n and y_1, ..., y_m ∈ {0, 1}. We say that a vector w ∈ R^n linearly separates the list ⟨⟨c_1, y_1⟩, ..., ⟨c_m, y_m⟩⟩ if there exists a threshold Θ such that y_i = sign(c_i · w − Θ) for i = 1, ..., m. We define sign(z) = 1 if z ≥ 0 and sign(z) = 0 otherwise.
2
The membrane potential V_m of the neuron is given by τ_m dV_m/dt = −(V_m − V_resting) + R_m · (I_syn(t) + I_background + I_inject(t)), where τ_m = C_m · R_m = 30 ms is the membrane time constant, R_m = 1 MΩ is the membrane resistance, I_syn(t) is the current supplied by the synapses, I_background is a constant background current, and I_inject(t) represents currents induced by a "teacher". If V_m exceeds the threshold voltage V_thresh it is reset to V_reset = 14.2 mV and held there for the length T_refract = 3 ms of the absolute refractory period. Neuron parameters: V_resting = 0 V, I_background randomly chosen for each trial from the interval [13.5 nA, 14.5 nA]. V_thresh was set such that each neuron spiked at a rate of about 25 Hz. This resulted in a threshold voltage slightly above 15 mV. Synaptic parameters: synaptic currents were modeled as exponentially decaying currents with decay time constants τ_S = 3 ms (τ_S = 6 ms) for excitatory (inhibitory) synapses.
chosen from Gaussian distributions that were based on empirically found data for such connections. We also show that in some cases a less restrictive teacher forcing suffices, that
tolerates undesired firing of the neuron during training. The results of section 2 predict that
the temporal structure of correlations has a strong influence on the outcome of a learning
experiment. We used input spike trains with cross correlations that decay exponentially
with a correlation decay constant τ_cc.3 In experiment 1 we consider temporal correlations with τ_cc = 10 ms. Since such "broader" correlations are not problematic for STDP, sharper correlations (τ_cc = 6 ms) are considered in experiment 2.
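The membrane equation of footnote 2 can be integrated with a simple forward-Euler scheme. The sketch below omits the teacher current and leaves I_syn to the caller, so it is only an illustration of the neuron model, not of the full training setup; the step size and the fixed background current are assumptions.

```python
def simulate_lif(i_syn, t_end=0.1, dt=1e-4, tau_m=0.03, r_m=1e6,
                 v_rest=0.0, v_thresh=0.015, v_reset=0.0142,
                 t_refract=0.003, i_background=14e-9):
    """Forward-Euler integration of
    tau_m dV/dt = -(V - v_rest) + r_m * (i_syn(t) + i_background).

    i_syn: function t -> synaptic current in amperes.
    Returns the list of spike times; V is held at the reset value for
    the absolute refractory period after each spike.
    """
    v, spikes, ref_until = v_rest, [], -1.0
    steps = int(round(t_end / dt))
    for n in range(steps):
        t = n * dt
        if t < ref_until:        # absolute refractory period
            continue
        v += dt * (-(v - v_rest) + r_m * (i_syn(t) + i_background)) / tau_m
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            ref_until = t + t_refract
    return spikes
```

With the parameters of footnote 2, the background current alone drives V toward R_m · I_background ≈ 14 mV, just below threshold, so even a small constant synaptic current (e.g. 2 nA) is enough to make the neuron fire.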
Experiment 1 (correlated input with τ_cc = 10 ms): In this experiment, a leaky integrate-and-fire neuron received inputs from 100 dynamic synapses. 90% of these synapses were excitatory and 10% were inhibitory. For each excitatory synapse, the maximal efficacy w_max was chosen from a Gaussian distribution with mean 54 and SD 10.8, bounded by 54 ± 3 SD. The 90 excitatory inputs were divided into 9 groups of 10 synapses per group. Spike trains were correlated within groups with correlation coefficients between 0 and 0.8, whereas there were virtually no correlations between spike trains of different groups.4 Target weight vectors w* were chosen in the most adverse way: half of the weights of w* within each group was set to 0, the other half to its maximal value w_max (see Fig. 1C).
Figure 1: Learning a target weight vector w* on correlated Poisson inputs. A) Output spike train on test data after one hour of training (trained) compared to the target output (target). B) Evolution of the angle between weight vector w(t) and the vector w* that implements F in radians (angular error, solid line), and spike correlation (dashed line). C) Target weight vector w* consisting of elements with value 0 or the value w_max assigned to that synapse. D) Corresponding weights of the learned vector w(t) after 40 minutes of training. (All time data refer to simulated biological time.)
Before training, the weights of all excitatory synapses were initialized by randomly chosen small values. Weights of inhibitory synapses remained fixed throughout the experiment. Information about the target weight vector w* was given to the neuron only in the form of short current injections (1 μA for 0.2 ms) at those times when the neuron with the weight vector w* would have produced a spike. Learning was implemented as standard STDP (see rule 1) with parameters τ_+ = τ_− = 20 ms, W_+ = 0.45, W_−/W_+ = 1.05. Additional
inhibitory input was given to the neuron during training that reduced the occurrence of non-teacher-induced firing of the neuron (see text below).5
3
We constructed input spike trains with normalized cross correlations (see Equation 5) approximately given by C⁰_ij(s) = (c_ij / (2 τ_cc r)) e^(−|s|/τ_cc) between inputs i and j for a mean input rate of r = 20 Hz, a correlation coefficient c_ij, and a correlation decay constant of τ_cc = 10 ms.
4
The correlation coefficient c_ij for spike trains within group k consisting of 10 spike trains was set to c_ij = cc_k = 0.1 · (k − 1) for k = 1, ..., 9.
Two different performance measures
were used for analyzing the learning progress. The "spike correlation" measures for test inputs that were not used for training (but had been generated by the same process) the deviation between the output spike train produced by the target weight vector w* for this input, and the output spike train produced for the same input by the neuron with the current weight vector w(t).6 The angular error measures the angle between the current weight
vector w(t) and the target weight vector w*. The results are shown in Fig. 1. One can see that the deviation of the learned weight vector shown in panel D from the target weight vector w* (panel C) is very small, even for highly correlated groups of synapses with heterogeneous target weights. No significant changes in the results were observed for longer simulations (4 hours simulated biological time), showing stability of learning. On 20 trials (each with a new random distribution of maximal weights w_max, different initializations w(0) of the weight vector before learning, and new Poisson spike trains), a spike correlation of 0.83 ± 0.06 was achieved (angular error 6.8 ± 4.7 degrees). Note that learning is not only based on teacher spikes but also on non-teacher-induced firing. Therefore, strongly correlated groups of inputs tend to cause autonomous (i.e., not teacher-induced) firing of the neuron which results in weight increases for all weights within the corresponding group of synapses according to well-known results for STDP [8, 5]. Obviously this effect makes it quite hard to learn a target weight vector w* where half of the weights for each correlated group have value 0. The effect is reduced by the additional inhibitory input during training which reduces undesired firing. However, without this input a spike correlation of 0.79 ± 0.09 could still be achieved (angular error 14.1 ± 10 degrees).
Figure 2: A) Spike correlation achieved for correlated inputs (solid line). Some inputs were correlated with cc plotted on the x-axis. Also, as a control the spike correlation achieved by randomly drawn weight vectors is shown (dashed line, where half of the weights were set to w_max and the other weights were set to 0). B) Comparison between theory and simulation results for a leaky integrate-and-fire neuron and input correlations between 0.1 and 0.5 (τ_cc = 6 ms). Each cross (open circle) marks a trial where the target vector was learnable (not learnable) according to Theorem 2.1. The actual learning performance of STDP is plotted for each trial in terms of the weight error (x-axis) and 1 minus the spike correlation (y-axis).
Experiment 2 (testing the theoretical predictions for τ_cc = 6 ms): In order to evaluate the dependence on the correlation among inputs we proceeded in a setup similar to experiment 1. Four input groups consisting each of 10 input spike trains were constructed for which the correlations within each group had the same value cc, while the input spike trains to the other 50 excitatory synapses were uncorrelated. Again, half of the weights of w* within each correlated group (and within the uncorrelated group) was set to 0, the other half to a randomly chosen maximal value. The learning performance after 1 hour of training for 20 trials is plotted in Fig. 2A for 7 different values of the correlation cc (τ_cc = 6 ms) that is applied in 4 of the input groups (solid line).
5
We added 30 inhibitory synapses with weights drawn from a gamma distribution with mean 25 and standard deviation 7.5, that received additional 30 uncorrelated Poisson spike trains at 20 Hz.
6
For that purpose each spike in these two output spike trains was replaced by a Gaussian function with an SD of 5 ms. The spike correlation between both output spike trains was defined as the correlation between the resulting smooth functions of time (for segments of length 100 s).
In order to test the approximate validity of Theorem 2.1 for leaky integrate-and-fire neurons and dynamic synapses, we repeated the above experiment for input correlations
cc = 0.1, 0.2, 0.3, 0.4, and 0.5. For each correlation value, 20 learning trials (with different
target vectors) were simulated. For each trial we first checked whether the (randomly chosen) target vector w* was learnable according to the condition given in Theorem 2.1 (65%
of the 100 learning trials were classified as being learnable).7 The actual performance of
learning with STDP was evaluated after 50 minutes of training.8 The result is shown in Fig.
2B. It shows that the theoretical prediction of learnability or non-learnability for the case
of simpler neuron models and synapses from Theorem 2.1 translates in a biologically more
realistic scenario into a quantitative grading of the learning performance that can ultimately
be achieved with STDP.
Figure 3: Results of modulation of initial release probabilities U. A) Performance of U-learning for a generic learning task (see text). B) Twenty values of the target U vector (each component assumes its maximal possible value or the value 0). C) Corresponding U values after 42 minutes of training.
Experiment 3 (Modulation of initial release probabilities U by STDP): Experimental
data from [9] suggest that synaptic plasticity does not change the uniform scaling of the
amplitudes of EPSPs resulting from a presynaptic spike train (i.e., the parameter w), but
rather redistributes the sum of their amplitudes. If one assumes that STDP changes the parameter U that determines the synaptic release probability for the first spike in a spike train,
whereas the weight w remains unchanged, then the same experimental data that support the
classical rule for STDP also support the following rule for changing U:
    U_new = min{U_max, U_old + U_+ · e^(−Δt/τ_+)}   if Δt > 0
    U_new = max{0, U_old − U_− · e^(Δt/τ_−)}        if Δt ≤ 0        (8)

with suitable nonnegative parameters U_max, U_+, U_−, τ_+, τ_−.
Fig. 3 shows results of an experiment where U was modulated with rule (8) (similar to experiment 1, but with uncorrelated inputs). 20 repetitions of this experiment yielded after 42 minutes of training the following results: spike correlation 0.88 ± 0.036, angular error 27.9 ± 3.7 degrees, for U_+ = 0.0012, U_−/U_+ = 1.055. Apparently the output spike train is less sensitive to changes in the values of U than to changes in w. Consequently, since
only the behavior of a neuron with the vector U* but not the vector U* itself is made available to the neuron during training, the resulting correlation between target and actual output spike trains is quite high, whereas the angular error between U* and U(t), as well as the average deviation in U, remain rather large.
7
We had chosen a response kernel of the form ε(s) = (1/(τ_1 − τ_2)) (e^(−s/τ_1) − e^(−s/τ_2)) with τ_1 = 2 ms and τ_2 = 1 ms (least mean squares fit of the double exponential to the peri-stimulus-time histogram (PSTH) of the neuron, which reflects the probability of spiking as a function of time s since an input spike), and calculated the window correlations c⁺_ij and c⁻_ij numerically.
8
To guarantee the best possible performance for each learning trial, training was performed on 27 different values for W_−/W_+ between 1.02 and 1.15.
We also repeated experiment 1 (correlated Poisson inputs) with rule (8) for U-learning. 20 repetitions with different target weights and different initial conditions yielded after 35 minutes of training: spike correlation 0.75 ± 0.08, angular error 39.3 ± 4.8 degrees, for U_+ = 8 × 10⁻⁴, U_−/U_+ = 1.09.
4 Discussion
The main conclusion of this article is that for many common distributions of input spikes
a spiking neuron can learn with STDP and teacher-induced input currents any map from
input spike trains to output spike trains that it could possibly implement in a stable manner.
We have shown in section 2 that a mathematical average case analysis can be carried out
for supervised learning with STDP. This theoretical analysis produces the first criterion
that allows us to predict whether supervised learning with STDP will succeed in spite of
correlations among Poisson input spike trains. For the special case of "sharp correlations"
(i.e. when the cross correlations vanish for time shifts larger than the synaptic delay) this
criterion can be formulated in terms of linear separability of the rows of a correlation matrix
related to the spike input, and its mathematical form is therefore reminiscent of the well-known condition for learnability in the case of perceptron learning. In this sense Corollary
2.1 can be viewed as an analogon of the Perceptron Convergence Theorem for spiking
neurons with STDP.
Furthermore we have shown that an alternative interpretation of STDP where one assumes
that it modulates the initial release probabilities U of dynamic synapses, rather than their
scaling factors w, gives rise to very satisfactory convergence results for learning.
Acknowledgment: We would like to thank Yves Fregnac, Wulfram Gerstner, and especially Henry Markram for inspiring discussions.
References
[1] L. F. Abbott and S. B. Nelson. Synaptic plasticity: taming the beast. Nature Neurosci., 3:1178–1183, 2000.
[2] Y. Fregnac, D. Shulz, S. Thorpe, and E. Bienenstock. A cellular analogue of visual cortical plasticity. Nature, 333(6171):367–370, 1988.
[3] D. Debanne, D. E. Shulz, and Y. Fregnac. Activity dependent regulation of on- and off-responses in cat visual cortical receptive fields. Journal of Physiology, 508:523–548, 1998.
[4] R. Kempter, W. Gerstner, and J. L. van Hemmen. Intrinsic stabilization of output rates by spike-based hebbian learning. Neural Computation, 13:2709–2741, 2001.
[5] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through non-linear temporally asymmetric hebbian plasticity. Journal of Neurosci., 23:3697–3714, 2003.
[6] R. Kempter, W. Gerstner, and J. L. van Hemmen. Hebbian learning and spiking neurons. Phys. Rev. E, 59(4):4498–4514, 1999.
[7] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323–5328, 1998.
[8] S. Song, K. D. Miller, and L. F. Abbott. Competitive hebbian learning through spike-timing dependent synaptic plasticity. Nature Neuroscience, 3:919–926, 2000.
[9] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807–810, 1996.
Fast Online Policy Gradient Learning
with SMD Gain Vector Adaptation
Nicol N. Schraudolph Jin Yu Douglas Aberdeen
Statistical Machine Learning, National ICT Australia, Canberra
{nic.schraudolph,douglas.aberdeen}@nicta.com.au
Abstract
Reinforcement learning by direct policy gradient estimation is attractive
in theory but in practice leads to notoriously ill-behaved optimization
problems. We improve its robustness and speed of convergence with
stochastic meta-descent, a gain vector adaptation method that employs
fast Hessian-vector products. In our experiments the resulting algorithms
outperform previously employed online stochastic, offline conjugate, and
natural policy gradient methods.
1 Introduction
Policy gradient reinforcement learning (RL) methods train controllers by estimating the
gradient of a long-term reward measure with respect to the parameters of the controller [1].
The advantage of policy gradient methods, compared to value-based RL, is that we avoid
the often redundant step of accurately estimating a large number of values. Policy gradient
methods are particularly appealing when large state spaces make representing the exact
value function infeasible, or when partial observability is introduced. However, in practice
policy gradient methods have shown slow convergence [2], not least due to the stochastic
nature of the gradients being estimated.
The stochastic meta-descent (SMD) gain adaptation algorithm [3, 4] can considerably accelerate the convergence of stochastic gradient descent. In contrast to other gain adaptation
methods, SMD copes well not only with stochasticity, but also with non-i.i.d. sampling of
observations, which necessarily occurs in RL. In this paper we derive SMD in the context
of policy gradient RL, and obtain over an order of magnitude improvement in convergence
rate compared to previously employed policy gradient algorithms.
2 Stochastic Meta-Descent
2.1 Gradient-based gain vector adaptation
Let R be a scalar objective function we wish to maximize with respect to its adaptive parameter vector θ ∈ ℝⁿ, given a sequence of observations x_t ∈ X at time t = 1, 2, . . . Where R is not available or expensive to compute, we use the stochastic approximation R_t : ℝⁿ × X → ℝ of R instead, and maximize the expectation E_t[R_t(θ_t, x_t)]. Assuming that R_t is twice differentiable w.r.t. θ, with gradient and Hessian given by
$$g_t = \left.\frac{\partial}{\partial\theta} R_t(\theta, x_t)\right|_{\theta=\theta_t} \quad\text{and}\quad H_t = \left.\frac{\partial^2}{\partial\theta\,\partial\theta^\top} R_t(\theta, x_t)\right|_{\theta=\theta_t}, \qquad (1)$$
respectively, we maximize E_t[R_t(θ)] by the stochastic gradient ascent
$$\theta_{t+1} = \theta_t + \eta_t \odot g_t, \qquad (2)$$
where ⊙ denotes element-wise (Hadamard) multiplication. The gain vector η_t ∈ (ℝ⁺)ⁿ serves as a diagonal conditioner, providing each element of θ with its own positive gradient step size. We adapt η by a simultaneous meta-level gradient ascent in the objective R_t. A
step size. We adapt ? by a simultaneous meta-level gradient ascent in the objective Rt . A
straightforward implementation of this idea is the delta-delta algorithm [5], which would
update ? via
?Rt+1 (?t+1 )
?Rt+1 (?t+1 ) ??t+1
= ?t + ?
?
= ?t + ?gt+1 ? gt , (3)
??t
??t+1
??t
where μ ∈ ℝ is a scalar meta-step size. In a nutshell, gains are decreased where a negative
autocorrelation of the gradient indicates oscillation about a local minimum, and increased
otherwise. Unfortunately such a simplistic approach has several problems: Firstly, (3)
allows gains to become negative. This can be avoided by updating ? multiplicatively, e.g.
via the exponentiated gradient algorithm [6].
Secondly, delta-delta's cure is worse than the disease: individual gains are meant to address
ill-conditioning, but (3) actually squares the condition number. The autocorrelation of the
gradient must therefore be normalized before it can be used. A popular (if extreme) form of
normalization is to consider only the sign of the autocorrelation. Such sign-based methods
[5, 7–9], however, do not cope well with stochastic approximation of the gradient since the
non-linear sign function does not commute with the expectation operator [10]. More recent
algorithms [3, 4, 10] therefore use multiplicative (hence linear) normalization factors to
condition the meta-level update.
Finally, (3) fails to take into account that gain changes affect not only the current, but also
future parameter updates. In recognition of this shortcoming, gt in (3) is often replaced
with a running average of past gradients. Though such ad-hoc smoothing does improve
performance, it does not properly capture long-term dependences, the average still being
one of immediate, single-step effects. By contrast, Sutton [11] modeled the long-term effect
of gains on future parameter values in a linear system by carrying the relevant partials
forward in time, and found that the resulting gain adaptation can outperform a less than
perfectly matched Kalman filter. Stochastic meta-descent (SMD) extends this approach to
arbitrary twice-differentiable nonlinear systems, takes into account the full Hessian instead
of just the diagonal, and applies a decay to the partials being carried forward.
2.2 The SMD Algorithm
SMD employs two modifications to address the problems described above: it adjusts gains in log-space, and optimizes over an exponentially decaying trace of gradients. Thus ln η is updated as follows:
$$\ln\eta_{t+1} = \ln\eta_t + \mu \sum_{i=0}^{t} \lambda^i\, \frac{\partial R(\theta_{t+1})}{\partial \ln\eta_{t-i}} = \ln\eta_t + \mu\,\frac{\partial R(\theta_{t+1})}{\partial\theta_{t+1}} \odot \sum_{i=0}^{t} \lambda^i\, \frac{\partial\theta_{t+1}}{\partial \ln\eta_{t-i}} =: \ln\eta_t + \mu\, g_{t+1} \odot v_{t+1}, \qquad (4)$$
where the vector v ∈ ℝⁿ characterizes the long-term dependence of the system parameters on their gain history over a time scale governed by the decay factor 0 ≤ λ ≤ 1. Element-wise exponentiation of (4) yields the desired multiplicative update
$$\eta_{t+1} = \eta_t \odot \exp(\mu\, g_{t+1} \odot v_{t+1}) \approx \eta_t \odot \max(\tfrac{1}{2},\, 1 + \mu\, g_{t+1} \odot v_{t+1}). \qquad (5)$$
The linearization $e^u \approx \max(\tfrac{1}{2}, 1+u)$ eliminates an expensive exponentiation for each gain update, improves its robustness by reducing the effect of outliers ($|u| \gg 0$), and ensures that η remains positive. To compute the gradient trace v efficiently, we expand θ_{t+1} in terms of its recursive definition (2):
$$v_{t+1} = \sum_{i=0}^{t} \lambda^i\, \frac{\partial\theta_{t+1}}{\partial \ln\eta_{t-i}} = \sum_{i=0}^{t} \lambda^i\, \frac{\partial\theta_t}{\partial \ln\eta_{t-i}} + \sum_{i=0}^{t} \lambda^i\, \frac{\partial(\eta_t \odot g_t)}{\partial \ln\eta_{t-i}} \approx \lambda v_t + \eta_t \odot g_t + \eta_t \odot \left[ \frac{\partial g_t}{\partial\theta_t} \sum_{i=0}^{t} \lambda^i\, \frac{\partial\theta_t}{\partial \ln\eta_{t-i}} \right] \qquad (6)$$
Noting that ∂g_t/∂θ_t is the Hessian H_t of R_t(θ_t), we arrive at the simple iterative update
$$v_{t+1} = \lambda v_t + \eta_t \odot (g_t + \lambda H_t v_t)\,; \qquad v_0 = 0. \qquad (7)$$
Although the Hessian of a system with n parameters has O(n²) entries, efficient indirect methods from algorithmic differentiation are available to compute its product with an arbitrary vector in the same time as 2–3 gradient evaluations [12, 13]. To improve stability,
SMD employs an extended Gauss-Newton approximation of Ht for which a similar (even
faster) technique is available [4]. An iteration of SMD, comprising (5), (2), and (7),
thus requires less than 3 times the floating-point operations of simple gradient ascent. The
extra computation is typically more than compensated for by the faster convergence of
SMD. Fast convergence minimizes the number of expensive world interactions required,
which in RL is typically of greater concern than computational cost.
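As a concrete illustration (our own toy construction, not from the paper), the core SMD recursion of (5), (2), and (7) can be run on a noisy, ill-conditioned quadratic objective, where the Hessian-vector product H_t v is available in closed form. All constants below are arbitrary choices for the sketch.

```python
# Toy SMD run (eqs. 5, 2, 7) maximizing a noisy ill-conditioned quadratic
# R_t(theta) = -1/2 * sum_i c_i * (theta_i - b_i + noise_i)^2.
# The Hessian here is H = -diag(c), so H v is exact and cheap.
import random

random.seed(0)
c = [1.0, 100.0]            # curvatures differ by a factor of 100
b = [1.0, 1.0]              # location of the optimum
n = len(c)
theta = [0.0] * n
eta = [0.005] * n           # initial gain vector
v = [0.0] * n               # gradient trace of eq. (7)
mu, lam = 0.02, 0.99        # meta-step size and trace decay

for t in range(3000):
    noise = [random.gauss(0.0, 0.01) for _ in range(n)]
    g = [-c[i] * (theta[i] - b[i] + noise[i]) for i in range(n)]  # stochastic gradient
    Hv = [-c[i] * v[i] for i in range(n)]                         # exact H_t v
    eta = [eta[i] * max(0.5, 1.0 + mu * g[i] * v[i]) for i in range(n)]  # eq. (5)
    theta = [theta[i] + eta[i] * g[i] for i in range(n)]                 # eq. (2)
    v = [lam * v[i] + eta[i] * (g[i] + lam * Hv[i]) for i in range(n)]   # eq. (7)
```

Despite the 100-fold curvature spread, both coordinates of θ end up near b, with the gain for the flat coordinate adapted upward and all gains kept positive by the max(½, ·) linearization.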
3 Policy Gradient Reinforcement Learning
A Markov decision process (MDP) consists of a finite¹ set of states s ∈ S of the world, actions a ∈ A available to the agent in each state, and a (possibly stochastic) reward function r(s) for each state s. In a partially observable MDP (POMDP), the controller sees only an observation x ∈ X of the current state, sampled stochastically from an unknown distribution P(x|s). Each action a determines a stochastic matrix P(a) = [P(s′|s, a)] of transition probabilities from state s to state s′ given action a. The methods discussed in this paper do not assume explicit knowledge of P(a) or of the observation process. All policies are stochastic, with a probability of choosing action a given state s, and parameters θ ∈ ℝⁿ of P(a|θ, s). The evolution of the state s is Markovian, governed by an |S| × |S| transition probability matrix P(θ) = [P(s′|θ, s)] with entries given by
$$P(s'|\theta, s) = \sum_{a \in A} P(a|\theta, s)\, P(s'|s, a). \qquad (8)$$
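To illustrate (with made-up numbers of our own), eq. (8) simply mixes each action's transition row with the policy's action probabilities; every row of the resulting P(θ) is again a probability distribution over next states:

```python
# Eq. (8) on a toy 2-state, 2-action example (invented numbers):
# mix each action's transition row by the policy's action probabilities.
trans = {0: [[0.9, 0.1], [0.2, 0.8]],   # P(s'|s, a=0)
         1: [[0.3, 0.7], [0.6, 0.4]]}   # P(s'|s, a=1)
policy = [[0.5, 0.5], [0.25, 0.75]]     # P(a|theta, s) for s = 0, 1

P = [[sum(policy[s][a] * trans[a][s][s2] for a in trans) for s2 in range(2)]
     for s in range(2)]
# Each row of P sums to 1, as required of a stochastic matrix.
```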
3.1 GPOMDP Monte Carlo estimates of gradient and Hessian
GPOMDP is an infinite-horizon policy gradient method [1] to compute the gradient of the long-term average reward
$$R(\theta) := \lim_{T\to\infty} \frac{1}{T}\, E_\theta\!\left[ \sum_{t=1}^{T} r(s_t) \right], \qquad (9)$$
with respect to the policy parameters θ. The expectation E_θ is over the distribution of state trajectories {s₀, s₁, . . .} induced by P(θ).
Theorem 1 ([1]) Let I be the identity matrix, and u a column vector of ones. The gradient of the long-term average reward w.r.t. a policy parameter θᵢ is
$$\nabla_{\theta_i} R(\theta) = \pi(\theta)^\top\, \nabla_{\theta_i} P(\theta)\, \left[ I - P(\theta) + u\,\pi(\theta)^\top \right]^{-1} r, \qquad (10)$$
where π(θ) is the stationary distribution of states induced by θ.
¹For uncountably infinite state spaces, the derivation becomes more complex without substantially altering the resulting algorithms.
Note that (10) requires knowledge of the underlying transition probabilities P(θ), and the inversion of a potentially large matrix. The GPOMDP algorithm instead computes a Monte-Carlo approximation of (10): the agent interacts with the environment, producing an observation, action, reward sequence {x₁, a₁, r₁, x₂, . . . , x_T, a_T, r_T}.² Under mild technical assumptions, including ergodicity and bounding all the terms involved, Baxter and Bartlett [1] obtain
$$\hat{\nabla}_\theta R = \frac{1}{T} \sum_{t=0}^{T-1} \nabla_\theta \ln P(a_t|\theta, s_t) \sum_{\tau=t+1}^{T} \beta^{\tau-t-1}\, r(s_\tau), \qquad (11)$$
where a discount factor β ∈ [0, 1) implicitly assumes that rewards are exponentially more likely to be due to recent actions. Without it, rewards would be assigned over a potentially infinite horizon, resulting in gradient estimates with infinite variance. As β decreases, so does the variance, but the bias of the gradient estimate increases [1]. In practice, (11) is implemented efficiently via the discounted eligibility trace
$$e_t = \beta\, e_{t-1} + \nabla_t, \quad \text{where } \nabla_t := \nabla_\theta P(a_t|\theta, s_t)\,/\,P(a_t|\theta, s_t). \qquad (12)$$
Now g_t = r_t e_t is the gradient of R(θ) arising from assigning the instantaneous reward to all log action gradients, where β gives exponentially more credit to recent actions. Likewise, Baxter and Bartlett [1] give the Monte Carlo estimate of the Hessian as H_t = r_t (E_t + e_t e_t^⊤), using an eligibility trace matrix
$$E_t = \beta\, E_{t-1} + G_t - \nabla_t \nabla_t^\top, \quad \text{where } G_t := \nabla^2_\theta P(a_t|\theta, s_t)\,/\,P(a_t|\theta, s_t). \qquad (13)$$
Maintaining E would be O(n²), thus computationally expensive for large policy parameter spaces. Noting that SMD only requires the product of H_t with a vector v, we instead use
$$H_t v = r_t\, \left[\, d_t + e_t\, (e_t^\top v) \,\right], \quad \text{where } d_t = \beta\, d_{t-1} + G_t v - \nabla_t (\nabla_t^\top v) \qquad (14)$$
is an eligibility trace vector that can be maintained in O(n). We describe the efficient computation of G_t v in (14) for a specific action selection method in Section 3.3 below.
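The trace recursions can be checked directly on synthetic data (random vectors and matrices standing in for ∇_t and G_t; nothing here comes from a real POMDP). With a fixed probe vector v, the O(n) trace d_t of (14) reproduces E_t v from the O(n²) matrix trace (13) exactly; in SMD itself v_t changes each step, so d_t then tracks E_t v_t only approximately.

```python
# Sanity check of the trace recursions (12)-(14) on synthetic data.
import random

random.seed(1)
n, beta, T = 3, 0.8, 50
v = [random.gauss(0, 1) for _ in range(n)]    # fixed probe vector

e = [0.0] * n                                 # eligibility trace, eq. (12)
E = [[0.0] * n for _ in range(n)]             # O(n^2) matrix trace, eq. (13)
d = [0.0] * n                                 # O(n) vector trace, eq. (14)
grads = []
for t in range(T):
    grad = [random.gauss(0, 1) for _ in range(n)]                 # stands in for grad_t
    G = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]  # stands in for G_t
    grads.append(grad)
    e = [beta * e[i] + grad[i] for i in range(n)]
    E = [[beta * E[i][j] + G[i][j] - grad[i] * grad[j] for j in range(n)]
         for i in range(n)]
    Gv = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
    gv = sum(grad[j] * v[j] for j in range(n))
    d = [beta * d[i] + Gv[i] - grad[i] * gv for i in range(n)]

# e_T is the beta-discounted sum of all past gradients ...
e_direct = [sum(beta ** (T - 1 - t) * grads[t][i] for t in range(T))
            for i in range(n)]
# ... and the O(n) trace d reproduces E v without ever forming E.
Ev = [sum(E[i][j] * v[j] for j in range(n)) for i in range(n)]
```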
3.2 GPOMDP-based optimization algorithms
Baxter et al. [2] proposed two optimization algorithms using GPOMDP's policy gradient estimates g_t: OLPOMDP is a simple online stochastic gradient descent (2) with scalar gain η_t. Alternatively, CONJPOMDP performs Polak-Ribière conjugation of search directions, using a noise-tolerant line search to find the approximately best scalar step size in a given search direction. Since conjugate gradient methods are very sensitive to noise [14], CONJPOMDP must average g_t over many steps to obtain a reliable gradient measurement; this makes the algorithm inherently inefficient (cf. Section 4).
OLPOMDP, on the other hand, is robust to noise but converges only very slowly. We can, however, employ SMD's gain vector adaptation to greatly accelerate it while retaining the benefits of high noise tolerance and online learning. Experiments (Section 4) show that the resulting SMDPOMDP algorithm can greatly outperform OLPOMDP and CONJPOMDP.
Kakade [15] has applied natural gradient [16] to GPOMDP, premultiplying the policy gradient by the inverse of the online estimate
$$F_t = \left(1 - \tfrac{1}{t}\right) F_{t-1} + \tfrac{1}{t}\left( \nabla_t \nabla_t^\top + \varepsilon I \right) \qquad (15)$$
of the Fisher information matrix for the parameter update: θ_{t+1} = θ_t + η₀ · r_t F_t⁻¹ e_t. This approach can yield very fast convergence on small problems, but in our experience does not scale well at all to larger, more realistic tasks; see our experiments in Section 4.
²We use r_t as shorthand for r(s_t), making it clear that only the reward value is known, not the underlying state s_t.
3.3 Softmax action selection
For discrete action spaces, a vector of action probabilities z_t := P(a_t|y_t) can be generated from the output y_t := f(θ_t, x_t) of a parameterised function f : ℝⁿ × X → ℝ^{|A|} (such as a neural network) via the softmax function:
$$z_t := \mathrm{softmax}(y_t) = \frac{e^{y_t}}{\sum_{m=1}^{|A|} [e^{y_t}]_m}. \qquad (16)$$
Given action a_t ∼ z_t, GPOMDP's instantaneous log-action gradient w.r.t. y is then
$$\bar{g}_t := \nabla_y\, [z_t]_{a_t} / [z_t]_{a_t} = u_{a_t} - z_t, \qquad (17)$$
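The identity in (17) is easy to confirm numerically (arbitrary logits of our own choosing): the gradient of the log-probability of the chosen action under the softmax equals u_a − z, as a central finite difference shows.

```python
# Check eq. (17): grad_y ln [softmax(y)]_a = u_a - z, by central finite differences.
import math

def softmax(y):
    m = max(y)
    ex = [math.exp(t - m) for t in y]
    s = sum(ex)
    return [x / s for x in ex]

y = [0.3, -1.2, 0.5]        # arbitrary logits
a = 2                       # chosen action
z = softmax(y)
analytic = [(1.0 if i == a else 0.0) - z[i] for i in range(len(y))]   # u_a - z

eps = 1e-6
numeric = []
for i in range(len(y)):
    yp = list(y); yp[i] += eps
    ym = list(y); ym[i] -= eps
    numeric.append((math.log(softmax(yp)[a]) - math.log(softmax(ym)[a])) / (2 * eps))
```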
where u_i is the unity vector in direction i. The action gradient w.r.t. θ is obtained by backpropagating ḡ_t through f's adjoint system [13], performing an efficient multiplication by the transposed Jacobian of f. The resulting gradient ∇_t := J_f^⊤ ḡ_t is then accumulated in the eligibility trace (12). GPOMDP's instantaneous Hessian for softmax action selection is
$$\tilde{H}_t := \nabla^2_y\, [z_t]_{a_t} / [z_t]_{a_t} = (u_{a_t} - z_t)(u_{a_t} - z_t)^\top + z_t z_t^\top - \mathrm{diag}(z_t). \qquad (18)$$
It is indefinite but reasonably well-behaved: the Gerschgorin circle theorem can be employed to show that its eigenvalues must all lie in the interval [−¼, 2]. Furthermore, its expectation over possible actions is zero:
$$E_{z_t}(\tilde{H}_t) = \left[\, \mathrm{diag}(z_t) - 2 z_t z_t^\top + z_t z_t^\top \,\right] + z_t z_t^\top - \mathrm{diag}(z_t) = 0. \qquad (19)$$
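The cancellation in (19) can be verified directly for any action distribution (the one below is an arbitrary choice of ours): averaging the per-action Hessians of (18) with weights z gives the zero matrix.

```python
# Check eq. (19): the expectation of the softmax Hessian (18) under z vanishes.
import math

def softmax(y):
    m = max(y)
    ex = [math.exp(t - m) for t in y]
    s = sum(ex)
    return [x / s for x in ex]

z = softmax([0.7, -0.4, 1.1])   # arbitrary action distribution
n = len(z)

def H_tilde(a):
    """(u_a - z)(u_a - z)^T + z z^T - diag(z), per eq. (18)."""
    d = [(1.0 if i == a else 0.0) - z[i] for i in range(n)]
    return [[d[i] * d[j] + z[i] * z[j] - (z[i] if i == j else 0.0)
             for j in range(n)] for i in range(n)]

expH = [[sum(z[a] * H_tilde(a)[i][j] for a in range(n)) for j in range(n)]
        for i in range(n)]
```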
The extended Gauss-Newton matrix-vector product [4] employed by SMD is then given by
$$G_t v_t := J_f^\top\, \tilde{H}_t\, J_f\, v_t, \qquad (20)$$
where the multiplication by the Jacobian of f (resp. its transpose) is implemented efficiently by propagating v_t through f's tangent linear (resp. adjoint) system [13].
Algorithm 1 SMDPOMDP with softmax action selection
1. Given (a) an ergodic POMDP with observations x_t ∈ X, actions a_t ∈ A, bounded rewards r_t ∈ ℝ, and softmax action selection
   (b) a differentiable parametric map f : ℝⁿ × X → ℝ^{|A|} (neural network)
   (c) f's adjoint (u ↦ J_f^⊤ u) and tangent linear (v ↦ J_f v) maps
   (d) free parameters: μ ∈ ℝ⁺; λ, β ∈ [0, 1]; η₀ ∈ ℝⁿ₊; θ₁ ∈ ℝⁿ
2. Initialize in ℝⁿ: e₀ = d₀ = v₀ = 0
3. For t = 1 to ∞:
   (a) interact with POMDP:
       i. observe feature vector x_t
       ii. compute z_t := softmax(f(θ_t, x_t))
       iii. perform action a_t ∼ z_t
       iv. observe reward r_t
   (b) maintain eligibility traces:
       i. ∇_t := J_f^⊤ (u_{a_t} − z_t)
       ii. p_t := J_f v_t
       iii. q_t := (u_{a_t} − z_t)(∇_t^⊤ v_t) + z_t (z_t^⊤ p_t) − z_t ⊙ p_t
       iv. e_t = β e_{t−1} + ∇_t
       v. d_t = β d_{t−1} + J_f^⊤ q_t − ∇_t (∇_t^⊤ v_t)
   (c) update SMD parameters:
       i. η_t = η_{t−1} ⊙ max(½, 1 + μ r_t e_t ⊙ v_t)
       ii. θ_{t+1} = θ_t + r_t η_t ⊙ e_t
       iii. v_{t+1} = λ v_t + r_t η_t ⊙ [(1 + λ e_t^⊤ v_t) e_t + λ d_t]
Fig. 1: Left: Baxter et al.'s simple 3-state POMDP. States are labelled with their observable features and instantaneous reward r; arrows indicate the 80% likely transition for the first (solid) resp. second (dashed) action. Right: our modified, more difficult 3-state POMDP.
4 Experiments
4.1 Simple Three-State POMDP
Fig. 1 (left) depicts the simple 3-state POMDP used by Baxter et al. [2, Tables 1&2]. Of the two possible transitions from each state, the preferred one occurs with 80% probability, the other with 20%. The preferred transition is determined by the action of a simple probabilistic adaptive controller that receives two state-dependent feature values as input, and is trained to maximize the expected average reward by policy gradient methods.
Using the original code of Baxter et al. [2], we replicated their experimental results for the OLPOMDP and CONJPOMDP algorithms on this simple POMDP. We can accurately reproduce all essential features of their graphed results on this problem [2, Figures 7&8]. We then implemented SMDPOMDP (Algorithm 1), and ran a comparison of algorithms, using the best free parameter settings found by Baxter et al. [2] (in particular: β = 0, η₀ = 1), and λ = μ = 1 for SMDPOMDP. We always match random seeds across algorithms.
Baxter et al. [2] collect and plot results for CONJPOMDP in terms of its T parameter, which specifies the number of Markov chain iterations per gradient evaluation. For a fair comparison of convergence speed we added code to record the total number of Markov chain iterations consumed by CONJPOMDP, and plot performance for all three algorithms in those terms, with error bars along both axes for CONJPOMDP.
The results are shown in Fig. 2 (left), averaged over 500 runs. While early on CONJPOMDP on average reaches a given level of performance about three times faster than OLPOMDP, it does so at the price of far higher variance. Moreover, CONJPOMDP is the only algorithm that fails to asymptotically approach optimal performance (R = 0.8; Fig. 2 left, inset).
Once its step size adaptation gets going, SMDPOMDP converges asymptotically to the optimal policy about three times faster than OLPOMDP in terms of Markov chain iterations, making the two algorithms roughly equal in terms of computational expense.
Fig. 2: Left: The POMDP of Fig. 1 (left) is easy to learn. CONJPOMDP converges faster but to asymptotically inferior solutions (see inset) than the two online algorithms. Right: SMDPOMDP outperforms OLPOMDP and CONJPOMDP on the difficult POMDP of Fig. 1 (right). Natural policy gradient has rapid early convergence but diverges asymptotically.
CONJPOMDP on average performs less than two iterations of conjugate gradient in each run. While this is perfectly understandable, since the controller only has two trainable parameters, it bears keeping in mind that the performance of CONJPOMDP here is almost entirely governed by the line search rather than the conjugation of search directions.
4.2 Modified Three-State POMDP
The three-state POMDP employed by Baxter et al. [2] has the property that greedy maximization of instantaneous reward leads to the optimal policy. Non-trivial temporal credit assignment, the hallmark of reinforcement learning, is not needed. The best results are obtained with the eligibility trace turned off (β = 0). To create a more challenging problem, we rearranged the POMDP's state transitions and reward structure so that the instantaneous reward becomes deceptive (Fig. 1, right). We also multiplied one state feature by 18 to create an ill-conditioned input to the controller, while leaving the actions and relative transition probabilities (80% resp. 20%) unchanged. In our modified POMDP, the high-reward state can only be reached through an intermediate state with negative reward.
Fig. 2 (right) shows our experimental results for this harder POMDP, averaged over 100 runs. Free parameters were tuned to θ₁ ∈ [−0.1, 0.1], β = 0.6, η₀ = 0.001; T = 10⁵ for CONJPOMDP; μ = 0.002, λ = 1 for SMDPOMDP. CONJPOMDP now performs the worst, which is expected because conjugation of directions is known to collapse in the presence of noise [14]. SMDPOMDP converges about 20 times faster than OLPOMDP because its adjustable gains compensate for the ill-conditioned input. Kakade's natural gradient (using ε = 0.01) performs extremely well early on, taking 2–3 times fewer iterations than SMDPOMDP to reach optimal performance (R = 2.6). It does, however, diverge asymptotically.
4.3 Puck World
We also implemented the Puck World benchmark of Baxter et al. [2], with the free parameter settings θ₁ ∈ [−0.1, 0.1], β = 0.95, η₀ = 2 × 10⁻⁶; T = 10⁶ for CONJPOMDP; μ = 100, λ = 0.999 for SMDPOMDP; ε = 0.01 for natural policy gradient. To improve its stability, we modified SMD here to track instantaneous log-action gradients ∇_t instead of noisy r_t e_t estimates of ∇_θ R. CONJPOMDP used a quadratic weight penalty of initially 0.5, with the adaptive reduction schedule described by Baxter et al. [2, page 369]; the online algorithms did not require a weight penalty.
Fig. 3 shows our results averaged over 100 runs, except for natural policy gradient where only a single typical run is shown. This is because its O(n³) time complexity per iteration³ makes natural policy gradient intolerably slow for this task, where n = 88. Moreover, its convergence is quite poor here in terms of the number of iterations required as well.
Fig. 3: The action-gradient version of SMDPOMDP yields better asymptotic results on PuckWorld than OLPOMDP; CONJPOMDP is inefficient; natural policy gradient even more so.
³The Sherman-Morrison formula cannot be used here because of the diagonal term in (15).
CONJPOMDP is again inferior to the best online algorithms by over an order of magnitude. Early on, SMDPOMDP matches OLPOMDP, but then reaches superior solutions with small variance. SMDPOMDP-trained controllers achieve a long-term average reward of −6.5, significantly above the optimum of −8 hypothesized by Baxter et al. [2, page 369] based on their experiments with CONJPOMDP.
5 Conclusion
On several non-trivial RL problems we find that our SMDPOMDP consistently outperforms OLPOMDP, which in turn outperforms CONJPOMDP. Natural policy gradient can converge rapidly, but is too unstable and computationally expensive for all but very small controllers.
Acknowledgements
We are indebted to John Baxter for his code and helpful comments. National ICT Australia is funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council. This work is also supported by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778.
References
[1] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
[2] J. Baxter, P. L. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. Journal of Artificial Intelligence Research, 15:351–381, 2001.
[3] N. N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Proc. Intl. Conf. Artificial Neural Networks, pages 569–574, Edinburgh, Scotland, 1999. IEE, London.
[4] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723–1738, 2002.
[5] R. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1:295–307, 1988.
[6] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. In Proc. 27th Annual ACM Symposium on Theory of Computing, pages 209–218. ACM Press, New York, NY, 1995.
[7] T. Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3:561–573, 1990.
[8] F. M. Silva and L. B. Almeida. Acceleration techniques for the backpropagation algorithm. In L. B. Almeida and C. J. Wellekens, editors, Neural Networks: Proc. EURASIP Workshop, volume 412 of Lecture Notes in Computer Science, pages 110–119. Springer Verlag, 1990.
[9] M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proc. Intl. Conf. Neural Networks, pages 586–591. IEEE, 1993.
[10] L. B. Almeida, T. Langlois, J. D. Amaral, and A. Plakhov. Parameter adaptation in stochastic optimization. In D. Saad, editor, On-Line Learning in Neural Networks, Publications of the Newton Institute, chapter 6, pages 111–134. Cambridge University Press, 1999.
[11] R. S. Sutton. Gain adaptation beats least squares? In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems, pages 161–166, 1992.
[12] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160, 1994.
[13] A. Griewank. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Frontiers in Applied Mathematics. SIAM, Philadelphia, 2000.
[14] N. N. Schraudolph and T. Graepel. Combining conjugate direction methods with stochastic approximation of gradients. In C. M. Bishop and B. J. Frey, editors, Proc. 9th Intl. Workshop Artificial Intelligence and Statistics, pages 7–13, Key West, Florida, 2003.
[15] S. Kakade. A natural policy gradient. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1531–1538. MIT Press, 2002.
[16] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
Temporal Abstraction
in Temporal-difference Networks
Richard S. Sutton, Eddie J. Rafols, Anna Koop
Department of Computing Science
University of Alberta
Edmonton, AB, Canada T6G 2E8
{sutton,erafols,anna}@cs.ualberta.ca
Abstract
We present a generalization of temporal-difference networks to include temporally abstract options on the links of the question network.
Temporal-difference (TD) networks have been proposed as a way of representing and learning a wide variety of predictions about the interaction
between an agent and its environment. These predictions are compositional in that their targets are defined in terms of other predictions, and
subjunctive in that they are about what would happen if an action
or sequence of actions were taken. In conventional TD networks, the
inter-related predictions are at successive time steps and contingent on
a single action; here we generalize them to accommodate extended time
intervals and contingency on whole ways of behaving. Our generalization is based on the options framework for temporal abstraction. The
primary contribution of this paper is to introduce a new algorithm for
intra-option learning in TD networks with function approximation and eligibility traces. We present empirical examples of our algorithm?s effectiveness and of the greater representational expressiveness of temporallyabstract TD networks.
The primary distinguishing feature of temporal-difference (TD) networks (Sutton & Tanner, 2005) is that they permit a general compositional specification of the goals of learning.
The goals of learning are thought of as predictive questions being asked by the agent in the
learning problem, such as "What will I see if I step forward and look right?" or "If I open
the fridge, will I see a bottle of beer?" Seeing a bottle of beer is of course a complicated
perceptual act. It might be thought of as obtaining a set of predictions about what would
happen if certain reaching and grasping actions were taken, about what would happen if
the bottle were opened and turned upside down, and of what the bottle would look like if
viewed from various angles. To predict seeing a bottle of beer is thus to make a prediction
about a set of other predictions. The target for the overall prediction is a composition in the
mathematical sense of the first prediction with each of the other predictions.
TD networks are the first framework for representing the goals of predictive learning in a
compositional, machine-accessible form. Each node of a TD network represents an individual question (something to be predicted) and has associated with it a value representing
an answer to the question (a prediction of that something). The questions are represented
by a set of directed links between nodes. If node 1 is linked to node 2, then node 1 represents a question incorporating node 2's question; its value is a prediction about node 2's
prediction. Higher-level predictions can be composed in several ways from lower ones,
producing a powerful, structured representation language for the targets of learning. The
compositional structure is not just in a human designer's head; it is expressed in the links
and thus is accessible to the agent and its learning algorithm.
The network of these links is referred to as the question network. An entirely separate set
of directed links between the nodes is used to compute the values (predictions, answers)
associated with each node. These links collectively are referred to as the answer network.
The computation in the answer network is compositional in a conventional way: node
values are computed from other node values. The essential insight of TD networks is that
the notion of compositionality should apply to questions as well as to answers.
A secondary distinguishing feature of TD networks is that the predictions (node values)
at each moment in time can be used as a representation of the state of the world at that
time. In this way they are an instance of the idea of predictive state representations (PSRs)
introduced by Littman, Sutton and Singh (2002), Jaeger (2000), and Rivest and Schapire
(1987). Representing a state by its predictions is a potentially powerful strategy for state
abstraction (Rafols et al., 2005). We note that the questions used in all previous work with
PSRs are defined in terms of concrete actions and observations, not other predictions. They
are not compositional in the sense that TD-network questions are.
The questions we have discussed so far are subjunctive, meaning that they are conditional
on a certain way of behaving. We predict what we would see if we were to step forward and
look right, or if we were to open the fridge. The questions in conventional TD networks are
subjunctive, but they are conditional only on primitive actions or open-loop sequences of
primitive actions (as are conventional PSRs). It is natural to generalize this, as we have in
the informal examples above, to questions that are conditional on closed-loop temporally
extended ways of behaving. For example, opening the fridge is a complex, high-level
action. The arm must be lifted to the door, the hand shaped for grasping the handle, etc.
To ask questions like "If I were to go to the coffee room, would I see John?" would require
substantial temporal abstraction in addition to state abstraction.
The options framework (Sutton, Precup & Singh, 1999) is a straightforward way of talking
about temporally extended ways of behaving and about predictions of their outcomes. In
this paper we extend the options framework so that it can be applied to TD networks.
Significant extensions of the original options framework are needed. Novel features of
our option-extended TD networks are that they 1) predict components of option outcomes
rather than full outcome probability distributions, 2) learn according to the first intra-option
method to use eligibility traces (see Sutton & Barto, 1998), and 3) include the possibility
of options whose "policies" are indifferent to which of several actions are selected.
1 The options framework
In this section we present the essential elements of the options framework (Sutton, Precup
& Singh, 1999) that we will need for our extension of TD networks. In this framework, an
agent and an environment interact at discrete time steps t = 1, 2, 3, .... In each state s_t ∈ S,
the agent selects an action a_t ∈ A, determining the next state s_{t+1}.¹ An action is a way of
behaving for one time step; the options framework lets us talk about temporally extended
ways of behaving. An individual option consists of three parts. The first is the initiation
set, I ⊆ S, the subset of states in which the option can be started. The second component
of an option is its policy, π : S × A → [0, 1], specifying how the agent behaves when
following the option. Finally, a termination function, β : S → [0, 1], specifies how
the option ends: β(s) denotes the probability of terminating when in state s. The option is
thus completely and formally defined by the 3-tuple (I, π, β).

¹ Although the options framework includes rewards, we omit them here because we are concerned
only with prediction, not control.
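The three parts of an option, the 3-tuple (I, π, β), map naturally onto a small data structure. The sketch below uses integer-coded states and actions purely as an illustrative assumption; nothing here is prescribed by the framework itself:

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """The 3-tuple (I, pi, beta); states and actions are ints here
    purely for illustration."""
    initiation_set: Set[int]              # I: states where the option may start
    policy: Callable[[int, int], float]   # pi(s, a): probability of taking a in s
    termination: Callable[[int], float]   # beta(s): probability of terminating in s

# A toy option on states {0, 1, 2, 3}: always take action 1 ("right"),
# and terminate on reaching state 3.
go_right = Option(
    initiation_set={0, 1, 2},
    policy=lambda s, a: 1.0 if a == 1 else 0.0,
    termination=lambda s: 1.0 if s == 3 else 0.0,
)

print(go_right.policy(0, 1), go_right.termination(3))  # 1.0 1.0
```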
2 Conventional TD networks
In this section we briefly present the details of the structure and the learning algorithm
comprising TD networks as introduced by Sutton and Tanner (2005). TD networks address
a prediction problem in which the agent may not have direct access to the state of the
environment. Instead, at each time step the agent receives an observation o_t ∈ O dependent
on the state. The experience stream thus consists of a sequence of alternating actions and
observations, o_1, a_1, o_2, a_2, o_3, ....
The TD network consists of a set of nodes, each representing a single scalar prediction,
interlinked by the question and answer networks as suggested previously. For a network
of n nodes, the vector of all predictions at time step t is denoted y_t = (y_t^1, . . . , y_t^n)^T. The
predictions are estimates of the expected value of some scalar quantity, typically of a bit, in
which case they can be interpreted as estimates of probabilities. The predictions are updated
at each time step according to a vector-valued function u with modifiable parameter W,
which is often taken to be of a linear form:
    y_t = u(y_{t−1}, a_{t−1}, o_t, W_t) = σ(W_t x_t),                (1)

where x_t ∈ ℝ^m is an m-vector of features created from (y_{t−1}, a_{t−1}, o_t), W_t is an n × m
matrix (whose elements are sometimes referred to as weights), and σ is the n-vector
form of either the identity function or the S-shaped logistic function σ(s) = 1/(1 + e^{−s}). The
feature vector is an arbitrary vector-valued function of yt?1 , at?1 , and ot . For example,
in the simplest case the feature vector is a unit basis vector with the location of the one
communicating the current state. In a partially observable environment, the feature vector
may be a combination of the agent?s action, observations, and predictions from the previous
time step. The overall update u defines the answer network.
The question network consists of a set of target functions, z^i : O × ℝ^n → ℝ, and condition
functions, c^i : A × ℝ^n → [0, 1]. We define z_t^i = z^i(o_{t+1}, ỹ_{t+1}) as the target for prediction
y_t^i.² Similarly, we define c_t^i = c^i(a_t, y_t) as the condition at time t. The learning algorithm
for each component w_t^{ij} of W_t can then be written

    w_{t+1}^{ij} = w_t^{ij} + α (z_t^i − y_t^i) c_t^i  ∂y_t^i / ∂w_t^{ij},                (2)

where α is a positive step-size parameter. Note that the targets here are functions of the
observation and predictions exactly one time step later, and that the conditions are functions
of a single primitive action. This is what makes this algorithm suitable only for learning
about one-step TD relationships. By chaining together multiple nodes, Sutton and Tanner
(2005) used it to predict k steps ahead, for various particular values of k, and to predict the
outcome of specific action sequences (as in PSRs, e.g., Littman et al., 2002; Singh et al.,
2004). Now we consider the extension to temporally abstract actions.
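As a concrete illustration, equations (1) and (2) can be implemented in a few lines for the logistic choice of σ. This is a hedged sketch under assumed shapes (n predictions, m features), not the authors' code:

```python
import numpy as np

def sigma(s):
    """Logistic form of the n-vector function in eq. (1)."""
    return 1.0 / (1.0 + np.exp(-s))

def td_network_step(W, x, z, c, alpha=0.1):
    """One TD-network update: eq. (1) then eq. (2).
    W: n x m weight matrix, x: m-vector of features,
    z: n-vector of targets z_t, c: n-vector of conditions c_t in [0, 1]."""
    y = sigma(W @ x)                                   # eq. (1): the answers
    dy_dW = (y * (1.0 - y))[:, None] * x[None, :]      # d y^i / d w^{ij} for logistic sigma
    W = W + alpha * ((z - y) * c)[:, None] * dy_dW     # eq. (2)
    return W, y

# Toy usage: 2 predictions, 3 features, all-ones targets and conditions.
W = np.zeros((2, 3))
W, y = td_network_step(W, np.ones(3), z=np.ones(2), c=np.ones(2))
print(y)  # [0.5 0.5]
```

The outer product in `dy_dW` is the gradient of each answer with respect to its row of weights; conditions of 0 simply zero out the update for the corresponding node.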
3 Option-extended TD networks
In this section we present our intra-option learning algorithm for TD networks with options and eligibility traces. As suggested earlier, each node's outgoing link in the question
network will now correspond to an option applying over possibly many steps. The policy
of the ith node's option corresponds to the condition function c^i, which we think of as a
recognizer for the option. It inspects each action taken to assess whether the option is being
followed: c_t^i = 1 if the agent is acting consistently with the option policy and c_t^i = 0 otherwise (intermediate values are also possible). When an agent ceases to act consistently with
the option policy, we say that the option has diverged. The possibility of recognizing more
than one action as consistent with the option is a significant generalization of the original
idea of options. If no actions are recognized as acceptable in a state, then the option cannot
be followed and thus cannot be initiated. Here we take the set of states with at least one
recognized action to be the initiation set of the option.

² The quantity ỹ is almost the same as y, and we encourage the reader to think of them as identical
here. The difference is that ỹ is calculated by weights that are one step out of date as compared to y,
i.e., ỹ_t = u(y_{t−1}, a_{t−1}, o_t, W_{t−1}) (cf. equation 1).
The option-termination function β generalizes naturally to TD networks. Each node i is
given a corresponding termination function, β^i : O × ℝ^n → [0, 1], where β_t^i = β^i(o_{t+1}, y_t)
is the probability of terminating at time t.³ β_t^i = 1 indicates that the option has terminated
at time t; β_t^i = 0 indicates that it has not, and intermediate values of β correspond to soft
or stochastic termination conditions. If an option terminates, then z_t^i acts as the target, but
if the option is ongoing without termination, then the node's own next value, ỹ_{t+1}^i, should
be the target. The termination function specifies which of the two targets (or mixture of the
two targets) is used to produce a form of TD error for each node i:

    δ_t^i = β_t^i z_t^i + (1 − β_t^i) ỹ_{t+1}^i − y_t^i.                (3)
Our option-extended algorithm incorporates eligibility traces (see Sutton & Barto, 1998)
as short-term memory variables organized in an n × m matrix E, paralleling the weight
matrix. The traces are a record of the effect that each weight could have had on each node's
prediction during the time the agent has been acting consistently with the node's option.
The components e^{ij} of the eligibility matrix are updated by

    e_t^{ij} = c_t^i ( λ e_{t−1}^{ij} (1 − β_t^i) + ∂y_t^i / ∂w_t^{ij} ),                (4)

where 0 ≤ λ ≤ 1 is the trace-decay parameter familiar from the TD(λ) learning algorithm.
Because of the c_t^i factor, all of a node's traces will be immediately reset to zero whenever
the agent deviates from the node's option's policy. If the agent follows the policy and the
option does not terminate, then the trace decays by λ and increments by the gradient in the
way typical of eligibility traces. If the policy is followed and the option does terminate,
then the trace will be reset to zero on the immediately following time step, and a new trace
will start building. Finally, our algorithm updates the weights on each time step by
    w_{t+1}^{ij} = w_t^{ij} + α δ_t^i e_t^{ij}.                (5)

4 Fully observable experiment
This experiment was designed to test the correctness of the algorithm in a simple gridworld
where the environmental state is observable. We applied an options-extended TD network
to the problem of learning to predict observations from interaction with the gridworld environment shown on the left in Figure 1. Empty squares indicate spaces where the agent can
move freely, and colored squares (shown shaded in the figure) indicate walls. The agent is
egocentric. At each time step the agent receives from the environment six bits representing
the color it is facing (red, green, blue, orange, yellow, or white). In this first experiment
we also provided 6 × 6 × 4 = 144 other bits directly indicating the complete state of the
environment (square and orientation).
³ The fact that the option depends only on the current predictions, action, and observation means
that we are considering only Markov options.
Figure 1: The test world (left) and the question network (right) used in the experiments.
The triangle in the world indicates the location and orientation of the agent. The walls
are labeled R, O, Y, G, and B representing the colors red, orange, yellow, green and blue.
Note that the left wall is mostly blue but partly green. The right diagram shows in full the
portion of the question network corresponding to the red bit. This structure is repeated,
but not shown, for the other four (non-white) colors. L, R, and F are primitive actions, and
Forward and Wander are options.
There are three possible actions: A = {F, R, L}. Actions were selected according to a fixed
stochastic policy independent of the state. The probability of the F, L, and R actions were
0.5, 0.25, and 0.25 respectively. L and R cause the agent to rotate 90 degrees to the left or
right. F causes the agent to move ahead one square with probability 1 − p and to stay in
the same square with probability p. The probability p is called the slipping probability. If
the forward movement would cause the agent to move into a wall, then the agent does not
move. In this experiment, we used p = 0, p = 0.1, and p = 0.5.
In addition to these primitive actions, we provided two temporally abstract options,
Forward and Wander. The Forward option takes the action F in every state and terminates when the agent senses a wall (color) in front of it. The policy of the Wander option
is the same as that actually followed by the agent. Wander terminates with probability 1
when a wall is sensed, and spontaneously with probability 0.5 otherwise.
We used the question network shown on the right in Figure 1. The predictions of nodes 1, 2,
and 3 are estimates of the probability that the red bit would be observed if the corresponding
primitive action were taken. Node 4 is a prediction of whether the agent will see the red bit
upon termination of the Wander option if it were taken. Node 5 predicts the probability of
observing the red bit given that the Forward option is followed until termination. Nodes 6
and 7 represent predictions of the outcome of a primitive action followed by the Forward
option. Nodes 8 and 9 take this one step further: they represent predictions of the red bit if
the Forward option were followed to termination, then a primitive action were taken, and
then the Forward option were followed again to termination.
We applied our algorithm to learn the parameter W of the answer network for this question
network. The step-size parameter α was 1.0, and the trace-decay parameter λ was 0.9. The
initial W_0, E_0, and y_0 were all 0. Each run began with the agent in the state indicated in
Figure 1 (left). In this experiment σ(·) was the identity function.
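One step of the intra-option updates applied here, equations (3) through (5), can be sketched for this linear case (σ the identity, so y_t = W_t x_t and ∂y_t^i/∂w_t^{ij} = x_j). The function and variable names are ours, not the authors':

```python
import numpy as np

def intra_option_step(W, E, x, y_next_tilde, z, c, beta, alpha=1.0, lam=0.9):
    """One step of eqs. (3)-(5) for a linear answer network (sigma = identity),
    so y_t = W @ x and the gradient d y^i / d w^{ij} is x_j for every node i.
    W, E: n x m weight and eligibility-trace matrices;
    z, c, beta: n-vectors of targets, conditions, and terminations."""
    y = W @ x
    delta = beta * z + (1.0 - beta) * y_next_tilde - y           # eq. (3): TD errors
    dy_dW = np.tile(x, (W.shape[0], 1))                          # gradient, one row per node
    E = c[:, None] * (lam * (1.0 - beta)[:, None] * E + dy_dW)   # eq. (4): trace update
    W = W + alpha * delta[:, None] * E                           # eq. (5): weight update
    return W, E
```

Note how a condition of 0 wipes a node's entire trace row, and a termination of 1 prevents the old trace from carrying forward, exactly as described in the text.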
For each value of p, we ran 50 runs of 20,000 time steps. On each time step, the root-mean-squared
(RMS) error in each node's prediction was computed and then averaged over all the
because of the difficulty of calculating their correct predictions. This average was then
[Figure 2 plots: RMS error vs. time steps; left panel "Fully Observable" with curves for p = 0, 0.1, and 0.5 over 20,000 steps; right panel "Partially Observable" over 300,000 steps.]
Figure 2: Learning curves in the fully-observable experiment for each slippage probability
(left) and in the partially-observable experiment (right).
itself averaged over the 50 runs and bins of 1,000 time steps to produce the learning curves
shown on the left in Figure 2.
For all slippage probabilities, the error in all predictions fell almost to zero. After approximately 12,000 trials, the agent made almost perfect predictions in all cases. Not surprisingly, learning was slower at the higher slippage probabilities. These results show that our
augmented TD network is able to make a complete temporally-abstract model of this world.
5 Partially observable experiment
In our second experiment, only the six color observation bits were available to the agent.
This experiment provides a more challenging test of our algorithm. To model the environment well, the TD network must construct a representation of state from very sparse
information. In fact, completely accurate prediction is not possible in this problem with
our question network.
In this experiment the input vector consisted of three groups of 46 components each, 138
in total. If the action was R, the first 46 components were set to the 40 node values and the
six observation bits, and the other components were 0. If the action was L, the next group
of 46 components was filled in in the same way, and the first and third groups were zero. If
the action was F, the third group was filled. This technique enables the answer network as
function approximator to represent a wider class of functions in a linear form than would
otherwise be possible. In this experiment, σ(·) was the S-shaped logistic function. The
slippage probability was p = 0.1.
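The three-group input construction described above can be sketched as follows; the exact ordering of the groups (R, then L, then F) follows the description, while the rest of the encoding details are assumptions:

```python
import numpy as np

def build_features(y, obs_bits, action):
    """Assemble the 138-component input x from the 40 node values y and
    the 6 colour bits, placing the 46-component group in the slot for the
    action taken (R first, then L, then F, per the description above).
    The encoding details are assumptions, not taken from the paper."""
    group = np.concatenate([y, obs_bits])          # 40 + 6 = 46 components
    x = np.zeros(3 * 46)                           # 138 components in total
    slot = {"R": 0, "L": 1, "F": 2}[action]
    x[slot * 46:(slot + 1) * 46] = group
    return x

x = build_features(np.zeros(40), np.ones(6), "L")
print(x.shape, x[46:92].sum())  # (138,) 6.0
```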
As our performance measure we used the RMS error, as in the first experiment, except that
the predictions for the primitive actions (nodes 1-3) were not included. These predictions
can never become completely accurate because the agent can't tell in detail where it is
located in the open space. As before, we averaged RMS error over 50 runs and 1,000 time
step bins, to produce the learning curve shown on the right in Figure 2. As before, the RMS
error approached zero.
Node 5 in Figure 1 holds the prediction of red if the agent were to march forward to the
wall ahead of it. Corresponding nodes in the other subnetworks hold the predictions of the
other colors upon Forward. To make these predictions accurately, the agent must keep
track of which wall it is facing, even if it is many steps away from it. It has to learn a sort
of compass that it can keep updated as it turns in the middle of the space. Figure 3 is a
demonstration of the compass learned after a representative run of 200,000 time steps. At
the end of the run, the agent was driven manually to the state shown in the first row (relative
time index t = 1). On steps 1-25 the
agent was spun clockwise in place. The
third column shows the prediction for
node 5 in each portion of the question
network. That is, the predictions shown
are for each color-observation bit at termination of the Forward option. At
t = 1, the agent is facing the orange
wall and it predicts that the Forward
option would result in seeing the orange
bit and none other. Over steps 2-5 we
see that the predictions are maintained
accurately as the agent spins despite the
fact that its observation bits remain the
same. Even after spinning for 25 steps
the agent knows exactly which way it is
facing. While spinning, the agent correctly never predicts seeing the green bit
(after Forward), but if it is driven up
and turned, as in the last row of the figure, the green bit is accurately predicted.
The fourth column shows the prediction
for node 8 in each portion of the question
network. Recall that these nodes correspond to the sequence Forward, L,
Forward. At time t = 1, the agent
accurately predicts that Forward will
bring it to orange (third column) and also
predicts that Forward, L, Forward
will bring it to green. The predictions
made for node 8 at each subsequent step
of the sequence are also correct.
These results show that the agent is able
to accurately maintain its long term predictions without directly encountering
sensory verification. How much larger
would the TD network have to be to handle a 100x100 gridworld? The answer is
not at all. The same question network
applies to any size problem. If the layout of the colored walls remain the same,
then even the answer network transfers
across worlds of widely varying sizes.
In other experiments, training on successively larger problems, we have shown
that the same TD network as used here
can learn to make all the long-term predictions correctly on a 100x100 version
of the 6x6 gridworld used here.
[Figure 3 table: columns t, y_t^5, state, and y_t^8 for times t = 1, 2, 3, 4, 5, 25, and 29, with prediction bars over the colour bits O, Y, R, B, G.]
Figure 3: An illustration of part of what the
agent learns in the partially observable environment. The second column is a sequence
of states with (relative) time index as given by
the first column. The sequence was generated
by controlling the agent manually. On steps
1-25 the agent was spun clockwise in place,
and the trajectory after that is shown by the
line in the last state diagram. The third and
fourth columns show the values of the nodes
corresponding to 5 and 8 in Figure 1, one for
each color-observation bit.
6 Conclusion
Our experiments show that option-extended TD networks can learn effectively. They can
learn facts about their environments that are not representable in conventional TD networks or in any other method for learning models of the world. One concern is that our
intra-option learning algorithm is an off-policy learning method incorporating function approximation and bootstrapping (learning from predictions). The combination of these three
is known to produce convergence problems for some methods (see Sutton & Barto, 1998),
and they may arise here. A sound solution may require modifications to incorporate importance sampling (see Precup, Sutton & Dasgupta, 2001). In this paper we have considered
only intra-option eligibility traces: traces extending over the time span within an option
but not persisting across options. Tanner and Sutton (2005) have proposed a method for
inter-option traces that could perhaps be combined with our intra-option traces.
The primary contribution of this paper is the introduction of a new learning algorithm for
TD networks that incorporates options and eligibility traces. Our experiments are small
and do little more than exercise the learning algorithm, showing that it does not break
immediately. More significant is the greater representational power of option-extended
TD networks. Options are a general framework for temporal abstraction, predictive state
representations are a promising strategy for state abstraction, and TD networks are able
to represent compositional questions. The combination of these three is potentially very
powerful and worthy of further study.
Acknowledgments
The authors gratefully acknowledge the ideas and encouragement they have received in this
work from Mark Ring, Brian Tanner, Satinder Singh, Doina Precup, and all the members
of the rlai.net group.
References
Jaeger, H. (2000). Observable operator models for discrete stochastic time series. Neural Computation, 12(6):1371-1398. MIT
Press.
Littman, M., Sutton, R. S., & Singh, S. (2002). Predictive representations of state. In T. G. Dietterich, S. Becker and Z. Ghahramani (eds.), Advances In Neural Information Processing Systems 14, pp. 1555-1561. MIT Press.
Precup, D., Sutton, R. S., & Dasgupta, S. (2001). Off-policy temporal-difference learning with function approximation. In
C. E. Brodley, A. P. Danyluk (eds.), Proceedings of the Eighteenth International Conference on Machine Learning, pp. 417-424.
San Francisco, CA: Morgan Kaufmann.
Rafols, E. J., Ring, M., Sutton, R.S., & Tanner, B. (2005). Using predictive representations to improve generalization in reinforcement learning. To appear in Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence.
Rivest, R. L., & Schapire, R. E. (1987). Diversity-based inference of finite automata. In Proceedings of the Twenty-Eighth Annual
Symposium on Foundations of Computer Science, pp. 78-87. IEEE Computer Society.
Singh, S., James, M. R., & Rudary, M. R. (2004). Predictive state representations: A new theory for modeling dynamical systems.
In Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference in Uncertainty in Artificial Intelligence, pp.
512-519. AUAI Press.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
Sutton, R. S., Precup, D., Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112, pp. 181-211.
Sutton, R. S., & Tanner, B. (2005). Temporal-difference networks. To appear in Neural Information Processing Systems Conference 17.
Tanner, B., Sutton, R. S. (2005) Temporal-difference networks with history. To appear in Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence.
Learning vehicular dynamics, with application
to modeling helicopters
Pieter Abbeel
Computer Science Dept.
Stanford University
Stanford, CA 94305
Varun Ganapathi
Computer Science Dept.
Stanford University
Stanford, CA 94305
Andrew Y. Ng
Computer Science Dept.
Stanford University
Stanford, CA 94305
Abstract
We consider the problem of modeling a helicopter's dynamics based on
state-action trajectories collected from it. The contribution of this paper is two-fold. First, we consider the linear models such as learned by
CIFER (the industry standard in helicopter identification), and show that
the linear parameterization makes certain properties of dynamical systems, such as inertia, fundamentally difficult to capture. We propose an
alternative, acceleration based, parameterization that does not suffer from
this deficiency, and that can be learned as efficiently from data. Second, a
Markov decision process model of a helicopter's dynamics would explicitly model only the one-step transitions, but we are often interested in a
model's predictive performance over longer timescales. In this paper, we
present an efficient algorithm for (approximately) minimizing the prediction error over long time scales. We present empirical results on two
different helicopters. Although this work was motivated by the problem
of modeling helicopters, the ideas presented here are general, and can be
applied to modeling large classes of vehicular dynamics.
1 Introduction
In the last few years, considerable progress has been made in finding good controllers for
helicopters. [7, 9, 2, 4, 3, 8] In designing helicopter controllers, one typically begins by
constructing a model for the helicopter's dynamics, and then uses that model to design a
controller. In our experience, after constructing a simulator (model) of our helicopters, policy search [7] almost always learns to fly (hover) very well in simulation, but may perform
less well on the real-life helicopter. These differences between simulation and real-life
performance can therefore be directly attributed to errors in the simulator (model) of the
helicopter, and building accurate helicopter models remains a key technical challenge in
autonomous flight. Modeling dynamical systems (also referred to as system identification)
is one of the most basic and important problems in control. With an emphasis on helicopter
aerodynamics, in this paper we consider the problem of learning good dynamical models
of vehicles.
Helicopter aerodynamics are, to date, somewhat poorly understood, and (unlike most fixedwing aircraft) no textbook models will accurately predict the dynamics of a helicopter from
only its dimensions and specifications. [5, 10] Thus, at least part of the dynamics must be
learned from data. CIFER (Comprehensive Identification from Frequency Responses) is
the industry standard for learning helicopter (and other rotorcraft) models from data. [11, 6]
CIFER uses frequency response methods to identify a linear model.
The models obtained from CIFER fail to capture some important aspects of the helicopter
dynamics, such as the effects of inertia. Consider a setting in which the helicopter is flying
forward, and suddenly turns sideways. Due to inertia, the helicopter will continue to travel
in the same direction as before, so that it has ?sideslip,? meaning that its orientation is
not aligned with its direction of motion. This is a non-linear effect that depends both on
velocity and angular rates. The linear CIFER model is unable to capture this. In fact, the
models used in [2, 8, 6] all suffer from this problem. The core of the problem is that
the naive body-coordinate representation used in all these settings makes it fundamentally
difficult for the learning algorithm to capture certain properties of dynamical systems such
as inertia and gravity. As such, one places a significantly heavier burden than is necessary
on the learning algorithm.
In Section 4, we propose an alternative parameterization for modeling dynamical systems
that does not suffer from this deficiency. Our approach can be viewed as a hybrid of physical knowledge and learning. Although helicopter dynamics are not fully understood, there
are also many properties (such as the direction and magnitude of acceleration due to gravity; the effects of inertia; symmetry properties of the dynamical system; and so on) which
apply to all dynamical systems, and which are well-understood. All of this can therefore be
encoded as prior knowledge, and there is little need to demand that our learning algorithms
learn them. It is not immediately obvious how such prior knowledge can be encoded into
a complex learning algorithm, but we will describe an acceleration based parameterization
in which this can be done.
Given any model class, we can choose the parameter learning criterion used to learn a
model within the class. CIFER finds the parameters that minimize a frequency domain error criterion. Alternatively, we can minimize the squared one-step prediction error in the
time domain. Forward simulation on a held-out test set is a standard way to assess model
quality, and we use it to compare the linear models learned using CIFER to the same linear
models learned by optimizing the one-step prediction error. As suggested in [1], one can
also learn parameters so as to optimize a "lagged criterion" that directly measures simulation accuracy, i.e., predictive accuracy of the model over long time scales. However, the
EM algorithm given in [1] is expensive when applied in a continuous state-space setting. In
this paper, we present an efficient algorithm that approximately optimizes the lagged criterion. Our experiments show that the resulting model consistently outperforms the linear
models trained using CIFER or using the one-step error criterion. Combining this with the
acceleration based parameterization results in our best helicopter model.
2 Helicopter state, input and dynamics
The helicopter state s comprises its position (x, y, z), orientation (roll φ, pitch θ, yaw ψ), velocity (ẋ, ẏ, ż) and angular velocity (φ̇, θ̇, ψ̇). The helicopter is controlled via a 4-dimensional action space:
1. u1 and u2 : The longitudinal (front-back) and latitudinal (left-right) cyclic pitch
controls cause the helicopter to pitch forward/backward or sideways, and can
thereby also affect acceleration in the longitudinal and latitudinal directions.
2. u3 : The tail rotor collective pitch control affects tail rotor thrust, and can be used
to yaw (turn) the helicopter.
3. u4 : The main rotor collective pitch control affects the pitch angle of the main
rotor's blades, by rotating the blades around an axis that runs along the length of
the blade. As the main rotor blades sweep through the air, the resulting amount of
upward thrust (generally) increases with this pitch angle; thus this control affects
the main rotor's thrust.
Following standard practice in system identification ([8, 6]), the original 12-dimensional
helicopter state is reduced to an 8-dimensional state represented in body (or robot-centric)
coordinates s^b = (φ, θ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇). Where there is risk of confusion, we will use superscript s and b to distinguish between spatial (world) coordinates and body coordinates.
The body coordinate representation specifies the helicopter state using a coordinate frame
in which the x, y, and z axes are forwards, sideways, and down relative to the current orientation of the helicopter, instead of north, east and down. Thus, ẋ^b is the forward velocity,
whereas ẋ^s is the velocity in the northern direction. (φ and θ are always expressed in world
coordinates, because roll and pitch relative to the body coordinate frame is always zero.)
By using a body coordinate representation, we encode into our model certain "symmetries"
of helicopter flight, such as that the helicopter's dynamics are the same regardless of its absolute position (x, y, z) and heading ψ (assuming the absence of obstacles). Even in the
reduced coordinate representation, only a subset of the state variables needs to be modeled
explicitly using learning. Given a model that predicts only the angular velocities (φ̇, θ̇, ψ̇),
we can numerically integrate to obtain the orientation (φ, θ, ψ).
We can integrate the reduced body coordinate states to obtain the complete world coordinate states. Integrating body-coordinate angular velocities to obtain world-coordinate
angles is nonlinear, thus the model resulting from this process is necessarily nonlinear.
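For concreteness, the nonlinear integration step can be sketched as follows. The paper does not give the kinematic formulas; this is the standard ZYX Euler-angle relation between body angular rates and world-frame angle rates, Euler-integrated at the paper's 0.1 s timestep. The function names are illustrative, not the authors' code.

```python
import numpy as np

def euler_rates(phi, theta, p, q, r):
    """Map body angular rates (p, q, r) to ZYX Euler-angle rates.
    Standard kinematic relation; singular at theta = +/- 90 degrees."""
    phi_dot = p + np.sin(phi) * np.tan(theta) * q + np.cos(phi) * np.tan(theta) * r
    theta_dot = np.cos(phi) * q - np.sin(phi) * r
    psi_dot = (np.sin(phi) * q + np.cos(phi) * r) / np.cos(theta)
    return phi_dot, theta_dot, psi_dot

def integrate_orientation(pqr_seq, dt, init=(0.0, 0.0, 0.0)):
    """Euler-integrate body angular rates into world angles (phi, theta, psi)."""
    phi, theta, psi = init
    out = [init]
    for p, q, r in pqr_seq:
        dphi, dtheta, dpsi = euler_rates(phi, theta, p, q, r)
        phi, theta, psi = phi + dphi * dt, theta + dtheta * dt, psi + dpsi * dt
        out.append((phi, theta, psi))
    return out

# Pure yaw rate with level attitude integrates directly into heading.
traj = integrate_orientation([(0.0, 0.0, 0.1)] * 10, dt=0.1)
```

The trigonometric mixing of the rates is the nonlinearity referred to in the text: even a model that is linear in the reduced state becomes nonlinear once angles are recovered this way.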
3 Linear model
The linear model we learn with CIFER has the following form:

    φ̇^b_{t+1} − φ̇^b_t = (C_φ φ̇^b_t + C_1 (u1)_t + D_1) Δt,
    θ̇^b_{t+1} − θ̇^b_t = (C_θ θ̇^b_t + C_2 (u2)_t + D_2) Δt,
    ψ̇^b_{t+1} − ψ̇^b_t = (C_ψ ψ̇^b_t + C_3 (u3)_t + D_3) Δt,
    ẋ^b_{t+1} − ẋ^b_t = (C_x ẋ^b_t − g θ_t) Δt,
    ẏ^b_{t+1} − ẏ^b_t = (C_y ẏ^b_t + g φ_t + D_0) Δt,
    ż^b_{t+1} − ż^b_t = (C_z ż^b_t + g + C_4 (u4)_t + D_4) Δt,
    φ_{t+1} − φ_t = φ̇^b_t Δt,
    θ_{t+1} − θ_t = θ̇^b_t Δt.

Here g = 9.81 m/s² is the acceleration due to gravity and Δt is the time discretization, which is 0.1 seconds in our experiments. The free parameters in the model are
C_x, C_y, C_z, C_φ, C_θ, C_ψ, which model damping, and D_0, C_1, D_1, C_2, D_2, C_3, D_3, C_4, D_4,
which model the influence of the inputs on the states.¹ This parameterization was chosen
using the "coherence" feature selection algorithm of CIFER. CIFER takes as input the state-action sequence {(ẋ^b_t, ẏ^b_t, ż^b_t, φ̇^b_t, θ̇^b_t, ψ̇^b_t, φ_t, θ_t, u_t)}_t and learns the free parameters using a
frequency domain cost function. See [11] for details.
Frequency response methods (as used in CIFER) are not the only way to estimate the free
parameters. Instead, we can minimize the average squared prediction error of next state
given current state and action. Doing so only requires linear regression. In our experiments (see Section 6) we compare the simulation accuracy over several time-steps of the
differently learned linear models. We also compare to learning by directly optimizing the
simulation accuracy over several time-steps. The latter approach is presented in Section 5.
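The one-step time-domain alternative is ordinary least squares. As a sketch (not the paper's code, and with a generic dense A, B rather than the sparse hand-chosen parameterization above), fitting the one-step model and recovering known coefficients on a synthetic system looks like:

```python
import numpy as np

def fit_one_step_linear(states, controls, dt):
    """One-step least squares: fit s_{t+1} - s_t = (A s_t + B u_t + c) dt.
    states has T+1 rows, controls has T rows."""
    T = len(controls)
    X = np.hstack([states[:T], controls, np.ones((T, 1))])
    Y = (states[1:T + 1] - states[:T]) / dt
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    n, m = states.shape[1], controls.shape[1]
    return W[:n].T, W[n:n + m].T, W[-1]   # A, B, intercept c

# Synthetic scalar system generated by a known linear model.
rng = np.random.default_rng(0)
u = rng.standard_normal((200, 1))
s = np.zeros((201, 1))
for t in range(200):
    s[t + 1] = s[t] + (-0.5 * s[t] + 2.0 * u[t]) * 0.1
A, B, c = fit_one_step_linear(s, u, dt=0.1)
```

On noiseless data the regression recovers the generating coefficients; the comparison in Section 6 asks how such one-step fits behave when simulated over many steps.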
4 Acceleration prediction model
Due to inertia, if a forward-flying helicopter turns, it will have sideslip (i.e., the helicopter
will not be aligned with its direction of motion). The linear model is unable to capture the
sideslip effect, since this effect depends non-linearly on velocity and angular rates. In fact,
the models used in [2, 8, 6] all suffer from this problem. More generally, these models
do not capture conservation of momentum well. Although careful engineering of (many)
additional non-linear features might fix individual effects such as, e.g., sideslip, it is unclear
how to capture inertia compactly in the naive body-coordinate representation.
¹ D_0 captures the sideways acceleration caused by the tail rotor's thrust.
From physics, we have the following update equation for velocity in body-coordinates:

    (ẋ, ẏ, ż)^b_{t+1} = R((φ̇, θ̇, ψ̇)^b_t) (ẋ, ẏ, ż)^b_t + (ẍ, ÿ, z̈)^b_t Δt.    (1)

Here, R((φ̇, θ̇, ψ̇)^b_t) is the rotation matrix that transforms from the body-coordinate frame
at time t to the body-coordinate frame at time t+1 (and is determined by the angular velocity (φ̇, θ̇, ψ̇)^b_t at time t); and (ẍ, ÿ, z̈)^b_t denotes the acceleration vector in body-coordinates
at time t. Forces and torques (and thus accelerations) are often a fairly simple function of
inputs and state. This suggests that a model which learns to predict the accelerations, and
then uses Eqn. (1) to obtain velocity over time, may perform well. Such a model would
naturally capture inertia, by using the velocity update of Eqn. (1). In contrast, the models
of Section 3 try to predict changes in body-coordinate velocity. But the change in body-coordinate velocity does not correspond directly to physical accelerations, because the
body-coordinate velocity at times t and t + 1 are expressed in different coordinate frames.
Thus, ẋ^b_{t+1} − ẋ^b_t is not the forward acceleration, because ẋ^b_{t+1} and ẋ^b_t are expressed in different coordinate frames. To capture inertia, these models therefore need to predict not only
the physical accelerations, but also the non-linear influence of the angular rates through the
rotation matrix. This makes for a difficult learning problem, and puts an unnecessary burden on the learning algorithm. Our discussion above has focused on linear velocity, but a
similar argument also holds for angular velocity.
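A minimal 2-D numerical check of this point (illustrative, not from the paper): a helicopter in steady forward flight that yaws between samples has zero physical acceleration, yet its body-coordinate velocity changes; differencing per Eqn. (1), with the old velocity rotated into the new body frame, recovers the true zero acceleration.

```python
import numpy as np

dpsi = 0.1                                    # heading change over one step (rad)
R = np.array([[np.cos(dpsi), np.sin(dpsi)],   # body frame at t -> body frame at t+1
              [-np.sin(dpsi), np.cos(dpsi)]])
v_t = np.array([10.0, 0.0])                   # body velocity at t: 10 m/s forward
v_tp1 = R @ v_t                               # same world velocity, new body frame

naive_diff = v_tp1 - v_t                      # nonzero: spuriously looks like acceleration
corrected = v_tp1 - R @ v_t                   # zero: the true physical acceleration
```

The naive difference here has magnitude on the order of the speed times the heading change, which is exactly the inertial effect the body-coordinate linear models are forced to learn.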
The previous discussion suggests that we learn to predict physical accelerations and then
integrate the accelerations to obtain the state trajectories. To do this, we propose:
    φ̈^b_t = C_φ φ̇_t + C_1 (u1)_t + D_1,     ẍ^b_t = C_x ẋ^b_t + (g_x)^b_t,
    θ̈^b_t = C_θ θ̇_t + C_2 (u2)_t + D_2,     ÿ^b_t = C_y ẏ^b_t + (g_y)^b_t + D_0,
    ψ̈^b_t = C_ψ ψ̇_t + C_3 (u3)_t + D_3,     z̈^b_t = C_z ż^b_t + (g_z)^b_t + C_4 (u4)_t + D_4.

Here (g_x)^b_t, (g_y)^b_t, (g_z)^b_t are the components of the gravity acceleration vector in each of the
body-coordinate axes at time t; and the C and D coefficients are the free parameters to be learned from data.
The model predicts accelerations in the body-coordinate frame, and is therefore able to take
advantage of the same invariants as discussed earlier, such as invariance of the dynamics to
the helicopter's (x, y, z) position and heading (ψ). Further, it additionally captures the fact
that the dynamics are invariant to roll (φ) and pitch (θ) once the (known) effects of gravity
are subtracted out.
Frequency domain techniques cannot be used to learn the acceleration model above, because it is non-linear. Nevertheless, the parameters can be learned as easily as for the
linear model in the time domain: Linear regression can be used to find the parameters that
minimize the squared error of the one-step prediction in acceleration.²
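The fitting mechanics can be sketched on a 2-D toy (not the helicopter parameterization). Per Eqn. (1), the acceleration targets are a_t = (v_{t+1} − R_t v_t)/Δt, where R_t rotates the body frame at t into the frame at t+1 (equivalently, one can rotate v_{t+1} back into the frame at t); the model is then fit by plain linear regression on these targets.

```python
import numpy as np

def accel_targets(v_body, R_steps, dt):
    """a_t = (v_{t+1} - R_t v_t) / dt, with R_t the frame-t -> frame-(t+1) rotation."""
    return np.array([(v_body[t + 1] - R_steps[t] @ v_body[t]) / dt
                     for t in range(len(R_steps))])

# Toy trajectory: constant yaw rate plus linear drag a_t = -0.2 v_t.
dt, dpsi = 0.1, 0.05
R = np.array([[np.cos(dpsi), np.sin(dpsi)],
              [-np.sin(dpsi), np.cos(dpsi)]])
v = [np.array([5.0, 1.0])]
for _ in range(30):
    v.append(R @ v[-1] + (-0.2 * v[-1]) * dt)
v = np.array(v)

targets = accel_targets(v, [R] * 30, dt)
W, *_ = np.linalg.lstsq(v[:-1], targets, rcond=None)  # fit a_t as a linear map of v_t
```

Because the targets are formed in a consistent frame, the regression recovers the drag coefficient exactly here; no nonlinear rotation effects have to be absorbed by the learned parameters.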
5 The lagged error criterion
To evaluate the performance of a dynamical model, it is standard practice to run a simulation using the model for a certain duration, and then compare the simulated trajectory with
the real state trajectory. To do well on this evaluation criterion, it is therefore important for
the dynamical model to give not only accurate one-step predictions, but also predictions
that are accurate at longer time-scales. Motivated by this, [1] suggested learning the model
parameters by optimizing the following "lagged criterion":

    Σ_{t=1}^{T−H} Σ_{h=1}^{H} ‖ŝ_{t+h|t} − s_{t+h}‖_2².    (2)

Here, H is the time horizon of the simulation, and ŝ_{t+h|t} is the estimate (from simulation)
of the state at time t + h given the state at time t.
² Note that, as discussed previously, the one-step difference of body coordinate velocities is not
the acceleration. To obtain actual accelerations, the velocity at time t + 1 must be rotated into
the body-frame at t before taking the difference.
Unfortunately the EM-algorithm given in [1] is prohibitively expensive in our continuous
state-action space setting. We therefore present a simple and fast algorithm for (approximately) minimizing the lagged criterion. We begin by considering a linear model with
update equation:
    s_{t+1} − s_t = A s_t + B u_t,    (3)
where A, B are the parameters of the model. Minimizing the one-step prediction error
would correspond to finding the parameters that minimize the expected squared difference
between the left and right sides of Eqn. (3).
By summing the update equations for two consecutive time steps, we get that, for simulation to be exact over two time steps, the following needs to hold:
    s_{t+2} − s_t = A s_t + B u_t + A ŝ_{t+1|t} + B u_{t+1}.    (4)
Minimizing the expected squared difference between the left and right sides of Eqn. (4)
would correspond to minimizing the two-step prediction error. More generally, by summing up the update equations for h consecutive timesteps and then minimizing the left
and right sides? expected squared difference, we can minimize the h-step prediction error.
Thus, it may seem that we can directly solve for the parameters that minimize the lagged
criterion of Eqn. (2) by running least squares on the appropriate set of linear combinations
of state update equations.
The difficulty with this procedure is that the intermediate states in the simulation (for
example, ŝ_{t+1|t} in Eqn. (4)) are also an implicit function of the parameters A and B. This
is because ŝ_{t+1|t} represents the result of a one-step simulation from s_t using our model.
Taking into account the dependence of the intermediate states on the parameters makes the
right side of Eqn. (4) non-linear in the parameters, and thus the optimization is non-convex.
If, however, we make an approximation and neglect this dependence, then optimizing the
objective can be done simply by solving a linear least squares problem.
This gives us the following algorithm. We will alternate between a simulation step that
finds the necessary predicted intermediate states, and a least squares step that solves for the
new parameters.
LEARN-LAGGED-LINEAR:
1. Use least squares to minimize the one-step squared prediction error criterion to obtain an
initial model A^(0), B^(0). Set i = 1.
2. For all t = 1, . . . , T, h = 1, . . . , H, simulate in the current model to compute ŝ_{t+h|t}.
3. Solve the following least squares problem:
   (Â, B̂) = arg min_{A,B} Σ_{t=1}^{T−H} Σ_{h=1}^{H} ‖(s_{t+h} − s_t) − Σ_{τ=0}^{h−1} (A ŝ_{t+τ|t} + B u_{t+τ})‖_2².
4. Set A^(i+1) = (1 − α) A^(i) + α Â, B^(i+1) = (1 − α) B^(i) + α B̂.³
5. If ‖A^(i+1) − A^(i)‖ + ‖B^(i+1) − B^(i)‖ ≤ ε, exit. Otherwise go back to step 2.
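The five steps above can be sketched in code (an illustrative rendering, not the authors' implementation; a fixed damping factor alpha stands in for the paper's line search, and Frobenius norms with a fixed tolerance stand in for its stopping rule):

```python
import numpy as np

def learn_lagged_linear(states, controls, H, alpha=0.5, tol=1e-9, max_iter=50):
    """Approximate minimization of the lagged criterion for
    s_{t+1} - s_t = A s_t + B u_t, following the LEARN-LAGGED-LINEAR outline."""
    n, m = states.shape[1], controls.shape[1]
    T = len(controls)
    # Step 1: initialize with the one-step least squares fit.
    W, *_ = np.linalg.lstsq(np.hstack([states[:T], controls]),
                            states[1:T + 1] - states[:T], rcond=None)
    A, B = W[:n].T, W[n:].T
    for _ in range(max_iter):
        # Step 2: simulate to obtain the intermediate states s_hat_{t+tau|t}.
        # Step 3: regress (s_{t+h} - s_t) on the summed features sum_tau [s_hat; u].
        rows, targets = [], []
        for t in range(T - H):
            s_hat = states[t].copy()
            feat = np.zeros(n + m)
            for h in range(1, H + 1):
                feat = feat + np.concatenate([s_hat, controls[t + h - 1]])
                rows.append(feat.copy())
                targets.append(states[t + h] - states[t])
                s_hat = s_hat + A @ s_hat + B @ controls[t + h - 1]
        W_hat, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
        A_hat, B_hat = W_hat[:n].T, W_hat[n:].T
        # Step 4: damped parameter update.
        A_new = (1 - alpha) * A + alpha * A_hat
        B_new = (1 - alpha) * B + alpha * B_hat
        # Step 5: stop once the update is small.
        done = (np.linalg.norm(A_new - A) + np.linalg.norm(B_new - B)) < tol
        A, B = A_new, B_new
        if done:
            break
    return A, B

# Recover a known linear system from exact data.
rng = np.random.default_rng(2)
A_true = np.array([[-0.1, 0.05], [0.0, -0.2]])
B_true = np.array([[0.3], [0.1]])
u = rng.standard_normal((60, 1))
s = [np.zeros(2)]
for t in range(60):
    s.append(s[-1] + A_true @ s[-1] + B_true @ u[t])
s = np.array(s)
A_fit, B_fit = learn_lagged_linear(s, u, H=4)
```

The approximation discussed in the text appears in step 3: the simulated states ŝ are treated as fixed regressors, so each iteration reduces to a single linear least squares solve.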
Our helicopter acceleration prediction model is not of the simple form s_{t+1} − s_t = A s_t + B u_t described above. However, a similar derivation still applies: The change in
velocity over several time-steps corresponds to the sum of changes in velocity over several
single time-steps. Thus by adding the one-step acceleration prediction equations as given
in Section 4, we might expect to obtain equations corresponding to the acceleration over
several time-steps. However, the acceleration equations at different time-steps are in different coordinate frames. Thus we first need to rotate the equations and then add them. In
the algorithm described below, we rotate all accelerations into the world coordinate frame.
The acceleration equations from Section 4 give us (ẍ, ÿ, z̈)^b_t = A_pos s_t + B_pos u_t, and
³ This step of the algorithm uses a simple line search to choose the stepsize α.
Figure 1: The XCell Tempest (a) and the Bergen Industrial Twin (b) used in our experiments.
(φ̈, θ̈, ψ̈)^b_t = A_rot s_t + B_rot u_t, where A_pos, B_pos, A_rot, B_rot are (sparse) matrices that contain the parameters to be learned.⁴ This gives us the LEARN-LAGGED-ACCELERATION
algorithm, which is identical to LEARN-LAGGED-LINEAR except that step 3 now solves
the following least squares problems:
    (Â_pos, B̂_pos) = arg min_{A,B} Σ_{t=1}^{T−H} Σ_{h=1}^{H} ‖ Σ_{τ=0}^{h−1} ( R̂^{b_{t+τ}→s} (ẍ, ÿ, z̈)^b_{t+τ} − (A ŝ_{t+τ|t} + B u_{t+τ}) ) ‖_2²,

    (Â_rot, B̂_rot) = arg min_{A,B} Σ_{t=1}^{T−H} Σ_{h=1}^{H} ‖ Σ_{τ=0}^{h−1} ( R̂^{b_{t+τ}→s} (φ̈, θ̈, ψ̈)^b_{t+τ} − (A ŝ_{t+τ|t} + B u_{t+τ}) ) ‖_2².

Here R̂^{b_t→s} denotes the rotation matrix (estimated from simulation using the current
model) from the body frame at time t to the world frame.
6 Experiments
We performed experiments on two RC helicopters: an XCell Tempest and a Bergen Industrial Twin helicopter. (See Figure 1.) The XCell Tempest is a competition-class aerobatic
helicopter (length 54", height 19"), is powered by a 0.91-size, two-stroke engine, and has
an unloaded weight of 13 pounds. It carries two sensor units: a Novatel RT2 GPS receiver
and a Microstrain 3DM-GX1 orientation sensor. The Microstrain package contains triaxial
accelerometers, rate gyros, and magnetometers, which are used for inertial sensing. The
larger Bergen Industrial Twin helicopter is powered by a twin cylinder 46cc, two-stroke
engine, and has an unloaded weight of 18 lbs. It carries three sensor units: a Novatel
RT2 GPS receiver, MicroStrain 3DM-G magnetometers, and an Inertial Science ISIS-IMU
(triaxial accelerometers and rate gyros).
For each helicopter, we collected data from two separate flights. The XCell Tempest train
and test flights were 800 and 540 seconds long, the Bergen Industrial Twin train and test
flights were each 110 seconds long. A highly optimized Kalman filter integrates the sensor information and reports (at 100 Hz) 12 numbers corresponding to the helicopter's state
(x, y, z, ẋ, ẏ, ż, φ, θ, ψ, φ̇, θ̇, ψ̇). The data is then downsampled to 10 Hz before learning.
For each of the helicopters, we learned the following models:
1. Linear-One-Step: The linear model from Section 3 trained using linear regression to
minimize the one-step prediction error.
2. Linear-CIFER: The linear model from Section 3 trained using CIFER.
3. Linear-Lagged: The linear model from Section 3 trained minimizing the lagged criterion.
4. Acceleration-One-Step: The acceleration prediction model from Section 4 trained using
linear regression to minimize the one step prediction error.
5. Acceleration-Lagged: The acceleration prediction model from Section 4 trained minimizing the lagged criterion.
⁴ For simplicity of notation we omit the intercept parameters here, but they are easily incorporated,
e.g., by having one additional input which is always equal to one.
For Linear-Lagged and Acceleration-Lagged we used a horizon H of two seconds (20
simulation steps). The CPU times for training the different algorithms were: Less than one
second for linear regression (algorithms 1 and 4 in the list above); one hour 20 minutes
(XCell Tempest data) or 10 minutes (Bergen Industrial Twin data) for the lagged criteria
(algorithms 3 and 5 above); about 5 minutes for CIFER. Our algorithm optimizing the
lagged criterion appears to converge after at most 30 iterations. Since this algorithm is only
approximate, we can then use coordinate descent search to further improve the lagged criterion.5 This coordinate descent search took an additional four hours for the XCell Tempest
data and an additional 30 minutes for the Bergen Industrial Twin data. We report results
both with and without this coordinate descent search. Our results show that the algorithm
presented in Section 5 works well for fast approximate optimization of the lagged criterion,
but that locally greedy search (coordinate descent) may then improve it yet further.
For evaluation, the test data was split in consecutive non-overlapping two second windows.
(This corresponds to 20 simulation steps, s0 , . . . , s20 .) The models are used to predict the
state sequence over the two second window, when started in the true state s0 . We report
the average squared prediction error (difference between the simulated and true state) at
each timestep t = 1, . . . , 20 throughout the two second window. The orientation error is
measured by the squared magnitude of the minimal rotation needed to align the simulated
orientation with the true orientation. Velocity, position, angular rate and orientation errors
are measured in m/s, m, rad/s and rad (squared) respectively. (See Figure 2.)
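The windowed evaluation protocol just described can be sketched as follows (illustrative code on linear toy data, not the paper's evaluation harness; `step_fn` stands for any learned one-step model):

```python
import numpy as np

def windowed_errors(states, controls, step_fn, horizon=20):
    """Split the test trajectory into consecutive non-overlapping windows,
    simulate each from its true start state with step_fn(s, u) -> s_next,
    and return the squared error averaged over windows at each timestep."""
    T = len(controls)
    errs = np.zeros(horizon)
    n_win = 0
    for start in range(0, T - horizon + 1, horizon):
        s_hat = states[start].copy()
        for h in range(horizon):
            s_hat = step_fn(s_hat, controls[start + h])
            errs[h] += float(np.sum((s_hat - states[start + h + 1]) ** 2))
        n_win += 1
    return errs / n_win

# Linear toy data; the true model simulates each window perfectly.
A = np.array([[-0.1, 0.05], [0.0, -0.2]])
B = np.array([[0.3], [0.1]])
rng = np.random.default_rng(3)
u = rng.standard_normal((100, 1))
s = [np.zeros(2)]
for t in range(100):
    s.append(s[-1] + A @ s[-1] + B @ u[t])
s = np.array(s)

true_step = lambda x, v: x + A @ x + B @ v
err_true = windowed_errors(s, u, true_step)
err_zero = windowed_errors(s, u, lambda x, v: x)  # frozen-state baseline
```

Plotting such per-timestep error curves for each learned model is exactly what Figure 2 reports for the two helicopters.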
We see that Linear-Lagged consistently outperforms Linear-CIFER and Linear-One-Step. Similarly, for the acceleration prediction models, we have that Acceleration-Lagged consistently outperforms Acceleration-One-Step. These experiments support
the case for training with the lagged criterion.
The best acceleration prediction model, Acceleration-Lagged, is significantly more accurate than any of the linear models presented in Section 3. This effect is mostly present
in the XCell Tempest data, which contained data collected from many different parts of
the state space (e.g., flying in a circle); in contrast, the Bergen Industrial Twin data was
collected mostly near hovering (and thus the linearization assumptions were somewhat less
poor there).
7 Summary
We presented an acceleration based parameterization for learning vehicular dynamics. The
model predicts accelerations, and then integrates to obtain state trajectories. We also described an efficient algorithm for approximately minimizing the lagged criterion, which
measures the predictive accuracy of the algorithm over both short and long time-scales.
In our experiments, learning with the acceleration parameterization and using the lagged
criterion gave significantly more accurate models than previous approaches. Using this approach, we have recently also succeeded in learning a model for, and then autonomously
flying, a "funnel" aerobatic maneuver, in which the helicopter flies in a circle, keeping the
tail pointed at the center of rotation, and the body of the helicopter pitched backwards at a
steep angle (so that the body of the helicopter traces out the surface of a funnel). (Details
will be presented in a forthcoming paper.)
Acknowledgments. We give warm thanks to Adam Coates and to helicopter pilot Ben Tse
for their help on this work.
⁵ We used coordinate descent on the criterion of Eqn. (2), but reweighted the errors on velocity,
angular velocity, position and orientation to scale them to roughly the same order of magnitude.
[Figure 2 plots: per-timestep squared errors of velocity, position, angular rate, and orientation over two-second simulations, for the XCell Tempest (top row) and the Bergen Industrial Twin (bottom row); horizontal axis t (s).]
Figure 2: (Best viewed in color.) Average squared prediction errors throughout two-second simulations. Blue, dotted: Linear-One-Step. Green, dash-dotted: Linear-CIFER. Yellow, triangle:
Linear-Lagged learned with fast, approximate algorithm from Section 5. Red, dashed: LinearLagged learned with fast, approximate algorithm from Section 5 followed by greedy coordinate descent search. Magenta, solid: Acceleration-One-Step. Cyan, circle: Acceleration-Lagged learned
with fast, approximate algorithm from Section 5. Black,*: Acceleration-Lagged learned with fast,
approximate algorithm from Section 5 followed by greedy coordinate descent search. The magenta,
cyan and black lines (visually) coincide in the XCell position plots. The blue, yellow, magenta and
cyan lines (visually) coincide in the Bergen angular rate and orientation plots. The red and black lines
(visually) coincide in the Bergen angular rate plot. See text for details.
References
[1] P. Abbeel and A. Y. Ng. Learning first order Markov models for control. In NIPS 18, 2005.
[2] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy
search methods. In International Conference on Robotics and Automation. IEEE, 2001.
[3] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron. Control logic for automated aerobatic flight
of miniature helicopter. In AIAA Guidance, Navigation and Control Conference, 2002.
[4] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron. Flight test and simulation results for an
autonomous aerobatic helicopter. In AIAA/IEEE Digital Avionics Systems Conference, 2002.
[5] J. Leishman. Principles of Helicopter Aerodynamics. Cambridge University Press, 2000.
[6] B. Mettler, M. Tischler, and T. Kanade. System identification of small-size unmanned helicopter
dynamics. In American Helicopter Society, 55th Forum, 1999.
[7] Andrew Y. Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric
Berger, and Eric Liang. Autonomous inverted helicopter flight via reinforcement learning. In
International Symposium on Experimental Robotics, 2004.
[8] Andrew Y. Ng, H. Jin Kim, Michael Jordan, and Shankar Sastry. Autonomous helicopter flight
via reinforcement learning. In NIPS 16, 2004.
[9] Jonathan M. Roberts, Peter I. Corke, and Gregg Buskey. Low-cost flight control system for a
small autonomous helicopter. In IEEE Int'l Conf. on Robotics and Automation, 2003.
[10] J. Seddon. Basic Helicopter Aerodynamics. AIAA Education Series. American Institute of
Aeronautics and Astronautics, 1990.
[11] M.B. Tischler and M.G. Cauffman. Frequency response method for rotorcraft system identification: Flight application to BO-105 coupled rotor/fuselage dynamics. Journal of the American
Helicopter Society, 1992.
2,012 | 2,828 | Asymptotics of Gaussian Regularized Least-Squares
Ross A. Lippert
M.I.T., Department of Mathematics
77 Massachusetts Avenue
Cambridge, MA 02139-4307
[email protected]
Ryan M. Rifkin
Honda Research Institute USA, Inc.
145 Tremont Street
Boston, MA 02111
[email protected]
Abstract
We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth σ → ∞ while letting the regularization parameter λ → 0, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of 1/σ² and λ: if λ = σ^{−(2k+1)}, then, as σ → ∞, the RLS solution tends to the kth order polynomial with minimal empirical error. We illustrate the result with an example.
1 Introduction
Given a data set (x1 , y1 ), (x2 , y2 ), . . . , (xn , yn ), the inductive learning task is to build a
function f (x) that, given a new x point, can predict the associated y value. We study the
Regularized Least-Squares (RLS) algorithm for finding f , a common and popular algorithm [2, 5] that can be used for either regression or classification:
$$\min_{f \in \mathcal{H}} \; \frac{1}{n}\sum_{i=1}^{n} \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_K^2.$$
Here, $\mathcal{H}$ is a Reproducing Kernel Hilbert Space (RKHS) [1] with associated kernel function K, $\|f\|_K^2$ is the squared norm in the RKHS, and λ is a regularization constant controlling the tradeoff between fitting the training set accurately and forcing smoothness of f.

The Representer Theorem [7] proves that the RLS solution will have the form $f(x) = \sum_{i=1}^{n} c_i K(x_i, x)$, and it is easy to show [5] that we can find the coefficients c by solving the linear system

$$(K + \lambda n I)\,c = y, \qquad (1)$$

where K is the n by n matrix satisfying $K_{ij} = K(x_i, x_j)$. We focus on the Gaussian kernel $K(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / 2\sigma^2)$.
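The pieces above, the Gaussian kernel, the linear system (1), and the resulting predictor, fit together in a few lines of code. A minimal sketch (our own illustration, not code from the paper; all variable names are ours):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2)), via pairwise squared distances
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def rls_fit(X, y, sigma, lam):
    # Solve the linear system (1): (K + lambda * n * I) c = y
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rls_predict(X, c, sigma, X_new):
    # f(x) = sum_i c_i K(x_i, x)
    return gaussian_kernel(X_new, X, sigma) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
c = rls_fit(X, y, sigma=1.0, lam=1e-3)
train_pred = rls_predict(X, c, sigma=1.0, X_new=X)
print(np.mean((train_pred - y) ** 2))  # small training MSE
```

With moderate σ and λ this is routine kernel ridge regression; the paper's interest is in what happens as σ grows and λ shrinks.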
Our work was originally motivated by the empirical observation that on a range of benchmark classification tasks, we achieved surprisingly accurate classification using a Gaussian kernel with a very large σ and a very small λ (Figure 1; additional examples in [6]). This prompted us to study the large-σ asymptotics of RLS. As σ → ∞, K(x_i, x_j) → 1 for arbitrary x_i and x_j. Consider a single test point x_0. RLS will first find c using Equation 1,
[Figure 1: plot of RLSC accuracy versus Sigma for several values of λ on the GALAXY dataset; vertical lines mark m = 0.9, 0.99999, and 1.0d−249.]

Fig. 1. RLS classification accuracy results for the UCI Galaxy dataset over a range of σ (along the x-axis) and λ (different lines) values. The vertical labelled lines show m, the smallest entry in the kernel matrix for a given σ. We see that when λ = 1e−11, we can classify quite accurately when the smallest entry of the kernel matrix is .99999.
then compute $f(x_0) = c^t k$ where k is the kernel vector, $k_i = K(x_i, x_0)$. Combining the training and testing steps, we see that $f(x_0) = y^t (K + \lambda n I)^{-1} k$.

Both K and k are close to 1 for large σ, i.e. $K_{ij} = 1 + \epsilon_{ij}$ and $k_i = 1 + \epsilon_i$. If we directly compute $c = (K + \lambda n I)^{-1} y$, we will tend to wash out the effects of the $\epsilon_{ij}$ term as σ becomes large. If, instead, we compute $f(x_0)$ by associating to the right, first computing point affinities $(K + \lambda n I)^{-1} k$, then the $\epsilon_{ij}$ and $\epsilon_j$ interact meaningfully; this interaction is crucial to our analysis.
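In exact arithmetic both orders of association give the same number, which is easy to confirm numerically at moderate σ (a sketch with made-up data; at extreme σ the two orders diverge in floating point, which is the phenomenon the paper analyzes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
X = rng.standard_normal((n, 3))
y = rng.standard_normal(n)
x0 = rng.standard_normal(3)
sigma, lam = 2.0, 1e-2

k = np.exp(-((X - x0) ** 2).sum(1) / (2 * sigma**2))        # k_i = K(x_i, x0)
d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d / (2 * sigma**2))
M = K + lam * n * np.eye(n)

f_left = np.linalg.solve(M, y) @ k    # c = M^{-1} y first, then c . k
f_right = y @ np.linalg.solve(M, k)   # affinities v = M^{-1} k first, then y . v
print(f_left, f_right)                # identical up to rounding
```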
Our approach is to Taylor expand the kernel elements (and thus K and k) in 1/σ, noting that as σ → ∞, consecutive terms in the expansion differ enormously. In computing $(K + \lambda n I)^{-1} k$, these scalings cancel each other out, and result in finite point affinities even as σ → ∞. The asymptotic affinity formula can then be "transposed" to create an alternate expression for $f(x_0)$. Our main result is that if we set σ² = s² and λ = s^{−(2k+1)}, then, as s → ∞, the RLS solution tends to the kth order polynomial with minimal empirical error.

The main theorem is proved in full. Due to space restrictions, the proofs of supporting lemmas and corollaries are omitted; an expanded version containing all proofs is available [4].
2 Notation and definitions

Definition 1. Let $x_i$ be a set of n + 1 points (0 ≤ i ≤ n) in a d dimensional space. The scalar $x_{ia}$ denotes the value of the ath vector component of the ith point. The n × d matrix X is given by $X_{ia} = x_{ia}$.

We think of X as the matrix of training data $x_1, \ldots, x_n$ and $x_0$ as a 1 × d matrix consisting of the test point.

Let $\mathbf{1}_m$, $\mathbf{1}_{lm}$ denote the m dimensional vector and l × m matrix with components all 1, similarly for $\mathbf{0}_m$, $\mathbf{0}_{lm}$. We will dispense with such subscripts when the dimensions are clear from context.
Definition 2 (Hadamard products and powers). For two l × m matrices N, M, the product N M denotes the l × m matrix given by $(NM)_{ij} = N_{ij} M_{ij}$. Analogously, we set $(N^c)_{ij} = N_{ij}^c$.

Definition 3 (polynomials in the data). Let $I \in \mathbb{Z}_{\ge 0}^d$ (non-negative multi-indices) and Y be a k × d matrix. $Y^I$ is the k dimensional vector given by $(Y^I)_i = \prod_{a=1}^{d} Y_{ia}^{I_a}$. If $h : \mathbb{R}^d \to \mathbb{R}$ then h(Y) is the k dimensional vector given by $(h(Y))_i = h(Y_{i1}, \ldots, Y_{id})$. The d canonical vectors $e_a \in \mathbb{Z}_{\ge 0}^d$ are given by $(e_a)_b = \delta_{ab}$.

Any scalar function, $f : \mathbb{R} \to \mathbb{R}$, applied to any matrix or vector, A, will be assumed to denote the elementwise application of f. We will treat $y \mapsto e^y$ as a scalar function (we have no need of matrix exponentials in this work, so the notation is unambiguous).
We can re-express the kernel matrix and kernel vector in this notation:
$$K = e^{\frac{1}{2\sigma^2}\sum_{a=1}^{d}\left(2X^{e_a}(X^{e_a})^t - X^{2e_a}\mathbf{1}_n^t - \mathbf{1}_n (X^{2e_a})^t\right)} \qquad (2)$$
$$\phantom{K} = \mathrm{diag}\!\left(e^{-\frac{1}{2\sigma^2}\|X\|^2}\right) e^{\frac{1}{\sigma^2} XX^t}\; \mathrm{diag}\!\left(e^{-\frac{1}{2\sigma^2}\|X\|^2}\right) \qquad (3)$$
$$k = e^{\frac{1}{2\sigma^2}\sum_{a=1}^{d}\left(2X^{e_a}x_0^{e_a} - X^{2e_a}\mathbf{1}_1 - \mathbf{1}_n x_0^{2e_a}\right)} \qquad (4)$$
$$\phantom{k} = \mathrm{diag}\!\left(e^{-\frac{1}{2\sigma^2}\|X\|^2}\right) e^{\frac{1}{\sigma^2} X x_0^t}\; e^{-\frac{1}{2\sigma^2}\|x_0\|^2}, \qquad (5)$$

where all exponentials, products, and powers of matrices are elementwise (Definition 2) and $\|X\|^2$ denotes the vector of squared row norms of X.
3 Orthogonal polynomial bases

Let $V_c = \mathrm{span}\{X^I : |I| = c\}$ and $V_{\le c} = \sum_{a=0}^{c} V_a$, which can be thought of as the set of all d variable polynomials of degree c, evaluated on the training data. Since the data are finite, there exists b such that $V_{\le c} = V_{\le b}$ for all c ≥ b. Generically, b is the smallest c such that $\binom{c+d}{d} \ge n$.

Let Q be an orthonormal matrix in $\mathbb{R}^{n \times n}$ whose columns progressively span the $V_{\le c}$ spaces, i.e. $Q = (\,B_0\; B_1\; \cdots\; B_b\,)$ where $Q^t Q = I$ and $\mathrm{colspan}\{(\,B_0 \cdots B_c\,)\} = V_{\le c}$. We might imagine building such a Q via the Gram-Schmidt process on the vectors $X^0, X^{e_1}, \ldots, X^{e_d}, \ldots, X^I, \ldots$ taken in order of non-decreasing |I|.

Letting $C_I = \binom{|I|}{I_1 \,\ldots\, I_d}$ be multinomial coefficients, the following relations between Q, X, and $x_0$ are easily proved:

$$(Xx_0^t)^c = \sum_{|I|=c} C_I\, X^I (x_0^I)^t \quad\text{hence}\quad (Xx_0^t)^c \in V_c$$
$$(XX^t)^c = \sum_{|I|=c} C_I\, X^I (X^I)^t \quad\text{hence}\quad \mathrm{colspan}\{(XX^t)^c\} = V_c$$

and thus, $B_i^t (Xx_0^t)^c = 0$ if i > c, $B_i^t (XX^t)^c B_j = 0$ if i > c or j > c, and $B_c^t (XX^t)^c B_c$ is non-singular.

Finally, we note that $\operatorname{argmin}_{v \in V_{\le c}} \{\|y - v\|\} = \sum_{a \le c} B_a (B_a^t y)$.
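These structural facts can be verified numerically. The sketch below (our own construction, not code from the paper) builds Q by QR-decomposing the monomial columns $X^I$ in order of non-decreasing |I| for d = 2, and checks that $B_i^t (XX^t)^c B_j = 0$ whenever i > c or j > c, the matrix powers being elementwise:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
n, d, deg = 12, 2, 3
X = rng.standard_normal((n, d))

# Monomial columns X^I, grouped by total degree |I| = 0, 1, ..., deg
sizes, cols = [], []
for c in range(deg + 1):
    combos = list(combinations_with_replacement(range(d), c))
    sizes.append(len(combos))
    for combo in combos:
        v = np.ones(n)
        for a in combo:
            v = v * X[:, a]
        cols.append(v)

Q, _ = np.linalg.qr(np.stack(cols, axis=1))   # orthonormal, block-ordered by degree
B = np.split(Q, np.cumsum(sizes)[:-1], axis=1)  # B[0], ..., B[deg]

K = X @ X.T
# B_i^t (XX^t)^c B_j vanishes when i > c or j > c (K ** c is the elementwise power)
for c in range(deg + 1):
    Kc = K ** c
    for i in range(deg + 1):
        for j in range(deg + 1):
            if i > c or j > c:
                assert np.max(np.abs(B[i].T @ Kc @ B[j])) < 1e-8
print("vanishing pattern verified")
```

QR on the stacked monomial matrix plays the role of the Gram-Schmidt construction in the text; for generic random data the monomial columns are linearly independent, as Definition 4 below requires.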
4 Taking the σ → ∞ limit

We will begin with a few simple lemmas about the limiting solutions of linear systems. At the end of this section we will arrive at the limiting form of suitably modified RLSC equations.
Lemma 1. Let $i_1 < \cdots < i_q$ be positive integers. Let A(s), y(s) be a block matrix and block vector given by

$$A(s) = \begin{pmatrix} A_{00}(s) & s^{i_1}A_{01}(s) & \cdots & s^{i_q}A_{0q}(s) \\ s^{i_1}A_{10}(s) & s^{i_1}A_{11}(s) & \cdots & s^{i_q}A_{1q}(s) \\ \vdots & \vdots & & \vdots \\ s^{i_q}A_{q0}(s) & s^{i_q}A_{q1}(s) & \cdots & s^{i_q}A_{qq}(s) \end{pmatrix}, \qquad y(s) = \begin{pmatrix} b_0(s) \\ s^{i_1}b_1(s) \\ \vdots \\ s^{i_q}b_q(s) \end{pmatrix}$$

where $A_{ij}(s)$ and $b_i(s)$ are continuous matrix-valued and vector-valued functions of s with $A_{ii}(0)$ non-singular for all i. Then

$$\lim_{s \to 0} A^{-1}(s)\, y(s) = \begin{pmatrix} A_{00}(0) & 0 & \cdots & 0 \\ A_{10}(0) & A_{11}(0) & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ A_{q0}(0) & A_{q1}(0) & \cdots & A_{qq}(0) \end{pmatrix}^{-1} \begin{pmatrix} b_0(0) \\ b_1(0) \\ \vdots \\ b_q(0) \end{pmatrix}.$$
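A quick numerical sanity check of the lemma for q = 1, on a toy instance we made up with 1 × 1 blocks, $i_1 = 2$, and constant $A_{ij}$, $b_i$:

```python
import numpy as np

a00, a01, a10, a11 = 2.0, 3.0, -1.0, 4.0
b0, b1 = 1.0, 2.0
i1 = 2

def solve(s):
    # A(s) and y(s) from the lemma, with scalar blocks
    A = np.array([[a00, s**i1 * a01],
                  [s**i1 * a10, s**i1 * a11]])
    yv = np.array([b0, s**i1 * b1])
    return np.linalg.solve(A, yv)

# Limiting lower-triangular system from the lemma's right-hand side
limit = np.linalg.solve(np.array([[a00, 0.0], [a10, a11]]),
                        np.array([b0, b1]))
print(solve(1e-4), limit)   # the two agree as s -> 0
```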
We are now ready to state and prove the main result of this section, characterizing the limiting large-σ solution of Gaussian RLS.

Theorem 1. Let q be an integer satisfying q < b, and let p = 2q + 1. Let $\lambda = C\sigma^{-p}$ for some constant C. Define $A_{ij}^{(c)} = \frac{1}{c!} B_i^t (XX^t)^c B_j$, and $b_i^{(c)} = \frac{1}{c!} B_i^t (Xx_0^t)^c$. Then

$$\lim_{\sigma \to \infty} \left(K + nC\sigma^{-p} I\right)^{-1} k = v \qquad (6)$$

where $v = (\,B_0 \cdots B_q\,)\, w$ and

$$\begin{pmatrix} A_{00}^{(0)} & 0 & \cdots & 0 \\ A_{10}^{(1)} & A_{11}^{(1)} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ A_{q0}^{(q)} & A_{q1}^{(q)} & \cdots & A_{qq}^{(q)} \end{pmatrix} w = \begin{pmatrix} b_0^{(0)} \\ b_1^{(1)} \\ \vdots \\ b_q^{(q)} \end{pmatrix}. \qquad (7)$$
Proof. We first manipulate the equation $(K + n\lambda I)v = k$ according to the factorizations in (3) and (5):

$$K = \mathrm{diag}\big(e^{-\|X\|^2/2\sigma^2}\big)\, e^{XX^t/\sigma^2}\, \mathrm{diag}\big(e^{-\|X\|^2/2\sigma^2}\big) = N P N$$
$$k = \mathrm{diag}\big(e^{-\|X\|^2/2\sigma^2}\big)\, e^{Xx_0^t/\sigma^2}\, e^{-\|x_0\|^2/2\sigma^2} = N w_\sigma,$$

where $w_\sigma = \gamma w$ with $\gamma = e^{-\|x_0\|^2/2\sigma^2}$ and $w = e^{Xx_0^t/\sigma^2}$. Noting that $\lim_{\sigma\to\infty} e^{-\|x_0\|^2/2\sigma^2}\,\mathrm{diag}\big(e^{\|X\|^2/2\sigma^2}\big) = \lim_{\sigma\to\infty} \gamma N^{-1} = I$, we have

$$v \equiv \lim_{\sigma\to\infty} (K + nC\sigma^{-p} I)^{-1} k = \lim_{\sigma\to\infty} (NPN + n\lambda I)^{-1} N w_\sigma = \lim_{\sigma\to\infty} \gamma N^{-1} (P + n\lambda N^{-2})^{-1} w$$
$$\phantom{v} = \lim_{\sigma\to\infty} \Big( e^{XX^t/\sigma^2} + nC\sigma^{-p}\,\mathrm{diag}\big(e^{\|X\|^2/\sigma^2}\big) \Big)^{-1} e^{Xx_0^t/\sigma^2}.$$

Changing bases with Q,

$$Q^t v = \lim_{\sigma\to\infty} \Big( Q^t e^{XX^t/\sigma^2} Q + nC\sigma^{-p}\, Q^t\,\mathrm{diag}\big(e^{\|X\|^2/\sigma^2}\big)\, Q \Big)^{-1} Q^t e^{Xx_0^t/\sigma^2}.$$

Expanding via Taylor series and writing in block form (in the b × b block structure of Q),

$$Q^t e^{XX^t/\sigma^2} Q = Q^t (XX^t)^0 Q + \frac{1}{1!\,\sigma^2} Q^t (XX^t)^1 Q + \frac{1}{2!\,\sigma^4} Q^t (XX^t)^2 Q + \cdots$$
$$= \begin{pmatrix} A_{00}^{(0)} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} + \frac{1}{\sigma^2}\begin{pmatrix} A_{00}^{(1)} & A_{01}^{(1)} & \cdots & 0 \\ A_{10}^{(1)} & A_{11}^{(1)} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} + \cdots$$

$$Q^t e^{Xx_0^t/\sigma^2} = Q^t (Xx_0^t)^0 + \frac{1}{1!\,\sigma^2} Q^t (Xx_0^t)^1 + \frac{1}{2!\,\sigma^4} Q^t (Xx_0^t)^2 + \cdots = \begin{pmatrix} b_0^{(0)} \\ 0 \\ \vdots \\ 0 \end{pmatrix} + \frac{1}{\sigma^2}\begin{pmatrix} b_0^{(1)} \\ b_1^{(1)} \\ 0 \\ \vdots \end{pmatrix} + \cdots$$

$$nC\sigma^{-p}\, Q^t\,\mathrm{diag}\big(e^{\|X\|^2/\sigma^2}\big)\, Q = nC\sigma^{-p} I + \cdots.$$

Since the $A_{cc}^{(c)}$ are non-singular, Lemma 1 applies, giving our result. □
5 The classification function
When performing RLS, the actual prediction of the limiting classifier is given via

$$f_\infty(x_0) \equiv \lim_{\sigma\to\infty} y^t (K + nC\sigma^{-p} I)^{-1} k.$$

Theorem 1 determines $v = \lim_{\sigma\to\infty} (K + nC\sigma^{-p} I)^{-1} k$, showing that $f_\infty(x_0)$ is a polynomial in the training data X. In this section, we show that $f_\infty(x_0)$ is, in fact, a polynomial in the test point $x_0$. We continue to work with the orthonormal vectors $B_i$ as well as the auxiliary quantities $A_{ij}^{(c)}$ and $b_i^{(c)}$ from Theorem 1.

Theorem 1 shows that $v \in V_{\le q}$: the point affinity function is a polynomial of degree q in the training data, determined by (7). Since

$$\sum_{i,j \le c} c!\, B_i A_{ij}^{(c)} B_j^t = (XX^t)^c \quad\text{hence}\quad \sum_{j \le c} c!\, B_c A_{cj}^{(c)} B_j^t = B_c B_c^t (XX^t)^c$$
$$\sum_{i \le c} c!\, B_i b_i^{(c)} = (Xx_0^t)^c \quad\text{hence}\quad c!\, B_c b_c^{(c)} = B_c B_c^t (Xx_0^t)^c$$
we can restate Equation 7 in an equivalent form:

$$\begin{pmatrix} B_0^t \\ \vdots \\ B_q^t \end{pmatrix}^{\!t} \left[ \begin{pmatrix} 0!\, b_0^{(0)} \\ 1!\, b_1^{(1)} \\ \vdots \\ q!\, b_q^{(q)} \end{pmatrix} - \begin{pmatrix} 0!A_{00}^{(0)} & 0 & \cdots & 0 \\ 1!A_{10}^{(1)} & 1!A_{11}^{(1)} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ q!A_{q0}^{(q)} & q!A_{q1}^{(q)} & \cdots & q!A_{qq}^{(q)} \end{pmatrix} \begin{pmatrix} B_0^t \\ \vdots \\ B_q^t \end{pmatrix} v \right] = 0 \qquad (8)$$

$$\sum_{c \le q} \Big( c!\, B_c b_c^{(c)} - \sum_{j \le c} c!\, B_c A_{cj}^{(c)} B_j^t v \Big) = 0 \qquad (9)$$

$$\sum_{c \le q} B_c B_c^t \Big( (Xx_0^t)^c - (XX^t)^c v \Big) = 0. \qquad (10)$$
Up to this point, our results hold for arbitrary training data X. To proceed, we require a mild condition on our training set.

Definition 4. X is called generic if $X^{I_1}, \ldots, X^{I_n}$ are linearly independent for any distinct multi-indices $\{I_i\}$.

Lemma 2. For generic X, the solution to Equation 7 (or equivalently, Equation 10) is determined by the conditions $\forall I : |I| \le q,\; (X^I)^t v = x_0^I$, where $v \in V_{\le q}$.

Theorem 2. For generic data, let v be the solution to Equation 10. For any $y \in \mathbb{R}^n$, $f(x_0) = y^t v = h(x_0)$, where $h(x) = \sum_{|I| \le q} a_I x^I$ is a multivariate polynomial of degree q minimizing $\|y - h(X)\|$.

We see that as σ → ∞, the RLS solution tends to the minimum empirical error kth order polynomial.
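For q = 0 the limit can be checked directly in double precision (larger q needs the extended-precision arithmetic the paper turns to next). With $\lambda = \sigma^{-1}$, the large-σ RLS prediction should approach the best degree-0 polynomial, i.e. the mean of y. A sketch of that check on synthetic data of our own:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 25
X = rng.uniform(0, 1, size=(n, 1))
y = rng.uniform(-1, 1, size=n)
x0 = np.array([[0.5]])

def rls_value(sigma):
    lam = sigma ** (-1.0)          # q = 0, so p = 2q + 1 = 1 and lambda = sigma^{-p}
    K = np.exp(-((X - X.T) ** 2) / (2 * sigma**2))
    k = np.exp(-((X - x0.T) ** 2).ravel() / (2 * sigma**2))
    return y @ np.linalg.solve(K + n * lam * np.eye(n), k)

print(rls_value(1e2), rls_value(1e4), y.mean())  # approaches the mean as sigma grows
```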
6 Experimental Verification

In this section, we present a simple experiment that illustrates our results. We consider a fifth-degree polynomial function. Figure 2 plots f, along with a 150 point dataset drawn by choosing $x_i$ uniformly in [0, 1], and choosing $y = f(x) + \epsilon_i$, where $\epsilon_i$ is a Gaussian random variable with mean 0 and standard deviation .05. Figure 2 also shows (in red) the best polynomial approximations to the data (not to the ideal f) of various orders. (We omit third order because it is nearly indistinguishable from second order.)

According to Theorem 1, if we parametrize our system by a variable s, and solve a Gaussian regularized least-squares problem with σ² = s² and λ = Cs^{−(2k+1)} for some integer k, then, as s → ∞, we expect the solution to the system to tend to the kth-order data-based polynomial approximation to f. Asymptotically, the value of the constant C does not matter, so we (arbitrarily) set it to be 1. Figure 3 demonstrates this result.

We note that these experiments frequently require setting λ much smaller than machine epsilon. As a consequence, we need more precision than IEEE double-precision floating-point, and our results cannot be obtained via many standard tools (e.g., MATLAB(TM)). We performed our experiments using CLISP, an implementation of Common Lisp that includes arithmetic operations on arbitrary-precision floating point numbers.
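The data-based polynomial approximations of Figure 2 are ordinary least-squares fits, which can be reproduced with a Vandermonde system. A sketch in numpy (illustrative only; the authors' actual experiments used CLISP with arbitrary precision):

```python
import numpy as np

def f(x):
    # The fifth-degree polynomial from the Figure 2 caption
    return 0.5 * (1 - x) + 150 * x * (x - 0.25) * (x - 0.3) * (x - 0.75) * (x - 0.95)

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 150)
y = f(x) + 0.05 * rng.standard_normal(150)

def poly_fit_residual(k):
    # Mean squared residual of the best-fit degree-k polynomial
    V = np.vander(x, k + 1, increasing=True)   # columns [1, x, ..., x^k]
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return np.mean((V @ coef - y) ** 2)

residuals = [poly_fit_residual(k) for k in (0, 1, 2, 4, 5)]
print(residuals)  # shrinks with the order; order 5 reaches the noise level
```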
7 Discussion

Our result provides insight into the asymptotic behavior of RLS, and (partially) explains Figure 1: in conjunction with additional experiments not reported here, we believe that
[Figure 2: plot showing f(x), a 150-point random sample of f(x), and least-squares polynomial approximations of orders 0, 1, 2, 4, and 5.]

Fig. 2. f(x) = .5(1 − x) + 150x(x − .25)(x − .3)(x − .75)(x − .95), a random dataset drawn from f(x) with added Gaussian noise, and data-based polynomial approximations to f.
we are recovering second-order polynomial behavior, with the drop-off in performance at various σ's occurring at the transition to third-order behavior, which cannot be accurately recovered in IEEE double-precision floating-point. Although we used the specific details of RLS in deriving our solution, we expect that in practice, a similar result would hold for Support Vector Machines, and perhaps for Tikhonov regularization with convex loss more generally.

An interesting implication of our theorem is that for very large σ, we can obtain various order polynomial classifications by sweeping λ. In [6], we present an algorithm for solving for a wide range of λ for essentially the same cost as using a single λ. This algorithm is not currently practical for large σ, due to the need for extended-precision floating point.

Our work also has implications for approximations to the Gaussian kernel. Yang et al. use the Fast Gauss Transform (FGT) to speed up matrix-vector multiplications when performing RLS [8]. In [6], we studied this work; we found that while Yang et al. used moderate-to-small values of σ (and did not tune λ), the FGT sacrificed substantial accuracy compared to the best achievable results on their datasets. We showed empirically that the FGT becomes much more accurate at larger values of σ; however, at large σ, it seems likely we are merely recovering low-order polynomial behavior. We suggest that approximations to the Gaussian kernel must be checked carefully, to show that they produce sufficiently good results at moderate values of σ; this is a topic for future work.
[Figure 3: four panels showing the 0th, 1st, 4th, and 5th order solutions, each with successive approximations for s = 1.d+1 up to 1.d+6.]

Fig. 3. As s → ∞, with σ² = s² and λ = s^{−(2k+1)}, the solution to Gaussian RLS approaches the kth order polynomial solution.

References
1. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
2. Evgeniou, Pontil, and Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 13(1):1–50, 2000.
3. Keerthi and Lin. Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15(7):1667–1689, 2003.
4. Ross Lippert and Ryan Rifkin. Asymptotics of Gaussian regularized least-squares. Technical Report MIT-CSAIL-TR-2005-067, MIT Computer Science and Artificial Intelligence Laboratory, 2005.
5. Rifkin. Everything Old Is New Again: A Fresh Look at Historical Approaches to Machine Learning. PhD thesis, Massachusetts Institute of Technology, 2002.
6. Rifkin and Lippert. Practical regularized least-squares: λ-selection and fast leave-one-out computation. In preparation, 2005.
7. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial & Applied Mathematics, 1990.
8. Yang, Duraiswami, and Davis. Efficient kernel machines using the improved fast Gauss transform. In Advances in Neural Information Processing Systems, volume 16, 2004.
2,013 | 2,829 | Two view learning: SVM-2K, Theory and Practice
Jason D.R. Farquhar
[email protected]
David R. Hardoon
[email protected]
Hongying Meng
[email protected]
John Shawe-Taylor
[email protected]
Sandor Szedmak
[email protected]
School of Electronics and Computer Science,
University of Southampton, Southampton, England
Abstract
Kernel methods make it relatively easy to define complex high-dimensional feature spaces. This raises the question of how we can
identify the relevant subspaces for a particular learning task. When two
views of the same phenomenon are available kernel Canonical Correlation Analysis (KCCA) has been shown to be an effective preprocessing
step that can improve the performance of classification algorithms such
as the Support Vector Machine (SVM). This paper takes this observation to its logical conclusion and proposes a method that combines this
two stage learning (KCCA followed by SVM) into a single optimisation
termed SVM-2K. We present both experimental and theoretical analysis
of the approach showing encouraging results and insights.
1 Introduction
Kernel methods enable us to work with high dimensional feature spaces by defining weight
vectors implicitly as linear combinations of the training examples. This even makes it
practical to learn in infinite dimensional spaces as for example when using the Gaussian
kernel. The Gaussian kernel is an extreme example, but techniques have been developed to
define kernels for a range of different datatypes, in many cases characterised by very high
dimensionality. Examples are the string kernels for text, graph kernels for graphs, marginal
kernels, kernels for image data, etc.
With this plethora of high dimensional representations it is frequently helpful to assist
learning algorithms by preprocessing the feature space, projecting the data into a low
dimensional subspace that contains the relevant information for the learning task. Methods
of performing this include principle components analysis (PCA) [7], partial least squares
[8], kernel independent component analysis (KICA) [1] and kernel canonical correlation
analysis (KCCA) [5].
The last method requires two views of the data both of which contain all of the relevant
information for the learning task, but which individually contain representation specific
details that are different and irrelevant. Perhaps the simplest example of this situation is a
paired document corpus in which we have the same information in two languages. KCCA
attempts to isolate feature space directions that correlate between the two views and hence
might be expected to represent the common relevant information. Hence, one can view this
preprocessing as a denoising of the individual representations through cross-correlating
them.
Experiments have shown how using this as a preprocessing step can improve subsequent
analysis in for example classification experiments using a support vector machine (SVM)
[6]. This is explained by the fact that the signal to noise ratio has improved in the identified
subspace.
Though the combination of KCCA and SVM seems effective, there appears to be no guarantee
that the directions identified by KCCA will be best suited to the classification task. This
paper therefore looks at the possibility of combining the two distinct stages of KCCA and
SVM into a single optimisation that will be termed SVM-2K.
The next section introduces the new algorithm and discusses its structure. Experiments are
then given showing the performance of the algorithm on an image classification task.
Though the performance is encouraging it is in many ways counter-intuitive, leading to
speculation about why an improvement is seen. To investigate this question an analysis of
its generalisation properties is given in the following two sections, before drawing conclusions.
2 SVM-2K Algorithm
We assume that we are given two views of the same data, one expressed through a feature projection $\phi_A$ with corresponding kernel $\kappa_A$ and the other through a feature projection $\phi_B$ with kernel $\kappa_B$. A paired data set is then given by a set

$$S = \{(\phi_A(x_1), \phi_B(x_1)), \ldots, (\phi_A(x_\ell), \phi_B(x_\ell))\},$$

where for example $\phi_A$ could be the feature vector associated with one language and $\phi_B$ that associated with a second language. For a classification task each data item would also include a label.
The KCCA algorithm looks for directions in the two feature spaces such that when the
training data is projected onto those directions the two vectors (one for each view) of values
obtained are maximally correlated. One can also characterise these directions as those that
minimise the two norm between the two vectors under the constraint that they both have
norm 1 [5].
We can think of this as constraining the choice of weight vectors in the two spaces. KCCA
would typically find a sequence of projection directions of dimension anywhere between
50 and 500 that can then be used as the feature space for training an SVM [6].
An SVM can be thought of as a 1-dimensional projection followed by thresholding, so
SVM-2K combines the two steps by introducing the constraint of similarity between two
1-dimensional projections identifying two distinct SVMs one in each of the two feature
spaces. The extra constraint is chosen slightly differently from the 2-norm that characterises KCCA. We rather take an ε-insensitive 1-norm using slack variables to measure the amount by which points fail to meet ε similarity:

$$\left| \langle w_A, \phi_A(x_i) \rangle + b_A - \langle w_B, \phi_B(x_i) \rangle - b_B \right| \le \eta_i + \varepsilon,$$

where $w_A, b_A$ ($w_B, b_B$) are the weight and threshold of the first (second) SVM.
Combining this constraint with the usual 1-norm SVM constraints and allowing different
regularisation constants gives the following optimisation:
$$\min L = \frac{1}{2}\|w_A\|^2 + \frac{1}{2}\|w_B\|^2 + C^A \sum_{i=1}^{\ell} \xi_i^A + C^B \sum_{i=1}^{\ell} \xi_i^B + D \sum_{i=1}^{\ell} \eta_i$$

such that

$$\left| \langle w_A, \phi_A(x_i) \rangle + b_A - \langle w_B, \phi_B(x_i) \rangle - b_B \right| \le \eta_i + \varepsilon$$
$$y_i\big(\langle w_A, \phi_A(x_i) \rangle + b_A\big) \ge 1 - \xi_i^A$$
$$y_i\big(\langle w_B, \phi_B(x_i) \rangle + b_B\big) \ge 1 - \xi_i^B$$
$$\xi_i^A \ge 0, \quad \xi_i^B \ge 0, \quad \eta_i \ge 0, \quad \text{all for } 1 \le i \le \ell.$$
Let $\hat{w}_A, \hat{w}_B, \hat{b}_A, \hat{b}_B$ be the solution to this optimisation problem. The final SVM-2K decision function is then $h(x) = \mathrm{sign}(f(x))$, where

$$f(x) = 0.5\left( \langle \hat{w}_A, \phi_A(x) \rangle + \hat{b}_A + \langle \hat{w}_B, \phi_B(x) \rangle + \hat{b}_B \right) = 0.5\,(f_A(x) + f_B(x)).$$
Applying the usual Lagrange multiplier techniques we arrive at the following dual problem:
$$\max W = -\frac{1}{2}\sum_{i,j=1}^{\ell}\left( g_i^A g_j^A\, \kappa_A(x_i, x_j) + g_i^B g_j^B\, \kappa_B(x_i, x_j) \right) + \sum_{i=1}^{\ell}\left( \alpha_i^A + \alpha_i^B \right)$$

such that

$$g_i^A = \alpha_i^A y_i - \beta_i^+ + \beta_i^-, \qquad g_i^B = \alpha_i^B y_i + \beta_i^+ - \beta_i^-,$$
$$\sum_{i=1}^{\ell} g_i^A = 0 = \sum_{i=1}^{\ell} g_i^B,$$
$$0 \le \alpha_i^{A/B} \le C^{A/B}, \qquad 0 \le \beta_i^{+/-}, \qquad \beta_i^+ + \beta_i^- \le D,$$

with the functions

$$f_{A/B}(x) = \sum_{i=1}^{\ell} g_i^{A/B}\, \kappa_{A/B}(x_i, x) + b_{A/B}.$$
3 Experimental results
Figure 1: Typical example images from the PASCAL VOC challenge database. Classes are: Bikes (top-left), People (top-right), Cars (bottom-left) and Motorbikes (bottom-right).
The performance of the algorithms developed in this paper was evaluated on the PASCAL Visual Object Classes (VOC) challenge dataset test1¹. This is a new dataset consisting of four object classes in realistic scenes. The object classes are motorbikes (M), bicycles (B),
people (P) and cars (C) with the dataset containing 684 training set images consisting of
(214, 114, 84, 272) images in each class and 689 test set images with (216, 114, 84, 275)
for each class. As can be seen in Figure 1 this is a very challenging dataset with objects of
widely varying type, pose, illumination, occlusion, background, etc.
The task is to classify the image according to whether it contains a given object type. We
tested the images containing the object (i.e. categories M, B, C and P) against non-object
images from the database (i.e. category N). The training set contained 100 positive and 100
negative images. The tests are carried out on 100 new images, half belonging to the learned
class and half not.
Like many other successful methods [3, 4] we take a "set-of-patches" approach to this
problem. These methods represent an image in terms of the features of a set of small
image patches. By carefully choosing the patches and their features this representation can
be made largely robust to the common types of image transformation, e.g. scale, rotation,
perspective, occlusion.
Two views were provided of each image through the use of different patch types. One was
from affine invariant interest point detectors with a moment invariant descriptor calculated
for each interest point. The second was key point features from SIFT detectors. For one
image, several hundred characteristic patches were detected according to the complexity
of the images. These were then clustered around K = 400 centres for each feature space.
Each image is then represented as a histogram over these centres. So finally, for one image
there are two feature vectors of length 400 that provide the two views.
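The histogram representation just described can be sketched end-to-end. This is an illustrative reconstruction rather than the authors' pipeline: the tiny `kmeans` helper, the random 8-D "descriptors", and K = 16 stand in for the real interest-point/SIFT detectors and the K = 400 centres used in the paper.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means over the pooled descriptors; returns (k, d) centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest centre, then re-estimate centres
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

def bow_histogram(descriptors, centres):
    """One image -> one view: histogram of its descriptors over the shared centres."""
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(d2.argmin(axis=1), minlength=len(centres))
    return counts / counts.sum()  # normalise: images have different patch counts

# toy data: three "images", each a variable-sized set of 8-D patch descriptors
rng = np.random.default_rng(1)
images = [rng.normal(size=(n, 8)) for n in (120, 200, 150)]
centres = kmeans(np.vstack(images), k=16)
H = np.stack([bow_histogram(im, centres) for im in images])
print(H.shape)  # (3, 16): one fixed-length feature vector per image
```

In the paper this is done once per patch type, giving the two 400-dimensional views of each image.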
            Motorbike   Bicycle   People   Car
SVM 1         94.05      91.58     91.58   87.95
SVM 2         91.15      91.15     90.57   86.21
KCCA + SVM    94.19      90.28     90.57   88.68
SVM 2K        94.34      93.47     92.74   90.13

Table 1: Results for 4 datasets showing test accuracy of the individual SVMs and SVM-2K.
Table 1 shows the test accuracies obtained for the different categories for the
individual SVMs and the SVM-2K. There is a clear improvement in performance of the
SVM-2K over the two individual SVMs in all four categories.
If we examine the structure of the optimisation, the restriction that the outputs of the two
linear functions be similar seems arbitrary, particularly for points that are far from the
margin or are misclassified. Intuitively it would appear better to take advantage of the
abilities of the different representations to better fit the data.
In order to understand this apparent contradiction we now consider a theoretical analysis
of the generalisation of the SVM-2K using the framework provided by Rademacher complexity bounds.
4 Background theory
We begin with the definitions required for Rademacher complexity, see for example Bartlett
and Mendelson [2] (see also [9] for an introductory exposition).
Definition 1. For a sample S = {x_1, \ldots, x_\ell} generated by a distribution D on a set
X and a real-valued function class F with a domain X, the empirical Rademacher
complexity of F is the random variable

    \hat{R}_\ell(F) = E_\sigma [ \sup_{f \in F} | (2/\ell) \sum_{i=1}^{\ell} \sigma_i f(x_i) | \,:\, x_1, \ldots, x_\ell ],

where \sigma = {\sigma_1, \ldots, \sigma_\ell} are independent uniform {\pm 1}-valued (Rademacher) random variables. The Rademacher complexity of F is

    R_\ell(F) = E_S[ \hat{R}_\ell(F) ] = E_{S\sigma} [ \sup_{f \in F} | (2/\ell) \sum_{i=1}^{\ell} \sigma_i f(x_i) | ].

We use E_D to denote expectation with respect to a distribution D and E_S when the distribution is the uniform (empirical) distribution on a sample S.

1 Available from http://www.pascal-network.org/challenges/VOC/voc/160305 VOCdata.tar.gz
Theorem 1. Fix \delta \in (0, 1) and let F be a class of functions mapping from S to [0, 1].
Let (x_i)_{i=1}^{\ell} be drawn independently according to a probability distribution D. Then with
probability at least 1 - \delta over random draws of samples of size \ell, every f \in F satisfies

    E_D[f(x)] \le E_S[f(x)] + R_\ell(F) + 3 \sqrt{ \ln(2/\delta) / (2\ell) }
              \le E_S[f(x)] + \hat{R}_\ell(F) + 3 \sqrt{ \ln(2/\delta) / (2\ell) }.
Given a training set S, the class of functions that we will primarily be considering is that of
linear functions with bounded norm:

    { x \mapsto \sum_{i=1}^{\ell} \alpha_i \kappa(x_i, x) : \alpha^T K \alpha \le B^2 } \subseteq { x \mapsto \langle w, \phi(x) \rangle : \|w\| \le B } = F_B,

where \phi is the feature mapping corresponding to the kernel \kappa and K is the corresponding
kernel matrix for the sample S. The following result bounds the Rademacher complexity
of linear function classes.
Theorem 2. [2] If \kappa : X \times X \to R is a kernel, and S = {x_1, \ldots, x_\ell} is a sample of
points from X, then the empirical Rademacher complexity of the class F_B satisfies

    \hat{R}_\ell(F_B) \le (2B/\ell) \sqrt{ \sum_{i=1}^{\ell} \kappa(x_i, x_i) } = (2B/\ell) \sqrt{ tr(K) }.
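Theorem 2 can be checked numerically. The sketch below is our own illustration, not part of the paper: for the ball \|w\| \le B the supremum in the definition has the closed form (2B/\ell)\sqrt{\sigma^T K \sigma}, so averaging over random Rademacher draws gives the empirical complexity directly, and by Jensen's inequality this average cannot exceed the trace bound.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, B = 200, 1.0
X = rng.normal(size=(ell, 5))
K = X @ X.T  # linear-kernel matrix on the sample

# Monte Carlo estimate of E_sigma[ sup_{||w|| <= B} (2/ell) sum_i sigma_i <w, x_i> ].
# For the norm ball the supremum is attained at w aligned with sum_i sigma_i x_i,
# giving the closed form (2B/ell) * sqrt(sigma' K sigma).
sigmas = rng.choice([-1.0, 1.0], size=(2000, ell))
quad_forms = np.einsum('si,ij,sj->s', sigmas, K, sigmas)  # sigma' K sigma per draw
estimate = (2 * B / ell) * np.sqrt(quad_forms).mean()

bound = (2 * B / ell) * np.sqrt(np.trace(K))  # Theorem 2
print(estimate, bound)  # the Monte Carlo estimate stays below the trace bound
```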
4.1 Analysing SVM-2K

For SVM-2K, the two feature sets derived from the same objects are (\phi_A(x_i))_{i=1}^{\ell} and
(\phi_B(x_i))_{i=1}^{\ell} respectively. We assume the notation and optimisation of SVM-2K given
in section 2, equation (1).
First observe that an application of Theorem 1 shows that

    E_S[ |f_A(x) - f_B(x)| ] = E_S[ | \langle \hat{w}_A, \phi_A(x) \rangle + \hat{b}_A - \langle \hat{w}_B, \phi_B(x) \rangle - \hat{b}_B | ]
        \le \epsilon + (1/\ell) \sum_{i=1}^{\ell} \eta_i + (2C/\ell) \sqrt{ tr(K_A) + tr(K_B) } + 3 \sqrt{ \ln(2/\delta) / (2\ell) } =: D
with probability at least 1 - \delta. We have assumed that \|w_A\|^2 + b_A^2 \le C^2 and \|w_B\|^2 + b_B^2 \le
C^2 for some prefixed C. Hence, the class of functions we are considering when applying
SVM-2K to this problem can be restricted to

    F_{C,D} = { f : x \mapsto 0.5 ( \sum_{i=1}^{\ell} ( g_i^A \kappa_A(x_i, x) + g_i^B \kappa_B(x_i, x) ) + b_A + b_B ),
                g^{A T} K_A g^A + b_A^2 \le C^2,  g^{B T} K_B g^B + b_B^2 \le C^2,  E_S[ |f_A(x) - f_B(x)| ] \le D }.

The class F_{C,D} is clearly closed under negation.
Applying the usual Rademacher techniques for margin bounds on generalisation we obtain
the following result.

Theorem 3. Fix \delta \in (0, 1) and let F_{C,D} be the class of functions described above. Let
(x_i)_{i=1}^{\ell} be drawn independently according to a probability distribution D. Then with probability at least 1 - \delta over random draws of samples of size \ell, every f \in F_{C,D} satisfies

    P_{(x,y) \sim D}( sign(f(x)) \ne y ) \le (0.5/\ell) \sum_{i=1}^{\ell} ( \xi_i^A + \xi_i^B ) + \hat{R}_\ell(F_{C,D}) + 3 \sqrt{ \ln(2/\delta) / (2\ell) }.

It therefore remains to compute the empirical Rademacher complexity of F_{C,D}, which
is the critical discriminator between the bounds for the individual SVMs and that of the
SVM-2K.
4.2 Empirical Rademacher complexity of F_{C,D}

We now define an auxiliary function of two weight vectors w_A and w_B,

    D(w_A, w_B) := E_D[ | \langle w_A, \phi_A(x) \rangle + b_A - \langle w_B, \phi_B(x) \rangle - b_B | ].

With this notation we can consider computing the Rademacher complexity of the class
F_{C,D}:

    \hat{R}_\ell(F_{C,D}) = E_\sigma [ \sup_{f \in F_{C,D}} (2/\ell) \sum_{i=1}^{\ell} \sigma_i f(x_i) ]
        = E_\sigma [ \sup_{\|w_A\| \le C, \|w_B\| \le C, D(w_A, w_B) \le D} (1/\ell) \sum_{i=1}^{\ell} \sigma_i ( \langle w_A, \phi_A(x_i) \rangle + b_A + \langle w_B, \phi_B(x_i) \rangle + b_B ) ].
Our next observation follows from a reversed version of the basic Rademacher complexity
theorem, reworked to exchange the roles of the empirical and true expectations:

Theorem 4. Fix \delta \in (0, 1) and let F be a class of functions mapping from S to [0, 1].
Let (x_i)_{i=1}^{\ell} be drawn independently according to a probability distribution D. Then with
probability at least 1 - \delta over random draws of samples of size \ell, every f \in F satisfies

    E_S[f(x)] \le E_D[f(x)] + R_\ell(F) + 3 \sqrt{ \ln(2/\delta) / (2\ell) }
              \le E_D[f(x)] + \hat{R}_\ell(F) + 3 \sqrt{ \ln(2/\delta) / (2\ell) }.
The proof tracks that of Theorem 1 but is omitted through lack of space.
For weight vectors w_A and w_B satisfying D(w_A, w_B) \le D, an application of Theorem 4
shows that with probability at least 1 - \delta we have

    \hat{D}(w_A, w_B) := E_S[ | \langle w_A, \phi_A(x) \rangle + b_A - \langle w_B, \phi_B(x) \rangle - b_B | ]
        \le D + (2C/\ell) \sqrt{ tr(K_A) + tr(K_B) } + 3 \sqrt{ \ln(2/\delta) / (2\ell) }
        \le \epsilon + (1/\ell) \sum_{i=1}^{\ell} \eta_i + (4C/\ell) \sqrt{ tr(K_A) + tr(K_B) } + 6 \sqrt{ \ln(2/\delta) / (2\ell) } =: \hat{D}.
We now return to bounding the Rademacher complexity of F_{C,D}. The above result shows
that with probability greater than 1 - \delta,

    \hat{R}_\ell(F_{C,D}) \le E_\sigma [ \sup_{\|w_A\| \le C, \|w_B\| \le C, \hat{D}(w_A, w_B) \le \hat{D}} (1/\ell) \sum_{i=1}^{\ell} \sigma_i ( \langle w_A, \phi_A(x_i) \rangle + b_A + \langle w_B, \phi_B(x_i) \rangle + b_B ) ].

First note that the expression in square brackets is concentrated under the uniform distribution of Rademacher variables. Hence, we can estimate the complexity for a fixed instantiation \hat{\sigma} of the Rademacher variables \sigma. We must now find the values of w_A and w_B that
maximise the expression

    (1/\ell) [ \langle w_A, \sum_{i=1}^{\ell} \hat{\sigma}_i \phi_A(x_i) \rangle + b_A \sum_{i=1}^{\ell} \hat{\sigma}_i + \langle w_B, \sum_{i=1}^{\ell} \hat{\sigma}_i \phi_B(x_i) \rangle + b_B \sum_{i=1}^{\ell} \hat{\sigma}_i ]
        = (1/\ell) ( \hat{\sigma}^T K_A g^A + \hat{\sigma}^T K_B g^B + (b_A + b_B) \hat{\sigma}^T 1 ),

subject to the constraints g^{A T} K_A g^A \le C^2, g^{B T} K_B g^B \le C^2, and

    (1/\ell) 1^T abs( K_A g^A - K_B g^B + (b_A - b_B) 1 ) \le \hat{D},

where 1 is the all-ones vector and abs(u) is the vector obtained by applying the abs function
to u component-wise. The resulting value of the objective function is the estimate of the
Rademacher complexity. This is the optimisation solved in the brief experiments described
below.
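A rough numerical sketch of this optimisation on synthetic data, using a generic SLSQP solver (our own illustration, not the authors' code). Two caveats: the one-norm constraint is nonsmooth, so a dedicated LP/QP reformulation would be preferable in practice; and we keep the bias terms inside the quadratic constraints, as in the definition of F_{C,D}, so that the problem is bounded. For an actual complexity estimate the objective value would be averaged over several draws of the Rademacher vector.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
ell = 40
XA, XB = rng.normal(size=(ell, 3)), rng.normal(size=(ell, 3))
KA, KB = XA @ XA.T, XB @ XB.T              # the two kernel matrices
sigma = rng.choice([-1.0, 1.0], size=ell)  # one fixed Rademacher draw
C, Dhat = 1.0, 0.5                         # illustrative values only

def unpack(z):  # z packs (g_A, g_B, b_A, b_B)
    return z[:ell], z[ell:2 * ell], z[2 * ell], z[2 * ell + 1]

def neg_objective(z):
    gA, gB, bA, bB = unpack(z)
    return -(sigma @ KA @ gA + sigma @ KB @ gB + (bA + bB) * sigma.sum()) / ell

def cap_A(z):  # g_A' K_A g_A + b_A^2 <= C^2
    gA, _, bA, _ = unpack(z)
    return C ** 2 - (gA @ KA @ gA + bA ** 2)

def cap_B(z):  # g_B' K_B g_B + b_B^2 <= C^2
    _, gB, _, bB = unpack(z)
    return C ** 2 - (gB @ KB @ gB + bB ** 2)

def coupling(z):  # (1/ell) 1' abs(K_A g_A - K_B g_B + (b_A - b_B) 1) <= Dhat
    gA, gB, bA, bB = unpack(z)
    return Dhat - np.abs(KA @ gA - KB @ gB + (bA - bB)).sum() / ell

cons = [{'type': 'ineq', 'fun': f} for f in (cap_A, cap_B, coupling)]
res = minimize(neg_objective, np.zeros(2 * ell + 2), method='SLSQP', constraints=cons)
rademacher_term = -res.fun  # objective value for this single draw of sigma
print(rademacher_term)
```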
4.3 Experiments with Rademacher complexity

We computed the Rademacher complexity for the problems considered in the experimental
section above. We wished to verify that the Rademacher complexity of the space F_{C,D},
where C and D are determined by applying the SVM-2K, is indeed significantly lower
than that obtained for the SVMs in each space individually.
            Motorbike   Bicycle   People   Car
SVM 1         94.05      91.58     91.58   87.95
Rad 1          1.65       0.93      0.91    1.60
SVM 2         91.15      91.15     90.57   86.21
Rad 2          1.72       1.48      0.87    1.64
SVM 2K        94.34      93.47     92.74   90.13
Rad 2K         1.26       1.28      0.82    1.26

Table 2: Results for 4 datasets showing test accuracy and Rademacher complexity (Rad) of
the individual SVMs and SVM-2K.
Table 2 shows the results for the motorbike, bicycle, people and car datasets. We show
the Rademacher complexities for the individual SVMs and for the SVM-2K along with
the generalisation results already given in Table 1. In the case of SVM-2K we sampled
the Rademacher variables 10 times and give the corresponding standard deviation. As predicted the Rademacher complexity is significantly smaller for SVM-2K, hence confirming
the intuition that led to the introduction of the approach, namely that the complexity of the
class is reduced by restricting the weight vectors to align on the training data. Provided
both representations contain the necessary data we can therefore expect an improvement in
generalisation as observed in the reported experiments.
5 Conclusions
With the plethora of data now being collected in a wide range of fields there is frequently
the luxury of having two views of the same phenomenon. The simplest example is paired
corpora of documents in different languages, but equally we can think of examples from
bioinformatics, machine vision, etc. Frequently it is also reasonable to assume that both
views contain all of the relevant information required for a classification task.
We have shown that in such cases it is possible to leverage the correlation between the two views to improve classification accuracy. This has been demonstrated in
experiments with a machine vision task. Furthermore, we have undertaken a theoretical
analysis to illuminate the source and extent of the advantage that can be obtained, showing in the cases considered a significant reduction in the Rademacher complexity of the
corresponding function classes.
References

[1] Francis R. Bach and Michael I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[3] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In XRCE Research Reports, XEROX. The 8th European Conference on Computer Vision (ECCV), Prague, 2004.
[4] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[5] David Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: an overview with application to learning methods. Neural Computation, 16:2639-2664, 2004.
[6] Yaoyong Li and John Shawe-Taylor. Using KCCA for Japanese-English cross-language information retrieval and classification. To appear in Journal of Intelligent Information Systems, 2005.
[7] S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems 11, 1998.
[8] R. Rosipal and L. J. Trejo. Kernel partial least squares regression in reproducing kernel Hilbert space. Journal of Machine Learning Research, 2:97-123, 2001.
[9] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK, 2004.
Operational Fault Tolerance
of CMAC Networks
Michael J. Carter
Franklin J. Rudolph
Adam J. Nucci
Intelligent Structures Group
Department of Electrical and Computer Engineering
University of New Hampshire
Durham, NH 03824-3591
ABSTRACT
The performance sensitivity of Albus' CMAC network was studied for
the scenario in which faults are introduced into the adjustable weights
after training has been accomplished. It was found that fault sensitivity
was reduced with increased generalization when "loss of weight" faults
were considered, but sensitivity was increased for "saturated weight"
faults.
1 INTRODUCTION
Fault-tolerance is often cited as an inherent property of neural networks, and is thought by
many to be a natural consequence of "massively parallel" computational architectures.
Numerous anecdotal reports of fault-tolerance experiments, primarily in pattern
classification tasks, abound in the literature. However, there has been surprisingly little
rigorous investigation of the fault-tolerance properties of various network architectures in
other application areas. In this paper we investigate the fault-tolerance of the CMAC
(Cerebellar Model Arithmetic Computer) network [Albus 1975] in a systematic manner.
CMAC networks have attracted much recent attention because of their successful
application in robotic manipulator control [Ersu 1984, Miller 1986, Lane 1988]. Since
fault-tolerance is a key concern in critical control tasks, there is added impetus to study
this aspect of CMAC performance. In particular. we examined the effect on network
performance of faults introduced into the adjustable weight layer after training has been
accomplished in a fault-free environment The degradation of approximation error due to
faults was studied for the task of learning simple real functions of a single variable. The
influence of receptive field width and total CMAC memory size on the fault sensitivity of
the network was evaluated by means of simulation.
2 THE CMAC NETWORK ARCHITECTURE
The CMAC network shown in Figure 1 implements a form of distributed table lookup.
It consists of two parts: 1) an address generator module. and 2) a layer of adjustable
weights. The address generator is a fixed algorithmic transformation from the input space
to the space of weight addresses. This transformation has two important properties: 1)
Only a fixed number C of weights are activated in response to any particular input. and
more importantly, only these weights are adjusted during training; 2) It is locally
generalizing, in the sense that any two input points separated by Euclidean distance less
than some threshold produce activated weight subsets that are close in Hamming distance,
i.e. the two weight subsets have many weights in common. Input points that are
separated by more than the threshold distance produce non-overlapping activated weight
subsets. The first property gives rise to the extremely fast training times noted by all
CMAC investigators. The number of weights activated by any input is referred to as the
"generalization parameter". and is typically a small number ranging from 4 to 128 in
practical applications [Miller 1986]. Only the activated weights are summed to form the
response to the current input. A simple delta rule adjustment procedure is used to update
the activated weights in response to the presentation of an input-desired output exemplar
pair. Note that there is no adjustment of the address generator transformation during
learning. and indeed, there are no "weights" available for adjustment in the address
generator. It should also be noted that the hash-coded mapping is in general necessary
because there are many more resolution cells in the input space than there are unique
finite combinations of weights in the physical memory. As a result, the local
generalization property will be disturbed because some distant inputs share common
weight addresses in their activated weight subsets due to hashing collisions.
While the CMAC network readily lends itself to the task of learning and mimicking
multidimensional nonlinear transformations. the investigation of network fault-tolerance
in this setting is daunting! For reasons discussed in the next section. we opted to study
CMAC fault-tolerance for simple one-dimensional input and output spaces without the
use of the hash-coded mapping.
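The architecture described above can be sketched for exactly this one-dimensional, non-hash-coded case. This is a minimal illustrative implementation, not the authors' code; the class name, learning rate, and quantisation details are our own choices.

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC: C overlapping weights per input, no hash coding."""

    def __init__(self, n_weights=250, C=8, x_min=0.0, x_max=1.0, beta=0.5):
        self.w = np.zeros(n_weights)      # the adjustable weight layer
        self.C = C                        # generalization parameter
        self.x_min, self.x_max = x_min, x_max
        self.n_cells = n_weights - C + 1  # input resolution cells
        self.beta = beta                  # delta-rule learning rate

    def active(self, x):
        """Addresses of the C weights activated by x. Nearby inputs share many
        addresses (local generalization); distant inputs share none."""
        frac = (x - self.x_min) / (self.x_max - self.x_min)
        cell = min(max(int(frac * (self.n_cells - 1)), 0), self.n_cells - 1)
        return np.arange(cell, cell + self.C)

    def predict(self, x):
        return self.w[self.active(x)].sum()  # response = sum of activated weights

    def train(self, x, y):
        idx = self.active(x)
        # delta rule: spread the output error equally over the activated weights
        self.w[idx] += self.beta * (y - self.w[idx].sum()) / self.C

# learn one cycle of a sinusoid, as in the experiments reported below
net = CMAC1D()
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)
for _ in range(30):
    for x in rng.permutation(grid):
        net.train(x, np.sin(2 * np.pi * x))
errs = [net.predict(x) - np.sin(2 * np.pi * x) for x in grid]
rms = float(np.sqrt(np.mean(np.square(errs))))
print(rms)  # small residual RMS approximation error after training
```

Only the C activated weights are read or adjusted on any presentation, which is the source of the fast training noted in the text.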
3 FAULT-TOLERANCE EXPERIMENTS
We distinguish between two types of fault-tolerance in neural networks [Carter 1988]:
operational fault-tolerance and learning fault-tolerance. Operational fault-tolerance deals
with the sensitivity of network performance to faults introduced after learning has been
accomplished in a fault-free environment. Learning fault-tolerance refers to the sensitivity
of network performance to faults (either permanent or transient) which are present during
training. It should be noted that the term fault-tolerance as used here applies only to
faults that represent perturbations in network parameters or topology, and does not refer
to noisy or censored input data. Indeed, we believe that the latter usage is both
inappropriate and inconsistent with conventional usage in the computer fault-tolerance
community.
3.1 EXPERIMENT DESIGN PHILOSOPHY
Since the CMAC network is widely used for learning nonlinear functions (e.g. the motor
drive voltage to joint angle transformation for a multiple degree-of-freedom robotic
manipulator), the obvious measure of network performance is function approximation
error. The sensitivity of approximation error to faults is the subject of this paper. There
are several types of faults that are of concern in the CMAC architecture. Faults that occur
in the address generator module may ultimately have the most severe impact on
approximation error since the selection of incorrect weight addresses will likely produce a
bad response. On the other hand, since the address generator is an algorithm rather than a
true network of simple computational units, the fault-tolerance of any serial processor
implementation of the algorithm will be difficult to study. For this reason we initially
elected to study the fault sensitivity of the adjustable weight layer only.
The choice of fault types and fault placement strategies for neural network fault tolerance
studies is not at all straightforward. Unlike classical fault-tolerance studies in digital
systems which use "stuck-at-zero" and "stuck-at-one" faults, neural networks which use
analog or mixed analog/digital implementations may suffer from a host of fault types. In
order to make some progress, and to study the fault tolerance of the CMAC network at
the architectural level rather than at the device level, we opted for a variation on the
"stuck-at" fault model of digital systems. Since this study was concerned only with the
adjustable weight layer, and since we assumed that weight storage is most likely to be
digital (though this will certainly change as good analog memory technologies are
developed), we considered two fault models which are admittedly severe. The first is a
"loss of weight" fault which results in the selected weight being set to zero, while the
second is a "saturated weight" fault which might correspond to the situation of a
stuck-at-one fault in the most significant bit of a single weight register.
The question of fault placement is also problematic. In the absence of a specific circuit
level implementation of the network, it is difficult to postulate a model for fault
distribution. We adopted a somewhat perverse outlook in the hope of characterizing the
network's fault tolerance under a worst-case fault placement strategy. The insight gained
will still prove to be valuable in more benign fault placement tests (e.g. random fault
placement), and in addition, if one can devise network modifications which yield good
fault-tolerance in this extreme case, there is hope of still better performance in more
typical instances of circuit failure. When placing "loss of weight" faults, we attacked
large magnitude weight locations fast, and continued to add more such faults to locations
ranked in descending order of weight magnitude. Likewise, when placing saturated weight
faults we attacked small magnitude weight locations first, and successive faults were
placed in locations ordered by ascending weight magnitude. Since the activated weights
are simply summed to form a response in CMAC, faults of both types create an error in
the response which is equal to the weight change in the faulted location. Hence, our
strategy was designed to produce the maximum output error for a given number of faults.
In placing faults of either type, however, we did not place two faults within a single
activated weight subset. Our strategy was thus not an absolute worst-case strategy, but
was still more stressful than a purely random fault placement strategy. Finally, we did
not mix fault types in any single experiment.
The fault tolerance experiments presented in the next section all had the same general
structure. The network under study was trained to reproduce (to a specified level of
approximation error) a real function of a single variable, y=f(x), based upon presentation
of (x,y) exemplar pairs. Faults of the types described previously were then introduced,
and the resulting degradation in approximation error was logged versus the number of
faults. Many such experiments were conducted with varying CMAC memory size and
generalization parameter while learning the same exemplar function. We considered
smoothly varying functions (sinusoids of varying spatial frequency) and discontinuous
functions (step functions) on a bounded interval.
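The experimental loop of this section can be sketched as follows. The sketch is self-contained (it re-creates a minimal 1-D CMAC rather than assuming any particular implementation) and is our own illustration, not the authors' code. One simplification relative to the protocol above: it does not enforce the rule of at most one fault per activated weight subset.

```python
import numpy as np

# Train a minimal 1-D CMAC on a sinusoid, then inject worst-case "loss of
# weight" faults (zero the largest-magnitude weights first) and log RMS error.
N, C = 250, 8                       # memory size and generalization parameter
n_cells = N - C + 1
w = np.zeros(N)

def active(x):                      # C consecutive addresses for x in [0, 1]
    cell = min(int(x * (n_cells - 1)), n_cells - 1)
    return slice(cell, cell + C)

xs = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * xs)
rng = np.random.default_rng(0)
for _ in range(30):                 # fault-free delta-rule training
    for i in rng.permutation(len(xs)):
        idx = active(xs[i])
        w[idx] += 0.5 * (target[i] - w[idx].sum()) / C

def rms_error(weights):
    preds = np.array([weights[active(x)].sum() for x in xs])
    return float(np.sqrt(np.mean((preds - target) ** 2)))

baseline = rms_error(w)
order = np.argsort(-np.abs(w))      # attack large-magnitude weights first
for n_faults in (1, 2, 4, 8):
    wf = w.copy()
    wf[order[:n_faults]] = 0.0      # severe "loss of weight" (stuck-at-zero) faults
    print(n_faults, rms_error(wf))  # degradation grows as more weights are attacked
```

The same loop with `wf[order] = w.max()`-style saturation at small-magnitude locations would give the companion "saturated weight" experiment.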
3.2 EXPERIMENT RESULTS AND DISCUSSION
In this section we present the results of experiments in which the function to be learned is
held fixed, while the generalization parameter of the CMAC network to be tested is
varied. The total number of weights (also referred to here as memory locations) is the
same in each batch of experiments. Memory sizes of 50, 250, and 1000 were
investigated, but only the results for the case N=250 are presented here. They exemplify
the trends observed for all memory sizes.
Figure 2 shows the dependence of RMS (root mean square) approximation error on the
number of loss-of-weight faults injected for generalization parameter values C=4, 8, 16.
The task was that of reproducing a single cycle of a sinusoidal function on the input
interval. Note that approximation error was diminished with increasing generalization at
any fault severity level. For saturated weight faults, however, approximation error
increased with increasing generalization! The reason for this contrasting behavior
becomes clear upon examination of Figure 3. Observe also in Figure 2 that the increase
in RMS error due to the introduction of a single fault can be as much as an order of
magnitude. This is somewhat deceptive since the scale of the error is rather small
(typically 10^-3 or so), and so it may not seem of great consequence. However, as one
may note in Figure 3, the effect of a single fault is highly localized, so RMS
approximation error may be a poor choice of performance measure in selected
applications. In particular, saturated weight faults in nominally small weight magnitude
locations create a large relative response error, and this may be devastating in real-time
control applications. Loss-of-weight faults are more benign, and their impact may be
diluted by increasing generalization. The penalty for doing so, however, is increased
sensitivity to saturated weight faults because larger regions of the network mapping are
affected by a single fault
Figure 4 displays some of the results of fault-tolerance tests with a discontinuous
exemplar function. Note the large variation in stored weight values necessary to
reproduce the step function. When a large magnitude weight needed to form the step
transition was faulted, the result was a double step (Figure 4(b)) or a shifted transition
point (Figure 4(c)). The extent of the fault impact was diminished with decreasing
generalization. Since pattern classification tasks are equivalent to learning a
discontinuous function over the input feature space, this finding suggests that improved
fault-tolerance in such tasks might be obtained by reducing the generalization parameter
C. This would limit the shifting of pattern class boundaries in the presence of weight
faults. Preliminary experiments, however, also showed that learning of discontinuous
exemplar functions proceeded much more slowly with small values of the generalization
parameter.
4 CONCLUSIONS AND OPEN QUESTIONS
The CMAC network is well-suited to applications that demand fast learning of unknown
multidimensional, static mappings (such as those arising in nonlinear control and signal
processing systems). The results of the preliminary investigations reported here suggest
that the fault-tolerance of conventional CMAC networks may not be as great as one
might hope on the basis of anecdotal evidence in the prior literature with other network
architectures. Network fault sensitivity does not seem to be uniform, and the location of
particularly sensitive weights is very much dependent on the exemplar function to be
learned. Furthermore, the obvious fault-tolerance enhancement technique of increasing
generalization (i.e. distributing the response computation over more weight locations) has
the undesirable effect of increasing sensitivity to saturated weight faults. While the
local generalization feature of CMAC has the desirable attribute of limiting the region of
fault impact, it suggests that global approximation error measures may be misleading.
A low value of RMS error degradation may in fact mask a much more severe response
error over a small region of the mapping. Finally, one must be cautious in making
assessments of the fault-tolerance of a fixed network on the basis of tests using a single
mapping. Discontinuous exemplar functions produce stored weight distributions which
are much more fault-sensitive than those associated with smoothly varying functions, and
such functions are clearly of interest in pattern classification.
Many important open questions remain concerning the fault-tolerance properties of the
CMAC network. The effect of faults on the address generator module has yet to be
determined. Collisions in the hash-coded mapping effectively propagate weight faults to
remote regions of the input space, and the impact of this phenomenon on overall
fault-tolerance has not been assessed. Much more work is needed on the role that
exemplar function smoothness plays in detennining the fault-tolerance of a fIxed topology
network.
Acknowledgements
The authors would like to thank Tom Miller, Fil Glanz, Gordon Kraft, and Edgar An for
many helpful discussions on the CMAC network architecture. This work was supported
in part by an Analog Devices Career Development Professorship and by a General Electric
Foundation Young Faculty Grant awarded to MJ. Carter.
References
J.S. Albus. (1975) "A new approach to manipulator control: the Cerebellar Model
Articulation Controller (CMAC)," Trans. ASME, J. Dynamic Syst., Meas., Contr. 97:
220-227.
MJ. Carter. (1988) "The illusion of fault-tolerance in neural networks for pattern
recognition and signal processing," Proc. Technical Session on Fault-Tolerant Integrated
Systems. Durham, NH: University of New Hampshire.
E. Ersu and J. Militzer. (1984) "Real-time implementation of an associative
memory-based learning control scheme for non-linear multivariable processes," Proc. 1st
Measurement and Control Symposium on Applications of Multivariable Systems
Techniques; 109-119.
S. Lane, D. Handelman, and J. Gelfand. (1988) "A neural network computational map
approach to reflexive motor control," Proc. IEEE Intelligent Control Conf. Arlington,
VA.
W.T. Miller. (1986) "A nonlinear learning controller for robotic manipulators," Proc.
SPIE: Intelligent Robots and Computer Vision 726; 416-423.
[Figure 1: The CMAC network: the address generator maps input x to C activated weight addresses; the activated weights are summed to form the response y.]
Figure 2: Sinusoid Approximation Error vs. Number of "Loss-of-Weight" Faults
Figure 3: Network Response and Stored Weight Values. a) single lost weight,
generalization C=4; b) single lost weight, C=16; c) single saturated weight, C=16.
Operational Fault Tolerance of CMAC Networks
Figure 4: Network Response and Stored Weight Values. a) no faults, transition at
location 125, C=8; b) single lost weight, C=16; c) single saturated weight, C=16.
Saliency Based on Information Maximization
Neil D.B. Bruce and John K. Tsotsos
Department of Computer Science and Centre for Vision Research
York University, Toronto, ON, M2N 5X8
{neil,tsotsos}@cs.yorku.ca
Abstract
A model of bottom-up overt attention is proposed based on the principle
of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in
a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the
proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Resu lts
on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt
attention as compared with existing efforts.
1 Introduction
There has long been interest in the nature of eye movements and fixation behavior following early studies by Buswell [1] and Yarbus [2]. However, a complete description of
the mechanisms underlying these peculiar fixation patterns remains elusive. This is further
complicated by the fact that task demands and contextual knowledge factor heavily in how
sampling of visual content proceeds.
Current bottom-up models of attention posit that saliency is the impetus for selection of
fixation points. Each model differs in its definition of saliency. In perhaps the most popular
model of bottom-up attention, saliency is based on centre-surround contrast of units modeled on known properties of primary visual cortical cells [3]. In other efforts, saliency is
defined by more ad hoc quantities having less connection to biology [4]. In this paper, we
explore the notion that information is the driving force behind attentive sampling.
The application of information theory in this context is not in itself novel. There exist
several previous efforts that define saliency based on Shannon entropy of image content
defined on a local neighborhood [5, 6, 7, 8]. The model presented in this work is based on
the closely related quantity of self-information [9]. In section 2.2 we discuss differences
between entropy and self-information in this context, including why self-information may
present a more appropriate metric than entropy in this domain. That said, contributions of
this paper are as follows:
1. A bottom-up model of overt attention with selection based on the self-information
of local image content.
2. A qualitative and quantitative comparison of predictions of the model with human
eye tracking data, contrasted against the model of Itti and Koch [3].
3. Demonstration that the model is neurally plausible via implementation based on a
neural circuit resembling circuitry involved in early visual processing in primates.
4. Discussion of how the proposal generalizes to address issues that deny explanation
by existing saliency based attention models.
2 The Proposed Saliency Measure
There exists much evidence indicating that the primate visual system is built on the principle of establishing a sparse representation of image statistics. In the most prominent of
such studies, it was demonstrated that learning a sparse code for natural image statistics
results in the emergence of simple-cell receptive fields similar to those appearing in the
primary visual cortex of primates [10, 11]. The apparent benefit of such a representation
comes from the fact that a sparse representation allows certain independence assumptions
with regard to neural firing. This issue becomes important in evaluating the likelihood of a
set of local image statistics and is elaborated on later in this section.
In this paper, saliency is determined by quantifying the self-information of each local image patch. Even for a very small image patch, the probability distribution resides in a very
high dimensional space. There is insufficient data in a single image to produce a reasonable estimate of the probability distribution. For this reason, a representation based on
independent components is employed for the independence assumption it affords. ICA is
performed on a large sample of 7x7 RGB patches drawn from natural images to determine
a suitable basis. For a given image, an estimate of the distribution of each basis coefficient
is learned across the entire image through non-parametric density estimation. The probability of observing the RGB values corresponding to a patch centred at any image location
may then be evaluated by independently considering the likelihood of each corresponding
basis coefficient. The product of such likelihoods yields the joint likelihood of the entire
set of basis coefficients. Given the basis determined by ICA, the preceding computation
may be realized entirely in the context of a biologically plausible neural circuit. The overall architecture is depicted in figure 1. Details of each of the aforesaid model components
including the details of the neural circuit are as follows:
Projection into independent component space provides, for each local neighborhood of the
image, a vector w consisting of N variables w_i with values v_i. Each w_i specifies the contribution of a particular basis function to the representation of the local neighborhood. As
mentioned, these basis functions, learned from statistical regularities observed in a large set
of natural images, show remarkable similarity to V1 cells [10, 11]. The ICA projection then
allows a representation w in which the components w_i are as independent as possible. For
further details on the ICA projection of local image statistics see [12]. In this paper, we propose that salience may be defined based on a strategy for maximum information sampling.
In particular, Shannon's self-information measure [9], −log(p(x)), applied to the joint likelihood of statistics in a local neighborhood described by w, provides an appropriate transformation between probability and the degree of information inherent in the local statistics.
It is in computing the observation likelihood that a sparse representation is instrumental:
Consider the probability density function p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n), which quantifies the likelihood of observing the local statistics with values v_1, ..., v_n within a particular
context. An appropriate context may include a larger area encompassing the local neighbourhood described by w, or the entire scene in question. The presumed independence of the
ICA decomposition means that p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n) = ∏_{i=1}^{n} p(w_i = v_i).
Thus, a sparse representation allows the estimation of the n-dimensional space described
by w to be derived from n one-dimensional probability density functions. Evaluating
p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n) requires considering the distribution of values taken on
by each w_i in a more global context. In practice, this might be derived on the basis of a
nonparametric or histogram density estimate. In the section that follows, we demonstrate
that an operation equivalent to a non-parametric density estimate may be achieved using a
suitable neural circuit.
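As a concrete illustration of this factorized evaluation, the sketch below computes −log p(w) for each patch as a sum of per-coefficient log-densities. It is a minimal, assumption-laden sketch rather than the authors' implementation: the histogram density estimate, the array shapes, and the `self_information` name are all illustrative.

```python
import numpy as np

def self_information(coeffs, n_bins=100, eps=1e-12):
    """Saliency of each patch as -log p(w), with p(w) factored over
    (assumed independent) ICA coefficients, each estimated by a
    histogram over the whole-image context."""
    n_patches, n_components = coeffs.shape
    log_p = np.zeros(n_patches)
    for i in range(n_components):
        v = coeffs[:, i]
        density, edges = np.histogram(v, bins=n_bins, density=True)
        idx = np.clip(np.digitize(v, edges) - 1, 0, n_bins - 1)
        log_p += np.log(density[idx] + eps)   # log p(w_i = v_i)
    return -log_p                             # -log prod_i p(w_i = v_i)

rng = np.random.default_rng(0)
coeffs = rng.standard_normal((500, 8))   # 500 patches, 8 ICA coefficients
coeffs[0] += 6.0                         # one patch with unusual statistics
sal = self_information(coeffs)
print(sal[0] > np.median(sal))           # True: the unusual patch is more informative
```

Note how rarity is measured against the image's own statistics: a patch is salient not in absolute terms but relative to the context over which the histograms are estimated.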
2.1 Likelihood Estimation in a Neural Circuit
In the following formulation, we assume an estimate of the likelihood of the components
of w based on a Gaussian kernel density estimate. Any other choice of kernel may be
substituted, with a Gaussian window chosen only for its common use in density estimation
and without loss of generality.
Let w_{i,j,k} denote the set of independent coefficients based on the neighborhood centered at
j, k. An estimate of p(w_{i,j,k} = v_{i,j,k}) based on a Gaussian window is given by:

p(w_{i,j,k} = v_{i,j,k}) = Σ_{s,t ∈ Ψ} ω(s,t) (1/(σ√(2π))) exp(−(v_{i,j,k} − v_{i,s,t})² / (2σ²))    (1)

with Σ_{s,t} ω(s,t) = 1, where Ψ is the context on which the probability estimate of the coefficients of w is based. ω(s,t) describes the degree to which the coefficient w at coordinates
evident that this operation may equivalently be implemented by the neural circuit depicted
in figure 2. Figure 2 demonstrates only coefficients derived from a horizontal cross-section.
The two dimensional case is analogous with parameters varying in i, j, and k dimensions.
K consists of the kernel function employed for density estimation. In our case this is a
Gaussian of the form (1/(σ√(2π))) e^{−x²/(2σ²)}. ω(s,t) is encoded based on the weight of connections to K. As x = v_{i,j,k} − v_{i,s,t}, the output of this operation encodes the impact of the
kernel function with mean v_{i,s,t} on the value of p(w_{i,j,k} = v_{i,j,k}). Coefficients at the input
layer correspond to coefficients of v. The logarithmic operator at the final stage might also
be placed before the product on each incoming connection, with the product then becoming a summation. It is interesting to note that the structure of this circuit at the level of
within feature spatial competition is remarkably similar to the standard feedforward model
of lateral inhibition, a ubiquitous operation along the visual pathways thought to play a
chief role in attentional processing [14]. The similarity between independent components
and VI cells, in conjunction with the aforementioned consideration lends credibility to the
proposal that information may contribute to driving overt attentional selection.
One aspect lacking from the preceding description is that the saliency map fails to take into
account the dropoff in visual acuity moving peripherally from the fovea. In some instances
the maximum information accommodating for visual acuity may correspond to the center
of a cluster of salient items, rather than centered on one such item. For this reason, the
resulting saliency map is convolved with a Gaussian with parameters chosen to correspond
approximately to the drop off in visual acuity observed in the human visual system.
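A minimal numerical sketch of the kernel estimate of equation 1 for a single coefficient (assuming uniform weights ω(s,t) = 1/|Ψ| and a unit-variance Gaussian kernel; the names and numbers are illustrative):

```python
import numpy as np

def kde_likelihood(v_query, v_context, sigma=1.0):
    """Gaussian kernel density estimate of p(w = v_query), with the
    kernel centred on each value the coefficient takes over the
    context Psi, and uniform weights omega(s, t) = 1/|Psi|."""
    x = v_query - v_context                              # v_ijk - v_ist
    k = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return k.mean()

context = np.array([0.0, 0.1, -0.2, 0.05, 3.0])
p_typical = kde_likelihood(0.0, context)   # value close to most of Psi
p_outlier = kde_likelihood(3.0, context)   # value far from most of Psi
print(p_typical > p_outlier)               # True: typical values are likelier
```

The self-information −log p then assigns the outlying value the higher saliency, which is the behaviour the circuit of figure 2 realizes with its kernel units and final logarithm.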
2.2 Self-Information versus Entropy
It is important to distinguish between self-information and entropy since these terms are
often confused. The difference is subtle but important on two fronts. The first consideration
lies in the expected behavior in popout paradigms and the second in the neural circuitry
involved.
Let X = [x_1, x_2, ..., x_n] denote a vector of RGB values corresponding to image patch X,
and D a probability density function describing the distribution of some feature set over
X. For example, D might correspond to a histogram estimate of intensity values within
X or the relative contribution of different orientations within a local neighborhood situated on the boundary of an object silhouette [6]. Assuming an estimate of D based on N
Figure 1: The framework that achieves the desired information measure. Shown is the computation corresponding to three horizontally adjacent neighbourhoods with flow through
the network indicated by the orange, purple, and cyan windows and connections. The
connections shown facilitate computation of the information measure corresponding to the
pixel centered in the purple window. The network architecture produces this measure on
the basis of evaluating the probability of these coefficients with consideration to the values
of such coefficients in neighbouring regions.
bins, the entropy of D is given by: −Σ_{i=1}^{N} D_i log(D_i). In this example, entropy characterizes the extent to which the feature(s) characterized by D are uniformly distributed on X.
Self-information in the proposed saliency measure is given by −log(p(X)). That is, self-information characterizes the raw likelihood of the specific n-dimensional vector of RGB
values given by X. p(X) in this case is based on observing a number of n-dimensional
feature vectors based on patches drawn from the area surrounding X. Thus, p(X) characterizes the raw likelihood of observing X based on its surround, and −log(p(X)) becomes
closer to a measure of local contrast, whereas entropy as defined in the usual manner is
closer to a measure of local activity. The importance of this distinction is evident in considering figure 3. Figure 3 depicts a variety of candles of varying orientation and color.
There is a tendency to fixate the empty region on the left, which is the location of lowest
entropy in the image. In contrast, this region receives the highest confidence from the algorithm proposed in this paper as it is highly informative in the context of this image. In
classic popout experiments, a vertical line among horizontal lines presents a highly salient
target. The same vertical line among many lines of random orientations is not, although the
entropy associated with the second scenario is much greater.
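The contrast can be made concrete with a toy orientation histogram; the bin counts below are hypothetical, chosen only to mirror the popout example:

```python
import numpy as np

def entropy(counts):
    d = counts / counts.sum()
    return -(d * np.log(d + 1e-12)).sum()

# Scene A: 179 horizontal lines and 1 vertical line (classic popout).
scene_a = np.array([179.0, 1.0])
# Scene B: 180 lines spread evenly over 9 orientation bins,
# one bin of which contains the vertical target.
scene_b = np.full(9, 20.0)

# Entropy (activity) is far higher in scene B ...
print(entropy(scene_b) > entropy(scene_a))            # True

# ... yet the vertical target's self-information is higher in scene A,
# where p(vertical) is tiny -- matching its perceptual popout.
p_vertical_a, p_vertical_b = 1 / 180, 20 / 180
print(-np.log(p_vertical_a) > -np.log(p_vertical_b))  # True
```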
With regard to the neural circuitry involved, we have demonstrated that self-information
may be computed using a neural circuit in the absence of a representation of the entire
probability distribution. Whether an equivalent operation may be achieved in a biologically
plausible manner for the computation of entropy remains to be established.
Figure 2: A 1D depiction of the neural architecture that computes the self-information of a
set of local statistics. The operation is equivalent to a kernel density estimate. Coefficients
correspond to subscripts of v_{i,j,k}. The small black circles indicate an inhibitory relationship
and the small white circles an excitatory relationship.
Figure 3: An image that highlights the difference between entropy and self-information.
Fixation invariably falls on the empty patch, the locus of minimum entropy in orientation
and color but maximum in self-information when the surrounding context is considered.
3 Experimental Validation
The following section evaluates the output of the proposed algorithm as compared with the
bottom-up model of Itti and Koch [3]. The model of Itti and Koch is perhaps the most
popular model of saliency based attention and currently appears to be the yardstick against
which other models are measured.
3.1 Experimental eye tracking data
The data that forms the basis for performance evaluation is derived from eye tracking experiments performed while subjects observed 120 different color images. Images were
presented in random order for 4 seconds each with a mask between each pair of images.
Subjects were positioned 0.75m from a 21 inch CRT monitor and given no particular instructions except to observe the images. Images consist of a variety of indoor and outdoor
scenes, some with very salient items, others with no particular regions of interest. The eye
tracking apparatus consisted of a standard non head-mounted device. The parameters of the
setup are intended to quantify salience in a general sense based on stimuli that one might
expect to encounter in a typical urban environment. Data was collected from 20 different
subjects for the full set of 120 images.
The issue of comparing between the output of a particular algorithm, and the eye tracking data is non-trivial. Previous efforts have selected a number of fixation points based
on the saliency map, and compared these with the experimental fixation points derived
from a small number of subjects and images (7 subjects and 15 images in a recent effort
[4]). There are a variety of methodological issues associated with such a representation.
The most important such consideration is that the representation of perceptual importance
is typically based on a saliency map. Observing the output of an algorithm that selects
fixation points based on the underlying saliency map obscures observation of the degree
to which the saliency maps predict important and unimportant content and in particular,
ignores confidence away from highly salient regions. Secondly, it is not clear how many
fixation points should be selected. Choosing this value based on the experimental data will
bias output based on information pertaining to the content of the image and may produce
artificially good results.
The preceding discussion is intended to motivate the fact that selecting discrete fixation coordinates based on the saliency map for comparison may not present the most appropriate
representation to use for performance evaluation. In this effort, we consider two different
measures of performance. Qualitative comparison is based on the representation proposed
in [16]. In this representation, a fixation density map is produced for each image based on
all fixation points, and subjects. Given a fixation point, one might consider how the image
under consideration is sampled by the human visual system as photoreceptor density drops
steeply moving peripherally from the centre of the fovea. This dropoff may be modeled
based on a 2D Gaussian distribution with appropriately chosen parameters, and centred on
the measured fixation point. A continuous fixation density map may be derived for a particular image based on the sum of all 2D Gaussians corresponding to each fixation point,
from each subject. The density map then comprises a measure of the extent to which each
pixel of the image is sampled on average by a human observer based on observed fixations.
This affords a representation for which similarity to a saliency map may be considered at
a glance. Quantitative performance evaluation is achieved based on the measure proposed
in [15]. The saliency maps produced by each algorithm are treated as binary classifiers for
fixation versus non-fixation points. The choice of several different thresholds and assessment of performance in predicting fixated versus not fixated pixel locations allows an ROC
curve to be produced for each algorithm.
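A sketch of this thresholding procedure on synthetic data (the saliency map, fixation mask, and threshold grid below are illustrative stand-ins, not the experimental data):

```python
import numpy as np

def roc_points(saliency, fixated, thresholds):
    """Treat the saliency map as a binary classifier of fixated vs.
    non-fixated pixels and sweep thresholds to trace an ROC curve."""
    pts = []
    for t in thresholds:
        pred = saliency >= t
        tpr = (pred & fixated).sum() / fixated.sum()        # hit rate
        fpr = (pred & ~fixated).sum() / (~fixated).sum()    # false alarm rate
        pts.append((fpr, tpr))
    return pts

rng = np.random.default_rng(1)
sal = rng.random((40, 40))
# Synthetic "fixations" correlated with the map.
fix = sal + 0.3 * rng.standard_normal((40, 40)) > 0.8

# High-to-low thresholds so the (fpr, tpr) pairs ascend along the curve.
pts = roc_points(sal, fix, np.linspace(1.0, 0.0, 21))
auc = sum((f2 - f1) * (t1 + t2) / 2
          for (f1, t1), (f2, t2) in zip(pts, pts[1:]))
print(auc > 0.5)   # True for a map that predicts fixations above chance
```

The area under the resulting curve summarizes each saliency map's predictive capacity in a single number, as reported in figure 5.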
3.2 Experimental Results
Figure 4 affords a qualitative comparison of the output of the proposed model with the
experimental eye tracking data for a variety of images. Also depicted is the output of the
Itti and Koch algorithm for comparison.
In the implementation results shown, the ICA basis set was learned from a set of 360,000
7x7x3 image patches from 3600 natural images using the Lee et al. extended infomax
algorithm [17]. Processed images are 340 by 255 pixels. Ψ consists of the entire extent
of the image and ω(s,t) = 1/p ∀ s,t, with p the number of pixels in the image. One might
make a variety of selections for these variables based on arguments related to the human
visual system, or based on performance. In our case, the values have been chosen on the
basis of simplicity and do not appear to dramatically affect the predictive capacity of the
model in the simulation results. In particular, we wished to avoid tuning these parameters
to the available data set. Future work may include a closer look at some of the parameters
involved in order to determine the most appropriate choices. The ROC curves appearing in
figure 5 give some sense of the efficacy of the model in predicting which regions of a scene
human observers tend to fixate. As may be observed, the predictive capacity of the model is
on par with the approach of Itti and Koch. Encouraging is the fact that similar performance
is achieved using a method derived from first principles, and with no parameter tuning or
ad hoc design choices.
Figure 4: Results for qualitative comparison. Within each boxed region defined by solid
lines: (Top Left) Original Image (Top Right) Saliency map produced by Itti + Koch algorithm. (Bottom Left) Saliency map based on information maximization. (Bottom Right)
Fixation density map based on experimental human eye tracking data.
4 On Biological Plausibility
Although the proposed approach, along with the model of Itti and Koch, describes saliency
on the basis of a single topographical saliency map, there is mounting evidence that saliency
in the primate brain is represented at several levels based on a hierarchical representation
[18] of visual content. The proposed approach may accommodate such a configuration
with the single necessary condition being a sparse representation at each layer.
As we have described in section 2, there is evidence that suggests the possibility that the
primate visual system may consist of a multi-layer sparse coding architecture [10, 11]. The
proposed algorithm quantifies information on the basis of a neural circuit, on units with
response properties corresponding to neurons appearing in the primary visual cortex. However, given an analogous representation corresponding to higher visual areas that encode
form, depth, convexity etc. the proposed method may be employed without any modification. Since the popout of features can occur on the basis of more complex properties such
as a convex surface among concave surfaces [19], this is perhaps the next stage in a system
that encodes saliency in the same manner as primates. Given a multi-layer architecture, the
mechanism for selecting the locus of attention becomes less clear. In the model of Itti and
Koch, a multi-layer winner-take-all network acts directly on the saliency map and there
is no hierarchical representation of image content. There are however attention models
that subscribe to a distributed representation of saliency (e.g. [20]), that may implement
attentional selection with the proposed neural circuit encoding saliency at each layer.
Figure 5: ROC curves for Self-information (blue) and Itti and Koch (red) saliency maps.
Area under curves is 0.7288 and 0.7277 respectively.
5 Conclusion
We have described a strategy that predicts human attentional deployment on the principle of
maximizing information sampled from a scene. Although no computational machinery is
included strictly on the basis of biological plausibility, nevertheless the formulation results
in an implementation based on a neurally plausible circuit acting on units that resemble
those that facilitate early visual processing in primates. Comparison with an existing attention model reveals the efficacy of the proposed model in predicting salient image content.
Finally, we demonstrate that the proposal might be generalized to facilitate selection based
on high-level features provided an appropriate sparse representation is available.
References
[1] G.T. Buswell, How people look at pictures. Chicago: The University of Chicago Press.
[2] A. Yarbus, Eye movements and vision. New York: Plenum Press.
[3] L. Itti, C. Koch, E. Niebur, IEEE T PAMI 20(11):1254-1259, 1998.
[4] C.M. Privitera and L.W. Stark, IEEE T PAMI 22:970-981, 2000.
[5] F. Fritz, C. Seifert, L. Paletta, H. Bischof, Proc. WAPCV, Graz, Austria, 2004.
[6] L.W. Renninger, J. Coughlan, P. Verghese, J. Malik, Proceedings NIPS 17, Vancouver, 2004.
[7] T. Kadir, M. Brady, IJCV 45(2):83-105, 2001.
[8] T.S. Lee, S. Yu, Advances in NIPS 12:834-840, Ed. S.A. Solla, T.K. Leen, K. Muller, MIT Press.
[9] C.E. Shannon, The Bell System Technical Journal, 27:93-154, 1948.
[10] D.J. Field and B.A. Olshausen, Nature 381:607-609, 1996.
[11] A.J. Bell, T.J. Sejnowski, Vision Research 37:3327-3338, 1997.
[12] N. Bruce, Neurocomputing, 65-66:125-133, 2005.
[13] P. Comon, Signal Processing 36(3):287-314, 1994.
[14] M.W. Cannon and S.C. Fullenkamp, Vision Research 36(8):1115-1125, 1996.
[15] B.W. Tatler, R.J. Baddeley, J.D. Gilchrist, Vision Research 45(5):643-659, 2005.
[16] H. Koesling, E. Carbone, H. Ritter, University of Bielefeld, Technical Report, 2002.
[17] T.W. Lee, M. Girolami, T.J. Sejnowski, Neural Computation 11:417-441, 1999.
[18] J. Braun, C. Koch, D.K. Lee, L. Itti, In: Visual Attention and Cortical Circuits (J. Braun, C. Koch, J. Davis, Ed.), 215-242, Cambridge, MA: MIT Press, 2001.
[19] J. Hulleman, W. te Winkel, F. Boselie, Perception and Psychophysics 62:162-174, 2000.
[20] J.K. Tsotsos, S. Culhane, W. Wai, Y. Lai, N. Davis, F. Nuflo, Artif. Intell. 78(1-2):507-545, 1995.
incoming:1 reveals:1 fixated:2 continuous:1 quantifies:2 why:1 chief:1 nature:2 contributes:1 boxed:1 complex:1 artificially:1 domain:2 substituted:1 alarm:1 intel:1 roc:3 depicts:1 paletta:1 aid:1 fails:1 comprises:1 xl:1 lie:1 outdoor:1 perceptual:1 specific:1 carbone:1 evidence:3 exists:1 consist:2 false:1 importance:2 te:1 demand:1 entropy:13 depicted:3 logarithmic:1 explore:1 visual:19 horizontally:1 tracking:8 ma:1 quantifying:1 absence:1 content:8 included:1 determined:2 except:1 contrasted:1 uniformly:1 typical:1 lui:2 acting:1 experimental:8 tendency:1 shannon:4 photoreceptor:1 indicating:1 people:1 yardstick:1 topographical:1 baddeley:1 |
Faster Rates in Regression via Active Learning
Rui Castro
Rice University
Houston, TX 77005
[email protected]
Rebecca Willett
University of Wisconsin
Madison, WI 53706
[email protected]
Robert Nowak
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
This paper presents a rigorous statistical analysis characterizing regimes
in which active learning significantly outperforms classical passive learning. Active learning algorithms are able to make queries or select sample
locations in an online fashion, depending on the results of the previous
queries. In some regimes, this extra flexibility leads to significantly faster
rates of error decay than those possible in classical passive learning settings. The nature of these regimes is explored by studying fundamental performance limits of active and passive learning in two illustrative
nonparametric function classes. In addition to examining the theoretical potential of active learning, this paper describes a practical algorithm
capable of exploiting the extra flexibility of the active setting and provably improving upon the classical passive techniques. Our active learning
theory and methods show promise in a number of applications, including
field estimation using wireless sensor networks and fault line detection.
1 Introduction
In this paper we address the theoretical capabilities of active learning for estimating functions in noise. Several empirical and theoretical studies have shown that selecting samples
or making strategic queries in order to learn a target function/classifier can outperform
commonly used passive methods based on random or deterministic sampling [1–5]. There
are essentially two different scenarios in active learning: (i) selective sampling, where we
are presented a pool of examples (possibly very large), and for each of these we can decide whether to collect a label associated with it, the goal being learning with the least
amount of carefully selected labels [3]; (ii) adaptive sampling, where one chooses an experiment/sample location based on previous observations [4,6]. We consider adaptive sampling in this paper. Most previous analytical work in active learning regimes deals with
very stringent conditions, like the ability to make perfect or nearly perfect decisions at
every stage in the sampling procedure. Our working scenario is significantly less restrictive, and based on assumptions that are more reasonable for a broad range of practical
applications.
We investigate the problem of nonparametric function regression, where the goal is to estimate a function from noisy point-wise samples. In the classical (passive) setting the
sampling locations are chosen a priori, meaning that the selection of the sample locations
precedes the gathering of the function observations. In the active sampling setting, however, the sample locations are chosen in an online fashion: the decision of where to sample
next depends on all the observations made previously, in the spirit of the "Twenty Questions" game (in passive sampling all the questions need to be asked before any answers
are given). The extra degree of flexibility garnered through active learning can lead to significantly better function estimates than those possible using classical (passive) methods.
However, there are very few analytical methodologies for these Twenty Questions problems when the answers are not entirely reliable (see for example [6–8]); this precludes
performance guarantees and limits the applicability of many such methods. To address this
critical issue, in this paper we answer several pertinent questions regarding the fundamental
performance limits of active learning in the context of regression under noisy conditions.
Significantly faster rates of convergence are generally achievable in cases involving functions whose complexity (in the Kolmogorov sense) is highly concentrated in small regions of space (e.g., functions that are smoothly varying apart from highly localized abrupt
changes such as jumps or edges). We illustrate this by characterizing the fundamental
limits of active learning for two broad nonparametric function classes which map [0,1]^d onto the real line: (i) Hölder smooth functions (spatially homogeneous complexity) and (ii) piecewise constant functions that are constant except on a (d−1)-dimensional boundary set or discontinuity embedded in the d-dimensional function domain (spatially concentrated
complexity). The main result of this paper is two-fold. First, when the complexity of the
function is spatially homogeneous, passive learning algorithms are near-minimax optimal
over all estimation methods and all (active or passive) learning schemes, indicating that
active learning methods cannot provide faster rates of convergence in this regime. Second,
for piecewise constant functions, active learning methods can capitalize on the highly localized nature of the boundary by focusing the sampling process in the estimated vicinity
of the boundary. We present an algorithm that provably improves on the best possible passive learning algorithm and achieves faster rates of error convergence. Furthermore, we
show that this performance cannot be significantly improved on by any other active learning method (in a minimax sense). Earlier existing work had focused on one dimensional
problems [6, 7], and very specialized multidimensional problems that can be reduced to a
series of one dimensional problems [8]. Unfortunately these techniques cannot be extended
to more general piecewise constant/smooth models, and to the best of our knowledge our
work is the first addressing active learning in this class of models.
Our active learning theory and methods show promise for a number of problems. In particular, in imaging techniques such as laser scanning it is possible to adaptively vary the
scanning process. Using active learning in this context can significantly reduce image acquisition times. Wireless sensor network constitute another key application area. Because
of necessarily small batteries, it is desirable to limit the number of measurements collected
as much as possible. Incorporating active learning strategies into such systems can dramatically lengthen the lifetime of the system. In fact, active learning problems like the one we
pose in Section 4 have already found application in fault line detection [7] and boundary
estimation in wireless sensor networking [9].
2 Problem Statement
Our goal is to estimate f : [0,1]^d → ℝ from a finite number of noise-corrupted samples.
We consider two different scenarios: (a) passive learning, where the location of the sample
points is chosen statistically independently of the measurement outcomes; and (b) active
learning, where the location of the ith sample point can be chosen as a function of the
samples points and samples collected up to that instant. The statistical model we consider
builds on the following assumptions:
(A1) The observations {Y_i}_{i=1}^n are given by

        Y_i = f(X_i) + W_i,   i ∈ {1, . . . , n}.

(A2) The random variables W_i are Gaussian, zero mean and variance σ². These are independent and identically distributed (i.i.d.) and independent of {X_i}_{i=1}^n.

(A3.1) Passive Learning: The sample locations X_i ∈ [0,1]^d are either deterministic or random, but independent of {Y_j}_{j≠i}. They do not depend in any way on f.

(A3.2) Active Learning: The sample locations X_i are random, and depend only on {X_j, Y_j}_{j=1}^{i−1}. In other words the sample locations X_i have only a causal dependency on the system variables {X_i, Y_i}. Finally, given {X_j, Y_j}_{j=1}^{i−1} the random variable X_i does not depend in any way on f.
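The distinction between (A3.1) and (A3.2) can be made concrete with a short simulation. The step-function target, the noise level, and the bisection-style active rule below are illustrative assumptions, not part of the paper's model; the sketch only shows where each protocol is allowed to look.

```python
import random

def f(x):
    # Hypothetical target: a piecewise constant function with a jump at 0.3.
    return 1.0 if x >= 0.3 else -1.0

def observe(x, sigma=0.1):
    # (A1)-(A2): Y = f(X) + W with W ~ N(0, sigma^2), independent across samples.
    return f(x) + random.gauss(0.0, sigma)

def passive_sample(n):
    # (A3.1): all sample locations are fixed before any observation is seen.
    xs = [random.random() for _ in range(n)]
    return [(x, observe(x)) for x in xs]

def active_sample(n):
    # (A3.2): each location may depend on all earlier (X, Y) pairs.
    # Illustrative rule: bisect the interval currently bracketing the jump.
    lo, hi, data = 0.0, 1.0, []
    for _ in range(n):
        x = 0.5 * (lo + hi)
        y = observe(x)
        data.append((x, y))
        if y > 0.0:
            hi = x          # the jump lies to the left of x
        else:
            lo = x          # the jump lies to the right of x
    return data, 0.5 * (lo + hi)

random.seed(0)
_, jump_est = active_sample(30)
```

With a signal-to-noise ratio this large the active rule localizes the jump to machine precision after 30 queries, while the passive design spreads its budget uniformly regardless of what it sees.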
Let f̂_n : [0,1]^d → ℝ denote an estimator based on the training samples {X_i, Y_i}_{i=1}^n. When constructing an estimator under the active learning paradigm there is another degree of freedom: we are allowed to choose our sampling strategy, that is, we can specify X_i | X_1, . . . , X_{i−1}, Y_1, . . . , Y_{i−1}. We will denote the sampling strategy by S_n. The pair (f̂_n, S_n) is called the estimation strategy. Our goal is to construct estimation strategies which minimize the expected squared error,

        E_{f,S_n}[ ‖f̂_n − f‖² ],

where E_{f,S_n} is the expectation with respect to the probability measure of {X_i, Y_i}_{i=1}^n induced by model f and sampling strategy S_n, and ‖·‖ is the usual L2 norm.
3 Learning in Classical Smoothness Spaces
In this section we consider classes of functions whose complexity is homogeneous over the
entire domain, so that there are no localized features, as in Figure 1(a). In this case we do
not expect the extra flexibility of the active learning strategies to provide any substantial
benefit over passive sampling strategies, since a simple uniform sampling scheme is naturally matched to the homogeneous "distribution" of the target function's complexity. To exemplify this consider the Hölder smooth function class: a function f : [0,1]^d → ℝ is Hölder smooth if it has continuous partial derivatives up to order k = ⌊α⌋¹ and

        ∀ z, x ∈ [0,1]^d :   |f(z) − P_x(z)| ≤ L ‖z − x‖^α,

where L, α > 0, and P_x(·) denotes the order-k Taylor polynomial approximation of f expanded around x. Denote this class of functions by Σ(L, α). Functions in Σ(L, α) are essentially C^α functions when α ∈ ℕ. The first of our two main results is a minimax lower bound on the performance of all active estimation strategies for this class of functions.
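As a numerical sanity check of the definition, the sketch below estimates the smallest admissible Hölder constant in the case k = 0 (α ≤ 1), where the Taylor polynomial P_x reduces to the constant f(x). The choice f(x) = √x with α = 1/2 is a hypothetical example, for which L = 1 on [0,1].

```python
import math

def holder_ratio(f, alpha, pairs):
    """Largest |f(z) - f(x)| / |z - x|^alpha over the given pairs.

    For alpha <= 1 the Taylor polynomial P_x is the constant f(x), so this
    ratio lower-bounds the smallest admissible Holder constant L.
    """
    return max(abs(f(z) - f(x)) / abs(z - x) ** alpha
               for x, z in pairs if x != z)

# sqrt is Holder smooth on [0, 1] with alpha = 1/2 and L = 1, since
# |sqrt(z) - sqrt(x)| <= |z - x|^(1/2); the ratio 1 is attained at x = 0.
grid = [i / 100.0 for i in range(101)]
pairs = [(x, z) for x in grid for z in grid]
L_hat = holder_ratio(math.sqrt, 0.5, pairs)
```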
Theorem 1. Under the requirements of the active learning model we have the minimax bound

        inf_{(f̂_n, S_n) ∈ Θ_active}   sup_{f ∈ Σ(L,α)}   E_{f,S_n}[ ‖f̂_n − f‖² ]   ≥   c n^{−2α/(2α+d)},        (1)

where c ≡ c(L, α, σ²) > 0 and Θ_active is the set of all active estimation strategies (which includes also passive strategies).
Note that the rate in Theorem 1 is the same as the classical passive learning rate [10, 11]
but the class of estimation strategies allowed is now much bigger. The proof of Theorem 1
is presented in our technical report [12] and uses standard tools of minimax analysis, such
as Assouad's Lemma. The key idea of the proof is to reduce the problem of estimating
a function in Σ(L, α) to the problem of deciding among a finite number of hypotheses.
The key aspects of the proof for the passive setting [13] apply to the active scenario due
to the fact that we can choose an adequate set of hypotheses without knowledge of the
sampling strategy, although some modifications are required due to the extra flexibility of
the sampling strategy. There are various practical estimators achieving the performance
predicted by Theorem 1, including some based on kernels, splines or wavelets [13].
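One of the kernel-based passive estimators alluded to can be sketched as a Nadaraya-Watson local average. The one-dimensional setting, the sin target, and the noise level below are illustrative assumptions, though the bandwidth does follow the usual n^{−1/(2α+d)} scaling for this class.

```python
import math
import random

def nw_estimate(x, data, h):
    """Nadaraya-Watson local average with a Gaussian kernel and bandwidth h."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi, _ in data]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * yi for w, (_, yi) in zip(weights, data)) / total

random.seed(1)
f, n, sigma = math.sin, 500, 0.2
alpha, d = 1.0, 1                      # assumed smoothness and dimension
h = n ** (-1.0 / (2.0 * alpha + d))    # classical n^{-1/(2*alpha+d)} bandwidth
data = [(x, f(x) + random.gauss(0.0, sigma))
        for x in (random.random() for _ in range(n))]
mse = sum((nw_estimate(t / 50.0, data, h) - f(t / 50.0)) ** 2
          for t in range(51)) / 51.0
```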
¹ k = ⌊α⌋ is the maximal integer such that k < α.
4 The Active Advantage
In this section we address two key questions: (i) when does active learning provably yield
better results, and (ii) what are the fundamental limitations of active learning? These are
difficult questions to answer in general. We expect that, for functions whose complexity
is spatially non-uniform and highly concentrated in small subsets of the domain, the extra
spatial adaptivity of the active learning paradigm can lead to significant performance gains. We study a class of functions which highlights this notion of "spatially concentrated complexity". Although this is a canonical example and a relatively simple function class,
it is general enough to provide insights into methodologies for broader classes.
A function f : [0,1]^d → ℝ is called piecewise constant if it is locally constant² at any point x ∈ [0,1]^d \ B(f), where B(f) ⊂ [0,1]^d, the boundary set, has upper box-counting dimension at most d − 1. Furthermore let f be uniformly bounded on [0,1]^d (that is, |f(x)| ≤ M, ∀x ∈ [0,1]^d) and let B(f) satisfy N(r) ≤ β r^{−(d−1)} for all r > 0, where β > 0 is a constant and N(r) is the minimal number of closed balls of diameter r that cover B(f). The set of all piecewise constant functions f satisfying the above conditions is denoted by PC(β, M).
The conditions above mean that (a) the functions are constant except along (d−1)-dimensional "boundaries" where they are discontinuous and (b) the boundaries between the various constant regions are (d−1)-dimensional non-fractal sets. If the boundaries B(f) are smooth then β is an approximate bound on their total (d−1)-dimensional volume (e.g., the length if d = 2). An example of such a function is depicted in Figure 1(b). The class PC(β, M) has the main ingredients that make active learning appealing: a function f is "well-behaved" everywhere on the unit square, except on a small subset B(f). We will see that the critical task for any good estimator is to accurately find the location of the boundary B(f).
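The covering-number condition N(r) ≤ β r^{−(d−1)} can be probed numerically by box counting. The disc indicator below is a hypothetical member of PC(β, M) with d = 2; halving the scale r should roughly double the number of boundary cells, consistent with the exponent d − 1 = 1.

```python
def boundary_cells(f, m):
    """Count cells of an m-by-m grid on [0,1]^2 whose corner values differ,
    i.e. cells meeting a discontinuity of f -- a box-counting proxy for N(r)
    at scale r = 1/m."""
    count = 0
    for i in range(m):
        for j in range(m):
            corners = {f(i / m, j / m), f((i + 1) / m, j / m),
                       f(i / m, (j + 1) / m), f((i + 1) / m, (j + 1) / m)}
            if len(corners) > 1:
                count += 1
    return count

# Hypothetical member of PC(beta, M) in d = 2: +1 on a disc, -1 outside.
disc = lambda x, y: 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.1 else -1.0
n16, n32 = boundary_cells(disc, 16), boundary_cells(disc, 32)
```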
4.1 Passive Learning Framework
To obtain minimax lower bounds for PC(β, M) we consider a smaller class of functions, namely the boundary fragment class studied in [11]. Let g : [0,1]^{d−1} → [0,1] be a Lipschitz function with graph in [0,1]^d, that is

        |g(x) − g(z)| ≤ ‖x − z‖,   0 ≤ g(x) ≤ 1,   ∀ x, z ∈ [0,1]^{d−1}.

Define G = {(x, y) : 0 ≤ y ≤ g(x), x ∈ [0,1]^{d−1}}. Finally define f : [0,1]^d → ℝ by f(x) = 2M · 1_G(x) − M. The class of all the functions of this form is called the boundary fragment class (usually M = 1), denoted by BF(M). Note that there are only two regions, and the boundary separating those is a function of the first d − 1 variables.
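A minimal constructor for a boundary fragment function f = 2M · 1_G − M follows directly from the definition; the particular Lipschitz boundary g below (constant 1/2, values in [1/4, 3/4]) is a hypothetical example in d = 2.

```python
def boundary_fragment(g, M=1.0):
    """Build f(x, y) = 2M * 1{(x, y) in G} - M with G = {(x, y): 0 <= y <= g(x)}."""
    def f(x, y):
        return 2.0 * M * (1.0 if 0.0 <= y <= g(x) else 0.0) - M
    return f

# Hypothetical Lipschitz boundary: constant 1/2 <= 1, graph inside [0, 1]^2.
g = lambda x: 0.25 + 0.5 * x
f = boundary_fragment(g)
```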
It is straightforward to show that BF(M) ⊂ PC(β, M) for a suitable constant β; therefore a minimax lower bound for the boundary fragment class is trivially a lower bound for the piecewise constant class. From the results in [11] we have

        inf_{(f̂_n, S_n) ∈ Θ_passive}   sup_{f ∈ PC(β, M)}   E_{f,S_n}[ d²(f̂_n, f) ]   ≥   c n^{−1/d},        (2)

where c ≡ c(β, M, σ²) > 0.
There exist practical passive learning strategies that are near-minimax optimal. For example, tree-structured estimators based on Recursive Dyadic Partitions (RDPs) are capable of
² A function f : [0,1]^d → ℝ is locally constant at a point x ∈ [0,1]^d if ∃ ε > 0 : ∀ y ∈ [0,1]^d : ‖x − y‖ < ε ⇒ f(y) = f(x).
Figure 1: Examples of functions in the classes considered: (a) Hölder smooth function. (b)
Piecewise constant function.
nearly attaining the minimax rate above [14]. These estimators are constructed as follows:
(i) Divide [0,1]^d into 2^d equal sized hypercubes. (ii) Repeat this process again on each hypercube. Repeating this process log₂ m times gives rise to a partition of the unit hypercube into m^d hypercubes of identical size. This process can be represented as a 2^d-ary tree structure (where a leaf of the tree corresponds to a partition cell). Pruning this tree gives rise to an RDP with non-uniform resolution. Let Π denote the class of all possible pruned RDPs. The estimators we consider are constructed by decorating the elements of a partition with constants. Let π be an RDP; the estimators built over this RDP have the form

        f̂^{(π)}(x) ≡ Σ_{A ∈ π} c_A 1{x ∈ A}.
Since the location of the boundary is a priori unknown it is natural to distribute the sample
points uniformly over the unit cube. There are various ways of doing this; for example, the
points can be placed deterministically over a lattice, or randomly sampled from a uniform
distribution. We will use the latter strategy. Assume that {X_i}_{i=1}^n are i.i.d. uniform over [0,1]^d. Define the complexity regularized estimator as

        f̂_n ≡ arg min_{f̂^{(π)} : π ∈ Π}  { (1/n) Σ_{i=1}^n ( f̂^{(π)}(X_i) − Y_i )²  +  λ (log n / n) |π| },        (3)

where |π| denotes the number of elements of π and λ > 0. The above optimization can be solved efficiently in O(n) operations using a bottom-up tree pruning algorithm [14].
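A one-dimensional, binary-tree caricature of the estimator in (3) can be sketched as follows. It assumes the summed-error form obtained by multiplying (3) through by n; the target function, noise level, λ, and maximum depth are illustrative choices, and the 2^d-ary structure of a true RDP is collapsed to d = 1 for brevity.

```python
import math
import random

def prune(pts, lo, hi, depth, max_depth, pen):
    """Bottom-up pruning of a dyadic binary tree over [lo, hi).

    Returns (cost, leaves), where cost is the summed squared error of the
    fitted piecewise-constant model plus `pen` per leaf. Keeping a cell as a
    single leaf is compared against the best pruned subtrees of its children,
    which solves the penalized problem exactly (CART-style dynamic program).
    """
    mean = sum(y for _, y in pts) / len(pts) if pts else 0.0
    leaf_cost = sum((y - mean) ** 2 for _, y in pts) + pen
    if depth == max_depth or len(pts) <= 1:
        return leaf_cost, [(lo, hi, mean)]
    mid = 0.5 * (lo + hi)
    lcost, lleaves = prune([p for p in pts if p[0] < mid], lo, mid,
                           depth + 1, max_depth, pen)
    rcost, rleaves = prune([p for p in pts if p[0] >= mid], mid, hi,
                           depth + 1, max_depth, pen)
    if lcost + rcost < leaf_cost:
        return lcost + rcost, lleaves + rleaves
    return leaf_cost, [(lo, hi, mean)]

def rdp_estimate(xs, ys, lam=2.0, max_depth=8):
    # Multiplying eq. (3) by n turns the penalty lam*(log n / n)*|pi| on the
    # averaged error into lam*log(n) per leaf on the summed error.
    pen = lam * math.log(len(xs))
    _, leaves = prune(list(zip(xs, ys)), 0.0, 1.0, 0, max_depth, pen)
    return leaves

def predict(x, leaves):
    for lo, hi, mean in leaves:
        if lo <= x < hi:
            return mean
    return leaves[-1][2]

random.seed(2)
step = lambda x: 1.0 if x >= 0.37 else -1.0     # hypothetical target
xs = [random.random() for _ in range(400)]
ys = [step(x) + random.gauss(0.0, 0.3) for x in xs]
leaves = rdp_estimate(xs, ys)
```

The pruned partition ends up coarse in the constant regions and refined only near the jump, which is exactly the non-uniform resolution the text describes.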
The performance of the estimator in (3) can be assessed using bounding techniques in the
spirit of [14, 15]. From that analysis we conclude that

        sup_{f ∈ PC(β, M)}   E_f[ ‖f̂_n − f‖² ]   ≤   C (n / log n)^{−1/d},        (4)

where C ≡ C(β, M, σ²) > 0. This shows that, up to a logarithmic factor, the rate in (2) is the optimal rate of convergence for passive strategies. A complete derivation of the above result is available in [12].
4.2 Active Learning Framework
We now turn our attention to the active learning scenario. In [8] this was studied for the
boundary fragment class. From that work and noting again that BF(M) ⊂ PC(β, M) we have, for d ≥ 2,

        inf_{(f̂_n, S_n) ∈ Θ_active}   sup_{f ∈ PC(β, M)}   E_{f,S_n}[ ‖f̂_n − f‖² ]   ≥   c n^{−1/(d−1)},        (5)

where c ≡ c(M, σ²) > 0.
In contrast with (2), we observe that with active learning we have a potential performance
gain over passive strategies, effectively equivalent to a dimensionality reduction. Essentially the exponent in (5) depends now on the dimension of the boundary set, d − 1, instead
of the dimension of the entire domain, d. In [11] an algorithm capable of achieving the
above rate for the boundary fragment class is presented, but this algorithm takes advantage
of the very special functional form of the boundary fragment functions. The algorithm
begins by dividing the unit hypercube into "strips" and performing a one-dimensional
change-point estimation in each of the strips. This change-point detection can be performed extremely accurately using active learning, as shown in the pioneering work of
Burnashev and Zigangirov [6]. Unfortunately, the boundary fragment class is very restrictive and impractical for most applications. Recall that boundary fragments consist of only
two regions, separated by a boundary that is a function of the first d − 1 coordinates. The class PC(β, M) is much larger and more general and the algorithmic ideas that work for
boundary fragments can no longer be used. A completely different approach is required,
using radically different tools.
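For intuition, the one-dimensional change-point procedure of Burnashev and Zigangirov mentioned above can be caricatured as probabilistic bisection: query at the posterior median and reweight multiplicatively. The known crossover probability p and the discretization below are simplifying assumptions of this sketch, not the actual procedure of [6].

```python
import random

def probabilistic_bisection(respond, n, m=1024, p=0.7):
    """Locate a change point t* in [0, 1] from unreliable threshold queries.

    respond(x) answers "is x to the right of t*?" correctly with probability
    p > 1/2. A discretized posterior over m bins is updated multiplicatively
    and each query is placed at the posterior median (the Horstein rule, in
    the spirit of the Burnashev-Zigangirov procedure)."""
    post = [1.0 / m] * m
    for _ in range(n):
        acc, q = 0.0, m - 1
        for i, w in enumerate(post):           # find the posterior median bin
            acc += w
            if acc >= 0.5:
                q = i
                break
        x = (q + 0.5) / m                      # next query location
        r = respond(x)
        for i in range(m):
            # Under hypothesis t* = (i + 0.5)/m the true answer is 1 iff
            # x >= t*, so the answer r agrees with bin i with probability p.
            agrees = (r == 1) == (x >= (i + 0.5) / m)
            post[i] *= p if agrees else 1.0 - p
        z = sum(post)
        post = [w / z for w in post]
    best = max(range(m), key=lambda i: post[i])
    return (best + 0.5) / m

random.seed(3)
t_star = 0.62                                  # hypothetical change point

def respond(x):
    truth = 1 if x >= t_star else 0
    return truth if random.random() < 0.7 else 1 - truth

estimate = probabilistic_bisection(respond, 200)
```

Even with 30% of the answers flipped, 200 adaptively placed queries localize the change point to within a few bins, which is the kind of accuracy the strip-wise procedure exploits.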
We now propose an active learning scheme for the piecewise constant class. The proposed
scheme is a two-step approach based in part on the tree-structured estimators described
above for passive learning. In the first step, called the preview step, a rough estimator
of f is constructed using n/2 samples (assume for simplicity that n is even), distributed
uniformly over [0,1]^d. In the second step, called the refinement step, we select n/2 samples
near the perceived locations of the boundaries (estimated in the preview step) separating
constant regions. At the end of this process we will have half the samples concentrated in
the vicinity of the boundary set B(f ). Since accurately estimating f near B(f ) is key to
obtaining faster rates, the strategy described seems quite sensible. However, it is critical
that the preview step is able to detect the boundary with very high probability. If part of
the boundary is missed, then the error incurred is going to propagate into the final estimate,
ultimately degrading the performance. Therefore extreme care must be taken to detect the
boundary in the preview step, as described below.
Preview: The goal of this stage is to provide a coarse estimate of the location of B(f). Specifically, collect n₀ ≡ n/2 samples at points distributed uniformly over [0,1]^d. Next proceed by using the passive learning algorithm described before, but restrict the estimator to RDPs with leafs at a maximum depth of

        J = ((d−1) / ((d−1)² + d)) log( n₀ / log(n₀) ).

This ensures that, on average, every element of the RDP contains many sample points; therefore we obtain a low variance estimate, although the estimator bias is going to be large. In other words, we obtain a very "stable" coarse estimate of f, where stable means that the estimator does not change much for different realizations of the data.
The above strategy ensures that most of the time, leafs that intersect the boundary are at the
maximum allowed depth (because otherwise the estimator would incur too much empirical
error) and leafs away from the boundary are at shallower depths. Therefore we can "detect" the rough location of the boundary just by looking at the deepest leafs. Unfortunately, if
the set B(f ) is somewhat aligned with the dyadic splits of the RDP, leafs intersecting the
boundary can be pruned without incurring a large error. This is illustrated in Figure 2(b);
the cell with the arrow was pruned and contains a piece of the boundary, but the error
incurred by pruning is small since that region is mostly a constant region. However, worst-case analysis reveals that the squared bias induced by these small volumes can add up,
precluding the desired rates. A way of mitigating this issue is to consider multiple RDP-based estimators, each one using RDPs appropriately shifted. We use d + 1 estimators in
the preview step: one on the initial uniform partition, and d over partitions whose dyadic
splits have been translated by 2^{−J} in each one of the d coordinates. Any leaf that is at the
maximum depth of any of the d + 1 RDPs pruned in the preview step indicates the highly
probable presence of a boundary, and will be refined in the next stage.
Refinement: With high probability, the boundary is contained in the leafs at the maximum
depth. In the refinement step we collect additional n/2 samples in the corresponding partition cells, using these to obtain a refined estimate of the function f by again applying
Figure 2: The two step procedure for d = 2: (a) Initial unpruned RDP and n/2 samples.
(b) Preview step RDP. Note that the cell with the arrow was pruned, but it contains a part
of the boundary. (c) Additional sampling for the refinement step. (d) Refinement step.
an RDP-based estimator. This produces a higher resolution estimate in the vicinity of the
boundary set B(f ), yielding better performance than the passive learning technique.
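The preview/refinement idea can be caricatured in one dimension, where the boundary is a single change point. Everything below, including the target, the noise level, the bin counts, and the jump-detection rule, is an illustrative assumption; the essential feature is only the allocation: n/2 samples spread uniformly, then n/2 samples spent inside the flagged cells.

```python
import random

def two_step_jump(f, n, sigma=0.3, coarse=16, fine=64):
    """Preview/refinement caricature of the two-step scheme in one dimension.

    Preview: n/2 uniform samples give coarse bin averages; the neighbouring
    bin pair with the largest jump marks the perceived boundary vicinity.
    Refinement: the remaining n/2 samples are spent only inside those two
    bins, and the change point is re-estimated on a finer grid there."""
    half = n // 2

    def bin_means(samples, lo, hi, k):
        sums, counts = [0.0] * k, [0] * k
        for _ in range(samples):
            x = lo + (hi - lo) * random.random()
            i = min(int((x - lo) / (hi - lo) * k), k - 1)
            sums[i] += f(x) + random.gauss(0.0, sigma)
            counts[i] += 1
        return [sums[i] / counts[i] if counts[i] else 0.0 for i in range(k)]

    means = bin_means(half, 0.0, 1.0, coarse)                      # preview
    b = max(range(coarse - 1), key=lambda i: abs(means[i + 1] - means[i]))
    lo, hi = b / coarse, (b + 2) / coarse                          # flagged cells
    fmeans = bin_means(half, lo, hi, fine)                         # refinement
    i = max(range(fine - 1), key=lambda j: abs(fmeans[j + 1] - fmeans[j]))
    return lo + (hi - lo) * (i + 1) / fine

random.seed(4)
step = lambda x: 1.0 if x >= 0.37 else -1.0        # hypothetical target
est = two_step_jump(step, 2000)
```

With the same total budget, the refinement bins are roughly coarse·fine/2 times narrower than a uniform design could afford, which is the source of the improved rate.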
To formally show that this algorithm attains the faster rates we desire we have to consider a
further technical assumption, namely that the boundary set is "cusp-free"³. This condition
is rather technical, but it is not very restrictive, and encompasses many interesting situations, including of course boundary fragments. For a more detailed explanation see [12].
Under this condition we have the following:
Theorem 2. Under the active learning scenario we have, for d ≥ 2 and functions f whose boundary is cusp-free,

        E[ ‖f̂_n − f‖² ]   ≤   C (n / log n)^{−1/(d−1+1/d)},        (6)

where C > 0.
This bound improves on (4), demonstrating that this technique performs better than the
best possible passive learning estimator. The proof of Theorem 2 is quite involved and is
presented in detail in [12]. The main idea behind the proof is to decompose the error of the
estimator for three different cases: (i) the error incurred during the preview stage in regions
?away? from the boundary; (ii) the error incurred by not detecting a piece of the boundary
(and therefore not performing the refinement step in that area); (iii) the error remaining
in the refinement region at the end of the process. By restricting the maximum depth of
the trees in the preview stage we can control the type-(i) error, ensuring that it does not
exceed the error rate in (6). Type-(ii) error corresponds to the situations when a part of the
boundary was not detected in the preview step. This can happen because of the inherent
randomness of the noise and sampling distribution, or because the boundary is somewhat
aligned with the dyadic splits. The latter can be a problem and this is why one needs to
perform d + 1 preview estimates over shifted partitions. If the boundary is cusp-free then
it is guaranteed that one of those preview estimators is going to "feel" the boundary since
it is not aligned with the corresponding partition. Finally, the type-(iii) error is very easy to
analyze, using the same techniques we used for the passive estimator.
A couple of remarks are important at this point. Instead of a two-step procedure one can
reiterate this idea, performing multiple steps (e.g., for a three-step approach replace the
refinement step with the two-step approach described above). Doing so can further improve
the performance. One can show that the expected error will decay like n^{−1/(d−1+ε)}, with ε > 0, given a sufficiently large number of steps. Therefore we can get rates arbitrarily
close to the lower bound rates in (5).
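The three exponents discussed in this section can be tabulated directly from (2)/(4), (6) and (5); for every d ≥ 2 the two-step scheme sits strictly between the best passive rate and the active minimax benchmark.

```python
def rate_exponents(d):
    """Exponents e such that the squared error behaves like n^(-e) on PC(beta, M)."""
    passive = 1.0 / d                    # eqs. (2) and (4): best passive rate
    two_step = 1.0 / (d - 1 + 1.0 / d)   # eq. (6): the two-step active scheme
    active_lb = 1.0 / (d - 1)            # eq. (5): active minimax benchmark
    return passive, two_step, active_lb

table = {d: rate_exponents(d) for d in (2, 3, 4, 10)}
```

For d = 2, for instance, the passive, two-step, and benchmark exponents are 1/2, 2/3, and 1 respectively.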
³ A cusp-free boundary cannot have the behavior you observe in the graph of |x|^{1/2} at the origin. Less "aggressive" kinks are allowed, such as in the graph of |x|.
5 Final Remarks
The results presented in this paper show that in certain scenarios active learning attains
provable gains over the classical passive approaches. Active learning is an intuitively appealing idea and may find application in many practical problems. Despite these draws,
the analysis of such active methods is quite challenging due to the loss of statistical independence in the observations (recall that now the sample locations are coupled with all the
observations made in the past). The two function classes presented are non-trivial canonical
examples illustrating under what conditions one might expect active learning to improve
rates of convergence. The algorithm presented here for actively learning members of the
piecewise constant class demonstrates the possibilities of active learning. In fact, this algorithm has already been applied in the context of field estimation using wireless sensor
networks [9]. Future work includes the further development of the ideas presented here to
the context of binary classification and active learning of the Bayes decision boundary.
References
[1] D. Cohn, Z. Ghahramani, and M. Jordan, "Active learning with statistical models," Journal of Artificial Intelligence Research, pp. 129–145, 1996.
[2] D. J. C. Mackay, "Information-based objective functions for active data selection," Neural Computation, vol. 4, pp. 698–714, 1991.
[3] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby, "Information, prediction, and query by committee," Proc. Advances in Neural Information Processing Systems, 1993.
[4] K. Sung and P. Niyogi, "Active learning for function approximation," Proc. Advances in Neural Information Processing Systems, vol. 7, 1995.
[5] G. Blanchard and D. Geman, "Hierarchical testing designs for pattern recognition," to appear in Annals of Statistics, 2005.
[6] M. V. Burnashev and K. Sh. Zigangirov, "An interval estimation problem for controlled observations," Problems in Information Transmission, vol. 10, pp. 223–231, 1974.
[7] P. Hall and I. Molchanov, "Sequential methods for design-adaptive estimation of discontinuities in regression curves and surfaces," The Annals of Statistics, vol. 31, no. 3, pp. 921–941, 2003.
[8] Alexander Korostelev, "On minimax rates of convergence in image models under sequential design," Statistics & Probability Letters, vol. 43, pp. 369–375, 1999.
[9] R. Willett, A. Martin, and R. Nowak, "Backcasting: Adaptive sampling for sensor networks," in Proc. Information Processing in Sensor Networks, 26-27 April, Berkeley, CA, USA, 2004.
[10] Charles J. Stone, "Optimal rates of convergence for nonparametric estimators," The Annals of Statistics, vol. 8, no. 6, pp. 1348–1360, 1980.
[11] A. P. Korostelev and A. B. Tsybakov, Minimax Theory of Image Reconstruction, Springer Lecture Notes in Statistics, 1993.
[12] R. Castro, R. Willett, and R. Nowak, "Fast rates in regression via active learning," Tech. Rep., University of Wisconsin, Madison, June 2005, ECE-05-3 Technical Report (available at http://homepages.cae.wisc.edu/~rcastro/ECE-05-3.pdf).
[13] Alexandre B. Tsybakov, Introduction à l'estimation non-paramétrique, Mathématiques et Applications, 41. Springer, 2004.
[14] R. Nowak, U. Mitra, and R. Willett, "Estimating inhomogeneous fields using wireless sensor networks," IEEE Journal on Selected Areas in Communication, vol. 22, no. 6, pp. 999–1006, 2004.
[15] Andrew R. Barron, "Complexity regularization with application to artificial neural networks," in Nonparametric Functional Estimation and Related Topics. 1991, pp. 561–576, Kluwer Academic Publishers.
| 2831 |@word illustrating:1 polynomial:1 achievable:1 norm:1 seems:1 bf:3 d2:1 propagate:1 arti:1 reduction:1 initial:2 series:1 fragment:10 selecting:1 contains:3 precluding:1 outperforms:1 existing:1 past:1 must:1 partition:9 happen:1 lengthen:1 pertinent:1 n0:1 half:1 selected:2 leaf:8 intelligence:1 xk:1 ith:1 coarse:2 detecting:1 math:1 location:16 along:1 constructed:3 expected:2 behavior:1 param:1 begin:1 estimating:4 matched:1 bounded:1 homepage:1 what:2 degrading:1 impractical:1 sung:1 guarantee:1 cial:1 berkeley:1 every:2 multidimensional:1 classifier:1 k2:5 demonstrates:1 control:1 unit:4 lipshitz:1 appear:1 before:2 mitra:1 limit:5 despite:1 might:1 studied:2 collect:3 challenging:1 range:1 statistically:1 practical:5 yj:3 testing:1 recursive:1 procedure:3 intersect:1 area:3 decorating:1 empirical:2 significantly:7 word:2 get:1 onto:1 cannot:4 selection:2 close:1 context:4 applying:1 equivalent:1 deterministic:2 map:1 straightforward:1 attention:1 independently:1 focused:1 resolution:2 simplicity:1 abrupt:1 estimator:24 insight:1 notion:1 coordinate:2 feel:1 annals:3 target:2 shamir:1 homogeneous:4 us:1 hypothesis:2 origin:1 element:3 satisfying:1 recognition:1 geman:1 bottom:1 solved:1 region:9 ensures:2 yk:1 substantial:1 complexity:10 asked:1 battery:1 seung:1 ultimately:1 engr:1 depend:3 incur:1 upon:1 completely:1 translated:1 cae:2 various:3 tx:1 kolmogorov:1 represented:1 derivation:1 laser:1 separated:1 fast:1 query:4 precedes:1 detected:1 artificial:1 outcome:1 refined:2 whose:5 quite:3 larger:1 otherwise:1 precludes:1 ability:1 niyogi:1 statistic:5 noisy:2 final:2 online:2 advantage:2 analytical:2 propose:1 reconstruction:1 maximal:1 aligned:3 realization:1 flexibility:5 exploiting:1 convergence:7 kink:1 requirement:1 transmission:1 produce:1 perfect:2 depending:1 illustrate:1 andrew:1 pose:1 dividing:1 predicted:1 inhomogeneous:1 discontinuous:1 cusp:4 stringent:1 decompose:1 probable:1 around:1 rdps:5 considered:1 sufficiently:1 deciding:1 
hall:1 algorithmic:1 achieves:1 vary:1 a2:1 perceived:1 estimation:14 proc:3 label:2 tool:2 rough:2 sensor:7 gaussian:1 rather:1 varying:1 broader:1 june:1 indicates:1 tech:1 contrast:1 rigorous:1 attains:2 sense:2 detect:3 zigangirov:2 entire:2 selective:1 going:3 provably:3 mitigating:1 issue:2 among:1 arg:1 classification:1 denoted:2 priori:2 exponent:1 development:1 spatial:1 special:1 mackay:1 cube:1 field:3 construct:1 equal:1 sampling:19 identical:1 broad:2 capitalize:1 nearly:2 future:1 report:2 spline:1 piecewise:9 inherent:1 few:1 randomly:1 preview:13 freedom:1 detection:3 investigate:1 highly:5 possibility:1 rdp:8 extreme:1 sh:1 yielding:1 pc:9 behind:1 edge:1 nowak:5 capable:3 partial:1 tree:7 taylor:1 divide:1 re:2 desired:1 causal:1 theoretical:3 minimal:1 earlier:1 cover:1 lattice:1 strategic:1 applicability:1 addressing:1 subset:2 uniform:6 examining:1 too:1 tishby:1 dependency:1 answer:4 scanning:2 corrupted:1 chooses:1 adaptively:1 hypercubes:2 fundamental:4 pool:1 intersecting:1 squared:2 again:3 choose:2 possibly:1 derivative:1 actively:1 aggressive:1 potential:2 distribute:1 de:1 attaining:1 includes:2 blanchard:1 satisfy:1 depends:2 reiterate:1 piece:2 performed:1 closed:1 doing:2 sup:4 analyze:1 bayes:1 capability:1 minimize:1 square:1 ni:5 variance:2 korostelev:2 yield:1 accurately:3 j6:1 ary:1 randomness:1 networking:1 strip:2 acquisition:1 pp:8 involved:1 naturally:1 associated:1 proof:5 couple:1 gain:3 sampled:1 recall:2 knowledge:2 exemplify:1 improves:2 dimensionality:1 carefully:1 focusing:1 alexandre:1 higher:1 molchanov:1 methodology:2 specify:1 improved:1 april:1 box:1 furthermore:2 lifetime:1 stage:5 just:1 working:1 cohn:1 behaved:1 usa:1 vicinity:3 regularization:1 spatially:5 illustrated:1 deal:1 game:1 during:1 illustrative:1 stone:1 pdf:1 complete:1 performs:1 passive:28 meaning:1 wise:1 image:3 ef:7 charles:1 specialized:1 garnered:1 functional:2 volume:2 kluwer:1 willett:5 measurement:2 significant:1 smoothness:1 
trivially:1 had:1 stable:2 longer:1 surface:1 add:1 inf:3 apart:1 scenario:7 certain:1 binary:1 arbitrarily:1 rep:1 fault:2 yi:7 additional:2 houston:1 care:1 somewhat:2 paradigm:2 ii:6 multiple:2 desirable:1 smooth:6 technical:4 faster:7 academic:1 bigger:1 a1:1 controlled:1 ensuring:1 prediction:1 involving:1 regression:5 essentially:3 expectation:1 kernel:1 cell:4 addition:1 interval:1 publisher:1 appropriately:1 extra:6 induced:2 member:1 spirit:2 jordan:1 integer:1 ciently:1 near:4 counting:1 noting:1 presence:1 split:3 identically:1 enough:1 iii:2 exceed:1 xj:2 easy:1 independence:1 restrict:1 reduce:2 regarding:1 cn:3 idea:6 whether:1 nement:2 burnashev:2 proceed:1 constitute:1 remark:2 adequate:1 fractal:1 dramatically:1 generally:1 detailed:1 amount:1 nonparametric:5 repeating:1 tsybakov:2 locally:2 concentrated:5 diameter:1 reduced:1 http:1 outperform:1 exist:1 canonical:2 shifted:2 estimated:2 promise:2 vol:7 key:5 demonstrating:1 achieving:2 wisc:3 imaging:1 graph:3 everywhere:1 you:1 letter:1 reasonable:1 decide:1 missed:1 draw:1 decision:3 entirely:1 bound:8 guaranteed:1 fold:1 aspect:1 min:1 extremely:1 pruned:5 performing:3 expanded:1 px:2 relatively:1 martin:1 structured:2 ball:1 describes:1 smaller:1 wi:4 appealing:2 making:1 modification:1 castro:2 intuitively:1 gathering:1 taken:1 previously:1 turn:1 committee:1 end:2 studying:1 available:2 operation:1 incurring:1 apply:1 observe:2 hierarchical:1 away:2 barron:1 ematiques:1 denotes:2 remaining:1 log2:1 madison:3 instant:1 restrictive:3 ghahramani:1 build:1 classical:8 hypercube:3 objective:1 question:6 already:2 strategy:20 usual:1 md:1 separating:2 sensible:1 topic:1 collected:2 trivial:1 provable:1 length:1 difficult:1 unfortunately:3 mostly:1 robert:1 statement:1 rise:2 design:3 twenty:2 unknown:1 shallower:1 upper:1 perform:1 observation:7 finite:2 situation:2 extended:1 looking:1 communication:1 y1:1 rebecca:1 pair:1 required:2 namely:2 discontinuity:2 address:3 able:2 usually:1 below:1 
pattern:1 regime:5 encompasses:1 pioneering:1 built:1 including:3 reliable:1 explanation:1 critical:3 suitable:1 natural:1 regularized:1 minimax:11 older:4 scheme:4 improve:2 ne:1 coupled:1 sn:10 l2:1 deepest:1 kf:5 wisconsin:3 embedded:1 loss:1 expect:3 highlight:1 freund:1 adaptivity:1 interesting:1 limitation:1 lecture:1 localized:3 ingredient:1 incurred:4 degree:2 unpruned:1 course:1 repeat:1 wireless:5 placed:1 free:4 bias:2 characterizing:2 distributed:3 benefit:1 boundary:47 dimension:3 depth:6 curve:1 commonly:1 adaptive:4 made:2 jump:1 refinement:6 approximate:1 pruning:3 active:55 reveals:1 conclude:1 xi:10 continuous:1 why:1 nature:2 learn:1 zk:1 ca:2 obtaining:1 improving:1 necessarily:1 constructing:1 domain:4 main:4 arrow:2 bounding:1 noise:3 allowed:4 dyadic:4 x1:1 fashion:2 deterministically:1 wavelet:1 theorem:6 explored:1 decay:2 a3:2 incorporating:1 consist:1 restricting:1 sequential:2 effectively:1 rui:1 kx:2 smoothly:1 depicted:1 logarithmic:1 desire:1 contained:1 springer:2 corresponds:2 radically:1 worstcase:1 assouad:1 rice:2 goal:5 sized:1 replace:1 change:4 specifically:1 except:3 uniformly:4 lemma:1 called:5 total:1 ece:2 indicating:1 select:2 formally:1 latter:2 assessed:1 alexander:1 |
Layered Dynamic Textures
Antoni B. Chan
Nuno Vasconcelos
Department of Electrical and Computer Engineering
University of California, San Diego
[email protected], [email protected]
Abstract
A dynamic texture is a video model that treats a video as a sample from
a spatio-temporal stochastic process, specifically a linear dynamical system. One problem associated with the dynamic texture is that it cannot
model video where there are multiple regions of distinct motion. In this
work, we introduce the layered dynamic texture model, which addresses
this problem. We also introduce a variant of the model, and present the
EM algorithm for learning each of the models. Finally, we demonstrate
the efficacy of the proposed model for the tasks of segmentation and synthesis of video.
1 Introduction
Traditional motion representations, based on optical flow, are inherently local and have significant difficulties when faced with aperture problems and noise. The classical solution to
this problem is to regularize the optical flow field [1, 2, 3, 4], but this introduces undesirable
smoothing across motion edges or regions where the motion is, by definition, not smooth
(e.g. vegetation in outdoors scenes). More recently, there have been various attempts to
model video as a superposition of layers subject to homogeneous motion. While layered
representations exhibited significant promise in terms of combining the advantages of regularization (use of global cues to determine local motion) with the flexibility of local representations (little undue smoothing), this potential has so far not fully materialized. One of
the main limitations is their dependence on parametric motion models, such as affine transforms, which assume a piece-wise planar world that rarely holds in practice [5, 6]. In fact,
layers are usually formulated as "cardboard" models of the world that are warped by such
transformations and then stitched to form the frames in a video stream [5]. This severely
limits the types of video that can be synthesized: while layers showed most promise as
models for scenes composed of ensembles of objects subject to homogeneous motion (e.g.
leaves blowing in the wind, a flock of birds, a picket fence, or highway traffic), very little
progress has so far been demonstrated in actually modeling such scenes.
Recently, there has been more success in modeling complex scenes as dynamic textures or,
more precisely, samples from stochastic processes defined over space and time [7, 8, 9, 10].
This work has demonstrated that modeling both the dynamics and appearance of video
as stochastic quantities leads to a much more powerful generative model for video than
that of a "cardboard" figure subject to parametric motion. In fact, the dynamic texture
model has shown a surprising ability to abstract a wide variety of complex patterns of
motion and appearance into a simple spatio-temporal model. One major current limitation
of the dynamic texture framework, however, is its inability to account for visual processes
consisting of multiple, co-occurring, dynamic textures. For example, a flock of birds flying
in front of a water fountain, highway traffic moving at different speeds, video containing
both trees in the background and people in the foreground, and so forth. In such cases,
the existing dynamic texture model is inherently incorrect, since it must represent multiple
motion fields with a single dynamic process.
In this work, we address this limitation by introducing a new generative model for video,
which we denote by the layered dynamic texture (LDT). This consists of augmenting the
dynamic texture with a discrete hidden variable that enables the assignment of different
dynamics to different regions of the video. Conditioned on the state of this hidden variable,
the video is then modeled as a simple dynamic texture. By introducing a shared dynamic
representation for all the pixels in the same region, the new model is a layered representation. When compared with traditional layered models, it replaces the process of layer
formation based on "warping of cardboard figures" with one based on sampling from the
generative model (for both dynamics and appearance) provided by the dynamic texture.
This enables a much richer video representation. Since each layer is a dynamic texture,
the model can also be seen as a multi-state dynamic texture, which is capable of assigning
different dynamics and appearance to different image regions.
We consider two models for the LDT, that differ in the way they enforce consistency of
layer dynamics. One model enforces stronger consistency but has no closed-form solution for parameter estimates (which require sampling), while the second enforces weaker
consistency but is simpler to learn. The models are applied to the segmentation and synthesis of sequences that are challenging for traditional vision representations. It is shown
that stronger consistency leads to superior performance, demonstrating the benefits of sophisticated layered representations. The paper is organized as follows. In Section 2, we
introduce the two layered dynamic texture models. In Section 3 we present the EM algorithm for learning both models from training data. Finally, in Section 4 we present an
experimental evaluation in the context of segmentation and synthesis.
2 Layered dynamic textures
We start with a brief review of dynamic textures, and then introduce the layered dynamic
texture model.
2.1 Dynamic texture
A dynamic texture [7] is a generative model for video, based on a linear dynamical system.
The basic idea is to separate the visual component and the underlying dynamics into two
processes. While the dynamics are represented as a time-evolving state process $x_t \in \mathbb{R}^n$,
the appearance of frame $y_t \in \mathbb{R}^N$ is a linear function of the current state vector, plus some
observation noise. Formally, the system is described by
$$x_t = A x_{t-1} + B v_t$$
$$y_t = C x_t + r w_t \qquad (1)$$
where $A \in \mathbb{R}^{n \times n}$ is a transition matrix, $C \in \mathbb{R}^{N \times n}$ a transformation matrix, $B v_t \sim_{iid} \mathcal{N}(0, Q)$
and $r w_t \sim_{iid} \mathcal{N}(0, r I_N)$ the state and observation noise processes, parameterized by
$B \in \mathbb{R}^{n \times n}$ and $r \in \mathbb{R}$, and the initial state $x_0 \in \mathbb{R}^n$ is a constant. One interpretation of
the dynamic texture model is that the columns of $C$ are the principal components of the
video frames, and the state vectors the PCA coefficients for each video frame. This is the
case when the model is learned with the method of [7].
Figure 1: The layered dynamic texture (left), and the approximate layered dynamic texture
(right). $y_i$ is an observed pixel over time, $x_j$ is a hidden state process, and $Z$ is the collection
of layer assignment variables $z_i$ that assign each pixel to one of the state processes.
An alternative interpretation considers a single pixel as it evolves over time. Each coordinate of the state vector xt defines a one-dimensional random trajectory in time. A pixel
is then represented as a weighted sum of random trajectories, where the weighting coefficients are contained in the corresponding row of C. This is analogous to the discrete
Fourier transform in signal processing, where a signal is represented as a weighted sum of
complex exponentials although, for the dynamic texture, the trajectories are not necessarily
orthogonal. This interpretation illustrates the ability of the dynamic texture to model the
same motion under different intensity levels (e.g. cars moving from the shade into sunlight)
by simply scaling the rows of C. Regardless of interpretation, the simple dynamic texture
model has only one state process, which restricts the efficacy of the model to video where
the motion is homogeneous.
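To make the generative process of equation (1) concrete, it can be sketched in a few lines of code. This is an illustrative simulation only, with toy dimensions and invented parameter values (the function name and constants below are not from the paper).

```python
import random

def sample_dynamic_texture(A, B, C, r, x0, T, seed=0):
    """Draw T frames from the LDS of Eq. (1):
    x_t = A x_{t-1} + B v_t,  y_t = C x_t + r w_t, with v_t, w_t ~ N(0, I)."""
    rng = random.Random(seed)
    n = len(x0)            # state dimension
    N = len(C)             # number of pixels per frame
    x = list(x0)
    frames = []
    for _ in range(T):
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        # state update: x = A x + B v
        x = [sum(A[i][j] * x[j] for j in range(n)) +
             sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        # observation: y = C x + r w
        y = [sum(C[p][j] * x[j] for j in range(n)) + r * rng.gauss(0.0, 1.0)
             for p in range(N)]
        frames.append(y)
    return frames

# toy example: 2-dimensional state, 3 "pixels", 5 frames
A = [[0.9, 0.0], [0.0, 0.5]]
B = [[0.1, 0.0], [0.0, 0.1]]
C = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
frames = sample_dynamic_texture(A, B, C, r=0.01, x0=[1.0, -1.0], T=5)
print(len(frames), len(frames[0]))  # 5 frames of 3 pixels each
```

In this view, each row of $C$ mixes the shared state trajectories into one pixel's intensity, which is why a single state process can only capture one homogeneous motion.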
2.2 Layered dynamic textures
We now introduce the layered dynamic texture (LDT), which is shown in Figure 1 (left).
The model addresses the limitations of the dynamic texture by relying on a set of state
processes $X = \{x^{(j)}\}_{j=1}^K$ to model different video dynamics. The layer assignment variable
$z_i$ assigns pixel $y_i$ to one of the state processes (layers), and conditioned on the layer
assignments, the pixels in the same layer are modeled as a dynamic texture. In addition, the
collection of layer assignments $Z = \{z_i\}_{i=1}^N$ is modeled as a Markov random field (MRF)
to ensure spatial layer consistency. The linear system equations for the layered dynamic
texture are
$$x_t^{(j)} = A^{(j)} x_{t-1}^{(j)} + B^{(j)} v_t^{(j)}, \qquad j \in \{1, \ldots, K\}$$
$$y_{i,t} = C_i^{(z_i)} x_t^{(z_i)} + r^{(z_i)} w_{i,t}, \qquad i \in \{1, \ldots, N\} \qquad (2)$$
where $C_i^{(j)} \in \mathbb{R}^{1 \times n}$ is the transformation from the hidden state to the observed pixel
domain for each pixel $y_i$ and each layer $j$, the noise parameters are $B^{(j)} \in \mathbb{R}^{n \times n}$ and
$r^{(j)} \in \mathbb{R}$, the iid noise processes are $w_{i,t} \sim_{iid} \mathcal{N}(0, 1)$ and $v_t^{(j)} \sim_{iid} \mathcal{N}(0, I_n)$, and the initial
states are drawn from $x_1^{(j)} \sim \mathcal{N}(\mu^{(j)}, S^{(j)})$. As a generative model, the layered dynamic
texture assumes that the state processes X and the layer assignments Z are independent, i.e.
layer motion is independent of layer location, and vice versa. As will be seen in Section 3,
this makes the expectation-step of the EM algorithm intractable to compute in closed-form.
To address this issue, we also consider a slightly different model.
2.3 Approximate layered dynamic texture
We now consider a different model, the approximate layered dynamic texture (ALDT),
shown in Figure 1 (right). Each pixel yi is associated with its own state process xi , and a
different dynamic texture is defined for each pixel. However, dynamic textures associated
with the same layer share the same set of dynamic parameters, which are assigned by the
layer assignment variable zi . Again, the collection of layer assignments Z is modeled as an
MRF but, unlike the first model, conditioning on the layer assignments makes all the pixels
independent. The model is described by the following linear system equations
$$x_{i,t} = A^{(z_i)} x_{i,t-1} + B^{(z_i)} v_{i,t}, \qquad i \in \{1, \ldots, N\}$$
$$y_{i,t} = C_i^{(z_i)} x_{i,t} + r^{(z_i)} w_{i,t} \qquad (3)$$
where the noise processes are $w_{i,t} \sim_{iid} \mathcal{N}(0, 1)$ and $v_{i,t} \sim_{iid} \mathcal{N}(0, I_n)$, and the initial
states are given by $x_{i,1} \sim \mathcal{N}(\mu^{(z_i)}, S^{(z_i)})$. This model can also be seen as a video extension
of the popular image MRF models [11], where class variables for each pixel form an MRF
grid and each class (e.g. pixels in the same segment) has some class-conditional distribution
(in our case a linear dynamical system).
The main difference between the two proposed models is in the enforcement of consistency
of dynamics within a layer. With the LDT, consistency of dynamics is strongly enforced by
requiring each pixel in the layer to be associated with the same state process. On the other
hand, for the ALDT, consistency within a layer is weakly enforced by allowing the pixels
to be associated with many instantiations of the state process (instantiations associated with
the same layer sharing the same dynamic parameters). This weaker dependency structure
enables a more efficient learning algorithm.
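The distinction between the two models can be made concrete with a small simulation. In this hypothetical sketch (scalar states, invented parameters, observation noise omitted), the LDT drives all pixels of a layer from one shared state trajectory, while the ALDT gives each pixel its own trajectory drawn with that layer's parameters.

```python
import random

def sample_states(a, b, T, rng):
    """One scalar state trajectory: x_t = a * x_{t-1} + b * v_t."""
    x, traj = 1.0, []
    for _ in range(T):
        x = a * x + b * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

rng = random.Random(0)
T = 4
z = [0, 0, 1]                        # layer assignment for 3 pixels
params = [(0.9, 0.1), (0.3, 0.5)]    # (a, b) per layer, invented values

# LDT: one shared trajectory per *layer*
shared = [sample_states(a, b, T, rng) for (a, b) in params]
ldt_pixels = [shared[z[i]] for i in range(3)]    # pixels in a layer co-vary

# ALDT: one trajectory per *pixel*, parameters shared within a layer
aldt_pixels = [sample_states(*params[z[i]], T, rng) for i in range(3)]

print(ldt_pixels[0] == ldt_pixels[1])    # True: same layer, same state process
print(aldt_pixels[0] == aldt_pixels[1])  # False: independent draws per pixel
```

The first comparison is what "strong consistency" means in the LDT; the second shows the weaker, parameter-only coupling of the ALDT.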
2.4 Modeling layer assignments
The MRF which determines layer assignments has the following distribution
$$p(Z) = \frac{1}{\mathcal{Z}} \prod_i \psi_i(z_i) \prod_{(i,j) \in E} \psi_{i,j}(z_i, z_j) \qquad (4)$$
where $E$ is the set of edges in the MRF grid, $\mathcal{Z}$ a normalization constant (partition function),
and $\psi_i$ and $\psi_{i,j}$ potential functions of the form
$$\psi_i(z_i) = \begin{cases} \alpha_1, & z_i = 1 \\ \;\vdots & \\ \alpha_K, & z_i = K \end{cases} \qquad
\psi_{i,j}(z_i, z_j) = \begin{cases} \gamma_1, & z_i = z_j \\ \gamma_2, & z_i \neq z_j \end{cases} \qquad (5)$$
The potential function $\psi_i$ defines a prior likelihood for each layer, while $\psi_{i,j}$ attributes
higher probability to configurations where neighboring pixels are in the same layer. While
the parameters for the potential functions could be learned for each model, we instead treat
them as constants that can be estimated from a database of manually segmented training
video.
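Equations (4) and (5) can be illustrated by scoring candidate labelings on a toy grid. This is a hypothetical sketch in the log domain with invented parameter values; the partition function is deliberately left uncomputed, since only relative scores matter here.

```python
import math

def log_potential(labels, edges, alpha, gamma_same, gamma_diff):
    """Unnormalized log p(Z) of Eq. (4)-(5): sum of log psi_i and log psi_{i,j}."""
    lp = sum(math.log(alpha[z]) for z in labels)
    for (i, j) in edges:
        lp += math.log(gamma_same if labels[i] == labels[j] else gamma_diff)
    return lp

# 2x2 grid of pixels 0..3 with 4-neighbour edges
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
alpha = [0.5, 0.5]          # uniform layer prior, K = 2
uniform = [0, 0, 0, 0]      # all pixels in one layer
checker = [0, 1, 1, 0]      # neighbours disagree on every edge

s_uniform = log_potential(uniform, edges, alpha, 0.99, 0.01)
s_checker = log_potential(checker, edges, alpha, 0.99, 0.01)
print(s_uniform > s_checker)  # True: the MRF favours spatially coherent layers
```
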
3 Parameter estimation
The parameters of the model are learned using the Expectation-Maximization (EM) algorithm [12], which iterates between estimating hidden state variables X and hidden layer
assignments Z from the current parameters, and updating the parameters given the current
hidden variable estimates. One iteration of the EM algorithm contains the following two
steps:

- E-step: $Q(\theta; \hat{\theta}) = E_{X,Z|Y;\hat{\theta}}\left[\log p(X, Y, Z; \theta)\right]$
- M-step: $\hat{\theta}^* = \arg\max_{\theta} Q(\theta; \hat{\theta})$
In the remainder of this section, we briefly describe the EM algorithm for the two proposed
models. Due to the limited space available, we refer the reader to the companion technical
report [13] for further details.
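The alternation between the two steps can be sketched as a generic loop. Everything below is schematic (placeholder E- and M-step callables, a deliberately degenerate example with no hidden variables) and only shows the control flow, not either model's actual updates.

```python
def em(params, Y, n_iters, e_step, m_step):
    """Generic EM loop: alternate expectation and maximization."""
    for _ in range(n_iters):
        stats = e_step(params, Y)    # expected sufficient statistics given Y
        params = m_step(stats, Y)    # parameters maximizing Q(theta; theta_hat)
    return params

# degenerate illustration: the "E-step" collects a sufficient statistic and the
# "M-step" re-estimates a mean, so the loop converges after one iteration
mean = em(0.0, [1.0, 2.0, 3.0], 3,
          e_step=lambda p, Y: sum(Y),
          m_step=lambda s, Y: s / len(Y))
print(mean)  # 2.0
```
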
3.1 EM for the layered dynamic texture
The E-step for the layered dynamic texture computes the conditional mean and covariance
of $x_t^{(j)}$ given the observed video $Y$. These expectations are intractable to compute in
closed-form since it is not known to which state process each of the pixels $y_i$ is assigned,
and it is therefore necessary to marginalize over all configurations of $Z$. This problem also
appears in the computation of the posterior layer assignment probability $p(z_i = j|Y)$. The
method of approximating these expectations which we currently adopt is to simply average
over draws from the posterior p(X, Z|Y ) using a Gibbs sampler. Other approximations,
e.g. variational methods or belief propagation, could be used as well. We plan to consider them in the future. Once the expectations are known, the M-step parameter updates
are analogous to those required to learn a regular linear dynamical system [15, 16], with a
minor modification in the updates of the transformation matrices $C_i^{(j)}$. See [13] for details.
3.2 EM for the approximate layered dynamic texture
The ALDT model is similar to the mixture of dynamic textures [14], a video clustering
model that treats a collection of videos as a sample from a collection of dynamic textures.
Since, for the ALDT model, each pixel is sampled from a set of one-dimensional dynamic
textures, the EM algorithm is similar to that of the mixture of dynamic textures. There
are only two differences. First, the E-step computes the posterior assignment probability
p(zi |Y ) given all the observed data, rather than conditioned on a single data point p(z i |yi ).
The posterior p(zi |Y ) can be approximated by sampling from the full posterior p(Z|Y )
using Markov-Chain Monte Carlo [11], or with other methods, such as loopy belief propagation.
Second, the transformation matrix $C_i^{(j)}$ is different for each pixel, and the E and M
steps must be modified accordingly. Once again, the details are available in [13].
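Both E-steps above approximate posterior quantities by averaging over samples of $Z$. A minimal sketch of that idea, with made-up samples standing in for Gibbs or MCMC draws: estimate the marginal $p(z_i = j | Y)$ as the fraction of samples in which pixel $i$ carries label $j$.

```python
def marginal_estimates(samples, K):
    """Estimate p(z_i = j | Y) from posterior samples of the full labeling Z."""
    n_pixels = len(samples[0])
    counts = [[0] * K for _ in range(n_pixels)]
    for Z in samples:
        for i, j in enumerate(Z):
            counts[i][j] += 1
    return [[c / len(samples) for c in row] for row in counts]

# four made-up posterior draws of Z for 3 pixels and K = 2 layers
samples = [[0, 1, 1], [0, 1, 0], [0, 1, 1], [0, 0, 1]]
post = marginal_estimates(samples, K=2)
print(post[0])  # [1.0, 0.0]: pixel 0 is in layer 0 in every sample
```
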
4 Experiments
In this section, we show the efficacy of the proposed model for segmentation and synthesis
of several videos with multiple regions of distinct motion. Figure 2 shows the three video
sequences used in testing. The first (top) is a composite of three distinct video textures
of water, smoke, and fire. The second (middle) is of laundry spinning in a dryer. The
laundry in the bottom left of the video is spinning in place in a circular motion, and the
laundry around the outside is spinning faster. The final video (bottom) is of a highway [17]
where the traffic in each lane is traveling at a different speed. The first, second and fourth
lanes (from left to right) move faster than the third and fifth. All three videos have multiple
regions of motion and are therefore properly modeled by the models proposed in this paper,
but not by a regular dynamic texture.
Four variations of the video models were fit to each of the three videos. The four models were the layered dynamic texture and the approximate layered dynamic texture models
(LDT and ALDT), and those two models without the MRF layer assignment (LDT-iid and
ALDT-iid). In the latter two cases, the layer assignments $z_i$ are distributed as iid multinomials. In all the experiments, the dimension of the state space was $n = 10$. The MRF grid
was based on the eight-neighbor system (with cliques of size 2), and the parameters of the
potential functions were $\gamma_1 = 0.99$, $\gamma_2 = 0.01$, and $\alpha_j = 1/K$. The expectations required
by the EM algorithm were approximated using Gibbs sampling for the LDT and LDT-iid
models and MCMC for the ALDT model. We first present segmentation results, to show
that the models can effectively separate layers with different dynamics, and then discuss
results relative to video synthesis from the learned models.
4.1 Segmentation
The videos were segmented by assigning each of the pixels to the most probable layer
conditioned on the observed video, i.e.
$$z_i^* = \arg\max_j \, p(z_i = j \,|\, Y) \qquad (6)$$
Another possibility would be to assign the pixels by maximizing the posterior of all the pixels p(Z|Y ). While this maximizes the true posterior, in practice we obtained similar results
with the two methods. The former method was chosen because the individual posterior
distributions are already computed during the E-step of EM.
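The labeling rule of equation (6) then reduces to an argmax over the estimated marginals. A toy sketch, with the posterior values invented for the example:

```python
def segment(posteriors):
    """Assign each pixel to its most probable layer, as in Eq. (6)."""
    return [max(range(len(p)), key=lambda j: p[j]) for p in posteriors]

posteriors = [[0.9, 0.1], [0.2, 0.8], [0.55, 0.45]]  # p(z_i = j | Y) per pixel
print(segment(posteriors))  # [0, 1, 0]
```
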
The columns of Figure 3 show the segmentation results obtained for the four models:
LDT and LDT-iid in columns (a) and (b), and ALDT and ALDT-iid in columns (c) and (d).
The segmented video is also available at [18]. From the segmentations produced by the iid
models, it can be concluded that the composite and laundry videos can be reasonably well
segmented without the MRF prior. This confirms the intuition that the various video regions
contain very distinct dynamics, which can only be modeled with separate state processes.
Otherwise, the pixels should be either randomly assigned among the various layers, or uniformly assigned to one of them. The segmentations of the traffic video using the iid models
are poor. While the dynamics are different, the differences are significantly more subtle,
and segmentation requires stronger enforcement of layer consistency. In general, the segmentations using LDT-iid are better than those of the ALDT-iid, due to the weaker form
of layer consistency imposed by the ALDT model. While this deficiency is offset by the introduction of the MRF prior, the stronger consistency enforced by the LDT model always
results in better segmentations. This illustrates the need for the design of sophisticated
layered representations when the goal is to model video with subtle inter-layer variations.
As expected, the introduction of the MRF prior improves the segmentations produced by
both models. For example, in the composite sequence all erroneous segments in the water
region are removed, and in the traffic sequence, most of the speckled segmentation also
disappears.
In terms of the overall segmentation quality, both LDT and ALDT are able to segment
the composite video perfectly. The segmentation of the laundry video by both models
is plausible, as the laundry tumbling around the edge of the dryer moves faster than that
spinning in place. The two models also produce reasonable segmentations of the traffic
video, with the segments roughly corresponding to the different lanes of traffic. Much of
the errors correspond to regions that either contain intermittent motion (e.g. the region
between the lanes) or almost no motion (e.g. truck in the upper-right corner and flat-bed
truck in the third lane). Some of these errors could be eliminated by filtering the video
before segmentation, but we have attempted no pre or post-processing. Finally, we note
that the laundry and traffic videos are not trivial to segment with standard computer vision
techniques, namely methods based on optical flow. This is particularly true in the case of
the traffic video where the abundance of straight lines and flat regions makes computing
the correct optical flow difficult due to the aperture problem.
4.2 Synthesis
The layered dynamic texture is a generative model, and hence a video can be synthesized
by drawing a sample from the learned model. A synthesized composite video using the
LDT, ALDT, and the normal dynamic texture can be found at [18]. When modeling a
video with multiple motions, the regular dynamic texture will average different dynamics.
Figure 2: Frames from the test video sequences: (top) composite of water, smoke, and fire
video textures; (middle) spinning laundry in a dryer; and (bottom) highway traffic with
lanes traveling at different speeds.
(a)
(b)
(c)
(d)
Figure 3: Segmentation results for each of the test videos using: (a) the layered dynamic
texture, and (b) the layered dynamic texture without MRF; (c) the approximate layered
dynamic texture, and (d) the approximate LDT without MRF.
This is noticeable in the synthesized video, where the fire region does not flicker at the same
speed as in the original video. Furthermore, the motions in different regions are coupled,
e.g. when the fire begins to flicker faster, the water region ceases to move smoothly. In
contrast, the video synthesized from the layered dynamic texture is more realistic, as the
fire region flickers at the correct speed, and the different regions follow their own motion
patterns. The video synthesized from the ALDT appears noisy because the pixels evolve
from different instantiations of the state process. Once again this illustrates the need for
sophisticated layered models.
References
[1] B. K. P. Horn. Robot Vision. McGraw-Hill Book Company, New York, 1986.
[2] B. Horn and B. Schunk. Determining optical flow. Artificial Intelligence, vol. 17, 1981.
[3] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo
vision. Proc. DARPA Image Understanding Workshop, 1981.
[4] J. Barron, D. Fleet, and S. Beauchemin. Performance of optical flow techniques. International
Journal of Computer Vision, vol. 12, 1994.
[5] J. Wang and E. Adelson. Representing moving images with layers. IEEE Trans. on Image
Processing, vol. 3, September 1994.
[6] B. Frey and N. Jojic. Estimating mixture models of images and inferring spatial transformations
using the EM algorithm. In IEEE Conference on Computer Vision and Pattern Recognition,
1999.
[7] G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto. Dynamic textures. International Journal of
Computer Vision, vol. 2, pp. 91-109, 2003.
[8] G. Doretto, D. Cremers, P. Favaro, and S. Soatto. Dynamic texture segmentation. In IEEE
International Conference on Computer Vision, vol. 2, pp. 1236-42, 2003.
[9] P. Saisan, G. Doretto, Y. Wu, and S. Soatto. Dynamic texture recognition. In IEEE Conference
on Computer Vision and Pattern Recognition, Proceedings, vol. 2, pp. 58-63, 2001.
[10] A. B. Chan and N. Vasconcelos. Probabilistic kernels for the classification of auto-regressive
visual processes. In IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp.
846-51, 2005.
[11] S. Geman and D. Geman. Stochastic relaxation, Gibbs distribution, and the Bayesian restoration
of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6(6), pp. 721-41, 1984.
[12] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via
the EM algorithm. Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[13] A. B. Chan and N. Vasconcelos. The EM algorithm for layered dynamic textures. Technical
Report SVCL-TR-2005-03, June 2005. http://www.svcl.ucsd.edu/.
[14] A. B. Chan and N. Vasconcelos. Mixtures of dynamic textures. In IEEE International Conference on Computer Vision, vol. 1, pp. 641-47, 2005.
[15] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using
the EM algorithm. Journal of Time Series Analysis, vol. 3(4), pp. 253-64, 1982.
[16] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, vol. 11, pp. 305-45, 1999.
[17] http://www.wsdot.wa.gov
[18] http://www.svcl.ucsd.edu/~abc/nips05/
scaling:1 layer:40 replaces:1 truck:2 precisely:1 deficiency:1 scene:4 flat:2 lane:6 abchan:1 speed:5 fourier:1 optical:6 department:1 materialized:1 poor:1 across:1 slightly:1 em:15 wi:4 evolves:1 modification:1 dryer:3 equation:2 discus:1 enforcement:2 available:3 eight:1 barron:1 enforce:1 alternative:1 original:1 assumes:1 clustering:1 ensure:1 top:2 unifying:1 ghahramani:1 approximating:1 classical:1 society:1 covari:1 warping:1 move:3 already:1 quantity:1 parametric:2 dependence:1 traditional:3 september:1 separate:3 considers:1 trivial:1 water:5 spinning:5 modeled:6 difficult:1 ized:1 design:1 allowing:1 upper:1 observation:2 markov:2 frame:5 ucsd:4 intermittent:1 intensity:1 namely:1 required:2 california:1 learned:5 trans:1 address:4 able:1 dynamical:4 usually:1 pattern:6 royal:1 video:56 belief:2 difficulty:1 representing:1 brief:1 disappears:1 picket:1 coupled:1 auto:1 faced:1 review:2 prior:4 understanding:1 evolve:1 determining:1 relative:1 shumway:1 fully:1 limitation:4 filtering:1 affine:1 rubin:1 share:1 row:2 weaker:3 wide:1 neighbor:1 fifth:1 benefit:1 distributed:1 dimension:1 world:2 transition:1 computes:2 collection:5 san:1 far:2 transaction:1 approximate:7 mcgraw:1 aperture:2 clique:1 global:1 instantiation:3 spatio:2 xi:5 propa:1 iterative:1 kanade:1 learn:2 reasonably:1 inherently:2 complex:3 necessarily:1 domain:1 main:2 noise:6 x1:1 inferring:1 exponential:1 weighting:1 third:2 abundance:1 companion:1 antoni:1 shade:1 xt:6 erroneous:1 offset:1 cease:1 intractable:2 workshop:1 effectively:1 ci:5 texture:61 conditioned:4 occurring:1 illustrates:3 smoothly:1 simply:2 appearance:5 visual:3 contained:1 determines:1 abc:1 conditional:2 goal:1 formulated:1 shared:1 gation:1 specifically:1 uniformly:1 sampler:1 principal:1 ece:1 experimental:1 attempted:1 rarely:1 formally:1 people:1 latter:1 inability:1 mcmc:1 |
2,018 | 2,833 | Walk-Sum Interpretation and Analysis of
Gaussian Belief Propagation
Jason K. Johnson, Dmitry M. Malioutov and Alan S. Willsky
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
{jasonj,dmm,willsky}@mit.edu
Abstract
This paper presents a new framework based on walks in a graph for analysis and inference in Gaussian graphical models. The key idea is to decompose correlations between variables as a sum over all walks between
those variables in the graph. The weight of each walk is given by a
product of edgewise partial correlations. We provide a walk-sum interpretation of Gaussian belief propagation in trees and of the approximate
method of loopy belief propagation in graphs with cycles. This perspective leads to a better understanding of Gaussian belief propagation and of
its convergence in loopy graphs.
1  Introduction
We consider multivariate Gaussian distributions defined on graphs. The nodes of the graph
denote random variables and the edges indicate statistical dependencies between variables.
The family of all Gauss-Markov models defined on a graph is naturally represented in the
information form of the Gaussian density which is parameterized by the inverse covariance matrix, i.e., the information matrix. This information matrix is sparse, reflecting the
structure of the defining graph such that only the diagonal elements and those off-diagonal
elements corresponding to edges of the graph are non-zero.
Given such a model, we consider the problem of computing the mean and variance of
each variable, thereby determining the marginal densities as well as the mode. In principle, these can be obtained by inverting the information matrix, but the complexity of this
computation is cubic in the number of variables. More efficient recursive calculations are
possible in graphs with very sparse structure, e.g., in chains, trees and in graphs with
"thin" junction trees. For these models, belief propagation (BP) or its junction tree variants efficiently compute the marginals [1]. In more complex graphs, even this approach
can become computationally prohibitive. Then, approximate methods such as loopy belief
propagation (LBP) provide a tractable alternative to exact inference [1, 2, 3, 4].
We develop a "walk-sum" formulation for computation of means, variances and correlations that holds in a wide class of Gauss-Markov models which we call walk-summable. In
particular, this leads to a new interpretation of BP in trees and of LBP in general. Based on
this interpretation we are able to extend the previously known sufficient conditions for
convergence of LBP to the class of walk-summable models (which includes all of the following: trees, attractive models, and pairwise-normalizable models). Our sufficient condition
is tighter than that given in [3] as the class of diagonally-dominant models is a strict subset
of the class of pairwise-normalizable models. Our results also explain why no examples
were found in [3] where LBP did not converge. The reason is that they presume a pairwise-normalizable model. We also explain why, in walk-summable models, LBP converges to
the correct means but not to the correct variances (proving "walk-sum" analogs of results in
[3]). In general, walk-summability is not necessary for LBP convergence. Hence, we also
provide a tighter (essentially necessary) condition for convergence of LBP variances based
on walk-summability of the LBP computation tree. This provides deeper insight into why
LBP can fail to converge (because the LBP computation tree is not always well-posed),
which suggests connections to [5]. This paper presents the key ideas and outlines proofs of
the main results. A more detailed presentation will appear in a technical report [6].
2  Preliminaries
A Gauss-Markov model (GMM) is defined by a graph G = (V, E) with edge set E,
i.e., some set of two-element subsets of V, and a collection of random variables x =
(x_i, i ∈ V) with probability density given in information form¹:

    p(x) ∝ exp{ −(1/2) x^T J x + h^T x }                                    (1)
where J is a symmetric positive definite (J ≻ 0) matrix which is sparse so as to respect the
graph G: if {i, j} ∉ E then J_{i,j} = 0. We call J the information matrix and h the potential
vector. Let N(i) = {j | {i, j} ∈ E} denote the neighbors of i in the graph. The mean
μ ≡ E{x} and covariance P ≡ E{(x − μ)(x − μ)^T} are given by:

    μ = J⁻¹ h   and   P = J⁻¹                                               (2)
?i,j ? p
cov(xi ; xj |xV \{i,j} )
var(xi |xV \{i,j} )var(xj |xV \{i,j} )
Ji,j
Ji,i Jj,j
= ?p
(3)
Thus, Jij = 0 if and only if xi and xj are independent given the other variables xV \{i,j} .
We say that this model is attractive if all partial correlations are non-negative. It is pairwisenormalizable if there exists a diagonal matrix D 0 and a collection of non-negative
definite matrices {Je 0, e ? E}, where (Je )i,j is zero unless i, j ? e, such that:
X
J =D+
Je
(4)
e?E
P
It is diagonally-dominant if for all i ? V : j6=i |Ji,j | < Ji,i . The class of diagonallydominant models is a strict subset of the class of pairwise-normalizable models [6].
Gaussian Elimination and Belief Propagation Integrating (1) over all possible values
of xi reduces to Gaussian elimination (GE) in the information form (see also [7]), i.e.,
    p(x_{\i}) ∝ ∫ p(x_{\i}, x_i) dx_i ∝ exp{ −(1/2) x_{\i}^T Ĵ_{\i} x_{\i} + ĥ_{\i}^T x_{\i} }      (5)

where \i ≡ V \ {i}, i.e. all variables except i, and

    Ĵ_{\i} = J_{\i,\i} − J_{\i,i} J_{i,i}⁻¹ J_{i,\i}   and   ĥ_{\i} = h_{\i} − J_{\i,i} J_{i,i}⁻¹ h_i      (6)
¹The work also applies to p(x|y), i.e. where some variables y are observed. However, the observations y are fixed, and we redefine p(x) ≜ p(x|y) (conditioning on y is implicit throughout). With
local observations p(x|y) ∝ p(x) Π_i p(y_i|x_i), conditioning does not change the graph structure.
Figure 1: (a) Graph of a GMM with nodes {1, 2, 3, 4} and with edge weights (partial correlations) as shown. In (b) and (c) we illustrate the first three levels of the LBP computation
tree rooted at nodes 1 and 2. After 3 iterations of LBP in (a), the marginals at nodes 1 and
2 are identical to the marginals at the root of (b) and (c) respectively.
In trees, the marginal of any given node can be efficiently computed by sequentially eliminating leaves of the tree until just that node remains. BP may be seen as a message-passing
form of GE in which a message passed from node i to node j ∈ N(i) captures the effect of
eliminating the subtree rooted at i. Thus, by a two-pass procedure, BP efficiently computes
the marginals at all nodes of the tree. The equations for LBP are identical except that messages are updated iteratively and in parallel. There are two messages per edge, one for each
ordered pair (i, j) ∈ E. We specify each message in information form with parameters
Δh_{i→j}^{(n)}, ΔJ_{i→j}^{(n)} (initialized to zero for n = 0). These are iteratively updated as follows.
For each (i, j) ∈ E, messages from N(i) \ j are fused at node i:

    ĥ_{i\j}^{(n)} = h_i + Σ_{k∈N(i)\j} Δh_{k→i}^{(n)}   and   Ĵ_{i\j}^{(n)} = J_{i,i} + Σ_{k∈N(i)\j} ΔJ_{k→i}^{(n)}      (7)
This fused information at node i is predicted to node j:
    Δh_{i→j}^{(n+1)} = −J_{j,i} (Ĵ_{i\j}^{(n)})⁻¹ ĥ_{i\j}^{(n)}   and   ΔJ_{i→j}^{(n+1)} = −J_{j,i} (Ĵ_{i\j}^{(n)})⁻¹ J_{i,j}      (8)
After n iterations, the marginal of node i is obtained by fusing all incoming messages:
    ĥ_i^{(n)} = h_i + Σ_{k∈N(i)} Δh_{k→i}^{(n)}   and   Ĵ_i^{(n)} = J_{i,i} + Σ_{k∈N(i)} ΔJ_{k→i}^{(n)}      (9)

The mean and variance are given by (Ĵ_i^{(n)})⁻¹ ĥ_i^{(n)} and (Ĵ_i^{(n)})⁻¹.
In trees, this is the
marginal at node i conditioned on zero boundary conditions at nodes (n + 1) steps away
and LBP converges to the correct marginals after a finite number of steps equal to the diameter of the tree. In graphs with cycles, LBP may not converge and only yields approximate
marginals when it does. A useful fact about LBP is the following [2, 3, 5]: the marginal
computed at node i after n iterations is identical to the marginal at the root of the n-step
computation tree rooted at node i. This tree is obtained by "unwinding" the loopy graph
for n steps (see Fig. 1). Note that each node of the graph may be replicated many times
in the computation tree. Also, neighbors of a node in the computation tree correspond exactly with neighbors of the associated node in the original graph (except at the last level of
the tree where some neighbors are missing). The corresponding J matrix defined on the
computation tree has the same node and edge values as in the original GMM.
3  Walk-Summable Gauss-Markov Models
In this section we present the walk-sum formulation of inference in GMMs. Let %(A)
denote the spectral radius of a symmetric matrix A, defined to be the maximum of the
absolute values of the eigenvalues of A. The geometric series (I + A + A² + ...) converges
if and only if %(A) < 1. If it converges, it converges to (I − A)⁻¹. Now, consider a GMM
with information matrix J. Without loss of generality, let J be normalized (by rescaling
variables) to have J_{i,i} = 1 for all i. Then, ρ_{i,j} = −J_{i,j} and the (zero-diagonal) matrix of
partial correlations is given by R = I − J. If %(R) < 1, then we have a geometric series
for the covariance matrix:
    Σ_{l=0}^{∞} R^l = (I − R)⁻¹ = J⁻¹ = P                                   (10)
Let R̄ = (|r_{ij}|) denote the matrix of element-wise absolute values. We say that the model
is walk-summable if %(R̄) < 1. Walk-summability implies %(R) < 1 and J ≻ 0.
Example 1. Consider a 5-node cycle with normalized information matrix J, which has
all partial correlations on the edges set to ρ. If ρ = −.45, then the model is valid (i.e.
positive definite) with minimum eigenvalue λ_min(J) ≈ .2719 > 0, and walk-summable
with %(R̄) = .9 < 1. However, when ρ = −.55, then the model is still valid with
λ_min(J) ≈ .1101 > 0, but no longer walk-summable with %(R̄) = 1.1 > 1.
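The numbers in Example 1 are easy to reproduce. Below is an illustrative sketch (the `cycle_model` helper is ours, not from the paper) checking validity, walk-summability, and the geometric series (10) numerically.

```python
import numpy as np

def cycle_model(rho, n=5):
    """Normalized information matrix J = I - R for an n-cycle whose partial correlations all equal rho."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    R = rho * A
    return np.eye(n) - R, R

def is_walk_summable(R):
    Rbar = np.abs(R)  # element-wise absolute values
    return np.max(np.abs(np.linalg.eigvalsh(Rbar))) < 1

J, R = cycle_model(-0.45)
assert np.linalg.eigvalsh(J).min() > 0 and is_walk_summable(R)       # valid and walk-summable
J, R = cycle_model(-0.55)
assert np.linalg.eigvalsh(J).min() > 0 and not is_walk_summable(R)   # valid but not walk-summable

# When walk-summable, the geometric series (10) recovers the covariance:
J, R = cycle_model(-0.45)
P = sum(np.linalg.matrix_power(R, l) for l in range(200))
assert np.allclose(P, np.linalg.inv(J), atol=1e-6)
```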
Walk-summability allows us to interpret (10) as computing walk-sums in the graph. Recall
that the matrix R reflects graph structure: ρ_{i,j} = 0 if {i, j} ∉ E. These act as weights
on the edges of the graph. A walk w = (w_0, w_1, ..., w_l) is a sequence of nodes w_i ∈ V
connected by edges {w_i, w_{i+1}} ∈ E where l is the length of the walk. The weight φ(w) of
walk w is the product of edge weights along the walk:

    φ(w) = Π_{s=1}^{l} ρ_{w_{s−1},w_s}                                      (11)
At each node i ∈ V, we also define a zero-length walk w = (i) for which φ(w) = 1.

Walk-Sums. Given a set of walks W, we define the walk-sum over W by

    φ(W) = Σ_{w∈W} φ(w)                                                     (12)
which is well-defined (i.e., independent of summation order) because %(R̄) < 1 implies
absolute convergence. Let W^l_{i→j} denote the set of l-length walks from i to j and let
W_{i→j} = ∪_{l=0}^{∞} W^l_{i→j}. The relation between walks and the geometric series (10) is that
the entries of R^l correspond to walk-sums over l-length walks from i to j in the graph, i.e.,
(R^l)_{i,j} = φ(W^l_{i→j}). Hence,

    P_{i,j} = Σ_{l=0}^{∞} (R^l)_{i,j} = Σ_l φ(W^l_{i→j}) = φ(∪_l W^l_{i→j}) = φ(W_{i→j})      (13)
In particular, the variance σ_i² ≡ P_{i,i} of variable i is the walk-sum taken over the set W_{i→i}
of self-return walks that begin and end at i (defined so that (i) ∈ W_{i→i}). The means can
be computed as reweighted walk-sums, i.e., where each walk is scaled by the potential at
the start of the walk: φ(w; h) = h_{w_0} φ(w), and φ(W; h) = Σ_{w∈W} φ(w; h). Then,

    μ_i = Σ_{j∈V} P_{i,j} h_j = Σ_j φ(W_{j→i}) h_j = φ(W_{*→i}; h)          (14)

where W_{*→i} ≡ ∪_{j∈V} W_{j→i} is the set of all walks which end at node i.
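Equation (14) is a walk-based reading of μ = J⁻¹h = Σ_{l≥0} R^l h, which suggests a simple numerical check. The sketch below is our illustration, assuming a randomly generated walk-summable model (not an example from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Build a random normalized, walk-summable model: scale R so that the spectral
# radius of its element-wise absolute value is 0.9 < 1.
R = rng.standard_normal((n, n))
R = (R + R.T) / 2.0
np.fill_diagonal(R, 0.0)
R *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(np.abs(R))))
J = np.eye(n) - R
h = rng.standard_normal(n)

# Accumulate the walk-sums of eq. (14): the l-th term R^l h sums the
# (h-reweighted) walks of length l ending at each node.
mu = np.zeros(n)
term = h.copy()
for _ in range(300):
    mu += term
    term = R @ term  # extend every walk by one step
assert np.allclose(mu, np.linalg.solve(J, h), atol=1e-8)
```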
We have found that a wide class of GMMs are walk-summable:
Proposition 1 (Walk-Summable GMMs) All of the following classes of GMMs are walk-summable:² (i) attractive models, (ii) trees and (iii) pairwise-normalizable³ models.

²That is, if we take a valid model (with J ≻ 0) in these classes then it automatically has %(R̄) < 1.
³In [6], we also show that walk-summability is actually equivalent to pairwise-normalizability.
Proof Outline. (i) R = R̄ and J = I − R̄ ≻ 0 implies λ_max(R̄) < 1. Because R̄ has
non-negative elements, %(R̄) = λ_max(R̄) < 1. In (ii) & (iii), negating any ρ_{ij}, it still
holds that J = I − R ≻ 0: (ii) negating ρ_{ij} doesn't affect the eigenvalues of J (remove
edge {i, j} and, in each eigenvector, negate all entries in one subtree); (iii) negating ρ_{ij}
preserves J_{{i,j}} ⪰ 0 in (4) so J ≻ 0. Thus, making all ρ_{ij} > 0, we find I − R̄ ≻ 0 and
R̄ ≺ I. Similarly, making all ρ_{ij} < 0, −R̄ ≺ I. Therefore, %(R̄) < 1.
4  Recursive Walk-Sum Calculations on Trees
In this section we derive a recursive algorithm which accrues the walk-sums (over infinite
sets of walks) necessary for exact inference on trees and relate this to BP. Walk-summability
guarantees correctness of this algorithm which reorders walks in a non-trivial way.
We start with a chain of N nodes: its graph G has nodes V = {1, . . . , N } and edges
E = {e_1, ..., e_{N−1}} where e_i = {i, i + 1}. The variance at node i is σ_i² = φ(W_{i→i}). The
set W_{i→i} can be partitioned according to the number of times that walks return to node
i: W_{i→i} = ∪_{r=0}^{∞} W^{(r)}_{i→i} where W^{(r)}_{i→i} is the set of all self-return walks which return to i
exactly r times. In particular, W^{(0)}_{i→i} = {(i)} for which φ(W^{(0)}_{i→i}) = 1. A walk which
starts at node i and returns r times is a concatenation of r single-revisit self-return walks,
so φ(W^{(r)}_{i→i}) = φ(W^{(1)}_{i→i})^r. This means:
    φ(W_{i→i}) = φ(∪_{r=0}^{∞} W^{(r)}_{i→i}) = Σ_{r=0}^{∞} φ(W^{(r)}_{i→i}) = Σ_{r=0}^{∞} φ(W^{(1)}_{i→i})^r = 1 / (1 − φ(W^{(1)}_{i→i}))      (15)
This geometric series converges since the model is walk-summable. Hence, calculating the
single-revisit self-return walk-sum φ(W^{(1)}_{i→i}) determines the variance σ_i². The single-revisit
walks at node i consist of walks in the left subchain, and walks in the right subchain. Let
W_{i→i\j} be the set of self-return walks of i which never visit j, so e.g. all w ∈ W_{i→i\i+1}
are contained in the subgraph {1, . . . , i}. With this notation:

    φ(W^{(1)}_{i→i}) = φ(W^{(1)}_{i→i\i+1}) + φ(W^{(1)}_{i→i\i−1})          (16)

The left single-revisit self-return walk-sums φ(W^{(1)}_{i→i\i+1}) can be computed recursively
starting from node 1. At node 1, φ(W^{(1)}_{1→1\2}) = 0 and φ(W_{1→1\2}) = 1. A single-revisit
self-return walk from node i in the left subchain consists of a step to node i − 1, then some
number of self-return walks in the subgraph {1, . . . , i − 1}, and a step from i − 1 back to i:

    φ(W^{(1)}_{i→i\i+1}) = ρ²_{i,i−1} φ(W_{i−1→i−1\i}) = ρ²_{i,i−1} / (1 − φ(W^{(1)}_{i−1→i−1\i}))      (17)
Thus single-revisit (and multiple revisit) walk-sums in the left subchain of every node i can
be calculated in one forward pass through the chain. The same can be done for the right
subchain walk-sums at every node i, by starting at node N , and going backwards. Using
equations (15) and (16) these quantities suffice to calculate the variances at all nodes of the
chain. A similar forwards-backwards procedure computes the means as reweighted walk-sums over the left and right single-visit walks for node i, which start at an arbitrary node
(in the left or right subchain) and end at i, never visiting i before that [6]. In fact, these
recursive walk-sum calculations map exactly to operations in BP, e.g., in a normalized
chain ΔJ_{i−1→i} = −φ(W^{(1)}_{i→i\i+1}) and Δh_{i−1→i} = φ(W^{(1)}_{*→i\i+1}; h). The same strategy
applies for trees: both single-revisit and single-visit walks at node i can be partitioned
according to which subtree (rooted at a neighbor j ∈ N(i) of i) the walk lives in. This
leads to a two-pass walk-sum calculation on trees (from the leaves to the root, and back)
to calculate means and variances at all nodes.
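For a concrete chain, the forward-backward recursion of (15)-(17) can be sketched as follows. This is our illustration; the function and the test values are hypothetical, not taken from the paper.

```python
import numpy as np

def chain_variances(rho):
    """Variances on a chain via the walk-sum recursions (15)-(17).
    rho[i] is the partial correlation on edge {i, i+1}; assumes walk-summability."""
    N = len(rho) + 1
    left = np.zeros(N)   # left[i]  = phi(W^(1)_{i->i \ i+1}): single-revisit sums in {0..i}
    right = np.zeros(N)  # right[i] = phi(W^(1)_{i->i \ i-1}): single-revisit sums in {i..N-1}
    for i in range(1, N):            # forward pass, eq. (17)
        left[i] = rho[i - 1] ** 2 / (1.0 - left[i - 1])
    for i in range(N - 2, -1, -1):   # backward pass, mirror of eq. (17)
        right[i] = rho[i] ** 2 / (1.0 - right[i + 1])
    return 1.0 / (1.0 - (left + right))  # combine via eqs. (16) and (15)

rho = np.array([0.3, -0.2, 0.4])
J = np.eye(4)
for i, r in enumerate(rho):
    J[i, i + 1] = J[i + 1, i] = -r  # rho_{i,j} = -J_{i,j} in a normalized model
assert np.allclose(chain_variances(rho), np.diag(np.linalg.inv(J)))
```

The two passes cost O(N), versus cubic cost for inverting J directly.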
5  Walk-sum Analysis of Loopy Belief Propagation
First, we analyze LBP in the case that the model is walk-summable and show that LBP
converges and includes all the walks for the means, but only a subset of the walks for the
variances. Then, we consider the case of non-walksummable models and relate convergence of the LBP variances to walk-summability of the computation tree.
5.1  LBP in walk-summable models
To compute means and variances in a walk-summable model, we need to calculate walk-sums for certain sets of walks in the graph G. Running LBP in G is equivalent to exact
inference in the computation tree for G, and hence calculating walk-sums for certain walks
in the computation tree. In the computation tree rooted at node i, walks ending at the root
have a one-to-one correspondence with walks ending at node i in G. Hence, LBP captures
all of the walks necessary to calculate the means. For variances, the walks captured by
LBP have to start and end at the root in the computation tree. However, some of the selfreturn walks in G translate to walks in the computation tree that end at the root but start
at a replica of the root, rather than at the root itself. These walks are not captured by the
LBP variances. For example, in Fig. 1(a), the walk (1, 2, 3, 1) is a self-return walk in the
original graph G but is not a self-return walk in the computation tree shown in Fig. 1(b).
LBP variances capture only those self-return walks of the original graph G which also
are self-return walks in the computation tree ? e.g., the walk (1, 3, 2, 3, 4, 3, 1) is a selfreturn walk in both Figs. 1(a) and (b). We call these backtracking walks. These simple
observations lead to our main result:
Proposition 2 (Convergence of LBP for walk-summable GMMs) If the model is walk-summable, then LBP converges: the means converge to the true means and the LBP variances converge to walk-sums over just the backtracking self-return walks at each node.
Proof Outline. All backtracking walks have positive weights, since each edge is traversed
an even number of times. For a walk-summable model, LBP variances are walk-sums
over the backtracking walks and are therefore monotonically increasing with the iterations.
They also are bounded above by the absolute self-return walk-sums (diagonal elements of
the series Σ_l R̄^l) and hence converge. For the means: the series Σ_{l=0}^{∞} R^l h converges absolutely
since |R^l h| ≤ R̄^l |h|, and the series Σ_l R̄^l |h| is a linear combination of terms of the absolutely convergent Σ_l R̄^l. The LBP means are a rearrangement of the absolutely
convergent series Σ_{l=0}^{∞} R^l h, so they converge to the same values.
As a corollary, LBP converges for all of the model classes listed in Proposition 1. Also, in
attractive models, the LBP variances are less than or equal to the true variances. Correctness
of the means was also shown in [3] for pairwise-normalizable models.⁴ They also show that
LBP variances omit some terms needed for the correct variances. These terms correspond
to correlations between the root and its replicas in the computation tree. In our framework,
each such correlation is a walk-sum over the subset of non-backtracking self-return walks
in G which, in the computation tree, begin at a particular replica of the root.
Example 2. Consider the graph in Fig. 1(a). For ρ = .39, the model is walk-summable with
%(R̄) ≈ .9990. For ρ = .395 and ρ = .4, the model is still valid but is not walk-summable,
with %(R̄) ≈ 1.0118 and 1.0246 respectively. In Fig. 2(a) we show LBP variances for
node 1 (the other nodes are similar) vs. the iteration number. As ρ increases, first the
model is walk-summable and LBP converges, then for a small interval the model is not
walk-summable but LBP still converges,⁵ and for larger ρ LBP does not converge. Also,
⁴However, they only prove convergence for the subset of diagonally dominant models.
⁵Hence, walk-summability is sufficient but not necessary for convergence of LBP.
Figure 2: (a) LBP variances vs. iteration. (b) %(R_n) vs. iteration.
for ρ = .4, we note that %(R) = .8 < 1 and the series Σ_l R^l converges (but Σ_l R̄^l does
not) and LBP does not converge. Hence, %(R) < 1 is not sufficient for LBP convergence,
showing the importance of the stricter walk-summability condition %(R̄) < 1.
5.2  LBP in non-walksummable models
We extend our analysis to develop a tighter condition for convergence of LBP variances
based on walk-summability of the computation tree (rather than walk-summability on G).⁶
For trees, walk-summability and validity are equivalent, i.e. J ≻ 0 ⇔ %(R̄) < 1, hence
our condition is equivalent to validity of the computation tree.
First, we note that when a model on G is valid (J is positive-definite) but not walk-summable, then some finite computation trees may be invalid (indefinite). This turns out
to be the reason why LBP variances can fail to converge. Walk-summability of the original
GMM implies walk-summability (and hence validity) of all of its computation trees. But
if the GMM is not walk-summable, then its computation tree may or may not be walk-summable. In Example 2, for ρ = .395 the computation tree is still walk-summable (even
though the model on G is not) and LBP converges. For ρ = .4, the computation tree is not
walk-summable and LBP does not converge. Indeed, LBP is not even well-posed in this
case (because the computation tree is indefinite) which explains its strange behavior seen
in the bottom plot of Fig. 2(a) (e.g., non-monotonicity and negative variances).
We characterize walk-summability of the computation tree as follows. Let T_n be the n-step computation tree rooted at some node i and define R_n ≜ I_n − J_n where J_n is the
normalized information matrix on T_n and I_n is the identity matrix. The n-step
computation tree T_n is walk-summable (valid) if and only if %(R_n) < 1 (in trees, %(R̄_n) =
%(R_n)). The sequence {%(R_n)} is monotonically increasing and bounded above by %(R̄)
(see [6]) and hence converges. We are interested in the quantity %_∞ ≡ lim_{n→∞} %(R_n).
Proposition 3 (LBP validity/variance convergence) (i) If %_∞ < 1, then all finite computation trees are valid and the LBP variances converge. (ii) If %_∞ > 1, then the computation
tree eventually becomes invalid and LBP is ill-posed.

Proof Outline. (i) For some ε > 0, %(R_n) ≤ 1 − ε for all n which implies: all computation trees are walk-summable and variances monotonically increase; λ_max(R_n) ≤ 1 − ε,
λ_min(J_n) ≥ ε, and (P_n)_{i,i} ≤ λ_max(P_n) ≤ 1/ε. The variances are monotonically increasing
⁶We can focus on one tree: if the computation tree rooted at node i is walk-summable, then so is
the computation tree rooted at any node j. Also, if a finite computation tree rooted at node i is not
walk-summable, then some finite tree at node j also becomes non-walksummable for n large enough.
and bounded above, hence they converge. (ii) If lim_{n→∞} %(R_n) > 1, then there exists an
m for which %(R_n) > 1 for all n ≥ m and the computation tree is invalid.
As discussed in [6], LBP is well-posed if and only if the information numbers computed on
the right in (7) and (9) are strictly positive for all n. Hence, it is easily detected if the LBP
computation tree becomes invalid. In this case, continuing to run LBP is not meaningful
and will lead to division by zero and/or negative variances.
Example 3. Consider a 4-node cycle with edge weights (−ρ, ρ, ρ, ρ). In Fig. 2(b), for
ρ = .49 we plot %(R_n) vs. n (lower curve) and observe that lim_{n→∞} %(R_n) ≈ .98 < 1,
and LBP converges (similar to the upper plot of Fig. 2(a)). For ρ = .51 (upper curve), the
model defined on the 4-node cycle is still valid but lim_{n→∞} %(R_n) ≈ 1.02 > 1 so LBP is
ill-posed and does not converge (similar to the lower plot of Fig. 2(a)).
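The limit %_∞ can be estimated by explicitly unrolling the computation tree. The sketch below is our own construction (not code from the paper): it builds the depth-n computation tree of the 4-node cycle from Example 3 and checks the spectral radius on both sides of the threshold.

```python
import numpy as np

def computation_tree_R(nbrs, w, root, depth):
    """Partial-correlation matrix R_n of the depth-step computation tree rooted at `root`.
    nbrs[v] lists v's neighbors in the loopy graph; w[(u, v)] is the edge weight."""
    nodes = [root]
    rows, cols, vals = [], [], []
    frontier = [(0, root, None)]          # (tree index, original node, original parent)
    for _ in range(depth):
        nxt = []
        for t, v, parent in frontier:
            for u in nbrs[v]:
                if u == parent:           # replicate every neighbor except the one we came from
                    continue
                s = len(nodes)
                nodes.append(u)
                rows += [t, s]; cols += [s, t]; vals += [w[(v, u)], w[(v, u)]]
                nxt.append((s, u, v))
        frontier = nxt
    R = np.zeros((len(nodes), len(nodes)))
    R[rows, cols] = vals
    return R

def four_cycle(rho):
    nbrs = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
    w = {}
    for i in range(4):
        s = -rho if {i, (i + 1) % 4} == {0, 1} else rho   # weights (-rho, rho, rho, rho)
        w[(i, (i + 1) % 4)] = w[((i + 1) % 4, i)] = s
    return nbrs, w

for rho, summable in [(0.49, True), (0.51, False)]:
    R_n = computation_tree_R(*four_cycle(rho), root=0, depth=40)
    assert (np.max(np.abs(np.linalg.eigvalsh(R_n))) < 1) == summable
```

For this single cycle the computation tree is simply a chain, and the computed spectral radii approach the limits ≈ .98 and ≈ 1.02 quoted in Example 3.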
In non-walksummable models, the series LBP computes for the means is not absolutely
convergent and may diverge even when variances converge (e.g., in Example 2 with
ρ = .39867). However, in all cases where variances converge we have observed that with
enough damping of BP messages⁷ we also obtain convergence of the means. Apparently, it
is the validity of the computation tree that is critical for convergence of Gaussian LBP.
6  Conclusion
We have presented a walk-sum interpretation of inference in GMMs and have applied this
framework to analyze convergence of LBP extending previous results. In future work,
we plan to develop extended walk-sum algorithms which gather more walks than LBP.
Another approach is to estimate variances by sampling random walks in the graph. We
also are interested to explore possible connections between results in [8] (based on self-avoiding walks in Ising models) and sufficient conditions for convergence of discrete LBP
[9] which have some parallels to our walk-sum analysis in the Gaussian case.
Acknowledgments This research was supported by the Air Force Office of Scientific
Research under Grant FA9550-04-1, the Army Research Office under Grant W911NF-051-0207 and by a grant from MIT Lincoln Laboratory.
References
[1] J. Pearl. Probabilistic inference in intelligent systems. Morgan Kaufmann, 1988.
[2] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations.
Exploring AI in the new millennium, pages 239-269, 2003.
[3] Y. Weiss and W. Freeman. Correctness of belief propagation in Gaussian graphical models of
arbitrary topology. Neural Computation, 13:2173-2200, 2001.
[4] P. Rusmevichientong and B. Van Roy. An analysis of belief propagation on the turbo decoding
graph with Gaussian densities. IEEE Trans. Information Theory, 48(2):745-765, Feb. 2001.
[5] S. Tatikonda and M. Jordan. Loopy belief propagation and Gibbs measures. UAI, 2002.
[6] J. Johnson, D. Malioutov, and A. Willsky. Walk-Summable Gaussian Networks and Walk-Sum
Interpretation of Gaussian Belief Propagation. TR-2650, LIDS, MIT, 2005.
[7] K. Plarre and P. Kumar. Extended message passing algorithm for inference in loopy Gaussian
graphical models. Ad Hoc Networks, 2004.
[8] M. Fisher. Critical temperatures of anisotropic Ising lattices II, general upper bounds. Physical
Review, 162(2), 1967.
[9] A. Ihler, J. Fisher III, and A. Willsky. Message Errors in Belief Propagation. NIPS, 2004.
⁷Modify (8) as follows: Δh_{i→j}^{(n+1)} = (1 − α)Δh_{i→j}^{(n)} + α(−J_{j,i}(Ĵ_{i\j}^{(n)})⁻¹ ĥ_{i\j}^{(n)}) with 0 < α ≤ 1.
In Example 2, with ρ = .39867 and α = .9 the means converge.
Cyclic Equilibria in Markov Games
Martin Zinkevich and Amy Greenwald
Department of Computer Science
Brown University
Providence, RI 02912
{maz,amy}@cs.brown.edu
Michael L. Littman
Department of Computer Science
Rutgers, The State University of NJ
Piscataway, NJ 08854-8019
[email protected]
Abstract
Although variants of value iteration have been proposed for finding Nash
or correlated equilibria in general-sum Markov games, these variants
have not been shown to be effective in general. In this paper, we demonstrate by construction that existing variants of value iteration cannot find
stationary equilibrium policies in arbitrary general-sum Markov games.
Instead, we propose an alternative interpretation of the output of value iteration based on a new (non-stationary) equilibrium concept that we call
"cyclic equilibria." We prove that value iteration identifies cyclic equilibria in a class of games in which it fails to find stationary equilibria. We
also demonstrate empirically that value iteration finds cyclic equilibria in
nearly all examples drawn from a random distribution of Markov games.
1 Introduction
Value iteration (Bellman, 1957) has proven its worth in a variety of sequential-decisionmaking settings, most significantly single-agent environments (Puterman, 1994), team
games, and two-player zero-sum games (Shapley, 1953). In value iteration for Markov
decision processes and team Markov games, the value of a state is defined to be the maximum over all actions of the value of the combination of the state and action (or Q value).
In zero-sum environments, the max operator becomes a minimax over joint actions of the
two players. Learning algorithms based on this update have been shown to compute equilibria in both model-based scenarios (Brafman & Tennenholtz, 2002) and Q-learning-like
model-free scenarios (Littman & Szepesvári, 1996).
The theoretical and empirical success of such algorithms has led researchers to apply the
same approach in general-sum games, in spite of exceedingly weak guarantees of convergence (Hu & Wellman, 1998; Greenwald & Hall, 2003). Here, value-update rules based on
select Nash or correlated equilibria have been evaluated empirically and have been shown
to perform reasonably in some settings. None has been identified that computes equilibria in general, however, leaving open the question of whether such an update rule is even
possible.
Our main negative theoretical result is that an entire class of value-iteration update rules,
including all those mentioned above, can be excluded from consideration for computing
stationary equilibria in general-sum Markov games. Briefly, existing value-iteration algorithms compute Q values as an intermediate result, then derive policies from these Q
values. We demonstrate a class of games in which Q values, even those corresponding to
an equilibrium policy, contain insufficient information for reconstructing an equilibrium
policy.
Faced with the impossibility of developing algorithms along the lines of traditional value
iteration that find stationary equilibria, we suggest an alternative equilibrium concept?
cyclic equilibria. A cyclic equilibrium is a kind of non-stationary joint policy that satisfies
the standard conditions for equilibria (no incentive to deviate unilaterally). However, unlike
conditional non-stationary policies such as tit-for-tat and finite-state strategies based on the
"folk theorem" (Osborne & Rubinstein, 1994), cyclic equilibria cycle rigidly through a set
of stationary policies.
We present two positive results concerning cyclic equilibria. First, we consider the class of
two-player two-state two-action games used to show that Q values cannot reconstruct all
stationary equilibrium. Section 4.1 shows that value iteration finds cyclic equilibria for all
games in this class. Second, Section 5 describes empirical results on a more general set of
games. We find that on a significant fraction of these games, value iteration updates fail to
converge. In contrast, value iteration finds cyclic equilibria for nearly all the games.
The success of value iteration in finding cyclic equilibria suggests this generalized solution
concept could be useful for constructing robust multiagent learning algorithms.
2 An Impossibility Result for Q Values
In this section, we consider a subclass of Markov games in which transitions are deterministic and are controlled by one player at a time. We show that this class includes games
that have no deterministic equilibrium policies. For this class of games, we present (proofs
available in an extended technical report) two theorems. The first, a negative result, states
that the Q values used in existing value-iteration algorithms are insufficient for deriving
equilibrium policies. The second, presented in Section 4.1, is a positive result that states
that value iteration does converge to cyclic equilibrium policies in this class of games.
2.1 Preliminary Definitions
Given a finite set X, define Δ(X) to be the set of all probability distributions over X.

Definition 1 A Markov game Γ = [S, N, A, T, R, γ] is a set of states S, a set of players N = {1, . . . , n}, a set of actions {A_{i,s}}_{s∈S, i∈N} for each player in each state (where we represent the set of all state-action pairs as A ≡ ∪_{s∈S} {s} × ∏_{i∈N} A_{i,s}), a transition function T : A → Δ(S), a reward function R : A → R^n, and a discount factor γ.
Given a Markov game Γ, let A_s = ∏_{i∈N} A_{i,s}. A stationary policy is a set of distributions {π(s) : s ∈ S}, where for all s ∈ S, π(s) ∈ Δ(A_s). Given a stationary policy π, define V^{Γ,π} : S → R^n and Q^{Γ,π} : A → R^n to be the unique pair of functions satisfying the following system of equations: for all i ∈ N, for all (s, a) ∈ A,

V_i^{Γ,π}(s) = Σ_{a∈A_s} π(s)(a) Q_i^{Γ,π}(s, a),    (1)

Q_i^{Γ,π}(s, a) = R_i(s, a) + γ Σ_{s′∈S} T(s, a)(s′) V_i^{Γ,π}(s′).    (2)
A deterministic Markov game is a Markov game Γ where the transition function is deterministic: T : A → S. A turn-taking game is a Markov game Γ where in every state, only one player has a choice of action. Formally, for all s ∈ S, there exists a player i ∈ N such that for all other players j ∈ N \ {i}, |A_{j,s}| = 1.
2.2 A Negative Result for Stationary Equilibria
A NoSDE (pronounced "nasty") game is a deterministic turn-taking Markov game Γ with
two players, two states, no more than two actions for either player in either state, and no
deterministic stationary equilibrium policy. That the set of NoSDE games is non-empty is
demonstrated by the game depicted in Figure 1. This game has no deterministic stationary
equilibrium policy: If Player 1 sends, Player 2 prefers to send; but, if Player 2 sends,
Player 1 prefers to keep; and, if Player 1 keeps, Player 2 prefers to keep; but, if Player 2
keeps, Player 1 prefers to send. No deterministic policy is an equilibrium because one
player will always have an incentive to change policies.
[Figure 1 diagram: two states, 1 and 2, labeled with the rewards R1(1, keep, noop) = 1, R1(1, send, noop) = 0, R1(2, noop, keep) = 3, R1(2, noop, send) = 0, R2(1, keep, noop) = 0, R2(1, send, noop) = 3, R2(2, noop, keep) = 1, R2(2, noop, send) = 0.]
Figure 1: An example of a NoSDE game. Here, S = {1, 2}, A_{1,1} = A_{2,2} = {keep, send}, A_{1,2} = A_{2,1} = {noop}, T(1, keep, noop) = 1, T(1, send, noop) = 2, T(2, noop, keep) = 2, T(2, noop, send) = 1, and γ = 3/4. In the unique stationary equilibrium, Player 1 sends with probability 2/3 and Player 2 sends with probability 5/12.
Lemma 1 Every NoSDE game has a unique stationary equilibrium policy.[1]
It is well known that, in general Markov games, random policies are sometimes needed to
achieve an equilibrium. This fact can be demonstrated simply by a game with one state
where the utilities correspond to a bimatrix game with no deterministic equilibria (penny
matching, say). Random actions in these games are sometimes linked with strategies that
use "faking" or "bluffing" to avoid being exploited. That NoSDE games exist is surprising, in that randomness is needed even though actions are always taken with complete information about the other player's choice and the state of the game. However, the next result
is even more startling. Current value-iteration algorithms attempt to find the Q values of a
game with the goal of using these values to find a stationary equilibrium of the game. The
main theorem of this paper states that it is not possible to derive a policy from the Q values
for NoSDE games, and therefore in general Markov games.
Theorem 1 For any NoSDE game Γ = [S, N, A, T, R] with a unique equilibrium policy π, there exists another NoSDE game Γ′ = [S, N, A, T, R′] with its own unique equilibrium policy π′ such that Q^{Γ,π} = Q^{Γ′,π′} but π ≠ π′ and V^{Γ,π} ≠ V^{Γ′,π′}.
This result establishes that computing or learning Q values is insufficient to compute a stationary equilibrium of a game.[2] In this paper we suggest an alternative, where we still do value iteration in the same way, but we extract a cyclic equilibrium from the sequence of values instead of a stationary one.

[1] The policy is both a correlated equilibrium and a Nash equilibrium.

[2] Although maintaining Q values and state values and deriving policies from both sets of functions might circumvent this problem, we are not aware of existing value-iteration algorithms or learning algorithms that do so. This observation presents a possible avenue of research not followed in this paper.
3 A New Goal: Cyclic Equilibria
A cyclic policy is a finite sequence of stationary policies π = {π_1, . . . , π_k}. Associated with π is a sequence of value functions {V^{Γ,π,j}} and Q-value functions {Q^{Γ,π,j}} such that

V_i^{Γ,π,j}(s) = Σ_{a∈A_s} π_j(s)(a) Q_i^{Γ,π,j}(s, a)  and    (3)

Q_i^{Γ,π,j}(s, a) = R_i(s, a) + γ Σ_{s′∈S} T(s, a)(s′) V_i^{Γ,π,inc_k(j)}(s′)    (4)

where for all j ∈ {1, . . . , k}, inc_k(k) = 1 and inc_k(j) = j + 1 if j < k.
Definition 2 Given a Markov game Γ, a cyclic correlated equilibrium is a cyclic policy π, where for all j ∈ {1, . . . , k}, for all i ∈ N, for all s ∈ S, for all a_i, a′_i ∈ A_{i,s}:

Σ_{(a_i,a_{−i})∈A_s} π_j(s)(a_i, a_{−i}) Q_i^{Γ,π,j}(s, a_i, a_{−i}) ≥ Σ_{a_{−i}∈A_{−i,s}} π_j(s)(a_i, a_{−i}) Q_i^{Γ,π,j}(s, a′_i, a_{−i}).    (5)
Here, a_{−i} denotes a joint action for all players except i. A similar definition can be constructed for Nash equilibria by insisting that all policies π_j(s) are product distributions. In Definition 2, we imagine that action choices are moderated by a referee with a clock that indicates the current stage j of the cycle. At each stage, a typical correlated equilibrium is executed, meaning that the referee chooses a joint action a from π_j(s), tells each agent its part of that joint action, and no agent can improve its value by eschewing the referee's advice. If no agent can improve its value by more than ε at any stage, we say π is an ε-correlated cyclic equilibrium.
A stationary correlated equilibrium is a cyclic correlated equilibrium with k = 1. In the
next section, we show how value iteration can be used to derive cyclic correlated equilibria.
4 Value Iteration in General-Sum Markov Games
For a game Γ, define Q_Γ = (R^n)^A to be the set of all state-action (Q) value functions, V_Γ = (R^n)^S to be the set of all value functions, and Π_Γ to be the set of all stationary policies. Traditionally, value iteration can be broken down into estimating a Q value based upon a value function, selecting a policy π given the Q values, and deriving a value function based upon π and the Q value functions. Whereas the first and the last step are fairly straightforward, the step in the middle is quite tricky. A pair (π, Q) ∈ Π_Γ × Q_Γ agree (see Equation 5) if, for all s ∈ S, i ∈ N, a_i, a′_i ∈ A_{i,s}:

Σ_{(a_i,a_{−i})∈A_s} π(s)(a_i, a_{−i}) Q_i(s, a_i, a_{−i}) ≥ Σ_{a_{−i}∈A_{−i,s}} π(s)(a_i, a_{−i}) Q_i(s, a′_i, a_{−i}).    (6)

Essentially, Q and π agree if π is a best response for each player given payoffs Q. An equilibrium-selection rule is a function f : Q_Γ → Π_Γ such that for all Q ∈ Q_Γ, (f(Q), Q) agree. The set of all such rules is F_Γ. In essence, these rules update values assuming an equilibrium policy for a one-stage game with Q(s, a) providing the terminal rewards. Examples of equilibrium-selection rules are best-Nash, utilitarian-CE, dictatorial-CE, plutocratic-CE, and egalitarian-CE (Greenwald & Hall, 2003). (Utilitarian-CE, which we return to later, selects the correlated equilibrium in which the total of the payoffs is maximized.) Foe-VI and Friend-VI (Littman, 2001) do not fit into our formalism, but it can be proven that in NoSDE games they converge to deterministic policies that are neither stationary nor cyclic equilibria. Define d_∞ : V_Γ × V_Γ → R to be a distance metric over value functions, such that

d_∞(V, V′) = max_{s∈S, i∈N} |V_i(s) − V′_i(s)|.    (7)
Using our notation, the value-iteration algorithm for general-sum Markov games can be
described as follows.
Algorithm 1: ValueIteration(game Γ, V^0 ∈ V_Γ, f ∈ F_Γ, Integer T)
For t := 1 to T:
  1. ∀s ∈ S, a ∈ A, Q^t(s, a) := R(s, a) + γ Σ_{s′∈S} T(s, a)(s′) V^{t−1}(s′).
  2. π^t = f(Q^t).
  3. ∀s ∈ S, V^t(s) = Σ_{a∈A_s} π^t(s)(a) Q^t(s, a).
Return {Q^1, . . . , Q^T}, {π^1, . . . , π^T}, {V^1, . . . , V^T}.
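In a turn-taking game, step 2 requires no equilibrium solver: as noted later in the paper, the selection rules reduce to the controlling player taking its own highest-valued action. Running this reduced update on the Figure 1 game (our own encoding, for illustration) shows the greedy policy never settling on a single stationary policy:

```python
# Value iteration on the Figure 1 NoSDE game with best-response updates,
# which is what the equilibrium-selection rules reduce to in turn-taking games.
GAMMA = 0.75
T = {(1, 'keep'): 1, (1, 'send'): 2, (2, 'keep'): 2, (2, 'send'): 1}
R = {(1, 'keep'): (1, 0), (1, 'send'): (0, 3),
     (2, 'keep'): (3, 1), (2, 'send'): (0, 0)}
CONTROLLER = {1: 0, 2: 1}  # which player moves in each state

def value_iteration(steps):
    V = {1: (0.0, 0.0), 2: (0.0, 0.0)}
    policies = []
    for _ in range(steps):
        new_V, pi = {}, {}
        for s in (1, 2):
            i = CONTROLLER[s]
            # The controlling player picks its own highest-valued action.
            pi[s] = max(('keep', 'send'),
                        key=lambda a: R[s, a][i] + GAMMA * V[T[s, a]][i])
            new_V[s] = tuple(R[s, pi[s]][j] + GAMMA * V[T[s, pi[s]]][j]
                             for j in range(2))
        V = new_V
        policies.append((pi[1], pi[2]))
    return policies

policies = value_iteration(200)
# The greedy joint policy keeps switching even after 200 iterations: the tail
# of the sequence still contains more than one distinct stationary policy.
assert len(set(policies[-20:])) > 1
```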
If a stationary equilibrium is sought, the final policy is returned.
Algorithm 2: GetStrategy(game Γ, V^0 ∈ V_Γ, f ∈ F_Γ, Integer T)
  1. Run (Q^1 . . . Q^T, π^1 . . . π^T, V^1 . . . V^T) = ValueIteration(Γ, V^0, f, T).
  2. Return π^T.
For cyclic equilibria, we have a variety of options for how many past stationary policies
we want to consider for forming a cycle. Our approach searches for a recent value function
that matches the final value function (an exact match would imply a true cycle). Ties are
broken in favor of the shortest cycle length. Observe that the order of the policies returned
by value iteration is reversed to form a cyclic equilibrium.
Algorithm 3: GetCycle(game Γ, V^0 ∈ V_Γ, f ∈ F_Γ, Integer T, Integer maxCycle)
  1. If maxCycle ≥ T, maxCycle := T − 1.
  2. Run (Q^1 . . . Q^T, π^1 . . . π^T, V^1 . . . V^T) = ValueIteration(Γ, V^0, f, T).
  3. Define k := argmin_{t∈{1,...,maxCycle}} d(V^T, V^{T−t}).
  4. For each t ∈ {1, . . . , k} set π_t := π^{T+1−t}.
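Steps 3-4 translate directly into code. In the sketch below (the indexing convention and the toy history are ours), values[t] stands for V^t with t = 0, . . . , T and policies[t-1] for π^t:

```python
def get_cycle(values, policies, max_cycle):
    """Pick the past value function closest to the last one (ties broken in
    favor of the shortest cycle) and return its policies in reverse order."""
    T = len(values) - 1
    def d(V, W):  # the d_infinity metric of Equation 7
        return max(abs(V[s][i] - W[s][i]) for s in V for i in range(len(V[s])))
    k = min(range(1, max_cycle + 1),
            key=lambda t: (d(values[T], values[T - t]), t))
    return [policies[T - t] for t in range(1, k + 1)]  # pi^{T+1-t}, t = 1..k

# Toy one-state history in which V^4 exactly matches V^2, giving a 2-cycle.
vals = [{1: (0, 0)}, {1: (1, 0)}, {1: (2, 0)}, {1: (1, 0)}, {1: (2, 0)}]
pis = ['p1', 'p2', 'p3', 'p4']  # placeholders for pi^1 .. pi^4
assert get_cycle(vals, pis, max_cycle=3) == ['p4', 'p3']
```

Note that the returned policies come out reversed, matching the observation above that the order of the policies from value iteration is reversed to form a cyclic equilibrium.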
4.1 Convergence Conditions
Fact 1 If d(V^T, V^{T−1}) = ε in GetStrategy, then GetStrategy returns a γε/(1−γ)-correlated equilibrium.

Fact 2 If GetCycle returns a cyclic policy of length k and d(V^T, V^{T−k}) = ε, then GetCycle returns a γε/((1−γ)k)-correlated cyclic equilibrium.
Since, given V^0 and γ, the space of value functions is bounded, eventually there will be two value functions in {V^1, . . . , V^T} that are close according to d_∞. Therefore, the two practical (and open) questions are: (1) how many iterations does it take to find an ε-correlated cyclic equilibrium? and (2) how large is the cyclic equilibrium that is found?
In addition to approximate convergence described above, in two-player turn-taking games,
one can prove exact convergence. In fact, all the members of F_Γ described above can be
construed as generalizations of utilitarian-CE in turn-taking games, and utilitarian-CE is
proven to converge.
Theorem 2 Given the utilitarian-CE equilibrium-selection rule f, for every NoSDE game Γ, for every V^0 ∈ V_Γ, there exists some finite T such that GetCycle(Γ, V^0, f, T, ⌊T/2⌋) returns a cyclic correlated equilibrium.
Theoretically, we can imagine passing infinity as a parameter to value iteration. Doing so
shows the limitation of value-iteration in Markov games.
Theorem 3 Given the utilitarian-CE equilibrium-selection rule f, for any NoSDE game Γ with unique equilibrium π, for every V^0 ∈ V_Γ, the value-function sequence {V^1, V^2, . . .} returned from ValueIteration(Γ, V^0, f, ∞) does not converge to V^{Γ,π}.

Since all of the other rules specified above (except friend-VI and foe-VI) can be implemented with the utilitarian-CE equilibrium-selection rule, none of these rules will be guaranteed to converge, even in such a simple class as turn-taking games!

Theorem 4 Given the game Γ in Figure 1 and its stationary equilibrium π, given V_i^0(s) = 0 for all i ∈ N, s ∈ S, then for any update rule f ∈ F_Γ, the value-function sequence {V^1, V^2, . . .} returned from ValueIteration(Γ, V^0, f, ∞) does not converge to V^{Γ,π}.
5 Empirical Results
To complement the formal results of the previous sections, we ran two batteries of tests
on value iteration in randomly generated games. We assessed the convergence behavior of
value iteration to stationary and cyclic equilibria.
5.1 Experimental Details
Our game generator took as input the set of players N, the set of states S, and for each player i and state s, the actions A_{i,s}. To construct a game, for each state-joint-action pair (s, a) ∈ A, for each agent i ∈ N, the generator sets R_i(s, a) to be an integer between
0 and 99, chosen uniformly at random. Then, it selects T (s, a) to be deterministic, with
the resulting state chosen uniformly at random. We used a consistent discount factor of γ = 0.75 to decrease experimental variance.
The primary dependent variable in our results was the frequency with which value iteration converged to a stationary Nash equilibrium or a cyclic Nash equilibrium (of length
less than 100). To determine convergence, we first ran value iteration for 1000 steps. If d_∞(V^1000, V^999) ≤ 0.0001, then we considered value iteration to have converged to a stationary policy. If for some k ≤ 100

max_{t∈{1,...,k}} d_∞(V^{1001−t}, V^{1001−(t+k)}) ≤ 0.0001,    (8)

then we considered value iteration to have converged to a cycle.[3]

To determine if a game has a deterministic equilibrium, for every deterministic policy π, we ran policy evaluation (for 1000 iterations) to estimate V^{Γ,π} and Q^{Γ,π}, and then checked if π was an ε-correlated equilibrium for ε = 0.0001.
5.2 Turn-taking Games
In the first battery of tests, we considered sets of turn-taking games with x states and y
actions: formally, there were x states {1, . . . , x}. In odd-numbered states, Player 1 had y
[3] In contrast to the GetCycle algorithm, we are here concerned with finding a cyclic equilibrium, so we check an entire cycle for convergence.
[Figure 2: two plots. Left: games without convergence (out of 1000) versus number of actions, one curve per number of states (2-5). Right: percent of converged games versus number of states, with curves for Cyclic uCE, OTComb, OTBest, and uCE.]
Figure 2: (Left) For each combination of states and actions, 1000 deterministic turn-taking
games were generated. The graph plots the number of games where value iteration did
not converge to a stationary equilibrium. (Right) Frequency of convergence on 100 randomly generated games with simultaneous actions. Cyclic uCE is the number of times
utilitarian-CE converged to a cyclic equilibrium. OTComb is the number of games where
any one of Friend-VI, Foe-VI, utilitarian-NE-VI, and 5 variants of correlated-equilibrium-VI: dictatorial-CE-VI (First Player), dictatorial-CE-VI (Second Player), utilitarian-CE-VI,
plutocratic-CE-VI, and egalitarian-VI converged to a stationary equilibrium. OTBest is
the maximum number of games where the best fixed choice of the equilibrium-selection
rule converged. uCE is the number of games in which utilitarian-CE-VI converged to a
stationary equilibrium.
actions and Player 2 had one action: in even-numbered states, Player 1 had one action and
Player 2 had y actions. We varied x from 2 to 5 and y from 2 to 10. For each setting of x
and y, we generated and tested one thousand games.
Figure 2 (left) shows the number of generated games for which value iteration did not
converge to a stationary equilibrium. We found that nearly half (48%, as many as 5% of
the total set) of these non-converged games had no stationary, deterministic equilibria (they
were NoSDE games). The remainder of the stationary, deterministic equilibria were simply
not discovered by value iteration. We also found that value iteration converged to cycles of
length 100 or less in 99.99% of the games.
5.3 Simultaneous Games
In a second set of experiments, we generated two-player Markov games where both agents
have at least two actions in every state. We varied the number of states between 2 and 9,
and had either 2 or 3 actions for every agent in every state.
Figure 2 (right) summarizes results for 3-action games (2-actions games were qualitatively
similar, but converged more often). Note that the fraction of random games on which the
algorithms converged to stationary equilibria decreases as the number of states increases.
This result holds because the larger the game, the larger the chance that value iteration will
fall into a cycle on some subset of the states. Once again, we see that the cyclic equilibria are found much more reliably than stationary equilibria by value-iteration algorithms.
For example, utilitarian-CE converges to a cyclic correlated equilibrium about 99% of the
time, whereas with 10 states and 3 actions, on 26% of the games none of the techniques
converge.
6 Conclusion
In this paper, we showed that value iteration, the algorithmic core of many multiagent
planning reinforcement-learning algorithms, is not well behaved in Markov games. Among
other impossibility results, we demonstrated that the Q-value function retains too little
information for constructing optimal policies, even in 2-state, 2-action, deterministic turn-taking Markov games. In fact, there are an infinite number of such games with different
Nash equilibrium value functions that have identical Q-value functions. This result holds
for proposed variants of value iteration from the literature such as updating via a correlated
equilibrium or a Nash equilibrium, since, in turn-taking Markov games, both rules reduce
to updating via the action with the maximum value for the controlling player.
Our results paint a bleak picture for the use of value-iteration-based algorithms for computing stationary equilibria. However, in a class of games we called NoSDE games, a
natural extension of value iteration converges to a limit cycle, which is in fact a cyclic
(nonstationary) Nash equilibrium policy. Such cyclic equilibria can also be found reliably
for randomly generated games and there is evidence that they appear in some naturally
occurring problems (Tesauro & Kephart, 1999). One take-away message of our work is
that nonstationary policies may hold the key to improving the robustness of computational
approaches to planning and learning in general-sum games.
Acknowledgements
This research was supported by NSF Grant #IIS-0325281, NSF Career Grant #IIS-0133689, and the Alberta Ingenuity Foundation through the Alberta Ingenuity Centre for
Machine Learning.
References
Bellman, R. (1957). Dynamic programming. Princeton, NJ: Princeton University Press.
Brafman, R. I., & Tennenholtz, M. (2002). R-MAX?a general polynomial time algorithm
for near-optimal reinforcement learning. Journal of Machine Learning Research, 3,
213?231.
Greenwald, A., & Hall, K. (2003). Correlated Q-learning. Proceedings of the Twentieth
International Conference on Machine Learning (pp. 242?249).
Hu, J., & Wellman, M. (1998). Multiagent reinforcement learning:theoretical framework
and an algorithm. Proceedings of the Fifteenth International Conference on Machine
Learning (pp. 242?250). Morgan Kaufman.
Littman, M. (2001). Friend-or-foe Q-learning in general-sum games. Proceedings of
the Eighteenth International Conference on Machine Learning (pp. 322?328). Morgan
Kaufmann.
Littman, M. L., & Szepesv?ari, C. (1996). A generalized reinforcement-learning model:
Convergence and applications. Proceedings of the Thirteenth International Conference
on Machine Learning (pp. 310?318).
Osborne, M. J., & Rubinstein, A. (1994). A Course in Game Theory. The MIT Press.
Puterman, M. (1994). Markov decision processes: Discrete stochastic dynamic programming. Wiley-Interscience.
Shapley, L. (1953). Stochastic games. Proceedings of the National Academy of Sciences
of the United States of America, 39, 1095?1100.
Tesauro, G., & Kephart, J. (1999). Pricing in agent economies using multi-agent Qlearning. Proceedings of Fifth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (pp. 71?86).
Fast Gaussian Process Regression
using KD-Trees
Yirong Shen
Electrical Engineering Dept.
Stanford University
Stanford, CA 94305
Andrew Y. Ng
Computer Science Dept.
Stanford University
Stanford, CA 94305
Matthias Seeger
Computer Science Div.
UC Berkeley
Berkeley, CA 94720
Abstract
The computation required for Gaussian process regression with n training examples is about O(n³) during training and O(n) for each prediction. This makes Gaussian process regression too slow for large datasets.
In this paper, we present a fast approximation method, based on kd-trees,
that significantly reduces both the prediction and the training times of
Gaussian process regression.
1 Introduction
We consider (regression) estimation of a function x ↦ u(x) from noisy observations. If
the data-generating process is not well understood, simple parametric learning algorithms,
for example ones from the generalized linear model (GLM) family, may be hard to apply
because of the difficulty of choosing good features. In contrast, the nonparametric Gaussian process (GP) model [19] offers a flexible and powerful alternative. However, a major
drawback of GP models is that the computational cost of learning is about O(n³), and the
cost of making a single prediction is O(n), where n is the number of training examples.
This high computational complexity severely limits its scalability to large problems, and
we believe has proved a significant barrier to the wider adoption of the GP model.
In this paper, we address the scaling issue by recognizing that learning and predictions with
a GP regression (GPR) model can be implemented using the matrix-vector multiplication
(MVM) primitive z ↦ Kz. Here, K ∈ R^{n×n} is the kernel matrix, and z ∈ R^n is an
arbitrary vector. For the wide class of so-called isotropic kernels, MVM can be approximated efficiently by arranging the dataset in a tree-type multiresolution data structure such
as kd-trees [13], ball trees [11], or cover trees [1]. This approximation can sometimes be
made orders of magnitude faster than the direct computation, without sacrificing much in
terms of accuracy.
Further, the storage requirements for the tree is O(n), while a direct storage of the kernel
matrix would require O(n2 ) spare. We demonstrate the efficiency of the tree approach on
several large datasets.
In the sequel, for the sake of simplicity we will focus on kd-trees (even though it is known
that kd-trees do not scale well to high dimensional data). However, it is also completely
straightforward to apply the ideas in this paper to other tree-type data structures, for example ball trees and cover trees, which typically scale significantly better to high dimensional
data.
2 The Gaussian Process Regression Model
Suppose that we observe some data D = {(x_i, y_i) | i = 1, . . . , n}, x_i ∈ X, y_i ∈ R,
sampled independently and identically distributed (i.i.d.) from some unknown distribution.
Our goal is to predict the response y_* on future test points x_* with small mean-squared
error under the data distribution. Our model consists of a latent (unobserved) function
x ↦ u so that y_i = u_i + ε_i, where u_i = u(x_i), and the ε_i are independent Gaussian noise
variables with zero mean and variance σ² > 0. Following the Bayesian paradigm, we place
a prior distribution P(u(·)) on the function u(·) and use the posterior distribution

    P(u(·) | D) ∝ N(y | u, σ²I) P(u(·))

in order to predict y_* on new points x_*. Here, y = [y_1, . . . , y_n]^T and u = [u_1, . . . , u_n]^T
are vectors in R^n, and N(·|μ, Σ) is the density of a Gaussian with mean μ and covariance Σ.
For a GPR model, the prior distribution is a (zero-mean) Gaussian process defined
in terms of a positive definite kernel (or covariance) function K : X² → R. For the
purposes of this paper, a GP can be thought of as a mapping from arbitrary finite subsets
{x̃_i} ⊂ X of points to corresponding zero-mean Gaussian distributions with covariance
matrix K̃ = (K(x̃_i, x̃_j))_{i,j}. (This notation indicates that K̃ is a matrix whose (i,j)-element
is K(x̃_i, x̃_j).) In this paper, we focus on the problem of speeding up GPR under
the assumption that the kernel is monotonic isotropic. A kernel function K(x, x′) is called
isotropic if it depends only on the Euclidean distance r = ‖x − x′‖₂ between the points,
and it is monotonic isotropic if it can be written as a monotonic function of r.
3 Fast GPR predictions
Since u(x_1), u(x_2), . . . , u(x_n) and u(x_*) are jointly Gaussian, it is easy to see that the
predictive (posterior) distribution P(u_* | D), u_* = u(x_*), is given by

    P(u_* | D) = N( u_* | k_*^T M^{-1} y,  K(x_*, x_*) − k_*^T M^{-1} k_* ),      (1)

where k_* = [K(x_*, x_1), . . . , K(x_*, x_n)]^T ∈ R^n, and M = K + σ²I, K = (K(x_i, x_j))_{i,j}.
Therefore, if p = M^{-1} y, the optimal prediction under the model is û_* = k_*^T p, and the
predictive variance (of P(u_* | D)) can be used to quantify our uncertainty in the prediction.
Details can be found in [19]. ([16] also provides a tutorial on GPs.)
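For reference, Equation 1 and the vector p = M^{-1} y can be computed exactly in a few lines of NumPy. This is a minimal sketch of the exact baseline, not the paper's implementation; the function names, the RBF helper, and the bandwidth d are our illustrative choices, and a Cholesky factorization is used so that M^{-1} is never formed explicitly:

```python
import numpy as np

def gpr_predict(X, y, x_star, kernel, sigma2):
    """Exact GPR posterior mean and variance at a test point (Equation 1)."""
    n = X.shape[0]
    K = kernel(X, X)                      # n x n kernel matrix
    M = K + sigma2 * np.eye(n)            # M = K + sigma^2 I
    L = np.linalg.cholesky(M)             # M = L L^T
    k_star = kernel(X, x_star[None, :]).ravel()
    # p = M^{-1} y via two triangular solves
    p = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = k_star @ p
    v = np.linalg.solve(L, k_star)
    var = kernel(x_star[None, :], x_star[None, :])[0, 0] - v @ v
    return mean, var

def rbf(A, B, d=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * d ** 2))
```

With a tiny noise variance the posterior mean interpolates the training targets, which is a convenient sanity check for the implementation.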
Once p is determined, making a prediction now requires that we compute
    k_*^T p = Σ_{i=1}^n K(x_*, x_i) p_i = Σ_{i=1}^n w_i p_i      (2)
which is O(n) since it requires scanning through the entire training set and computing
K(x_*, x_i) for each x_i in the training set. When the training set is very large, this becomes
prohibitively slow. In such situations, it is desirable to use a fast approximation instead of
the exact direct implementation.
3.1 Weighted Sum Approximation
The computations in Equation 2 can be thought of as a weighted sum, where w_i = K(x_*, x_i)
is the weight on the i-th summand p_i. We observe that if the dataset is divided into groups
where all data points in a group have similar weights, then it is possible to compute a fast
approximation to the above weighted sum. For example, let G be a set of data points that all
have weights near some value w̄. The contribution to the weighted sum by points in G is
    Σ_{i: x_i ∈ G} w_i p_i = Σ_{i: x_i ∈ G} w̄ p_i + Σ_{i: x_i ∈ G} (w_i − w̄) p_i = w̄ Σ_{i: x_i ∈ G} p_i + Σ_{i: x_i ∈ G} ε_i p_i,

where ε_i = w_i − w̄. Assuming that Σ_{i: x_i ∈ G} p_i is known in advance, w̄ Σ_{i: x_i ∈ G} p_i
can then be computed in constant time and used as an approximation to Σ_{i: x_i ∈ G} w_i p_i
if Σ_{i: x_i ∈ G} ε_i p_i is small.
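A tiny numeric check of this decomposition (all values made up for illustration): adding the residual back recovers the weighted sum exactly, and dropping it gives a constant-time approximation whose error is controlled by the spread of the weights around their representative value:

```python
import numpy as np

# weights of points in one group G, all near w_bar = 0.5 (made-up values)
w = np.array([0.48, 0.51, 0.50, 0.52])
p = np.array([1.0, -2.0, 0.5, 3.0])

exact = np.dot(w, p)                 # sum_i w_i p_i
w_bar = 0.5
cached_unweighted_sum = p.sum()      # precomputed once per group
approx = w_bar * cached_unweighted_sum
residual = np.dot(w - w_bar, p)      # sum_i eps_i p_i

assert np.isclose(exact, approx + residual)
print(abs(exact - approx))           # small: bounded by max|eps_i| * sum|p_i|
```

The cached unweighted sum is query-independent, which is what makes the per-group cost constant at query time.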
We note that for a continuous isotropic kernel function, the weights w_i = K(x_*, x_i) and
w_j = K(x_*, x_j) will be similar if x_i and x_j are close to each other. In addition, if the
kernel function monotonically decreases to zero with increasing ‖x_i − x_j‖, then points
that are far away from the query point x_* will all have weights near zero.

Figure 1: Example of bounding rectangles for nodes in the first three levels of a kd-tree.
Given a new query, we would like to automatically group points together that have similar
weights. But the weights are dependent on the query point and hence the best grouping
of the data will also be dependent on the query point. Thus, the problem we now face is,
given a query point, how to quickly divide the dataset into groups such that data points in the
same group have similar weights. Our solution to this problem takes inspiration and ideas
from [9], and uses an enhanced kd-tree data structure.
3.2 The kd-tree algorithm
A kd-tree [13] is a binary tree that recursively partitions a set of data points. Each node
in the kd-tree contains a subset of the data, and records the bounding hyper-rectangle for
this subset. The root node contains the entire dataset. Any node that contains more than 1
data point has two child nodes, and the data points contained by the parent node are split
among the children by cutting the parent node?s bounding hyper-rectangle in the middle of
its widest dimension.1 An example with inputs of dimension 2 is illustrated in Figure 1.
For our algorithm, we will enhance the kd-tree with additional cached information at each
node. At a node ND whose set of data points is X_ND, in addition to the bounding box we
also store
1. N_ND = |X_ND|: the number of data points contained by ND.
2. S_ND^Unweighted = Σ_{x_i ∈ X_ND} p_i: the unweighted sum corresponding to the data contained by ND.
Now, let
    S_ND^Weighted = Σ_{i: x_i ∈ X_ND} K(x_*, x_i) p_i      (3)

be the weighted sum corresponding to node ND. One way to calculate S_ND^Weighted is to
simply have the two children of ND recursively compute S_Left(ND)^Weighted and
S_Right(ND)^Weighted (where Left(ND) and Right(ND) are the two children of ND) and then
sum the two results. This takes O(n) time (the same as the direct computation) since all O(n)
nodes need to be processed. However, if we only want an approximate result for the weighted
sum, then we can cut off the recursion at nodes whose data points have nearly identical
weights for the given query point.

1 There are numerous other possible kd-tree splitting criteria. Our criterion is the same as the one
used in [9] and [5].
Since each node maintains a bounding box of the data points that it owns, we can easily
bound the maximum weight variation of the data points owned by a node (as in [9]). The
nearest and farthest points in the bounding box to the query point can be computed in
O(input dimension) operations, and since the kernel function is isotropic monotonic, these
points give us the maximum and minimum possible weights wmax and wmin of any data
point in the bounding box.
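In code, both distance computations are coordinate-wise clamps and comparisons; composing them with a monotonically decreasing kernel yields wmin and wmax. A small sketch (the function name and signature are ours):

```python
import numpy as np

def box_weight_bounds(x_star, box_lo, box_hi, kernel_of_r):
    """Bounds on K(x_star, x) for any x in the box [box_lo, box_hi].

    kernel_of_r: monotonically decreasing function of Euclidean distance r.
    """
    # nearest point in the box: clamp the query into the box
    nearest = np.clip(x_star, box_lo, box_hi)
    r_min = np.linalg.norm(x_star - nearest)
    # farthest corner: per coordinate, take the box side farther from x_star
    farthest = np.where(np.abs(x_star - box_lo) > np.abs(x_star - box_hi),
                        box_lo, box_hi)
    r_max = np.linalg.norm(x_star - farthest)
    return kernel_of_r(r_max), kernel_of_r(r_min)   # (w_min, w_max)
```

For example, with the RBF kernel one would pass `kernel_of_r = lambda r: np.exp(-r**2 / (2*d**2))`; the cost is O(input dimension), as stated above.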
Now, whenever the difference between wmax and wmin is small, we can cut off the recursion
and approximate the weighted sum in Equation 3 by w̄ · S_ND^Unweighted, where
w̄ = (1/2)(wmin + wmax). The speed and accuracy of the approximation is highly dependent
on the cutoff criterion. Moore et al. used the following cutoff rule in [9]:

    wmax − wmin ≤ 2ε(WSoFar + N_ND · wmin).

Here, WSoFar is the weight accumulated so far in the computation and WSoFar + N_ND · wmin
serves as a lower bound on the total sum of weights involved in the regression. In our
experiments, we found that although the above cutoff rule ensures the error incurred at
any particular data point in ND is small, the total error incurred by all the data points in
ND can still be high if N_ND is very large. In our experiments (not reported here), their
method gave poor performance on the GPR task, in many cases incurring significant errors
in the predictions (or, alternatively, running no faster than exact computation, if a sufficiently
small ε is chosen to prevent the large accumulation of errors). Hence, we chose instead the
following cutoff rule:

    N_ND(wmax − wmin) ≤ 2ε(WSoFar + N_ND · wmin),
which also takes into account the total number of points contained in a node.
From the formula above, we see that the decision of whether to cut off computation at a
node depends on the value of WSoFar (the total weight of all the points that have been
added to the summation so far). Thus it is desirable to quickly accumulate weights at the
beginning of the computations, so that more of the later recursions can be cut off. This can
be accomplished by going into the child node that is nearer to the query point first when we
recurse into the children of a node that does not meet the cutoff criterion. (In contrast, [9]
always visits the children in left-right order, which in our experiments also gave significantly
worse performance than our version.) Our overall algorithm is summarized below:
WeightedSum(x_*, ND, WSoFar, ε):
    compute wmax and wmin for the given query point x_*
    S_ND^Weighted = 0
    if N_ND(wmax − wmin) ≤ 2ε(WSoFar + N_ND · wmin) then
        S_ND^Weighted = (1/2)(wmin + wmax) · S_ND^Unweighted
        WSoFar = WSoFar + wmin · N_ND
        return S_ND^Weighted
    else
        determine which child is nearer to the query point x_*
        S_Nearer^Weighted  = WeightedSum(x_*, nearer child of ND, WSoFar, ε)
        S_Farther^Weighted = WeightedSum(x_*, farther child of ND, WSoFar, ε)
        S_ND^Weighted = S_Nearer^Weighted + S_Farther^Weighted
        return S_ND^Weighted
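The recursion can be turned into a short runnable sketch. This is our own illustrative implementation, not the authors' code: `Node` caches N_ND, the bounding box, and the unweighted sum; `weight_bounds` derives wmin and wmax from the box; and `weighted_sum` applies the cutoff rule with `eps` standing in for ε. Threading `w_so_far` through return values replaces the mutable accumulator of the pseudocode:

```python
import numpy as np

class Node:
    """kd-tree node caching N_ND, the bounding box, and the unweighted sum."""
    def __init__(self, X, p):
        self.lo, self.hi = X.min(axis=0), X.max(axis=0)   # bounding box
        self.center = 0.5 * (self.lo + self.hi)
        self.n = len(X)                                   # N_ND
        self.s_unweighted = p.sum()                       # sum of p_i in this node
        self.left = self.right = None
        if self.n > 1:
            dim = np.argmax(self.hi - self.lo)            # widest dimension
            mask = X[:, dim] <= 0.5 * (self.lo[dim] + self.hi[dim])
            if mask.all() or not mask.any():              # guard degenerate splits
                mask = np.arange(self.n) < self.n // 2
            self.left = Node(X[mask], p[mask])
            self.right = Node(X[~mask], p[~mask])

def weight_bounds(x, node, kernel_of_r):
    """(w_min, w_max) of K(x, .) over the node's bounding box."""
    nearest = np.clip(x, node.lo, node.hi)
    farthest = np.where(np.abs(x - node.lo) > np.abs(x - node.hi),
                        node.lo, node.hi)
    return (kernel_of_r(np.linalg.norm(x - farthest)),
            kernel_of_r(np.linalg.norm(x - nearest)))

def weighted_sum(x, node, kernel_of_r, eps, w_so_far=0.0):
    """Approximate sum_i K(x, x_i) p_i over the points stored in `node`."""
    w_min, w_max = weight_bounds(x, node, kernel_of_r)
    if node.left is None or \
       node.n * (w_max - w_min) <= 2.0 * eps * (w_so_far + node.n * w_min):
        return 0.5 * (w_min + w_max) * node.s_unweighted, w_so_far + node.n * w_min
    near, far = node.left, node.right
    if np.linalg.norm(x - far.center) < np.linalg.norm(x - near.center):
        near, far = far, near                             # visit nearer child first
    s1, w_so_far = weighted_sum(x, near, kernel_of_r, eps, w_so_far)
    s2, w_so_far = weighted_sum(x, far, kernel_of_r, eps, w_so_far)
    return s1 + s2, w_so_far
```

With eps = 0 the recursion stops only at leaves (or boxes where wmin = wmax), so it reproduces the exact sum; increasing eps trades accuracy for speed.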
4 Fast Training
Training (or first-level inference) in the GPR model requires solving the positive definite
linear system
    M p = y,   M = K + σ²I      (4)
for the vector p, which in the previous section we assumed had already been pre-computed.
Directly calculating p by inverting the matrix M costs about O(n^3) in general. However,
in practice there are many ways to quickly obtain approximate solutions to linear systems.
Since the system matrix is symmetric positive definite, the conjugate gradient (CG) algorithm
can be applied. CG is an iterative method which searches for p by maximizing the quadratic
function

    q(z) = y^T z − (1/2) z^T M z.

Briefly, CG ensures that z after iteration k is a maximizer of q over a (Krylov) subspace
of dimension k. For details about CG and many other approximate linear solvers, see
[15]. Thus, z "converges" to p (the unconstrained maximizer of q) after n steps, but
intermediate z can be used as approximate solutions. The speed of convergence depends
on the eigenstructure of M . In our case, M typically has only a few large eigenvalues, and
most of the spectrum is close to the lower bound σ²; under these conditions CG is known
to produce good approximations after only a few iterations. Crucially, the only operation
on M performed in each iteration of CG is a matrix-vector multiplication (MVM) with
M.
Since M = K + σ²I, speeding up MVM with M is critically dependent on our ability to
perform fast MVM with the kernel matrix K . We can apply the algorithm from Section 3
to perform fast MVM.
Specifically, observe that the i-th row of K is given by k_i = [K(x_i, x_1), . . . , K(x_i, x_n)]^T.
Thus, k_i has the same form as that of the vector k_* used in the prediction step. Hence to
compute the matrix-vector product Kv, we simply need to compute the inner products

    k_i^T v = Σ_{j=1}^n K(x_i, x_j) v_j

for i = 1, . . . , n. Following exactly the method presented in Section 3, we can do this
efficiently using a kd-tree, where here v now plays the role of p in Equation 2.
Two additional optimizations are possible. First, in different iterations of conjugate gradient,
we can use the same kd-tree structure to compute k_i^T v for different i and different
v. Indeed, given a dataset, we need only ever find a single kd-tree structure for it, and the
same kd-tree structure can then be used to make multiple predictions or multiple MVM
operations. Further, given fixed v, to compute k_i^T v for different i = 1, . . . , n (to obtain
the vector resulting from one MVM operation), we can also share the same pre-computed
partial unweighted sums in the internal nodes of the tree. Only when v (or p) changes do
we need to change the partial unweighted sums (discussed in Section 3.2) of v stored in
the internal nodes (an O(n) operation).
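Because CG touches M only through matrix-vector products, the solver can be written against an abstract `mvm` callback, which in the paper's setting would wrap the kd-tree approximation of z ↦ Kz + σ²z. A textbook CG sketch (our own minimal illustration, not the authors' code):

```python
import numpy as np

def conjugate_gradient(mvm, y, tol=1e-8, max_iter=None):
    """Solve M p = y given only a matrix-vector product mvm(z) = M z.

    M must be symmetric positive definite (here M = K + sigma^2 I).
    """
    n = len(y)
    max_iter = max_iter or n
    p = np.zeros(n)
    r = y - mvm(p)          # residual
    d = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iter):
        Md = mvm(d)
        alpha = rs / (d @ Md)
        p += alpha * d
        r -= alpha * Md
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p
```

Any exact MVM works here as well; the stability of Krylov solvers under approximate MVM is the subject of [4].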
5 Performance Evaluation
We evaluate our kd-tree implementation of GPR and an implementation that uses direct
computation for the inner products. Our experiments were performed on the nine regression
datasets in Table 1. 2
2
Data for the Helicopter experiments come from an autonomous helicopter flight project [10],
and the three tasks were to model three subdynamics of the helicopter, namely its yaw rate, forward
velocity, and lateral velocity one timestep later as a function of the helicopter?s current state. The
temperature and humidity experiments use data from a sensornet comprising a network of simple
sensor motes [2], and the goal here is to predict the conditions at a mote from the measurements
Data set name            Input dimension   Training set size   Test set size
Helicopter yaw rate             3               40000               4000
Helicopter x-velocity           2               40000               4000
Helicopter y-velocity           2               40000               4000
Mote 10 temperature             2               20000               5000
Mote 47 temperature             3               20000               5000
Mote 47 humidity                3               20000               5000
Housing income                  2               18000               2000
Housing value                   2               18000               2000
Housing age                     2               18000               2000

Table 1: Datasets used in our experiments.
                         Exact cost   Tree cost   Speedup   Exact error   Tree error
Helicopter yaw rate         14.95        0.31       47.8       0.336         0.336
Helicopter x-velocity       12.37        0.41       30.3       0.594         0.595
Helicopter y-velocity       11.25        0.41       27.3       0.612         0.614
Mote 10 temperature          4.54        0.69        6.6       0.278         0.258
Mote 47 temperature          4.34        1.11        3.9       0.385         0.433
Mote 47 humidity             3.87        0.82        4.7       1.189         1.273
Housing income               2.75        0.76        3.6       0.478         0.478
Housing value                4.47        0.51        8.8       0.496         0.496
Housing age                  3.21        1.15        2.8       0.787         0.785

Table 2: Prediction performance on 9 regression problems. Exact uses exact computation
of Equation 2. Tree is the kd-tree based implementation described in Section 3.2. Cost is
the computation time measured in milliseconds per prediction. The error reported is the
mean absolute prediction error.
For all experiments, we used the Gaussian RBF kernel

    K(x, x′) = exp( −‖x − x′‖₂² / (2d²) ),

which is monotonic isotropic, with d and σ chosen to be reasonable values for each problem
(via cross validation). The parameter ε used in the cutoff rule was set to 0.001 for all
experiments.
5.1 Prediction performance
Our first set of experiments compare the prediction time of the kd-tree algorithm with exact
computation, given a precomputed p. Our average prediction times are given in Table 2.
These numbers include the cost of building the kd-tree (but remain small since the cost is
then amortized over all the examples in the test set). As we see, our algorithm runs 2.8 to 47.8 times faster than exact computation. Further, it incurs only a very small amount of
additional error compared to the exact algorithm.
5.2 Learning performance
Our second set of experiments examine the running times for learning (i.e., solving the
system of Equation 4) using our kd-tree algorithm for the MVM operation, compared to
exact computation. For both approximate and exact MVM, conjugate gradient was used
of nearby motes. The housing experiments make use of data collected from the 1990 Census in
California [12]. The median income of a block group is predicted from the median house value and
average number of rooms per person; the median house value is predicted using median housing
age and median income; the median housing age is predicted using median house value and average
number of rooms per household.
                         Exact cost   Tree cost   Speedup   Exact error   Tree error
Helicopter yaw rate         22885         279        82.0       0.336         0.336
Helicopter x-velocity       23412         619        37.9       0.594         0.595
Helicopter y-velocity       14341         443        32.4       0.612         0.614
Mote 10 temperature          2071         253         8.2       0.278         0.258
Mote 47 temperature          2531         487         5.2       0.385         0.433
Mote 47 humidity             2121         398         5.3       1.189         1.273
Housing income               1922         581         3.3       0.478         0.478
Housing value                 997         138         7.2       0.496         0.496
Housing age                  1496         338         4.4       0.787         0.785

Table 3: Training time on the 9 regression problems. Cost is the computation time measured
in seconds.
(with the same number of iterations). Here, we see that our algorithm performs 3.3 to 82
times faster than exact computation.3
6 Discussion
6.1 Related Work
Multiresolution tree data structures have been used to speed up the computation of a wide
variety of machine learning algorithms [9, 5, 7, 14]. GP regression was introduced to
the machine learning community by Rasmussen and Williams [19]. The use of CG for
efficient first-level inference is described by Gibbs and MacKay [6]. The stability of Krylov
subspace iterative solvers (such as CG) with approximate matrix-vector multiplication is
discussed in [4].
Sparse approximations to GP inference provide a different way of overcoming the O(n^3)
scaling [18, 3, 8], by selecting a representative subset of D of size d ≪ n. Sparse methods
can typically be trained in O(nd²) (including the active forward selection of the subset)
and require only O(d) prediction time. In contrast, in our work here we make use of all of
the data for prediction, achieving better scaling by exploiting cluster structure in the data
through a kd-tree representation.
More closely related to our work is [20], where the MVM primitive is also approximated
using a special data structure for D. Their approach, called the improved fast Gauss transform (IFGT), partitions the space with a k-centers clustering of D and uses a Taylor expansion of the RBF kernel in order to cache repeated computations. The IFGT is limited to the
RBF kernel, while our method can be used with all monotonic isotropic kernels. As a topic
for future work, we believe it may be possible to apply IFGT?s Taylor expansions at each
node of the kd-tree?s query-dependent multiresolution clustering, to obtain an algorithm
that enjoys the best properties of both.
6.2 Isotropic Kernels
Recall that an isotropic kernel K(x, x′) can be written as a function of the Euclidean
distance r = ‖x − x′‖. While the RBF kernel of the form exp(−r²) is the most frequently
used isotropic kernel in machine learning, there are many other isotropic kernels to which
our method here can be applied without many changes (since the kd-tree cutoff criterion
depends on the pairwise Euclidean distances only). An interesting class of kernels is the
Matérn model (see [17], Sect. 2.10), K(r) ∝ (αr)^ν K_ν(αr), α = 2ν^{1/2}, where K_ν is the
modified Bessel function of the second kind. The parameter ν controls the roughness of
functions sampled from the process, in that they are ⌊ν⌋ times mean-square differentiable.
3
The errors reported in this table are identical to Table 2, since for the kd-tree results we always
trained and made predictions both using the fast approximate method. This gives a more reasonable
test of the "end-to-end" use of kd-trees.
For ν = 1/2 we have the "random walk" Ornstein-Uhlenbeck kernel of the form e^{−αr}, and
the RBF kernel is obtained in the limit ν → ∞. The RBF kernel forces u(·) to be very
smooth, which can lead to bad predictions for training data with partly rough behaviour, and
its uncritical usage is therefore discouraged in geostatistics (where the use of GP models
was pioneered). Here, other Matérn kernels are sometimes preferred. We believe that our
kd-trees approach holds rich promise for speeding up GPR with other isotropic kernels
such as the Matérn and Ornstein-Uhlenbeck kernels.
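For concreteness, the Ornstein-Uhlenbeck (ν = 1/2) and RBF (ν → ∞) extremes, plus the closed-form ν = 3/2 member of the family, can be written down directly. The parameterizations below follow common machine-learning conventions and are our choice for illustration, not necessarily the paper's; what matters for the kd-tree method is only that each is a monotonically decreasing function of r:

```python
import numpy as np

def ou_kernel(r, alpha=1.0):
    """Matern nu = 1/2: the Ornstein-Uhlenbeck kernel exp(-alpha * r)."""
    return np.exp(-alpha * r)

def matern32_kernel(r, length=1.0):
    """Matern nu = 3/2 in closed form: (1 + a r) exp(-a r), a = sqrt(3)/length."""
    a = np.sqrt(3.0) / length
    return (1.0 + a * r) * np.exp(-a * r)

def rbf_kernel(r, d=1.0):
    """The nu -> infinity limit: exp(-r^2 / (2 d^2)), very smooth samples."""
    return np.exp(-r ** 2 / (2 * d ** 2))
```

Any of these can be passed as `kernel_of_r` to the kd-tree cutoff computation, since the bounding-box distance bounds translate directly into weight bounds.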
References
[1] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. (Unpublished manuscript), 2005.
[2] Phil Buonadonna, David Gay, Joseph M. Hellerstein, Wei Hong, and Samuel Madden. Task: Sensor network in a box. In Proceedings of European Workshop on Sensor Networks, 2005.
[3] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 14:641-668, 2002.
[4] Nando de Freitas, Yang Wang, Maryam Mahdaviani, and Dustin Lang. Fast Krylov methods for n-body learning. In Advances in NIPS 18, 2006.
[5] Kan Deng and Andrew Moore. Multiresolution instance-based learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1233-1239. Morgan Kaufmann, 1995.
[6] Mark N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, University of Cambridge, 1997.
[7] Alexander Gray and Andrew Moore. N-body problems in statistical learning. In Advances in NIPS 13, 2001.
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In Advances in NIPS 15, pages 609-616, 2003.
[9] Andrew Moore, Jeff Schneider, and Kan Deng. Efficient locally weighted polynomial regression predictions. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 236-244. Morgan Kaufmann, 1997.
[10] Andrew Y. Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric Berger, and Eric Liang. Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004.
[11] Stephen M. Omohundro. Five balltree construction algorithms. Technical Report TR-89-063, International Computer Science Institute, 1989.
[12] R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics and Probability Letters, 33(3):291-297, May 5 1997.
[13] F.P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985.
[14] Nathan Ratliff and J. Andrew Bagnell. Kernel conjugate gradient. Technical Report CMU-RI-TR-05-30, Robotics Institute, Carnegie Mellon University, June 2005.
[15] Y. Saad. Iterative Methods for Sparse Linear Systems. International Thomson Publishing, 1st edition, 1996.
[16] M. Seeger. Gaussian processes for machine learning. International Journal of Neural Systems, 14(2):69-106, 2004.
[17] M. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer, 1999.
[18] Michael Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
[19] C. Williams and C. Rasmussen. Gaussian processes for regression. In Advances in NIPS 8, 1996.
[20] C. Yang, R. Duraiswami, and L. Davis. Efficient kernel machines using the improved fast Gauss transform. In Advances in NIPS 17, pages 1561-1568, 2005.
Brain-Computer Interfacing
Guido Dornhege1, Benjamin Blankertz1, Matthias Krauledat1,3,
Florian Losch2, Gabriel Curio2 and Klaus-Robert Müller1,3
1 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12 489 Berlin, Germany
2 Campus Benjamin Franklin, Charité University Medicine Berlin,
Hindenburgdamm 30, 12 203 Berlin, Germany.
3 University of Potsdam, August-Bebel-Str. 89, 14 482 Germany
{dornhege,blanker,kraulem,klaus}@first.fhg.de,
{florian-philip.losch,gabriel.curio}@charite.de
Abstract
Brain-Computer Interface (BCI) systems create a novel communication
channel from the brain to an output device by bypassing conventional
motor output pathways of nerves and muscles. Therefore they could
provide a new communication and control option for paralyzed patients.
Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique
that allows the simultaneous optimization of a spatial and a spectral filter
enhancing discriminability of multi-channel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates
the superiority of the proposed algorithm. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the
algorithm can also be used for further analysis of the data, e.g., for source
localization of the respective brain rhythms.
1 Introduction
Brain-Computer Interface (BCI) research aims at the development of a system that allows
direct control of, e.g., a computer application or a neuroprosthesis, solely by human intentions as reflected in suitable brain signals, cf. [1, 2, 3, 4, 5, 6, 7, 8, 9]. We will be
focussing on noninvasive, electroencephalogram (EEG) based BCI systems. Such devices
can be used as tools of communication for the disabled or for healthy subjects that might
be interested in exploring a new path of man-machine interfacing, say when playing BCI
operated computer games.
The classical approach to establish EEG-based control is to set up a system that is controlled by a specific EEG feature which is known to be susceptible to conditioning and to
let the subjects learn the voluntary control of that feature. In contrast, the Berlin Brain-Computer Interface (BBCI) uses well established motor competences in control paradigms
and a machine learning approach to extract subject-specific discriminability patterns from
high-dimensional features. This approach has the advantage that the long subject training
needed in the operant conditioning approach is replaced by a short calibration measurement
(20 minutes) and machine training (1 minute). The machine adapts to the specific characteristics of the brain signals of each subject, accounting for the high inter-subject variability.
With respect to the topographic patterns of brain rhythm modulations the Common Spatial
Patterns (CSP) (see [10]) algorithm has proven to be very useful to extract subject-specific,
discriminative spatial filters. On the other hand the frequency band on which the CSP algorithm operates is either selected manually or unspecifically set to a broad band filter, cf.
[10, 5]. Obviously, a simultaneous optimization of a frequency filter with the spatial filter
is highly desirable. Recently, in [11] the CSSP algorithm was presented, in which very
simple frequency filters (with one delay tap) for each channel are optimized together with
the spatial filters. Although the results showed an improvement of the CSSP algorithm over
CSP, the flexibility of the frequency filters is very limited. Here we present a method that
allows to simultaneously optimize an arbitrary FIR filter within the CSP analysis. The proposed algorithm outperforms CSP and CSSP on average, and in cases where a separation of
the discriminative rhythm from dominating non-discriminative rhythms is of importance, a
considerable increase of classification accuracy can be achieved.
2 Experimental Setup
In this paper we investigate data from 60 EEG experiments with 22 different subjects. All
experiments included so called training sessions which are used to train subject-specific
classifiers. Many experiments also included feedback sessions in which the subject could
steer a cursor or play a computer game like brain-pong by BCI control. Data from feedback
sessions are not used in this a-posteriori study since they depend on an intricate interaction
of the subject with the original classification algorithm.
In the experimental sessions used for the present study, labeled trials of brain signals were
recorded in the following way: The subjects were sitting in a comfortable chair with arms
lying relaxed on the armrests. Every 4.5–6 seconds one of 3 different visual stimuli indicated for 3–3.5 seconds which mental task the subject should accomplish during that period. The
investigated mental tasks were imagined movements of the left hand (l), the right hand
(r), and one foot (f ). Brain activity was recorded from the scalp with multi-channel EEG
amplifiers using 32, 64, or 128 channels. Besides the EEG channels, we recorded the electromyogram (EMG) from both forearms and the leg as well as the horizontal and vertical electrooculogram (EOG) from the eyes. The EMG and EOG channels were used exclusively
to make sure that the subjects performed no real limb or eye movements correlated with
the mental tasks that could directly (artifacts) or indirectly (afferent signals from muscles
and joint receptors) be reflected in the EEG channels and thus be detected by the classifier,
which operates on the EEG signals only. Between 120 and 200 trials for each class were
recorded. In this study we investigate only binary classifications, but the results can be
expected to safely transfer to the multi-class case.
3 Neurophysiological Background
According to the well established model called homunculus, first described by [12], for
each part of the human body there exists a corresponding region in the motor and somatosensory area of the neocortex. The "mapping" from the body to the respective brain areas largely preserves topography, i.e., neighboring parts of the body are mostly represented in neighboring parts of the cortex. While the region of the feet is located at the
center of the vertex, the left hand is represented lateralized on the right hemisphere and the
right hand on the left hemisphere. Brain activity during rest and wakefulness is describable
by different rhythms located over different brain areas. These rhythms reflect functional
states of different neuronal cortical networks and can be used for brain-computer interfacing. These rhythms are blocked by movements, independent of their active, passive or
reflexive origin. Blocking effects are visible bilaterally but pronounced contralaterally in
the cortical area that corresponds to the moved limb. This attenuation of brain rhythms is
Figure 1: The plot shows the spectra for
one subject during left hand (light line)
and foot (dark line) motor imagery between 5 and 25 Hz at scalp positions Pz,
Cz and C4. In both central channels
two peaks, one at 8 Hz and one at 12 Hz
are visible. Below each channel the r²-value, which measures discriminability,
is added. It indicates that the second
peak contains more discriminative information.
termed event-related desynchronization (ERD), see [13]. Over sensorimotor cortex a so-called idle- or μ-rhythm can be measured in the scalp EEG. The most common frequency band of the μ-rhythm is about 10 Hz (precentral α- or wicket-rhythm, [14]). Jasper and Penfield ([12]) described a strictly local so-called β-rhythm of about 20 Hz over human motor cortex in electrocorticographic recordings. In scalp EEG recordings one can find μ-rhythm over motor areas mixed with and superimposed by 20 Hz activity. In this context the μ-rhythm is sometimes interpreted as a subharmonic of faster cortical activity. The brain rhythms described above are of cortical origin, but the role of a thalamo-cortical pacemaker has been discussed since the first description of the EEG by Berger ([15]) and is still a point of discussion. Lopes da Silva ([16]) showed that cortico-cortical coherence is much larger than thalamo-cortical coherence. However, since the focal ERD in the motor and/or sensory cortex can be observed even when a subject is only imagining a movement or sensation in the specific limb, this feature can well be used for BCI control. The discrimination of the imagination of movements of left hand vs. right hand vs. foot is based on the topography of the attenuation of the μ and/or β rhythm.
There are two problems when using ERD features for BCI control:
(1) The strength of the sensorimotor idle rhythms as measured by scalp EEG is known to
vary strongly between subjects. This introduces a high intersubject variability on the accuracy with which an ERD-based BCI system works. There is another feature independent
from the ERD reflecting imagined or intended movements, the movement related potentials
(MRP), denoting a negative DC shift of the EEG signals in the respective cortical regions.
See [17, 18] for an investigation of how this feature can be exploited for BCI use and
combined with the ERD feature. This combination strategy was able to greatly enhance
classification performance in offline studies. In this paper we focus only on improving the
ERD-based classification, but all the improvements presented here can also be used in the
combined algorithm.
(2) The precentral μ-rhythm is often superimposed by the much stronger posterior α-rhythm, which is the idle rhythm of the visual system. It is best articulated with eyes closed, but also present in awake and attentive subjects, see Fig. 1 at channel Pz. Due to volume conduction the posterior α-rhythm interferes with the precentral μ-rhythm in the EEG channels over motor cortex. Hence a μ-power based classifier is susceptible to modulations of the posterior α-rhythm that occur due to fatigue, change in attentional focus
while performing tasks, or changing demands of visual processing. When the two rhythms
have different spectral peaks as in Fig. 1, channels Cz and C4, a suitable frequency filter
can help to weaken the interference. The optimization of such a filter integrated in the CSP
algorithm is addressed in this paper.
4 Spatial Filter - the CSP Algorithm
The common spatial pattern (CSP) algorithm ([19]) is very useful in calculating spatial
filters for detecting ERD effects ([20]) and for ERD-based BCIs, see [10], and has been
extended to multi-class problems in [21]. Given two distributions in a high-dimensional
space, the (supervised) CSP algorithm finds directions (i.e., spatial filters) that maximize
variance for one class and at the same time minimize variance for the other class. After
having band-pass filtered the EEG signals to the rhythms of interest, high variance reflects
a strong rhythm and low variance a weak (or attenuated) rhythm. Let us take the example
of discriminating left hand vs. right hand imagery. According to Sec. 3, the spatial filter
that focusses on the area of the left hand is characterized by a strong motor rhythm during
imagination of right hand movements (left hand is in idle state), and by an attenuated motor
rhythm during left hand imagination.
This criterion is exactly what the CSP algorithm optimizes: maximizing variance for the
class of right hand trials and at the same time minimizing variance for left hand trials.
Furthermore the CSP algorithm calculates the dual filter that will focus on the area of the
right hand (and it will even calculate several filters for both optimizations by considering
orthogonal subspaces).
The CSP algorithm is trained on labeled data, i.e., we have a set of trials $s_i$, $i = 1, 2, \ldots$, where each trial consists of several channels (as rows) and time points (as columns). A spatial filter $w \in \mathbb{R}^{\#channels}$ projects each trial to the one-channel signal $\hat{s}_i(w) = w^\top s_i$. The idea of CSP is to find a spatial filter $w$ such that the projected signal has high variance for one class and low variance for the other. In other words, we maximize the variance for one class while the sum of the variances of both classes remains constant, which is expressed by the following optimization problem:
$$\max_w \sum_{i:\,\text{trial in class }1} \mathrm{var}(\hat{s}_i(w)) \quad \text{s.t.} \quad \sum_i \mathrm{var}(\hat{s}_i(w)) = 1, \qquad (1)$$
where $\mathrm{var}(\cdot)$ is the variance of the vector. An analogous formulation can be formed for the second class.
Using the definition of the variance we simplify the problem to
$$\max_w\; w^\top \Sigma_1 w \quad \text{s.t.} \quad w^\top(\Sigma_1 + \Sigma_2)\,w = 1, \qquad (2)$$
where $\Sigma_y$ is the covariance matrix of the trial-concatenated matrix of dimension [channels × concatenated time points] belonging to the respective class $y \in \{1, 2\}$.
Formulating the dual problem we can find that the problem can be solved by calculating a matrix $Q$ and a diagonal matrix $D$ with elements in $[0, 1]$ such that
$$Q\,\Sigma_1\,Q^\top = D \quad \text{and} \quad Q\,\Sigma_2\,Q^\top = I - D \qquad (3)$$
and by choosing the highest and lowest eigenvalue.
Equation (3) can be accomplished in the following way. First we whiten the matrix $\Sigma_1 + \Sigma_2$, i.e., we determine a matrix $P$ such that $P(\Sigma_1 + \Sigma_2)P^\top = I$, which is possible due to the positive definiteness of $\Sigma_1 + \Sigma_2$. Then we define $\tilde{\Sigma}_y = P\,\Sigma_y\,P^\top$ and calculate an orthogonal matrix $R$ and a diagonal matrix $D$ by the spectral theorem such that $\tilde{\Sigma}_1 = R D R^\top$. Therefore $\tilde{\Sigma}_2 = R(I - D)R^\top$, since $\tilde{\Sigma}_1 + \tilde{\Sigma}_2 = I$, and $Q := R^\top P$ satisfies (3). The projection given by the $j$-th row of matrix $R$ has a relative variance of $d_j$ (the $j$-th diagonal element of $D$) for trials of class 1 and relative variance $1 - d_j$ for trials of class 2. If $d_j$ is near 1, the filter given by the $j$-th row of $R$ maximizes variance for class 1, and since $1 - d_j$ is near 0, it minimizes variance for class 2. Typically one would retain some projections corresponding to the highest eigenvalues $d_j$, i.e., CSPs for class 1, and some corresponding to the lowest eigenvalues, i.e., CSPs for class 2.
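The whitening-plus-rotation recipe above is straightforward to implement. The following sketch (in NumPy; all function and variable names are ours, not from the paper) computes CSP filters via two symmetric eigendecompositions:

```python
import numpy as np

def csp(trials_1, trials_2, n_filters=3):
    """Common Spatial Patterns via whitening + rotation.

    trials_k: list of [channels x timepoints] arrays for class k.
    Returns a [2*n_filters x channels] matrix: the first n_filters rows
    maximize variance for class 1, the last n_filters for class 2.
    """
    def class_cov(trials):
        # covariance of the trial-concatenated [channels x time] matrix
        x = np.concatenate(trials, axis=1)
        x = x - x.mean(axis=1, keepdims=True)
        return x @ x.T / x.shape[1]

    s1, s2 = class_cov(trials_1), class_cov(trials_2)

    # Whitening: find P with P (s1 + s2) P^T = I.
    evals, evecs = np.linalg.eigh(s1 + s2)
    P = np.diag(evals ** -0.5) @ evecs.T

    # Diagonalize the whitened class-1 covariance: R D R^T.
    d, R = np.linalg.eigh(P @ s1 @ P.T)   # eigenvalues d lie in [0, 1]
    Q = R.T @ P                           # rows of Q are spatial filters

    # Highest d -> CSPs for class 1, lowest d -> CSPs for class 2.
    order = np.argsort(d)[::-1]
    keep = np.r_[order[:n_filters], order[-n_filters:]]
    return Q[keep]
```

Applying the returned rows to a trial (`W @ trial`) yields surrogate channels whose variances carry the class-discriminative information.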
5 Spectral Filter
As discussed in Sec. 3 the content of discriminative information in different frequency
bands is highly subject-dependent. For example the subject whose spectra are visualized in
Fig. 1 shows a highly discriminative peak at 12 Hz whereas the peak at 8 Hz does not show
good discrimination. Since the lower frequency peak is stronger a better performance in
classification can be expected, if we reduce the influence of the lower frequency peak for
this subject. However, for other subjects the situation looks differently, i.e., the classification might fail if we exclude this information. Thus it is desirable to optimize a spectral
filter for better discriminability. Here are two approaches to this task.
CSSP. In [11] the following was suggested: given $s_i$, the signal $s_i^{(\tau)}$ is defined to be the signal $s_i$ delayed by $\tau$ timepoints. In CSSP the usual CSP approach is applied to the concatenation of $s_i$ and $s_i^{(\tau)}$ in the channel dimension, i.e., the delayed signals are treated as new channels. This concatenation step gives the ability to neglect or emphasize specific frequency bands; the effect depends strongly on the choice of $\tau$, which can be made by some validation approach on the training set. More complex frequency filters can be found by concatenating more delayed EEG signals with several delays. In [11] it was concluded that in typical BCI situations, where only small training sets are available, the choice of only one delay tap is most effective: the increased flexibility of a frequency filter with more delay taps does not pay off against the increased complexity of the optimization problem.
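The CSSP augmentation itself is a one-liner. As an illustration (our own naming, assuming trials stored as channels × time arrays), the delayed copy is stacked as extra channels and plain CSP is then run on the augmented trials:

```python
import numpy as np

def cssp_augment(trial, tau):
    """Stack a [channels x T] trial with a copy delayed by tau samples,
    treating the delayed signals as additional channels (the CSSP trick).
    The first tau samples are dropped so both parts align in time."""
    if tau == 0:
        return np.vstack([trial, trial])
    return np.vstack([trial[:, tau:], trial[:, :-tau]])
```

A spatial filter learned on the doubled channel set then implicitly applies a one-delay-tap FIR filter per original channel.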
CSSSP. The idea of our new CSSSP algorithm is to learn a complete global spatio-temporal filter in the spirit of CSP and CSSP.
A digital frequency filter consists of two sequences $a$ and $b$ of lengths $n_a$ and $n_b$ such that the signal $x$ is filtered to $y$ by
$$a(1)\,y(t) = b(1)\,x(t) + b(2)\,x(t-1) + \ldots + b(n_b)\,x(t-n_b+1) \;-\; a(2)\,y(t-1) - \ldots - a(n_a)\,y(t-n_a+1).$$
Here we restrict ourselves to FIR (finite impulse response) filters by defining $n_a = 1$ and $a = 1$. Furthermore we define $b(1) = 1$ and fix the length of $b$ to some $T$ with $T > 1$. By this restriction we give up some flexibility of the frequency filter, but it allows us to find a suitable solution in the following way: we are looking for a real-valued sequence $b_{1,\ldots,T}$
with $b(1) = 1$ such that the trials
$$s_{i,b} = s_i + \sum_{\tau=2,\ldots,T} b_\tau\, s_i^{(\tau)} \qquad (4)$$
can be classified better in some way. Using equation (1) we have to solve the problem
$$\max_{w,\,b,\,b(1)=1}\; \sum_{i:\,\text{trial in class }1} \mathrm{var}(\hat{s}_{i,b}(w)) \quad \text{s.t.} \quad \sum_i \mathrm{var}(\hat{s}_{i,b}(w)) = 1, \qquad (5)$$
which can be simplified to
$$\max_{b,\,b(1)=1}\; \max_w\; w^\top \left( \sum_{\tau=0}^{T-1} \Big( \sum_{j=1}^{T-\tau} b(j)\,b(j+\tau) \Big)\, \hat{\Sigma}_1^{(\tau)} \right) w \quad \text{s.t.} \quad w^\top \left( \sum_{\tau=0}^{T-1} \Big( \sum_{j=1}^{T-\tau} b(j)\,b(j+\tau) \Big)\, \big(\hat{\Sigma}_1^{(\tau)} + \hat{\Sigma}_2^{(\tau)}\big) \right) w = 1, \qquad (6)$$
where $\hat{\Sigma}_y^{(\tau)} = \big\langle\, s_i \,(s_i^{(\tau)})^\top + s_i^{(\tau)} s_i^\top \,\big\rangle_{i:\,\text{trial in class }y}$, namely the correlation between the signal and the signal delayed by $\tau$ timepoints.
Since for each $b$ we can calculate the optimal $w$ by the usual CSP techniques (see equations (2) and (3)), a $(T-1)$-dimensional problem remains (since $b(1) = 1$), which we can solve with standard line-search optimization techniques if $T$ is not too large.
Consequently we get for each class a frequency band filter and a pattern (or, similar to CSP, more than one pattern by choosing the next eigenvectors).
However, with increasing T the complexity of the frequency filter has to be controlled in
order to avoid overfitting. This control is achieved by introducing a regularization term in
Figure 2: The plot on the left shows one learned frequency filter for the subject whose spectra were shown in Fig. 1. The plot on the right visualizes the resulting spectra after applying the frequency filter on the left. By this technique the classification error could be reduced from 12.9 % to 4.3 %.
the following way:
$$\max_{b,\,b(1)=1}\; \max_w\; w^\top \left( \sum_{\tau=0}^{T-1} \Big( \sum_{j=1}^{T-\tau} b(j)\,b(j+\tau) \Big)\, \hat{\Sigma}_1^{(\tau)} \right) w \;-\; \frac{C}{T}\,\|b\|_1 \quad \text{s.t.} \quad w^\top \left( \sum_{\tau=0}^{T-1} \Big( \sum_{j=1}^{T-\tau} b(j)\,b(j+\tau) \Big)\, \big(\hat{\Sigma}_1^{(\tau)} + \hat{\Sigma}_2^{(\tau)}\big) \right) w = 1. \qquad (7)$$
Here $C$ is a non-negative regularization constant, which has to be chosen, e.g., by cross-validation. Since a sparse solution for $b$ is desired, we use the 1-norm in this formulation. With higher $C$ we get sparser solutions for $b$, until at some point only the usual CSP approach remains, i.e., $b(1) = 1$, $b(m) = 0$ for $m > 1$. We call this approach the Common Sparse Spectral
Spatial Pattern (CSSSP) algorithm.
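To make the construction concrete, here is an illustrative sketch (not the authors' code; all names are ours) of the CSSSP objective of eqs. (6)/(7): it builds the lagged correlation matrices, forms the $b$-weighted matrices, scores a candidate $b$ by the best generalized eigenvalue minus the sparsity penalty, and, for brevity, searches over the free coefficients with a simple grid instead of a proper line search:

```python
import numpy as np
from itertools import product

def lagged_cov(trials, tau):
    # Sigma_hat^(tau): symmetrized correlation between each trial and its
    # tau-delayed copy, averaged over the trials of one class.
    mats = []
    for s in trials:
        n = s.shape[1]
        a = s[:, tau:]          # s_i(t)
        d = s[:, : n - tau]     # s_i(t - tau)
        mats.append((a @ d.T + d @ a.T) / a.shape[1])
    return np.mean(mats, axis=0)

def csssp_objective(b, S1, S12, C):
    # b-weighted matrices of eqs. (6)/(7); score = best generalized
    # eigenvalue of (A1, A12) minus the 1-norm sparsity penalty.
    T = len(b)
    def weighted(mats):
        return sum(
            sum(b[j] * b[j + tau] for j in range(T - tau)) * mats[tau]
            for tau in range(T)
        )
    A1, A12 = weighted(S1), weighted(S12)
    lam = np.linalg.eigvals(np.linalg.solve(A12, A1)).real.max()
    return lam - C / T * np.abs(b).sum()

def csssp_fit(trials_1, trials_2, T=2, C=0.5, grid=None):
    # Brute-force search over the free coefficients b(2..T).
    if grid is None:
        grid = np.linspace(-0.5, 0.5, 5)
    S1 = [lagged_cov(trials_1, tau) for tau in range(T)]
    S2 = [lagged_cov(trials_2, tau) for tau in range(T)]
    S12 = [m1 + m2 for m1, m2 in zip(S1, S2)]
    best_val, best_b = -np.inf, None
    for tail in product(grid, repeat=T - 1):
        b = np.r_[1.0, tail]
        val = csssp_objective(b, S1, S12, C)
        if val > best_val:
            best_val, best_b = val, b
    return best_b
```

The grid over $b$ and the modest coefficient range are assumptions for this sketch; they also keep the $b$-weighted constraint matrix safely invertible, which a careful implementation would have to check explicitly.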
6 Feature Extraction, Classification and Validation
6.1 Feature Extraction
After choosing all channels except the EOG and EMG channels and a few of the outermost channels of the cap, we apply a causal band-pass filter from 7–30 Hz to the data, which encompasses both the μ- and the β-rhythm. For classification we extract the interval 500–3500 ms after
the presented visual stimulus. To these trials we apply the original CSP ([10]) algorithm
(see Sec. 4), the extended CSSP ([11]), and the proposed CSSSP algorithm (see Sec. 5).
For CSSP we choose the best $\tau$ by leave-one-out cross-validation on the training set. For CSSSP we present the results for different regularization constants $C$ with fixed $T = 16$.
Here we use 3 patterns per class which leads to a 6-dimensional output signal. As a measure
of the amplitude in the specified frequency band we calculate the logarithm of the variances
of the spatio-temporally filtered output signals as feature vectors.
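This feature-extraction step can be sketched as follows (assuming trials as channels × time arrays and spatial filters as the rows of a matrix W; naming is ours):

```python
import numpy as np

def log_var_features(trials, W):
    """Project each [channels x time] trial through the spatial filter
    rows of W and return the log-variances as one feature vector per
    trial (rows of the returned array)."""
    return np.array([np.log(np.var(W @ s, axis=1)) for s in trials])
```

With 3 patterns per class, W has 6 rows and each trial is reduced to a 6-dimensional feature vector, as described above.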
6.2 Classification and Validation
The presented preprocessing reduces the dimensionality of the feature vectors to six. Since
we have 120 to 200 samples per class for each data set, there is no need for regularization when using linear classifiers. When testing non-linear classification methods on these
features, we could not observe any statistically significant gain for the given experimental setup when compared to Linear Discriminant Analysis (LDA) (see also [22, 6, 23]).
Therefore we choose LDA for classification.
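A minimal LDA implementation for such low-dimensional features might look as follows (a textbook Fisher discriminant with equal class priors, not the authors' code):

```python
import numpy as np

def lda_train(X1, X2):
    """Fisher/LDA weights for two classes of feature vectors (rows).
    Returns (w, b) such that sign(x @ w + b) predicts the class."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-class scatter; invertible for low-dim features
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m2)
    b = -0.5 * (m1 + m2) @ w        # threshold midway between the means
    return w, b
```

For six-dimensional log-variance features and 120–200 trials per class, this closed-form solution is stable without any shrinkage.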
For validation purposes the (chronologically) first half of the data are used as training and
the second half as test data.
7 Results
Fig. 2 shows one chosen frequency filter for the subject whose spectra are shown in Fig. 1
and the remaining spectrum after using this filter. As expected the filter detects that there
Figure 3: Each plot shows the validation error of one algorithm against another: in row 1 CSP (y-axis) vs. CSSSP (x-axis), in row 2 CSSP (y-axis) vs. CSSSP (x-axis). Across the columns the regularization parameter of CSSSP is varied between 0.1, 0.5, 1 and 5. In each plot a cross above the diagonal marks a dataset where CSSSP outperforms the other algorithm.
is a high discrimination in frequencies at 12 Hz, but only a low discrimination in the frequency band at 8 Hz. Since the lower frequency peak is very predominant for this subject
without having a high discrimination power, a filter is learned which drastically decreases
the amplitude in this band, whereas full power at 12 Hz is retained.
Applied to all datasets and all pairwise class combinations of the datasets we get the results
shown in Fig. 3. Only the results of those datasets are displayed whose classification accuracy exceeds 70 % for at least one classifier. First of all, it is obvious that a small choice
of the regularization constant is problematic, since the algorithm tends to overfit. For high
values of $C$, CSSSP tends towards the CSP performance since the use of frequency filters is penalized too heavily. In between there is a range where CSSSP is better than CSP. Furthermore there
are some datasets where the gain by CSSSP is huge.
Compared to CSSP the situation is similar, namely that CSSSP outperforms the CSSP in
many cases and on average, but there are also a few cases, where CSSP is better.
An open issue is the choice of the parameter C. If we choose it constant at 1 for all datasets
the figure shows that CSSSP will typically outperform CSP. Compared to CSSP both cases
appear, namely that CSSP is better than CSSSP and vice versa.
A more refined way is to choose C individually for each dataset. One way to accomplish
this choice is to perform cross-validations for a set of possible values of C and to select the
C with minimum cross-validation error. We have done this, for example, for the dataset
whose spectra are shown in Fig. 1. Here on the training set for C the value 0.3 is chosen.
The classification error of CSSSP with this C is 4.3 %, whereas CSP has 12.9 % and CSSP
8.6 % classification error.
8 Concluding discussion
In past BCI research the CSP algorithm has proven to be very successful in determining spatial filters which extract discriminative brain rhythms. However, the performance can suffer when a non-discriminative brain rhythm with an overlapping frequency range interferes. The presented CSSSP algorithm successfully solves such problematic situations by optimizing a spectral filter simultaneously with the spatial filters. The trade-off between
flexibility of the estimated frequency filter and the danger of overfitting is accounted for by
a sparsity constraint which is weighted by a regularization constant. The success of the proposed algorithm compared to the original CSP and to the CSSP algorithm was demonstrated on a corpus of 60 EEG data sets recorded from 22 different subjects.
Acknowledgments We thank S. Lemm for helpful discussions. The studies were supported
by BMBF-grants FKZ 01IBB02A and FKZ 01IBB02B, by the Deutsche Forschungsgemeinschaft
(DFG), FOR 375/B1 and by the PASCAL Network of Excellence (EU # 506778).
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767-791, 2002.
[2] E. A. Curran and M. J. Stokes, "Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems", Brain Cogn., 51: 326-336, 2003.
[3] A. Kübler, B. Kotchoubey, J. Kaiser, J. Wolpaw, and N. Birbaumer, "Brain-Computer Communication: Unlocking the Locked In", Psychol. Bull., 127(3): 358-375, 2001.
[4] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297-298, 1999.
[5] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current Trends in Graz Brain-computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 216-219, 2000.
[6] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157-164, 2002.
[7] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. Krupka, "Multimodal Neuroelectric Interface Development", IEEE Trans. Neural Sys. Rehab. Eng., (11): 199-204, 2003.
[8] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, "Linear spatial integration for single trial detection in encephalography", NeuroImage, 7(1): 223-230, 2002.
[9] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, "EEG-Based Communication: A Pattern Recognition Approach", IEEE Trans. Rehab. Eng., 8(2): 214-215, 2000.
[10] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441-446, 2000.
[11] S. Lemm, B. Blankertz, G. Curio, and K.-R. Müller, "Spatio-Spectral Filters for Improved Classification of Single Trial EEG", IEEE Trans. Biomed. Eng., 52(9): 1541-1548, 2005.
[12] H. Jasper and W. Penfield, "Electrocorticograms in man: Effects of voluntary movement upon the electrical activity of the precentral gyrus", Arch. Psychiat. Nervenkr., 183: 163-174, 1949.
[13] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin. Neurophysiol., 110(11): 1842-1857, 1999.
[14] H. Jasper and H. Andrews, "Normal differentiation of occipital and precentral regions in man", Arch. Neurol. Psychiat. (Chicago), 39: 96-115, 1938.
[15] H. Berger, "Über das Elektroenkephalogramm des Menschen", Arch. Psychiat. Nervenkr., 99(6): 555-574, 1933.
[16] F. H. da Silva, T. H. van Lierop, C. F. Schrijer, and W. S. van Leeuwen, "Organization of thalamic and cortical alpha rhythm: Spectra and coherences", Electroencephalogr. Clin. Neurophysiol., 35: 627-640, 1973.
[17] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Combining Features for BCI", in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, 1115-1122, 2003.
[18] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Increase Information Transfer Rates in BCI by CSP Extension to Multi-class", in: S. Thrun, L. Saul, and B. Schölkopf, eds., Advances in Neural Information Processing Systems, vol. 16, 733-740, MIT Press, Cambridge, MA, 2004.
[19] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, San Diego, 2nd edn., 1990.
[20] Z. J. Koles and A. C. K. Soong, "EEG source localization: implementing the spatio-temporal decomposition approach", Electroencephalogr. Clin. Neurophysiol., 107: 343-352, 1998.
[21] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms", IEEE Trans. Biomed. Eng., 51(6): 993-1002, 2004.
[22] K.-R. Müller, C. W. Anderson, and G. E. Birch, "Linear and Non-Linear Methods for Brain-Computer Interfaces", IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 165-169, 2003.
[23] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis", IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127-131, 2003.
| 2836 |@word blankertz1:1 trial:22 nervenkr:2 stronger:2 norm:1 nd:1 open:1 accounting:1 covariance:1 eng:8 decomposition:1 analoguous:1 contains:1 exclusively:1 denoting:1 franklin:1 outperforms:3 past:1 current:1 ida:1 si:7 visible:2 chicago:1 motor:11 plot:5 discrimination:5 v:7 half:2 selected:1 device:3 pacemaker:1 sys:3 short:1 filtered:3 psychiat:3 mental:3 detecting:1 boosting:2 direct:1 beta:1 consists:2 pathway:1 excellence:1 pairwise:1 inter:1 expected:3 intricate:1 multi:6 brain:28 detects:1 kohlmorgen:1 str:2 kraulem:1 considering:1 increasing:1 ller1:1 project:1 campus:1 deutsche:1 maximizes:1 pregenzer:1 lowest:2 what:1 interpreted:1 minimizes:1 differentiation:1 jorgensen:1 dornhege:5 temporal:2 safely:1 attenuation:2 exactly:1 demonstrates:1 classifier:5 control:12 grant:1 superiority:1 appear:1 comfortable:1 positive:1 local:1 tends:2 receptor:1 krupka:1 solely:1 path:1 modulation:2 might:2 diettrich:1 discriminability:4 limited:1 range:2 statistically:1 locked:1 acknowledgment:1 testing:1 wolpaw:2 cogn:1 wheeler:1 danger:1 area:7 osman:1 projection:2 intention:1 idle:4 word:1 get:3 nb:3 context:1 influence:1 applying:1 vaughan:1 optimize:2 conventional:1 restriction:1 demonstrated:1 center:1 maximizing:1 occipital:1 rdr:1 blanker:1 resp:1 enhanced:1 play:1 diego:1 guido:1 edn:1 us:1 curran:2 origin:2 element:2 trend:1 recognition:2 located:2 electromyogram:1 labeled:2 blocking:1 electrocorticographic:1 role:1 observed:1 solved:1 electrical:1 calculate:4 region:4 graz:1 eu:1 movement:10 highest:2 trade:2 decrease:1 benjamin:2 pong:1 complexity:2 trained:1 depend:1 localization:2 upon:1 neurophysiol:4 multimodal:1 joint:1 differently:1 represented:2 train:1 articulated:1 sajda:1 fast:1 effective:1 detected:1 klaus:2 choosing:3 refined:1 whose:5 larger:1 dominating:1 valued:1 say:1 solve:2 bci:16 ability:1 alvino:1 topographic:1 bler:2 obviously:1 advantage:1 eigenvalue:3 sequence:2 matthias:1 interferes:2 interaction:1 fer:1 neighboring:2 rehab:6 
combining:1 singletrial:1 wakefulness:1 flexibility:4 kunzmann:1 adapts:1 description:1 moved:1 pronounced:1 crossvalidation:1 guger:1 leave:1 help:1 andrew:1 schl:1 measured:2 intersubject:1 solves:1 strong:2 somatosensory:1 direction:1 foot:4 sensation:1 filter:45 human:3 implementing:1 fix:1 investigation:1 parra:1 exploring:1 strictly:1 bypassing:1 extension:1 lying:1 normal:1 mapping:1 matthew:2 driving:1 vary:1 purpose:1 proc:2 punished:1 healthy:1 curio2:1 individually:1 vice:1 create:1 tool:1 reflects:1 weighted:1 electroencephalogr:2 mit:1 interfacing:4 aim:1 csp:28 avoid:1 command:1 focus:4 improvement:2 indicates:1 superimposed:2 greatly:1 contrast:1 lateralized:1 posteriori:1 bebel:1 helpful:1 dependent:1 integrated:1 typically:2 fhg:1 interested:1 germany:3 biomed:2 issue:1 classification:21 dual:2 pascal:1 development:2 spatial:18 integration:1 contralaterally:1 having:2 extraction:2 manually:1 broad:1 look:1 stimulus:2 simplify:1 csssp:19 few:2 penfield:2 modern:1 simultaneously:2 preserve:1 delayed:4 dfg:1 replaced:1 intended:1 ourselves:1 amplifier:1 detection:2 organization:1 interest:1 huge:1 highly:3 investigate:2 evaluation:1 introduces:1 predominant:1 operated:1 light:1 paralysed:1 respective:4 orthogonal:2 logarithm:1 desired:1 causal:1 precentral:5 weaken:1 leeuwen:1 increased:2 column:2 steer:1 bull:1 kekul:1 reflexive:1 vertex:1 introducing:1 delay:4 successful:1 too:2 conduction:1 perelmouter:1 emg:3 accomplish:2 combined:2 ibb02a:1 gerking:1 peak:8 discriminating:1 retain:1 ghanayim:1 off:2 enhance:1 together:1 na:4 imagery:2 reflect:1 recorded:5 central:1 choose:4 obermaier:1 electrocorticograms:1 fir:2 hinterberger:1 cssp:17 imagination:3 potential:1 exclude:1 de:3 sec:4 afferent:1 depends:1 performed:1 closed:1 hindenburgdamm:1 option:1 thalamic:1 encephalography:1 minimize:1 formed:1 ir:1 accuracy:3 variance:17 characteristic:1 sitting:1 weak:1 lkopf:1 kotchoubey:2 clanton:1 classified:1 simultaneous:2 ed:3 definition:1 attentive:1 
An Application of Markov Random Fields to Range Sensing
James Diebel and Sebastian Thrun
Stanford AI Lab
Stanford University, Stanford, CA 94305
Abstract
This paper describes a highly successful application of MRFs to the problem of generating high-resolution range images. A new generation of
range sensors combines the capture of low-resolution range images with
the acquisition of registered high-resolution camera images. The MRF
in this paper exploits the fact that discontinuities in range and coloring
tend to co-align. This enables it to generate high-resolution, low-noise
range images by integrating regular camera images into the range data.
We show that by using such an MRF, we can substantially improve over
existing range imaging technology.
1 Introduction
In recent years, there has been an enormous interest in developing technologies for measuring range. The set of commercially available technologies includes passive stereo with two
or more cameras, active stereo, triangulating light stripers, millimeter wavelength radar,
and scanning and flash lidar. In the low-cost arena, systems such as the Swiss Ranger and
the CanestaVision sensors provide means to acquire low-res range data along with passive
camera images. Both of these devices capture high-res visual images along with lower-res
depth information. This is the case for a number of devices at all price ranges, including
the highly-praised range camera by 3DV Systems.
This paper addresses a single shortcoming that (with the exception of stereo) is shared
by most active range acquisition devices: Namely that range is captured at much lower
resolution than images. This raises the question as to whether we can turn a low-resolution
depth imager into a high-resolution one, by exploiting conventional camera images? A
positive answer to this question would significantly advance the field of depth perception.
Yet we lack techniques to fuse high-res conventional images with low-res depth images.
This paper applies graphical models to the problem of fusing low-res depth images with
high-res camera images. Specifically, we propose a Markov Random Field (MRF) method
for integrating both data sources. The intuition behind the MRF is that depth discontinuities
in a scene often co-occur with color or brightness changes within the associated camera
image. Since the camera image is commonly available at much higher resolution, this
insight can be used to enhance the resolution and accuracy of the depth image.
Our approach performs this data integration using a multi-resolution MRF, which ties
together image and range data. The mode of the probability distribution defined by the
MRF provides us with a high-res depth map. Because we are only interested in finding the
mode, we can apply fast optimization technique to the MRF inference problem, such as a
Figure 1: The MRF is composed of 5 node types: The measurements are mapped to two types of variables, the range measurement variables labeled z and the image pixel variables labeled x. The density of image pixels is larger than that of the range measurements. The reconstructed range nodes, labeled y, are unobservable, but their density matches that of the image pixels. Auxiliary nodes labeled w and u mediate the information from the image and the depth map, as described in the text.
conjugate gradient algorithm. This approach leads to a high-res depth map within seconds,
increasing the resolution of our depth sensor by an order of magnitude while improving
local accuracy. To back up this claim, we provide several example results obtained using a
low-res laser range finder paired with a conventional point-and-shot camera.
While none of the modeling or inference techniques in this paper are new, we believe
that this paper provides a significant application of graphical modeling techniques to a
problem that can dramatically alter an entire growing industry.
2 The Image-Range MRF
Figure 1 shows the MRF designed for our task. The input to the MRF occurs at two layers,
through the variables labeled xi and the variables labeled zi . The variables xi correspond
to the image pixels, and their values are the three-dimensional RGB value of each pixel.
The variables zi are the range measurements. The range measurements are sampled much
less densely than the image pixels, as indicated in this figure.
The key variables in this MRF are the ones labeled y, which model the reconstructed
range at the same resolution as the image pixels. These variables are unobservable. Additional nodes labeled u and w leverage the image information into the estimated depth map
y.
Specifically, the MRF is defined through the following potentials:
1. The depth measurement potential is of the form

   Φ = Σ_{i ∈ L} k (y_i − z_i)²    (1)
Here L is the set of indexes for which a depth measurement is available, and k
is a constant weight placed on the depth measurements. This potential measures
the quadratic distance between the estimated range in the high-res grid y and the
measured range in the variables z, where available.
2. A depth smoothness prior is expressed by a potential of the form

   Ψ = Σ_i Σ_{j ∈ N(i)} w_ij (y_i − y_j)²    (2)

   Here N(i) is the set of nodes adjacent to i. Ψ is a weighted quadratic distance between neighboring nodes.
3. The weighting factors w_ij are a key element, in that they provide the link to the image layer in the MRF. Each w_ij is a deterministic function of the corresponding two adjacent image pixels, which is calculated as follows:

   w_ij = exp(−c u_ij)    (3)
   u_ij = ||x_i − x_j||_2²    (4)
The resulting MRF is now defined through the constraints ? and ?. The conditional distribution over the target variables y is given by an expression of the
form
1
1
p(y | x, z) =
exp(? (? + ?))
(5)
Z
2
where Z is a normalizer (partition function).
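As a concrete illustration, the energy Φ + Ψ of Eqs. (1)-(4) can be written down directly for a one-dimensional toy signal. This is a minimal sketch, not the authors' implementation: the constants k and c and the 1-D neighborhood are illustrative choices.

```python
import math

def mrf_energy(y, z, x, k=10.0, c=5.0):
    """Energy Phi + Psi of the image-range MRF for a 1-D toy signal.

    y : list of reconstructed depths (one per pixel)
    z : dict mapping a sparse subset of indices to depth measurements
    x : list of image intensities (one per pixel)
    """
    # Data term Phi, Eq. (1): stay close to the available measurements.
    phi = sum(k * (y[i] - zi) ** 2 for i, zi in z.items())
    # Smoothness term Psi, Eq. (2), with edge-aware weights from Eqs. (3)-(4):
    # a large intensity difference between neighbors shrinks the weight,
    # so depth discontinuities are cheap where the image has an edge.
    psi = 0.0
    for i in range(len(y) - 1):
        u = (x[i] - x[i + 1]) ** 2            # Eq. (4)
        w = math.exp(-c * u)                  # Eq. (3)
        psi += w * (y[i] - y[i + 1]) ** 2
    return phi + psi
```

Under this energy, a depth profile that jumps exactly at the image edge costs less than a smoothed-out one, which is the behavior the MRF is designed to reward.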
3 Optimization
Unfortunately, computing the full posterior is impossible for such an MRF, not least because the MRF may possesses many millions of nodes; even loopy belief propagation [19]
requires enormous time for convergence. Instead, for the depth reconstruction problem we
shall be content with computing the mode of the posterior.
Finding the mode of the log-posterior is, of course, a least-squares optimization problem, which we solve with the well-known conjugate gradient (CG) algorithm [12]. A typical single-image optimization with 2 × 10^5 nodes takes about a second on a modern computer.
The details of the CG algorithm are omitted for brevity, but can be found in contemporary texts. The resulting algorithm for probable depth image reconstruction is now remarkably simple: simply set y^(0) to the bilinear interpolation of z, and then iterate the CG
update rule. The result is a probable reconstruction of the depth map at the same resolution
as the camera image.
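Because the energy is quadratic in y, its mode can be found with any standard quadratic solver. The sketch below is not the paper's implementation: plain gradient descent stands in for conjugate gradient, and nearest-measurement initialization stands in for bilinear interpolation, purely to keep the 1-D example short; the constants are again illustrative.

```python
import math

def mrf_gradient(y, z, x, k=10.0, c=5.0):
    """Gradient of the 1-D energy Phi + Psi with respect to y."""
    g = [0.0] * len(y)
    for i, zi in z.items():                    # data term, Eq. (1)
        g[i] += 2.0 * k * (y[i] - zi)
    for i in range(len(y) - 1):                # smoothness term, Eqs. (2)-(4)
        w = math.exp(-c * (x[i] - x[i + 1]) ** 2)
        d = 2.0 * w * (y[i] - y[i + 1])
        g[i] += d
        g[i + 1] -= d
    return g

def reconstruct(z, x, steps=2000, lr=0.02):
    """Mode of the toy MRF: initialize from the sparse measurements,
    then descend the energy (gradient descent stands in for CG)."""
    n = len(x)
    idx = sorted(z)
    # Nearest-measurement initialization of the dense depth profile.
    y = [z[min(idx, key=lambda j: abs(j - i))] for i in range(n)]
    for _ in range(steps):
        g = mrf_gradient(y, z, x)
        y = [yi - lr * gi for yi, gi in zip(y, g)]
    return y
```

With a step-edge image and measurements only at the two ends, the reconstructed depth snaps to the image edge instead of ramping linearly between the measurements.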
4 Results
Our experiments were performed with a SICK sweeping laser range finder and a Canon
consumer digital camera with 5 megapixels per image. Both were mounted on a rotating
platform controlled by a servo actuator. This configuration allows us to survey an entire
room from a consistent vantage point and with known camera and laser positions at all
times. The output of this system is a set of pre-aligned laser range measurements and
camera images.
Figure 2 shows a scan of a bookshelf in our lab. The top row contains several views of
the raw measurements and the bottom row is the output of the MRF. The latter is clearly
much sharper and less noisy; many features that are smaller than the resolution of the laser
scanner are pulled out by the camera image. Figure 5 shows the same scene from much
further back.
A more detailed look is taken in Figure 3. Here we examine the painted metal door
frame to an office. The detailed structure is completely invisible in the raw laser scan
but is easily drawn out when the image data is incorporated. It is notable that traditional
mesh fairing algorithms would not be able to recover this fine structure, as there is simply
insufficient evidence of it in the range data alone. Specifically, when running our MRF
using a fixed value for wij , which effectively decouples the range image and the depth
image, the depth reconstruction leads to a model that is either overly noisy (for w_ij = 1) or
(a) Raw low-res depth map
(b) Raw low-res 3D model
(c) Image mapped onto 3D model
(d) MRF high-res depth map
(e) MRF high-res 3D model
(f) Image mapped onto 3D model
Figure 2: Example result of our MRF approach. Panels (a-c) show the raw data, the low-res depth
map, a 3D model constructed from this depth map, and the same model with image texture superimposed. Panels (d-f) show the results of our algorithm. The depth map is now high-resolution, as is
the 3D model. The 3D rendering is a substantial improvement over the raw sensor data; in fact, many
small details are now visible.
smooths out the edge features for w_ij = 5. Our approach clearly recovers those corners,
thanks to the use of the camera image.
Finally, in Fig. 4 we give one more example of a shipping crate next to a white wall. The
coarse texture of the wooden surface is correctly inferred in contrast to the smooth white
wall. This brings up the obvious problem that sharp color gradients do frequently occur
on smooth surfaces; take, for example, posters. While this fact can sometimes lead to
falsely-textured surfaces, it has been our experience that these flaws are often unnoticeable
(a) Raw 3D model, with and without color from the image
(b) Two results ignoring the image color information, for two different smoothers
(c) Reconstruction with our MRF, integrating both depth and image color
Figure 3: The importance of the image information in depth recovery is illustrated in this figure. It shows a part of a door frame, for which a coarse depth map and a fine-grained image are available. The renderings labeled (b) show the result of our MRF when color is entirely ignored, for different fixed values of the weights w_ij. The images in (c) are the results of our approach, which clearly retains the sharp corner of the door frame.
and certainly no worse than the original scan. Clearly, the reconstruction of such depth
maps is an ill-posed problem, and our approach generates a high-res model that is still
significantly better than the original data. Notice, however, that the background wall is
recovered accurately, and the corner of the room is visually enhanced.
5 Related Work
One of the primary acquisition techniques for depth is stereo. A good survey and comparison of stereo algorithms is due to [14]. Our algorithm does not apply to stereo vision,
since by definition the resolution of the image and the inferred depth map are equivalent.
(a) 3D model based on the raw range data, with and without texture
(b) Refined and super-resolved model, generated by our MRF
Figure 4: This example illustrates that the amount of smoothing in the range data depends on the image texture. On the left is a wooden box with a rough surface that causes significant color variations. The 3D model generated from the MRF provides relatively little smoothing. In the background is a white wall with almost no color variation. Here our approach smooths the mesh significantly; in fact, it enhances the visibility of the room corner.
Passive stereo, in which the sensor does not carry its own light source, is unable to estimate ranges in the absence of texture (e.g., when imaging a featureless wall). Active stereo
techniques supply their own light [4]. However, those techniques differ in characteristics
from laser-based system to an extent that renders them practically inapplicable for many
applications (most notably: long-range acquisition, where time-of-flight techniques are an
order of magnitude more accurate than triangulation techniques, and bright-light outdoor
environments). We remark that Markov Random fields have become a defining methodology in stereo reconstruction [15], along with layered EM-style methods [2, 16]; see the
comparison in [14].
Similar work due to [20] relies on a different set of image cues to improve stereo shape
estimates. In particular, learned regression coefficients are used to predict the band-passed
shape of a scene from a band-passed image of that scene. The regression coefficients are
Figure 5: 3D model of a larger indoor environment, after applying our MRF.
learned from laser-stripe-scanned reference models with registered images.
For range images, surfaces, and point clouds, there exists a large literature on smoothing while preserving features such as edges. This includes work on diffusion processes [6],
frequency-domain filtering [17], and anisotropic diffusion [5]; see also [3] and [1]. Most
recently [10] proposed an efficient non-iterative technique for feature-preserving mesh
smoothing, [9] adapted bilateral filtering for application to mesh denoising. and [7] developed anisotropic MRF techniques. None of these techniques, however, integrates highresolution images to guide the smoothing process. Instead, they all operate on monochromatic 3D surfaces.
Our work can be viewed as generating super-resolution. Super-resolution techniques
have long been popular in the computer vision field [8] and in aerial photogrammetry [11].
Here Bayesian techniques are often brought to bear for integrating multiple images into a
single image of higher resolution. None of these techniques deal with range data. Finally,
multiple range scans are often integrated into a single model [13, 18], yet none of these
techniques involve image data.
6 Conclusion
We have presented a Markov Random Field that integrated high-res image data into low-res
range data, to recover range data at the same resolution as the image data. This approach is
specifically aimed at a new wave of commercially available sensors, which provide range
at lower resolution than image data.
The significance of this work lies in the results. We have shown that our approach can
truly fill the resolution gap between range and images, and use image data to effectively
boost the resolution of a range finder. While none of the techniques used here are new
(even though CG is usually not applied for inference in MRFs), we believe this is the first
application of MRF to multimodal data integration. A large number of scientific fields
would benefit from better range sensing; the present approach provides a solution that
endows low-cost range finders with unprecedented resolution and accuracy.
References
[1] C.L. Bajaj and G. Xu. Anisotropic diffusion of surfaces and functions on surfaces. In Proceedings of SIGGRAPH, pages 4-32, 2003.
[2] S. Baker, R. Szeliski, and P. Anandan. A layered approach to stereo reconstruction. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 434-438, Santa Barbara, CA, 1998.
[3] U. Clarenz, U. Diewald, and M. Rumpf. Anisotropic geometric diffusion in surface processing. In Proceedings of the IEEE Conference on Visualization, pages 397-405, 2000.
[4] J. Davis, R. Ramamoorthi, and S. Rusinkiewicz. Spacetime stereo: A unifying framework for depth from triangulation. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
[5] M. Desbrun, M. Meyer, P. Schröder, and A. Barr. Anisotropic feature-preserving denoising of height fields and bivariate data. In Proceedings Graphics Interface, Montreal, Quebec, 2000.
[6] M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr. Implicit fairing of irregular meshes using diffusion and curvature flow. In Proceedings of SIGGRAPH, 1999.
[7] J. Diebel, S. Thrun, and M. Brüning. A Bayesian method for probable surface reconstruction and decimation. IEEE Transactions on Graphics, 2005. To appear.
[8] M. Elad and A. Feuer. Restoration of single super-resolution image from several blurred. IEEE Transactions on Image Processing, 6(12):1646-1658, 1997.
[9] S. Fleishman, I. Drori, and D. Cohen-Or. Bilateral mesh denoising. In Proceedings of SIGGRAPH, pages 950-953, 2003.
[10] T.R. Jones, F. Durand, and M. Desbrun. Non-iterative, feature-preserving mesh smoothing. In Proceedings of SIGGRAPH, pages 943-949, 2003.
[11] I. K. Jung and S. Lacroix. High resolution terrain mapping using low altitude aerial stereo imagery. In Proceedings of the International Conference on Computer Vision (ICCV), Nice, France, 2003.
[12] W. H. Press. Numerical recipes in C: the art of scientific computing. Cambridge University Press, Cambridge; New York, 1988.
[13] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. Third International Conference on 3D Digital Imaging and Modeling (3DIM), Quebec City, Canada, 2001. IEEE Computer Society.
[14] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1-3):7-42, 2002.
[15] J. Sun, H.-Y. Shum, and N.-N. Zheng. Stereo matching using belief propagation. IEEE Transactions on PAMI, 25(7), 2003.
[16] R. Szeliski. Stereo algorithms and representations for image-based rendering. In Proceedings of the British Machine Vision Conference, Vol. 2, pages 314-328, 1999.
[17] G. Taubin. A signal processing approach to fair surface design. In Proceedings of SIGGRAPH, pages 351-358, 1995.
[18] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, Cambridge, MA, 2005.
[19] Y. Weiss and W.T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13(10):2173-2200, 2001.
[20] W. T. Freeman and A. Torralba. Shape recipes: Scene representations that refer to the image. In Advances in Neural Information Processing Systems (NIPS) 15, Cambridge, MA, 2003. MIT Press.
Pattern Recognition from One Example by Chopping
François Fleuret
CVLAB/LCN – EPFL
Lausanne, Switzerland
[email protected]
Gilles Blanchard∗
Fraunhofer FIRST
Berlin, Germany
[email protected]
Abstract
We investigate the learning of the appearance of an object from a single
image of it. Instead of using a large number of pictures of the object to
recognize, we use a labeled reference database of pictures of other objects to learn invariance to noise and variations in pose and illumination.
This acquired knowledge is then used to predict if two pictures of new
objects, which do not appear on the training pictures, actually display the
same object.
We propose a generic scheme called chopping to address this task. It
relies on hundreds of random binary splits of the training set chosen to
keep together the images of any given object. Those splits are extended
to the complete image space with a simple learning algorithm. Given
two images, the responses of the split predictors are combined with a
Bayesian rule into a posterior probability of similarity.
Experiments with the COIL-100 database and with a database of 150 degraded LaTeX symbols compare our method to a classical learning with several examples of the positive class and to a direct learning of the similarity.
1 Introduction
Pattern recognition has so far mainly focused on the following task: given many training
examples labelled with their classes (the object they display), guess the class of a new sample which was not available during training. The various approaches all consist of going
to some invariant feature space, and there using a classification method such as neural networks, decision trees, kernel techniques, Bayesian estimations based on parametric density
models, etc. Providing a large number of examples results in good statistical estimates of
the model parameters. Although such approaches have been successful in applications to
many problems, their performance is still far from what biological visual systems can do,
which is one sample learning. This can be defined as the ability, given one picture of an
object, to spot instances of the same object, under the assumption that these new views can
be induced by the single available example.
∗ Supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778
Being able to perform that type of one-sample learning corresponds to the ability, given
one example, to sort out which elements of a test set are of the same class (i.e. one class
vs. the rest of the world). This can be done by comparing one by one all the elements of
the test set with the reference example, and labelling as of the same class those which are
similar enough. Learning techniques can be used to choose the similarity measure, which
could be adaptive and learned from a large number of examples of classes not involved in
the test.
Thus, given a large number of training images of a large number of objects labeled with
their actual classes, and provided two pictures of unknown objects (objects which do not
appear in the training pictures), we want to decide if these two objects are actually the
same object. The first image of such a couple can be seen as a single training example, and
the second image as a test example. Averaging the error rate by repeating that test several
times provides an estimate of a one-sample learning (OSL) error rate.
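The evaluation protocol just described is easy to state in code. The sketch below is a generic harness, not taken from the paper: same_fn is a placeholder for whatever same/different decision rule is being evaluated.

```python
def osl_error(trials, same_fn):
    """One-sample-learning error rate over a list of trials.

    Each trial is (reference, test, truth): a single training example,
    a test example, and whether they show the same object.  same_fn is
    the decision rule under evaluation.
    """
    wrong = sum(same_fn(ref, tst) != truth for ref, tst, truth in trials)
    return wrong / len(trials)
```

For instance, a rule that always answers "same" is right exactly on the positive trials, so its error rate equals the fraction of negative trials.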
The idea of "learning how to learn" is not new and has been applied in various settings [12].
Taking into account and/or learning relevant geometric invariances for a given task has been
studied under various forms [1, 8, 11], and in [7] with the goal to achieve learning from very
few examples. Finally, the precise one-sample learning setting considered here has been
the object of recent research [4, 3, 5] proposing different methods (hyperfeature learning,
distance learning) for finding invariant features from a set of training reference objects
distinct from the test objects. This principle has also been dubbed interclass transfer.
The present study proposes a generic approach, and avoids an explicit description of the
space of deformations. We propose to build a large number of binary splits of the image
space, designed to assign the same binary label to all the images of a given object. The binary mapping associated with such a split is thus highly invariant across the images
of a certain object while highly variant across images of different objects. We can define
such a split on the training images, and train a predictor to extend it to the complete image
space by induction. We expect the predictor to respond similarly on two images of the same
object, and differently on two images of two different objects with probability 1/2. The global criterion to compare two images consists roughly of counting how many such split predictors respond similarly and comparing the result to a fixed threshold.
The principle of transforming a multiclass learning problem into several binary ones by
class grouping has a long history in Machine Learning [10]. From this point of view the
collected output of several binary classifiers is used as a way for coding class membership.
In [2] it was proposed to carefully choose the class groupings so as to yield optimal separation of codewords (ECOC methodology). While our method is related to this general
principle, our goal is different since we are interested in recognizing yet-unseen objects.
Hence, the goal is not to code multiclass membership; our focus is not on designing efficient codes ? splits are chosen randomly and we take a large number of them ? but rather
on how to use the learned mappings for learning unknown objects.
2 Data and features
To make the rest of the paper clearer to the reader, we now introduce the data and feature
sets we are using for our proof of concept experiments. However, note that while we have
focused on image classification, our approach is generic and could be applied to any signals
for which adaptive binary classifiers are available.
2.1 Data
We use two databases of pictures for our experiments. The first one is the standard COIL100 database of pictures [9]. It contains 7200 images corresponding to 100 different objects
Figure 1: Four objects from the 100 objects of the COIL-100 database (downsampled to
38 ? 38 grayscale pixels) and four symbols from the 150 symbols of our LATEX symbol
database (A, ?, ? and ?, resolution 28 ? 28). Each image of the later is generated by
applying a rotation and a scaling, and by adding lines of random grayscales at random
locations and orientations.
[Figure 2 panels: a pixel (x, y) with its edge template on the left, and the eight edge types d = 0, ..., 7 on the right]
Figure 2: The figure on the left shows how a horizontal edge ξ_{x,y,4} is detected: the six
differences between pixels connected by a thin segment have to be all smaller in absolute
value than the difference between the pixels connected by the thick segment. The relative
values of the two pixels connected by the thick segment define the polarity of the edge
(dark to light or light to dark). On the right are shown the eight different types of edges.
seen from 72 angles of view. We down-sample these images from their original resolution to 38 × 38 pixels, and convert them to grayscale. Examples are given in figure 1 (left). The second database contains images of 150 LaTeX symbols. We generated 1,000 images of each symbol by applying a random rotation (angle taken between −20 and +20 degrees) and a random scaling factor (up to 1.25). Noise is then added by adding random line segments of various gray scales, locations and orientations. The final resulting database contains 150,000 images. Examples of these degraded images are given in figure 1 (right).
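The degradation pipeline can be summarized as a parameter sampler. This is an illustrative sketch: the number of added line segments per image is an assumption, since the text does not specify it.

```python
import random

def sample_degradation(rng, size=28, max_scale=1.25, max_angle=20.0):
    """Draw the random transformation applied to one symbol image:
    a rotation, a scaling factor, and random grayscale line segments."""
    n_lines = rng.randrange(1, 5)  # illustrative; count not given in the text
    return {
        "angle_deg": rng.uniform(-max_angle, max_angle),
        "scale": rng.uniform(1.0, max_scale),
        # Each segment: two endpoints in the image frame and a gray level.
        "lines": [((rng.randrange(size), rng.randrange(size)),
                   (rng.randrange(size), rng.randrange(size)),
                   rng.random())
                  for _ in range(n_lines)],
    }
```

Applying the sampled rotation and scaling to the clean symbol and drawing the segments on top then yields one degraded training image.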
2.2 Features
All the classification processes in the rest of the paper are based on edge-based boolean features. Let ξ_{x,y,d} denote a basic edge detector indexed by a location (x, y) in the image frame and an orientation d which can take eight different values, corresponding to four orientations and two polarities (see figure 2). Such an edge detector is equal to 1 if and only if an edge of the given orientation is detected at the specified location, and 0 otherwise.
A feature fx0 ,y0 ,x1 ,y1 ,d is a disjunction of the ??s in the rectangle defined by x0 , y0 , x1 , y1 .
Thus, it is equal to one if and only if ?x, y, x0 ? x ? x1 , y0 ? y ? y1 , ?x,y,d = 1. For
pictures of size 32 ? 32 there is a total of N = 41 (32 ? 32)2 ? 8 ? 2.106 features.
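The rectangle features above can be sketched in a few lines. Note this uses a simplified thresholded-difference edge detector rather than the exact six-comparison rule of Figure 2; the threshold value and the detector itself are assumptions made for illustration only.

```python
import numpy as np

def edge_maps(img, thresh=0.1):
    """Boolean edge maps for 4 orientations x 2 polarities (a simplified
    stand-in for the detector of Figure 2: here an edge fires when the
    directional pixel difference exceeds a threshold)."""
    diffs = {
        "horiz": img[:, 1:] - img[:, :-1],
        "vert":  img[1:, :] - img[:-1, :],
        "diag":  img[1:, 1:] - img[:-1, :-1],
        "anti":  img[1:, :-1] - img[:-1, 1:],
    }
    maps = {}
    for name, d in diffs.items():
        maps[(name, +1)] = d > thresh    # dark-to-light polarity
        maps[(name, -1)] = d < -thresh   # light-to-dark polarity
    return maps

def rect_feature(edge_map, y0, x0, y1, x1):
    """Disjunction feature f_{x0,y0,x1,y1,d}: 1 iff some edge of type d
    fires inside the rectangle [x0, x1] x [y0, y1]."""
    return bool(edge_map[y0:y1 + 1, x0:x1 + 1].any())
```

A feature is thus a single `any()` over a sub-window of a precomputed boolean map, which is what makes evaluating millions of such features tractable.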
Figure 3: These two histograms are representative of the responses of two split predictors conditioned on the true arbitrary labelling, i.e. of P(L | S).
3
Chopping
The main idea we propose in this paper consists of learning a large number of binary splits
of the image space which would ideally assign the same binary label to all the images of
any given object. In this section we define these splits and describe and justify how they
are combined into a global rule.
3.1
Splits
A split is a binary labelling of the image space, with the property to give the same label
to all images of a given object. We can trivially produce a labelling with that property on
the training examples, but we need to be able to extend it to images not appearing in the
training data, including images of other objects. We suppose that it is possible to infer a
relevant split function on the complete image space, including images of other objects, by looking at the problem as a binary classification problem. Inference is done by means of a simple learning scheme: a combination of a fast feature selection based on conditional mutual information (CMIM) [6] and a linear perceptron.
Thus, we create M arbitrary splits on the training sample by randomly assigning the label 1 to half of the N_T objects appearing in the training set, and 0 to the others. Since there are (N_T choose N_T/2) such balanced arbitrary labellings, with N_T of the order of a few tens, a very large number of splits is available and only a small subset of them will be actually used for learning. For each one of those splits, we train a predictor using the scheme described above. Let (S_1, ..., S_M) denote the family of arbitrary splits and (L_1, ..., L_M) the split-predictors. The continuous outputs of these predictors before thresholding will be combined in the final classification.
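A minimal sketch of this split-construction step follows: random balanced labellings of the training objects, each learned by a plain linear perceptron. The CMIM feature-selection step is omitted, and the number of epochs is an assumption.

```python
import numpy as np

def make_splits(object_ids, n_splits, rng):
    """Random balanced binary labellings of the training objects: each split
    assigns label 1 to half of the objects and 0 to the other half, and every
    image inherits the label of its object."""
    objects = np.unique(object_ids)
    splits = []
    for _ in range(n_splits):
        perm = rng.permutation(objects)
        ones = set(perm[: len(objects) // 2])
        splits.append(np.array([1 if o in ones else 0 for o in object_ids]))
    return splits

def train_perceptron(X, y, epochs=10):
    """Plain perceptron on boolean features; returns the weight vector
    (last entry is the bias).  The raw margin w @ [x, 1] is the continuous
    split-predictor output used before thresholding."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    for _ in range(epochs):
        for xi, yi in zip(Xb, 2 * y - 1):          # labels in {-1, +1}
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w
```

One predictor is trained per split; only the continuous outputs are kept for the combination rule of the next section.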
3.2
Combining splits
To combine the responses of the various split predictors, we rely on a set of simple conditional independence assumptions (comparable to the "naive Bayes" setting) on the distribution of the true class label C (each class corresponds to an object), the split labels (S_i) and the predictor outputs (L_i) for a single image. We do not assume that for test image pairs (I_1, I_2) the two images are independent, because we want to encompass the case where pairs of images of the same object are much more frequent than they would be if they were independent (typically in our test data we have arranged to have 50% of test pairs picturing the same object). We however still need some conditional independence assumption for the drawing of test image pairs. To simplify the notation we denote by L^1 = (L_i^1), L^2 = (L_i^2) the collections of predictor outputs for images 1 and 2, by S^1 = (S_i^1), S^2 = (S_i^2) the collections of their split labels, and by C_1, C_2 their true classes. The conditional independence assumptions we make are summed up in the following Markov dependency diagram:
[Markov dependency diagram: for each image j in {1, 2} and each split i, the true class C_j determines the split label S_i^j, which in turn determines the predictor output L_i^j, i.e. C_j -> S_i^j -> L_i^j; the two images are coupled only through (C_1, C_2).]
In words, for each split i, the predictor output L_i is assumed to be independent of the true class C conditionally to the split label S_i; and conditionally to the split labels (S^1, S^2) of both images, the outputs of predictors on test pair images are assumed to be independent. Finally, we make the additional symmetry hypothesis that conditionally to C_1 = C_2, for all i: S_i^1 = S_i^2 = S_i and the (S_i) are independent Bernoulli variables with parameter 0.5, while conditionally to C_1 != C_2 all split labels (S_i^1, S_i^2) are independent Bernoulli(0.5).
Under these assumptions we then want to compute the log-odds ratio

    log [ P(C_1 = C_2 | L^1, L^2) / P(C_1 != C_2 | L^1, L^2) ]
      = log [ P(L^1, L^2 | C_1 = C_2) / P(L^1, L^2 | C_1 != C_2) ]
        + log [ P(C_1 = C_2) / P(C_1 != C_2) ].   (1)
In this formula and the next ones, when handling the real-valued variables L^1, L^2 we implicitly assume that they have a density with respect to the Lebesgue measure, and probabilities are to be interpreted as densities with some abuse of notation. We assume that the second term above is either known or can be reliably estimated. For the first term, under the aforementioned independence assumptions, the following holds (see appendix):
    log [ P(L^1, L^2 | C_1 = C_2) / P(L^1, L^2 | C_1 != C_2) ]
      = N log 2 + Sum_i log( p_i^1 p_i^2 + (1 - p_i^1)(1 - p_i^2) ),   (2)

where p_i^j = P(S_i^j = 1 | L_i^j). As a quick check, note that if the predictor outputs (L_i) are
uninformative (i.e. every probability p_i^j is 0.5), then the above formula gives a ratio of 1, which is what we expect. If they are perfectly informative (i.e. all p_i^j are 0 or 1), the odds ratio can take the values 0 (if for some i we can ensure S_i^1 != S_i^2, which excludes the case C_1 = C_2) or 2^N (if for all i we have S_i^1 = S_i^2, there is still a tiny chance that C_1 != C_2 if by chance C_1, C_2 are on the same side of each split).
To estimate the probabilities P (Sj | Lj ), we use a simple 1D Gaussian model for the output
of the predictor given the true split label. Mean and variance are estimated from the training
set for each predictor. Experimental findings show that this Gaussian modelling is realistic
(see figure 3).
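Under this 1D Gaussian model of the predictor responses, the probabilities p_i = P(S_i = 1 | L_i) and the first term of the log-odds ratio (2) can be computed as sketched below. Equal priors P(S_i = 1) = 0.5 are assumed, consistently with the symmetry hypothesis; function names are ours.

```python
import numpy as np

def split_posteriors(l_outputs, means, variances):
    """p_i = P(S_i = 1 | L_i) under a per-split 1D Gaussian model of the
    predictor response.  means/variances have shape (M, 2): column s holds
    the Gaussian fitted to training responses with split label s."""
    l = np.asarray(l_outputs, dtype=float)
    log_lik = (-0.5 * (l[:, None] - means) ** 2 / variances
               - 0.5 * np.log(2 * np.pi * variances))
    log_lik -= log_lik.max(axis=1, keepdims=True)   # numerical stability
    lik = np.exp(log_lik)
    return lik[:, 1] / lik.sum(axis=1)              # equal priors cancel

def log_odds_ratio(p1, p2):
    """First term of (1), computed via (2):
    N log 2 + sum_i log(p_i^1 p_i^2 + (1 - p_i^1)(1 - p_i^2))."""
    p1, p2 = np.asarray(p1), np.asarray(p2)
    return len(p1) * np.log(2.0) + np.sum(np.log(p1 * p2 + (1 - p1) * (1 - p2)))
```

With uninformative posteriors (all 0.5) the log-odds contribution is 0, i.e. a ratio of 1, matching the sanity check after formula (2).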
4
Experiments
We estimate the performance of the chopping approach by comparing it to classical learning
with several examples of the positive class and to a direct learning of the similarity of two
objects on different images. For every experiment, we use a family of 10, 000 features
sampled uniformly in the complete set of features (see section 2.2)
4.1
Multiple example learning
In this procedure, we train a predictor with several pictures of a positive class and with
a very large number of pictures of a negative class. The number of positive examples
depends on the experiments (from 1 to 32) and the number of negative examples is 2, 000
1
Number of samples for multi-example learning
2
4
8
16
32
1
0.6
Chopping
Smart chopping
Multi-example learning
Similarity learnt directly
32
0.4
0.3
0.2
0.1
Chopping
Smart chopping
Multi-example learning
Similarity learnt directly
0.5
Test errors (COIL-100)
0.5
Test errors (LaTeX symbols)
Number of samples for multi-example learning
2
4
8
16
0.6
0.4
0.3
0.2
0.1
0
0
1
2
4
8
16
32
64 128 256 512 1024
Number of splits for chopping
1
2
4
8
16
32
64 128 256 512 1024
Number of splits for chopping
Figure 4: Error rates of the chopping, smart-chopping (see ?4.2), multi-example learning
and learnt similarity on the LATEX symbol (left) and the COIL-100 database (right). Each
curve shows the average error and a two standard deviation interval, both estimated on ten
experiments for each setting. The x-axis shows either the number of splits for chopping or
the number of samples of the positive class for the multi-example learning.
for both the COIL-100 and the LATEX symbol databases. Note that to handle the unbalanced
positive and negative populations, the perceptron bias is chosen to minimize a balanced
error rate. In each case, and for each number of positive samples, we run 10 experiments.
Each experiment consists of several cross-validation cycles so that the total number of test
pictures is roughly the same as the number of pairs in one-sample techniques experiments
below.
4.2
One-sample learning
For each experiment, whatever the predictor is, we first select 80 training objects from the
COIL-100 database (respectively 100 symbols from the LATEX symbol database). The test
error is computed with 500 pairs of images of the 20 unseen objects for the COIL-100, and
1, 000 pairs of images of the 50 unseen objects for the LATEX symbols. These test sets are
built to have as many pairs of images of the same object than pairs of images of different
objects.
Learnt similarity: Note that one-sample learning can also be simply cast as a standard
binary classification problem of pairs of images into the classes {same, different}. We
therefore want to compare the Chopping method to a more standard learning method directly on pairs of images using a comparable set of features. For every single feature f
on single images, we consider three features of a pair of images standing for the conjunction, disjunction and equality of the feature responses on the two images. From the 10, 000
features on single images, we thus create a set of 30, 000 features on pairs of images.
We generate a training set of 2,000 pairs of pictures for the experiments with the COIL-100 database and 5,000 for the LaTeX symbols, half picturing the same object twice, half
picturing two different objects. We then train a predictor similar to those used for the
splits in the chopping scheme: feature selection with CMIM, and linear combination with
a perceptron (see section 3.1), using the 30, 000 features described above.
Chopping: The performance of the chopping approach is estimated for several numbers
of splits (from 1 to 1024). For each split we select 50 objects from the training objects, and
select at random 1, 000 training images of these objects. We generate an arbitrary balanced
binary labelling of these 50 objects and label the training images accordingly. We then
build a predictor by selecting 2, 000 features with the CMIM algorithm, and combine them
with a perceptron (see section 3.1).
To compensate for the limitation of our conditional independence assumptions we allow to
add a fixed bias to the log-odds ratio (1). This type of correction is common when using
naive-Bayes type assumptions. Using the remaining training objects as validation set, we
compute this bias so as to minimize the validation error. We insist that no objects of the
test classes be used for training.
To improve the performance of the splits, we also test a ?smart? version of the chopping
for which each split is built in two steps. The first step is similar to what is described
above. From that first step, we remove the 10 objects for which the labelling prediction
has the highest error rate, and re-build the split with the 40 remaining objects. This gets rid of problematic objects or inconsistent labellings (for instance, trying to force two similar objects to be in different halves of the split).
4.3
Results
The experiments demonstrate the good performance of chopping when only one example
is available. Its optimal error rate, obtained for the largest number of splits, is 7.41%
on the LaTeX symbol database and 11.42% on the COIL-100 database. By contrast, a direct learning of the similarity (see section 4.2) reaches 15.54% and 18.1% respectively, with 8,192 features.
On both databases, the classical multi-sample learning scheme requires 32 samples to reach the same level of performance (10.51% on the COIL-100 and 10.7% on the LaTeX symbols).
The error curves (see figure 4) are all monotonic. There is no overfitting when the number of splits increases, which is consistent with the absence of global learning: splits are
combined with an ad-hoc Bayesian rule, without optimizing a global functional, which
generally also results in better robustness.
The smart splits (see section 4.2) achieve better performance initially but eventually reach
the same error rates as the standard splits. There is no visible degradation of the asymptotic
performance due to either a reduced independence between splits or a diminution of their
separation power. However the computational cost is twice as high, since every predictor
has to be built twice.
5
Conclusion
In this paper we have proposed an original approach to learning the appearance of an object
from a single image. Our method relies on a large number of individual splits of the image
space designed to keep together the images of any of the training objects. These splits
are learned from a training set of examples and combined into a Bayesian framework to
estimate the posterior probability for two images to show the same object.
This approach is very generic since it never makes the space of admissible perturbations
explicit and relies on the generalization properties of the family of predictors. It can be
applied to predict the similarity of two signals as soon as a family of binary predictors
exists on the space of individual signals.
Since the learning is decomposed into the training of several splits independently, it can
be easily parallelized. Also, because the combination rule is symmetric with respect to the
splits, the learning can be incremental: splits can be added to the global rule progressively
when they become available.
Appendix: Proof of formula (2).
For the first factor, we have

    P(L^1, L^2 | C_1 = C_2)
      = Sum_{s^1, s^2} P(L^1, L^2 | C_1 = C_2, S^1 = s^1, S^2 = s^2) P(S^1 = s^1, S^2 = s^2 | C_1 = C_2)
      = Sum_{s^1, s^2} P(L^1, L^2 | S^1 = s^1, S^2 = s^2) P(S^1 = s^1, S^2 = s^2 | C_1 = C_2)
      = Sum_{s^1, s^2} Prod_i P(L_i^1 | S_i^1 = s_i^1) P(L_i^2 | S_i^2 = s_i^2) P((S_i^1, S_i^2) = (s_i^1, s_i^2) | C_1 = C_2)
      = 2^{-N} Prod_i [ P(L_i^1 | S_i^1 = 1) P(L_i^2 | S_i^2 = 1) + P(L_i^1 | S_i^1 = 0) P(L_i^2 | S_i^2 = 0) ].

In the second equality, we have used that L is independent of C given S. In the third equality, we have used that the (L_i^j) are independent given S. In the last equality, we have used the symmetry assumption on the distribution of (S^1, S^2) given C_1 = C_2. Similarly,

    P(L^1, L^2 | C_1 != C_2)
      = 4^{-N} Prod_i Sum_{s_1, s_2} P(L_i^1 | S_i^1 = s_1) P(L_i^2 | S_i^2 = s_2)
      = 4^{-N} Prod_i Sum_{s_1, s_2} [ P(S_i^1 = s_1 | L_i^1) P(S_i^2 = s_2 | L_i^2) / ( P(S_i^1 = s_1) P(S_i^2 = s_2) ) ] P(L_i^1) P(L_i^2)
      = Prod_i P(L_i^1) P(L_i^2),

since P(S_i^j = s) = 1/2 by the symmetry hypothesis. Taking the ratio of the two factors and using the latter property again leads to the conclusion.
References
[1] Y. Bengio and M. Monperrus. Non-local manifold tangent learning. In Advances in Neural Information Processing Systems 17, pages 129-136. MIT Press, 2005.
[2] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.
[3] A. Ferencz, E. Learned-Miller, and J. Malik. Learning hyper-features for visual identification. In Advances in Neural Information Processing Systems 17, pages 425-432. MIT Press, 2004.
[4] A. Ferencz, E. Learned-Miller, and J. Malik. Building a classification cascade for visual identification from one example. In International Conference on Computer Vision (ICCV), 2005.
[5] M. Fink. Object classification from a single example utilizing class relevance metrics. In Advances in Neural Information Processing Systems 17, pages 449-456. MIT Press, 2005.
[6] F. Fleuret. Fast binary feature selection with conditional mutual information. Journal of Machine Learning Research, 5:1531-1555, November 2004.
[7] F. Li, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In Proceedings of ICCV, volume 2, page 1134, 2003.
[8] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 464-471, 2000.
[9] S. A. Nene, S. K. Nayar, and H. Murase. Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.
[10] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Journal of Complex Systems, 1:145-168, 1987.
[11] P. Simard, Y. Le Cun, and J. Denker. Efficient pattern recognition using a new transformation distance. In S. Hanson, J. Cowan, and C. Giles, editors, Advances in Neural Information Processing Systems 5, pages 50-68. Morgan Kaufmann, 1993.
[12] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer, 1997.
Improved Risk Tail Bounds
for On-Line Algorithms *
Nicolò Cesa-Bianchi
DSI, Università di Milano
via Comelico 39
20135 Milano, Italy
[email protected]
Claudio Gentile
DICOM, Università dell'Insubria
via Mazzini 5
21100 Varese, Italy
[email protected]
Abstract
We prove the strongest known bound for the risk of hypotheses selected
from the ensemble generated by running a learning algorithm incrementally on the training data. Our result is based on proof techniques that are
remarkably different from the standard risk analysis based on uniform
convergence arguments.
1 Introduction
In this paper, we analyze the risk of hypotheses selected from the ensemble obtained by
running an arbitrary on-line learning algorithm on an i.i.d. sequence of training data. We
describe a procedure that selects from the ensemble a hypothesis whose risk is, with high probability, at most

    M_n + O( (ln n)^2 / n + sqrt( M_n (ln n) / n ) ),
where Mn is the average cumulative loss incurred by the on-line algorithm on a training
sequence of length n. Note that this bound exhibits the "fast" rate (ln n)^2 / n whenever the cumulative loss n M_n is O(1).
This result is proven through a refinement of techniques that we used in [2] to prove the
substantially weaker bound M_n + O( sqrt( (ln n)/n ) ). As in the proof of the older result, we
analyze the empirical process associated with a run of the on-line learner using exponential
inequalities for martingales. However, this time we control the large deviations of the
on-line process using Bernstein's maximal inequality rather than the Azuma-Hoeffding
inequality. This provides a much tighter bound on the average risk of the ensemble. Finally,
we relate the risk of a specific hypothesis within the ensemble to the average risk. As in [2],
we select this hypothesis using a deterministic sequential testing procedure, but the use of Bernstein's inequality makes the analysis of this procedure far more complicated.
The study of the statistical risk of hypotheses generated by on-line algorithms , initiated
by Littlestone [5], uses tools that are sharply different from those used for uniform convergence analysis , a popular approach based on the manipulation of suprema of empirical
* Part of the results contained in this paper have been presented in a talk given at the NIPS 2004 workshop on "(Ab)Use of Bounds".
processes (see, e.g., [3]). Unlike uniform convergence, which is tailored to empirical risk
minimization , our bounds hold for any learning algorithm. Indeed , disregarding efficiency
issues, any learner can be run incrementally on a data sequence to generate an ensemble of
hypotheses.
The consequences of this line of research to kernel and margin-based algorithms have been
presented in our previous work [2].
Notation. An example is a pair (x, y), where x in X (which we call instance) is a data element and y in Y is the label associated with it. Instances x are tuples of numerical and/or symbolic attributes. Labels y belong to a finite set of symbols (the class elements) or to an interval of the real line, depending on whether the task is classification or regression. We allow a learning algorithm to output hypotheses of the form h : X -> D, where D is a decision space not necessarily equal to Y. The goodness of hypothesis h on example (x, y) is measured by the quantity l(h(x), y), where l : D x Y -> R is a nonnegative and bounded loss function.
2
A bound on the average risk
An on-line algorithm A works in a sequence of trials. In each trial t = 1, 2, ... the algorithm takes in input a hypothesis H_{t-1} and an example Z_t = (X_t, Y_t), and returns a new hypothesis H_t to be used in the next trial. We follow the standard assumptions in statistical learning: the sequence of examples Z^n = ((X_1, Y_1), ..., (X_n, Y_n)) is drawn i.i.d. according to an unknown distribution over X x Y. We also assume that the loss function l satisfies 0 <= l <= 1. The success of a hypothesis h is measured by the risk of h, denoted by risk(h). This is the expected loss of h on an example (X, Y) drawn from the underlying distribution, risk(h) = E l(h(X), Y). Define also risk_emp(h) to be the empirical risk of h on a sample Z^n,

    risk_emp(h) = (1/n) Sum_{t=1}^n l(h(X_t), Y_t).
Given a sample Z^n and an on-line algorithm A, we use H_0, H_1, ..., H_{n-1} to denote the ensemble of hypotheses generated by A. Note that the ensemble is a function of the random training sample Z^n. Our bounds hinge on the sample statistic

    M_n = (1/n) Sum_{t=1}^n l(H_{t-1}(X_t), Y_t),

which can be easily computed as the on-line algorithm is run on Z^n.
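The statistic M_n is obtained with one incremental pass over the sample. The sketch below assumes a hypothetical learner interface with snapshot/predict/update methods; it is not tied to any particular on-line algorithm.

```python
def average_online_loss(learner, data, loss):
    """Run an on-line learner over (x, y) pairs and return M_n, the average
    loss of H_{t-1} on the t-th example, together with the hypothesis
    ensemble H_0, ..., H_{n-1}.  The learner interface (snapshot, predict,
    update) is an assumption made for this sketch."""
    total, ensemble = 0.0, []
    for x, y in data:
        ensemble.append(learner.snapshot())   # record H_{t-1}
        total += loss(learner.predict(x), y)  # loss *before* the update
        learner.update(x, y)                  # produce H_t
    n = len(ensemble)
    return total / n, ensemble
```

The key point is that each example is scored with the hypothesis produced before that example was seen, so the cumulative loss is an honest, out-of-sample quantity.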
The following bound, a consequence of Bernstein's maximal inequality for martingales due
to Freedman [4], is of primary importance for proving our results.
Lemma 1 Let L_1, L_2, ... be a sequence of random variables, 0 <= L_t <= 1. Define the bounded martingale difference sequence V_t = E[L_t | L_1, ..., L_{t-1}] - L_t and the associated martingale S_n = V_1 + ... + V_n with conditional variance K_n = Sum_{t=1}^n Var[L_t | L_1, ..., L_{t-1}]. Then, for all s, k >= 0,

    P( S_n >= s, K_n <= k ) <= exp( - s^2 / ( 2k + 2s/3 ) ).
The next proposition, derived from Lemma 1, establishes a bound on the average risk of
the ensemble of hypotheses.
Proposition 2 Let H_0, ..., H_{n-1} be the ensemble of hypotheses generated by an arbitrary on-line algorithm A. Then, for any 0 < delta <= 1,

    P( (1/n) Sum_{t=1}^n risk(H_{t-1}) >= M_n + (36/n) ln( (n M_n + 3)/delta ) + 2 sqrt( (M_n/n) ln( (n M_n + 3)/delta ) ) ) <= delta.
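For concreteness, the bound of Proposition 2 can be evaluated numerically as follows; this is a direct transcription of the displayed inequality (the function name is ours).

```python
import math

def average_risk_bound(n, m_n, delta):
    """High-probability upper bound of Proposition 2 on the average risk
    (1/n) * sum_t risk(H_{t-1}):
        M_n + (36/n) ln((n M_n + 3)/delta)
            + 2 sqrt((M_n/n) ln((n M_n + 3)/delta))."""
    log_term = math.log((n * m_n + 3) / delta)
    return m_n + 36.0 * log_term / n + 2.0 * math.sqrt(m_n * log_term / n)
```

For a fixed average loss M_n, the gap over M_n shrinks as n grows, and when the cumulative loss n*M_n stays bounded the logarithmic factor stays bounded as well.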
The bound shown in Proposition 2 has the same rate as a bound recently proven by
Zhang [6, Theorem 5]. However, rather than deriving the bound from Bernstein inequality
as we do , Zhang uses an ad hoc argument.
Proof. Let

    mu_n = (1/n) Sum_{t=1}^n risk(H_{t-1})   and   V_{t-1} = risk(H_{t-1}) - l(H_{t-1}(X_t), Y_t)   for t >= 1.

Let kappa_t be the conditional variance Var( l(H_{t-1}(X_t), Y_t) | Z_1, ..., Z_{t-1} ). Also, set for brevity K_n = Sum_{t=1}^n kappa_t and K'_n = floor(K_n), and introduce the function A(x) = 2 ln( (x+1)(x+3)/delta ) for x >= 0. We find upper and lower bounds on the probability

    P( Sum_{t=1}^n V_{t-1} >= A(K_n) + sqrt( A(K_n) K_n ) ).   (1)
The upper bound is determined through a simple stratification argument over Lemma 1. We can write

    P( Sum_{t=1}^n V_{t-1} >= A(K_n) + sqrt(A(K_n) K_n) )
      <= P( Sum_{t=1}^n V_{t-1} >= A(K'_n) + sqrt(A(K'_n) K'_n) )
      = Sum_{s=0}^n P( Sum_{t=1}^n V_{t-1} >= A(s) + sqrt(A(s) s), K'_n = s )
      <= Sum_{s=0}^n P( Sum_{t=1}^n V_{t-1} >= A(s) + sqrt(A(s) s), K_n <= s + 1 )
      <= Sum_{s=0}^n exp( - ( A(s) + sqrt(A(s) s) )^2 / ( 2(s+1) + (2/3)( A(s) + sqrt(A(s) s) ) ) )   (using Lemma 1).
Since

    ( A(s) + sqrt(A(s) s) )^2 / ( (2/3)( A(s) + sqrt(A(s) s) ) + 2(s+1) ) >= A(s)/2   for all s >= 0,

we obtain

    (1) <= Sum_{s=0}^n e^{-A(s)/2} = Sum_{s=0}^n delta / ( (s+1)(s+3) ) < delta.   (2)
As far as the lower bound on (1) is concerned, we note that our assumption 0 <= l <= 1 implies kappa_t <= risk(H_{t-1}) for all t which, in turn, gives K_n <= n mu_n. Thus
    (1) = P( n mu_n - n M_n >= A(K_n) + sqrt( A(K_n) K_n ) )
       >= P( n mu_n - n M_n >= A(n mu_n) + sqrt( A(n mu_n) n mu_n ) )
       = P( 2 n mu_n >= 2 n M_n + 3 A(n mu_n) + sqrt( 4 n M_n A(n mu_n) + 5 A(n mu_n)^2 ) )
       = P( x >= B + (3/2) A(x) + sqrt( B A(x) + (5/4) A^2(x) ) ),

where we set for brevity x = n mu_n and B = n M_n. We would like to solve the inequality

    x >= B + (3/2) A(x) + sqrt( B A(x) + (5/4) A^2(x) )   (3)
with respect to x. More precisely, we would like to find a suitable upper bound on the (unique) x* such that the above is satisfied as an equality.
A (tedious) derivative argument along with the upper bound A(x) <= 4 ln( (x+3)/delta ) shows that

    x' = B + 2 sqrt( B ln( (B+3)/delta ) ) + 36 ln( (B+3)/delta )

makes the left-hand side of (3) larger than its right-hand side. Thus x' is an upper bound on x*, and we conclude that x <= x', which, recalling the definitions of x and B, and combining with (2), proves the bound. □
3
Selecting a good hypothesis from the ensemble
If the decision space D of A is a convex set and the loss function l is convex in its first argument, then via Jensen's inequality we can directly apply the bound of Proposition 2 to the risk of the average hypothesis H_bar = (1/n) Sum_{t=1}^n H_{t-1}. This yields

    P( risk(H_bar) >= M_n + (36/n) ln( (n M_n + 3)/delta ) + 2 sqrt( (M_n/n) ln( (n M_n + 3)/delta ) ) ) <= delta.   (4)

Observe that this is a O(1/n) bound whenever the cumulative loss n M_n is O(1).
If the convexity hypotheses do not hold (as in the case of classification problems), then
the bound in (4) applies to a hypothesis randomly drawn from the ensemble (this was
investigated in [1] though with different goals).
In this section we show how to deterministically pick from the ensemble a hypothesis
whose risk is close to the average ensemble risk.
To see how this could be done, let us first introduce the functions
    eps_delta(r, t) = 2B / ( 3(n - t) ) + sqrt( 2 B r / (n - t) )   and   c_delta(r, t) = eps_delta( r + sqrt( 2 B r / (n - t) ), t ),

with

    B = ln( n(n + 2) / delta ).
Let risk_emp(H_t, t+1) + eps_delta( risk_emp(H_t, t+1), t ) be the penalized empirical risk of hypothesis H_t, where

    risk_emp(H_t, t+1) = ( 1/(n - t) ) Sum_{i=t+1}^n l( H_t(X_i), Y_i )

is the empirical risk of H_t on the remaining sample Z_{t+1}, ..., Z_n. We now analyze the performance of the learning algorithm that returns the hypothesis H_hat minimizing the penalized risk estimate over all hypotheses in the ensemble, i.e.,¹

    H_hat = argmin_{0 <= t < n} ( risk_emp(H_t, t+1) + eps_delta( risk_emp(H_t, t+1), t ) ).   (5)
¹ Note that, from an algorithmic point of view, this hypothesis is fairly easy to compute. In particular, if the underlying on-line algorithm is a standard kernel-based algorithm, H_hat can be calculated via a single sweep through the example sequence.
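The selection rule (5) can be sketched directly. The helper below assumes hypotheses are plain callables and uses the form of eps_delta given above; it is an illustration, not the authors' implementation.

```python
import math

def select_hypothesis(ensemble, examples, loss, delta):
    """Pick H_hat per (5): minimize the penalized empirical risk
    R_t + eps_delta(R_t, t), where R_t is the empirical risk of H_t on the
    examples it was not trained on (Z_{t+1}..Z_n), and
    eps_delta(r, t) = 2B/(3(n-t)) + sqrt(2 B r/(n-t)), B = ln(n(n+2)/delta)."""
    n = len(examples)
    B = math.log(n * (n + 2) / delta)
    best, best_score = None, float("inf")
    for t, h in enumerate(ensemble):     # H_0 .. H_{n-1}
        rest = examples[t:]              # Z_{t+1} .. Z_n (0-based slice)
        r = sum(loss(h(x), y) for x, y in rest) / (n - t)
        eps = 2 * B / (3 * (n - t)) + math.sqrt(2 * B * r / (n - t))
        score = r + eps
        if score < best_score:
            best, best_score = h, score
    return best
```

The penalty grows as fewer validation examples remain (small n - t), which is exactly what discourages picking a late hypothesis whose empirical risk is estimated on too few points.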
Lemma 3 Let H_0, ..., H_{n-1} be the ensemble of hypotheses generated by an arbitrary on-line algorithm A working with a loss l satisfying 0 <= l <= 1. Then, for any 0 < delta <= 1, the hypothesis H_hat satisfies

    P( risk(H_hat) > min_{0 <= t < n} ( risk(H_t) + 2 c_delta( risk(H_t), t ) ) ) <= delta.
Proof. We introduce the following short-hand notation:

    R_t = risk_emp(H_t, t+1),
    T_hat = argmin_{0 <= t < n} ( R_t + eps_delta(R_t, t) ),
    T* = argmin_{0 <= t < n} ( risk(H_t) + 2 c_delta( risk(H_t), t ) ).

Also, let H* = H_{T*} and R* = risk_emp(H_{T*}, T* + 1) = R_{T*}. Note that H_hat defined in (5) coincides with H_{T_hat}. Finally, let

    Q(r, t) = ( sqrt( 2B( 2B + 9 r (n - t) ) ) - 2B ) / ( 3(n - t) ).
With this notation we can write

    P( risk(H_hat) > risk(H*) + 2 c_delta( risk(H*), T* ) )
      <= P( risk(H*) < R* - Q(R*, T*) )
         + P( risk(H_hat) > risk(H*) + 2 c_delta( R* - Q(R*, T*), T* ) )
      <= P( risk(H_hat) > risk(H*) + 2 c_delta( R* - Q(R*, T*), T* ) )
         + Sum_{t=0}^{n-1} P( risk(H_t) < R_t - Q(R_t, t) ).
Applying the standard Bernstein inequality (see, e.g., [3, Ch. 8]) to the random variables R_t, with |R_t| <= 1 and expected value risk(H_t), and upper bounding the variance of R_t with risk(H_t), yields

    P( risk(H_t) < R_t - ( B + sqrt( B( B + 18 (n - t) risk(H_t) ) ) ) / ( 3(n - t) ) ) <= e^{-B}.
With a little algebra, it is easy to show that
\[ \mathrm{risk}(H_t) < R_t - \frac{B + \sqrt{B\big(B + 18(n-t)\,\mathrm{risk}(H_t)\big)}}{3(n-t)} \]
is equivalent to $\mathrm{risk}(H_t) < R_t - Q(R_t, t)$. Hence, we get
\begin{align*}
\mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(\mathrm{risk}(H^*), T^*) \big)
&\le \mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(R^* - Q(R^*, T^*), T^*) \big) + n e^{-B}\\
&\le \mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2\,\epsilon_\delta(R^*, T^*) \big) + n e^{-B},
\end{align*}
where in the last step we used
\[ Q(r, t) \le \sqrt{\frac{2Br}{n-t}} \qquad\text{and}\qquad c_\delta\Big( r - \sqrt{\tfrac{2Br}{n-t}},\; t \Big) = \epsilon_\delta(r, t). \]
Set for brevity $E = \epsilon_\delta(R^*, T^*)$. We have
\begin{align*}
\mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2E \big)
&= \mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2E,\; R_{\widehat{T}} + \epsilon_\delta(R_{\widehat{T}}, \widehat{T}) \le R^* + E \big)\\
&\qquad\text{(since $R_{\widehat{T}} + \epsilon_\delta(R_{\widehat{T}}, \widehat{T}) \le R^* + E$ holds with certainty)}\\
&\le \sum_{t=0}^{n-1} \mathbb{P}\big( R_t + \epsilon_\delta(R_t, t) \le R^* + E,\; \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E \big). \tag{6}
\end{align*}
Now, if $R_t + \epsilon_\delta(R_t, t) \le R^* + E$ holds, then at least one of the following three conditions
\[ R_t \le \mathrm{risk}(H_t) - \epsilon_\delta(R_t, t), \qquad R^* > \mathrm{risk}(H^*) + E, \qquad \mathrm{risk}(H_t) - \mathrm{risk}(H^*) < 2E \]
must hold. Hence, for any fixed $t$ we can write
\begin{align*}
&\mathbb{P}\big( R_t + \epsilon_\delta(R_t, t) \le R^* + E,\; \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E \big)\\
&\quad\le \mathbb{P}\big( R_t \le \mathrm{risk}(H_t) - \epsilon_\delta(R_t, t),\; \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E \big)\\
&\qquad + \mathbb{P}\big( R^* > \mathrm{risk}(H^*) + E,\; \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E \big)\\
&\qquad + \mathbb{P}\big( \mathrm{risk}(H_t) - \mathrm{risk}(H^*) < 2E,\; \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E \big)\\
&\quad\le \mathbb{P}\big( R_t \le \mathrm{risk}(H_t) - \epsilon_\delta(R_t, t) \big) + \mathbb{P}\big( R^* > \mathrm{risk}(H^*) + E \big). \tag{7}
\end{align*}
Plugging (7) into (6) we have
\begin{align*}
\mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2E \big)
&\le \sum_{t=0}^{n-1} \mathbb{P}\big( R_t \le \mathrm{risk}(H_t) - \epsilon_\delta(R_t, t) \big) + n\,\mathbb{P}\big( R^* > \mathrm{risk}(H^*) + E \big)\\
&\le n e^{-B} + n \sum_{t=0}^{n-1} \mathbb{P}\big( R_t \ge \mathrm{risk}(H_t) + \epsilon_\delta(R_t, t) \big) \;\le\; n e^{-B} + n^2 e^{-B},
\end{align*}
where in the last two inequalities we applied again Bernstein's inequality to the random variables $R_t$ with mean $\mathrm{risk}(H_t)$. Putting together we obtain
\[ \mathbb{P}\big( \mathrm{risk}(\widehat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(\mathrm{risk}(H^*), T^*) \big) \le (2n + n^2)\,e^{-B}, \]
which, recalling that $B = \ln\frac{n(n+2)}{\delta}$, implies the thesis. □
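The algebraic equivalence between the Bernstein threshold and $Q$ invoked in the proof can be spot-checked numerically. The sketch below uses the two expressions as reconstructed above, with arbitrary illustrative values for $n$ and $B$:

```python
import math, random

# Notation as in the proof: B is the confidence term, m = n - t.
def q(r, t, n, B):
    m = n - t
    return (math.sqrt(2 * B * (2 * B + 9 * r * m)) - 2 * B) / (3 * m)

def bernstein_threshold(mu, t, n, B):
    m = n - t
    return (B + math.sqrt(B * (B + 18 * m * mu))) / (3 * m)

random.seed(1)
n, B = 50, 4.0
agree = True
for _ in range(2000):
    t = random.randrange(n)   # so m = n - t >= 1
    mu = random.random()      # plays the role of risk(H_t)
    R = random.random()       # plays the role of R_t
    lhs = mu < R - bernstein_threshold(mu, t, n, B)
    rhs = mu < R - q(R, t, n, B)
    agree = agree and (lhs == rhs)
print(agree)  # True
```

The point of the rewriting is that the threshold on the left involves the unknown $\mathrm{risk}(H_t)$, while $Q(R_t, t)$ depends only on observable quantities.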
Fix $n \ge 1$ and $\delta \in (0,1)$. For each $t = 0,\dots,n-1$, introduce the function
\[ f_t(x) = x + \frac{11C}{3}\cdot\frac{\ln(n-t) + 1}{n-t} + 2\sqrt{\frac{2Cx}{n-t}}, \qquad x \ge 0, \]
where $C = \ln\frac{2n(n+2)}{\delta}$. Note that each $f_t$ is monotonically increasing. We are now ready
to state and prove the main result of this paper.
Theorem 4  Fix any loss function $\ell$ satisfying $0 \le \ell \le 1$. Let $H_0,\dots,H_{n-1}$ be the ensemble of hypotheses generated by an arbitrary on-line algorithm $A$ and let $\widehat{H}$ be the hypothesis minimizing the penalized empirical risk expression obtained by replacing $\epsilon_\delta$ with $\epsilon_{\delta/2}$ in (5). Then, for any $0 < \delta \le 1$, $\widehat{H}$ satisfies
\[ \mathbb{P}\left( \mathrm{risk}(\widehat{H}) \ge \min_{0\le t<n} f_t\!\left( M_{t,n} + \frac{36}{n-t}\ln\frac{2n(n+3)}{\delta} + 2\sqrt{\frac{M_{t,n}}{n-t}\ln\frac{2n(n+3)}{\delta}} \right) \right) \le \delta, \]
where $M_{t,n} = \frac{1}{n-t}\sum_{i=t+1}^{n} \ell(H_{i-1}(X_i), Y_i)$. In particular, upper bounding the minimum over $t$ with $t = 0$ yields
\[ \mathbb{P}\left( \mathrm{risk}(\widehat{H}) \ge f_0\!\left( M_n + \frac{36}{n}\ln\frac{2n(n+3)}{\delta} + 2\sqrt{\frac{M_n}{n}\ln\frac{2n(n+3)}{\delta}} \right) \right) \le \delta. \tag{8} \]
For $n \to \infty$, bound (8) shows that $\mathrm{risk}(\widehat{H})$ is bounded with high probability by
\[ M_n + O\!\left( \frac{\ln n}{n} + \sqrt{M_n\,\frac{\ln n}{n}} \right). \]
If the empirical cumulative loss $nM_n$ is small (say, $M_n \le c/n$, where $c$ is constant with $n$), then our penalized empirical risk minimizer $\widehat{H}$ achieves a $O\big((\ln^2 n)/n\big)$ risk bound. Also, recall that, in this case, under convexity assumptions the average hypothesis $\overline{H}$ achieves the sharper bound $O(1/n)$.
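For intuition about the size of bound (8), here is a sketch that evaluates it numerically, using $f_0$ and the constants as reconstructed above (the sample values of $M_n$, $n$, and $\delta$ are arbitrary):

```python
import math

def theorem4_bound(M_n, n, delta):
    # Bound (8): f_0 applied to the inflated empirical loss.
    A = math.log(2 * n * (n + 3) / delta)
    C = math.log(2 * n * (n + 2) / delta)
    x = M_n + 36 * A / n + 2 * math.sqrt(M_n * A / n)
    # f_0(x) = x + (11C/3)(ln n + 1)/n + 2*sqrt(2Cx/n)
    return x + (11 * C / 3) * (math.log(n) + 1) / n + 2 * math.sqrt(2 * C * x / n)

# The slack over M_n shrinks roughly like sqrt(M_n * ln(n) / n) as n grows.
for n in (10**3, 10**5):
    print(n, theorem4_bound(0.05, n, delta=0.05))
```

The constants are large, so the bound is only informative for fairly large $n$, but the decay rate matches the asymptotic expression above.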
Proof. Let $\overline{M}_{t,n} = \frac{1}{n-t}\sum_{i=t}^{n-1} \mathrm{risk}(H_i)$. Applying Lemma 3 with $c_{\delta/2}$ we obtain
\[ \mathbb{P}\Big( \mathrm{risk}(\widehat{H}) > \min_{0\le t<n}\big( \mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t) \big) \Big) \le \frac{\delta}{2}. \]
We then observe that
\begin{align*}
\min_{0\le t<n}\big( \mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t) \big)
&= \min_{0\le t<n}\,\min_{t\le i<n}\big( \mathrm{risk}(H_i) + 2\,c_{\delta/2}(\mathrm{risk}(H_i), i) \big)\\
&\le \min_{0\le t<n} \frac{1}{n-t}\sum_{i=t}^{n-1}\big( \mathrm{risk}(H_i) + 2\,c_{\delta/2}(\mathrm{risk}(H_i), i) \big)\\
&\le \min_{0\le t<n}\left( \overline{M}_{t,n} + \frac{1}{n-t}\sum_{i=t}^{n-1} \frac{8C}{3(n-i)} + \frac{1}{n-t}\sum_{i=t}^{n-1} 2\sqrt{\frac{2C\,\mathrm{risk}(H_i)}{n-i}} \right)\\
&\qquad\text{(using the inequality $\sqrt{x+y} \le \sqrt{x} + \frac{y}{2\sqrt{x}}$)}\\
&\le \min_{0\le t<n}\left( \overline{M}_{t,n} + \frac{11C}{3}\cdot\frac{\ln(n-t)+1}{n-t} + 2\sqrt{\frac{2C\,\overline{M}_{t,n}}{n-t}} \right)\\
&\qquad\text{(using $\sum_{i=1}^{k} 1/i \le 1 + \ln k$ and the concavity of the square root)}\\
&= \min_{0\le t<n} f_t(\overline{M}_{t,n}). \tag{9}
\end{align*}
Now, it is clear that Proposition 2 can be immediately generalized to imply the following set of inequalities, one for each $t = 0,\dots,n-1$:
\[ \mathbb{P}\left( \overline{M}_{t,n} \ge M_{t,n} + \frac{36A}{n-t} + 2\sqrt{M_{t,n}\,\frac{A}{n-t}} \right) \le \frac{\delta}{2n}, \]
where $A = \ln\frac{2n(n+3)}{\delta}$. Introduce the random variables $K_0,\dots,K_{n-1}$, to be defined later. We can write
\[ \mathbb{P}\Big( \min_{0\le t<n}\big( \mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t) \big) \ge \min_{0\le t<n} K_t \Big) \le \mathbb{P}\Big( \min_{0\le t<n} f_t(\overline{M}_{t,n}) \ge \min_{0\le t<n} K_t \Big) \le \sum_{t=0}^{n-1} \mathbb{P}\big( f_t(\overline{M}_{t,n}) \ge K_t \big). \tag{10} \]
Now, for each $t = 0,\dots,n-1$, define $K_t = f_t\!\left( M_{t,n} + \frac{36A}{n-t} + 2\sqrt{M_{t,n}\frac{A}{n-t}} \right)$. Then (10) and the monotonicity of $f_0,\dots,f_{n-1}$ allow us to obtain
\begin{align*}
\mathbb{P}\Big( \min_{0\le t<n}\big( \mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t) \big) \ge \min_{0\le t<n} f_t\!\Big( M_{t,n} + \tfrac{36A}{n-t} + 2\sqrt{M_{t,n}\tfrac{A}{n-t}} \Big) \Big)
&\le \sum_{t=0}^{n-1} \mathbb{P}\Big( f_t(\overline{M}_{t,n}) \ge f_t\!\Big( M_{t,n} + \tfrac{36A}{n-t} + 2\sqrt{M_{t,n}\tfrac{A}{n-t}} \Big) \Big)\\
&= \sum_{t=0}^{n-1} \mathbb{P}\Big( \overline{M}_{t,n} \ge M_{t,n} + \tfrac{36A}{n-t} + 2\sqrt{M_{t,n}\tfrac{A}{n-t}} \Big) \;\le\; \frac{\delta}{2}.
\end{align*}
Combining with (9) concludes the proof. □

4
Conclusions and current research issues
We have shown tail risk bounds for specific hypotheses selected from the ensemble generated by the run of an arbitrary on-line algorithm. Proposition 2, our simplest bound, is proven via an easy application of Bernstein's maximal inequality for martingales, a quite basic result in probability theory. The analysis of Theorem 4 is also centered on the same martingale inequality. An open problem is to simplify this analysis, possibly obtaining a more readable bound. Also, the bound shown in Theorem 4 contains $\ln n$ terms. We do not know whether these logarithmic terms can be improved to $\ln(M_n n)$, similarly to Proposition 2. A further open problem is to prove lower bounds, even in the special case when $nM_n$ is bounded by a constant.
References
[1] A. Blum, A. Kalai, and J. Langford. Beating the hold-out. In Proc. 12th COLT, 1999.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. on Information Theory, 50(9):2050-2057, 2004.
[3] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Verlag, 1996.
[4] D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3:100-118, 1975.
[5] N. Littlestone. From on-line to batch learning. In Proc. 2nd COLT, 1989.
[6] T. Zhang. Data dependent concentration bounds for sequential prediction algorithms. In Proc. 18th COLT, 2005.
248
Malkoff
A Neural Network for Real-Time Signal Processing
Donald B. Malkoff
General Electric / Advanced Technology Laboratories
Moorestown Corporate Center
Building 145-2, Route 38
Moorestown, NJ 08057
ABSTRACT
This paper describes a neural network algorithm that (1) performs
temporal pattern matching in real-time, (2) is trained on-line, with
a single pass, (3) requires only a single template for training of each
representative class, (4) is continuously adaptable to changes in
background noise, (5) deals with transient signals having low signal-to-noise ratios, (6) works in the presence of non-Gaussian noise, (7)
makes use of context dependencies and (8) outputs Bayesian probability estimates. The algorithm has been adapted to the problem of
passive sonar signal detection and classification. It runs on a Connection Machine and correctly classifies, within 500 ms of onset,
signals embedded in noise and subject to considerable uncertainty.
1
INTRODUCTION
This paper describes a neural network algorithm, STOCHASM, that was developed
for the purpose of real-time signal detection and classification. Of prime concern
was capability for dealing with transient signals having low signal-to-noise ratios
(SNR).
The algorithm was first developed in 1986 for real-time fault detection and diagnosis
of malfunctions in ship gas turbine propulsion systems (Malkoff, 1987). It subsequently was adapted for passive sonar signal detection and classification. Recently,
versions for information fusion and radar classification have been developed.
Characteristics of the algorithm that are of particular merit include the following:
- It performs well in the presence of either Gaussian or non-Gaussian noise,
even where the noise characteristics are changing.
- Improved classifications result from temporal pattern matching in real-time,
and by taking advantage of input data context dependencies.
- The network is trained on-line. Single exposures of target data require one
pass through the network. Target templates, once formed, can be updated
on-line.
- Outputs consist of numerical estimates of closeness for each of the template
classes, rather than nearest-neighbor "all-or-none" conclusions.
- The algorithm is implemented in parallel code on a Connection Machine.
Simulated signals, embedded in noise and subject to considerable uncertainty, are
classified within 500 ms of onset.
2
GENERAL OVERVIEW OF THE NETWORK
2.1
REPRESENTATION OF THE INPUTS
Sonar signals used for training and testing the neural network consist of pairs of
simulated chirp signals that are superimposed and bounded by a Gaussian envelope. The signals are subject to random fluctuations and embedded in white noise.
There is considerable overlapping (similarity) of the signal templates. Real data
has recently become available for the radar domain.
Once generated, the time series of the sonar signal is subject to special transformations. The outputs of these transformations are the values which are input to the
neural network. In addition, several higher-level signal features, for example, zero
crossing data, may be simultaneously input to the same network, for purposes of
information fusion. The transformations differ from those used in traditional signal processing. They contribute to the real-time performance and temporal pattern
matching capabilities of the algorithm by possessing all the following characteristics:
- Time-Origin Independence: The sonar input signal is transformed so the
resulting time-frequency representation is independent of the starting time
of the transient with respect to its position within the observation window
(Figure 1). "Observation window" refers to the most recent segment of the
sonar time series that is currently under analysis.
- Translation Independence: The time-frequency representation obtained
by transforming the sonar input transient does not shift from one network
input node to another as the transient signal moves across most of the observation window (Figure 1). In other words, not only does the representation
remain the same while the transient moves, but its position relative to specific
network nodes also does not change. Each given node continues to receive its
usual kind of information about the sonar transient, despite the relative position of the transient in the window. For example, where the transform is an
FFT, a specific input layer node will always receive the output of one specific
frequency bin, and none other.
Where the SNR is high, translation independence could be accomplished by
a simple time-transformation of the representation before sending it to the
neural network. This is not possible in conditions where the SNR is sufficiently
low that segmentation of the transient becomes impossible using traditional
methods such as auto-regressive analysis; it cannot be determined at what
time the transient signal originated and where it is in the observation window .
- The representation gains time-origin and translation independence without
sacrificing knowledge about the signal's temporal characteristics or its complex infrastructure. This is accomplished by using (1) the absolute value of
the Fourier transform (with respect to time) of the spectrogram of the sonar
input, or (2) the radar Woodward Ambiguity Function. The derivation and
characterization of these methods for representing data is discussed in a separate paper (Malkoff, 1990).
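The shift-invariance idea behind these transformations can be demonstrated generically with the magnitude of a Fourier transform taken along the time axis of a time-frequency array (a toy illustration of the principle, not the paper's exact transform):

```python
import cmath

def dft_mag_along_time(S):
    """|DFT| of each row of S (rows = frequency bins, cols = time frames)."""
    n = len(S[0])
    out = []
    for row in S:
        mags = []
        for k in range(n):
            acc = sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            mags.append(abs(acc))
        out.append(mags)
    return out

# Synthetic time-frequency array and a circularly time-shifted copy of it:
# an idealized transient arriving 5 frames later in the observation window.
S = [[(i * 7 + t * 3) % 11 / 10.0 for t in range(16)] for i in range(4)]
shift = 5
S_shifted = [row[-shift:] + row[:-shift] for row in S]

# Taking the magnitude discards the linear phase introduced by the shift,
# so the same values reach the same network input nodes either way.
rep = dft_mag_along_time(S)
rep_shifted = dft_mag_along_time(S_shifted)
same = all(abs(a - b) < 1e-9 for r1, r2 in zip(rep, rep_shifted) for a, b in zip(r1, r2))
print(same)  # True
```

This is what lets a fixed input-layer node keep receiving the same kind of information regardless of where the transient sits in the window.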
Encoded Outputs
Different aspects of the transformation outputs must always enter their same spatial nodes of the network and result in the same classification.
Figure 1: Despite passage of the transient, encoded data enters the same network input nodes (translation independence) and has the same form and output
classification (time-origin independence) .
2.2
THE NETWORK ARCHITECTURE
Sonar data, suitably transformed, enters the network input layer. The input layer
serves as a noise filter, or discriminator. The network has two additional layers,
the hidden and output layers (Figure 2). Learning of target templates, as well as
classification of unknown targets, takes place in a single "feed-forward" pass through
these layers. Additional exposures to the same target lead to further enhancement of
the template, if training, or refinement of the classification probabilities, if testing.
The hidden layer deals only with data that passes through the input filter. This data
predominantly represents a target. Some degree of context dependency evaluation
of the data is achieved. Hidden layer data and its permutations are distributed
and maintained intact, separate, and transparent. Because of this, credit (error)
assignment is easily performed.
In the output layer, evidence is accumulated, heuristically evaluated, and transformed into figures of merit for each possible template class.
Figure 2: STOCHASM network architecture.
Figure .2: STOCHASM network architecture.
2.2.1
The Input Layer
Each input layer node receives a succession of samples of a unique part of the sonar
representation. This series of samples is stored in a first-in, first-out queue.
With the arrival of each new input sample, the mean and standard deviation of
the values in the queue are recomputed at every node. These statistical parameters
are used to detect and extract a signal from the background noise by computing
a threshold for each node. Arriving input values that exceed the threshold are
passed to the hidden layer and not entered into the queues. Passed values are
expressed in terms of z-values (the number of standard deviations that the input
value differs from the mean of the queued values). Hidden layer nodes receive only
data exceeding thresholds; they are otherwise inactive.
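A minimal sketch of one such input node follows; the class name, queue length, warm-up period, and the 3-sigma threshold are illustrative choices, not values from the paper:

```python
from collections import deque
import statistics

class InputNode:
    """One input-layer node: a FIFO queue of recent samples models the background."""
    def __init__(self, maxlen=50, z_threshold=3.0, warmup=8):
        self.queue = deque(maxlen=maxlen)
        self.z_threshold = z_threshold
        self.warmup = warmup  # samples collected before thresholding starts

    def update(self, sample):
        """Return the sample's z-value if it exceeds the threshold, else None.
        Sub-threshold samples are absorbed into the queue (background model);
        passed values are kept out of the queue, as described in the text."""
        if len(self.queue) >= self.warmup:
            mean = statistics.fmean(self.queue)
            std = statistics.pstdev(self.queue)
            if std > 0:
                z = (sample - mean) / std
                if abs(z) > self.z_threshold:
                    return z  # forwarded to the hidden layer, not enqueued
        self.queue.append(sample)
        return None
```

Because detected values never enter the queue, the background statistics adapt to slowly changing noise without being contaminated by the target signal itself.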
2.2.2
The Hidden Layer
There are three basic types of hidden layer nodes:
- The first type receives values from only a single input layer node; they reflect
absolute changes in an input layer parameter.
- The second type receives values from a pair of inputs where each of those values
simultaneously deviates from normal in the same direction.
- The third type receives values from a pair of inputs where each of those values
simultaneously deviates from normal in opposite directions.
For N data inputs, there are a total of N^2 hidden layer nodes.
Values are passed to the hidden layer only when they exceed the threshold levels
determined by the input node queue. The hidden layer values are stored in first-in, first-out queues, like those of the input layer. If the network is in the testing
mode, these values represent signals awaiting classification. The mean and standard
deviation are computed for each of these queues, and used for subsequent pattern
matching. If, instead, the network is in the training mode, the passed values and
their statistical descriptors are stored as templates at their corresponding nodes.
2.2.3
Pattern Matching Output Layer
Pattern matching consists of computing Bayesian likelihoods for the undiagnosed
input relative to each template class. The computation assumes a normal distribution of the values contained within the queue of each hidden layer node. The
statistical parameters of the queue representing undiagnosed inputs are matched
with those of each of the templates. For example, the number of standard deviations distance between the means of the "undiagnosed" queue and a template queue
may be used to demarcate an area under a normal probability distribution. This
area is then used as a weight, or measure, for their closeness of match. Note that
this computation has a non-linear, sigmoid-shaped output.
The weights for each template are summed across all nodes. Likelihood values
are computed for each template. A priori data is used where available, and the
results normalized for final outputs. The number of computations is minimal and
done in parallel; they scale linearly with the number of templates per node. If
more computer processing hardware were available, separate processors could be
assigned for each template of every node, and computational time would be of
constant complexity.
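The matching step can be sketched as follows; the data layout, the use of the two-sided normal tail as the closeness weight, and the simple normalization are illustrative readings of the description above, not the paper's exact computation:

```python
import math

def match_weight(mean_obs, std_obs, mean_tmpl):
    """Closeness weight from the two-sided normal tail at the z-distance
    between the observed queue mean and a stored template mean."""
    z = abs(mean_obs - mean_tmpl) / max(std_obs, 1e-9)
    # erfc(z / sqrt(2)) = P(|Z| > z): near 1 for a close match, near 0 otherwise.
    return math.erfc(z / math.sqrt(2.0))

def classify(observed, templates):
    """observed: {node: (mean, std)} from the undiagnosed queues.
    templates: {name: {node: mean}}. Sums per-node weights over all nodes
    and normalizes them into figures of merit for every template class."""
    scores = {}
    for name, tmpl in templates.items():
        scores[name] = sum(
            match_weight(m, s, tmpl[node])
            for node, (m, s) in observed.items() if node in tmpl
        )
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

obs = {0: (1.0, 0.1), 1: (2.0, 0.2)}
tmpls = {"A": {0: 1.0, 1: 2.0}, "B": {0: 3.0, 1: 0.0}}
print(classify(obs, tmpls))  # "A" dominates
```

Note that, as in the text, each template gets its own figure of merit rather than a single all-or-none decision, and a priori class probabilities could be folded in before the final normalization.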
3
PERFORMANCE
The sonar version was tested against three sets of totally overlapping double chirp
signals, the worst possible case for this algorithm. Where training and testing
SNR's differed by a factor of anywhere from 1 to 8, 46 of 48 targets were correctly
recognized.
In extensive simulated testing against radar and jet engine modulation data, classifications were better than 95% correct down to -25 dB using the unmodified sonar
algorithm.
4
DISCUSSION
Distinguishing features of this algorithm include the following capabilities:
- Information fusion.
- Improved classifications.
- Real-time performance.
- Explanation of outputs.
4.1
INFORMATION FUSION
In STOCHASM, normalization of the input data facilitates the comparison of separate data items that are diverse in type. This is followed by the fusion, or combination, of all possible pairs of the set of inputs. The resulting combinations are
transferred to the hidden layer where they are evaluated and matched with templates. This allows the combining of different features derived either from the same
sensor suite or from several different sensor suites. The latter is often one of the
most challenging tasks in situation assessment.
4.2
4.2
IMPROVED CLASSIFICATIONS
4.2.1
Multiple Output Weights per Node
the node has many separate output weights; one for every target template. Each
of those output weights represents an actual correlation between the undiagnosed
feature data and one of the individual target templates. STOCHASM optimizes
the correlations of an unknown input with each possible class. In so doing, it also
generates figures of merit (numerical estimates of closeness of match) for ALL the
possible target classes, instead of a single "all-or-none" classification.
In more popularized networks, there is only one output weight for each node. Its
effectiveness is diluted by having to contribute to the correlation between one undiagnosed
classifications, an extra set of input connection weights is employed. The connection
weights provide a somewhat watered-down numerical estimate of the contribution
of their particular input data feature to the correct classification, ON THE AVERAGE, of targets representing all possible classes. They employ iterative procedures
to compute values for those weights, which prevents real-time training and generates sub-optimal correlations. Moreover, because all of this results in only a single
output for each hidden layer node, another set of connection weights between the
hidden layer node and each node of the output layer is required to complete the
classification process. Since these tend to be fully connected layers, the number of
weights and computations is prohibitively large.
4.2.2
Avoidance of Nearest-Neighbor Techniques
Some popular networks are sensitive to initial conditions. The determination of
the final values of their weights is influenced by the initial values assigned to them.
These networks require that, before the onset of training, the values of weights
be randomly assigned. Moreover, the classification outcomes of these networks is
often altered by changing the order in which training samples are submitted to the
network. Networks of this type may be unable to express their conclusions in figures
of merit for all possible classes. When inputs to the network share characteristics
of more than one target class, these networks tend to gravitate to the classification
that initially most closely resembles the input, for an "all-or-none" classification.
STOCHASM has none of these drawbacks.
4.2.3
Noisy Data
The algorithm handles SNR's of lower-than-one and situations where training and
testing SNR's differ. Segmentation of one dimensional patterns buried in noise is
done automatically. Even the noise itself can be classified. The algorithm can adapt
on-line to changing background noise patterns.
4.3
REAL-TIME PERFORMANCE
There is no need for back-propagation/gradient-descent methods to set the weights
during training. Therefore, no iterations or recursions are required. Only a single
feed-forward pass of data through the network is needed for either training or classification. Since the number of nodes, connections, layers, and weights is relatively
small, and the algorithm is implemented in parallel, the compute time is fast enough
to keep up with real-time in most application domains.
4.4
EXPLANATION OF OUTPUTS
There is strict separation of target classification evidence in the nodes of this network. In addition, the evidence is maintained so that positive and negative correlation data is separate and easily accessible. This enables improved credit (error)
assignment that leads to more effective classifications and the potential for making
available to the operator real-time explanations of program behavior.
4.5
FUTURE DIRECTIONS
Previous versions of the algorithm dynamically created, destroyed, or re-arranged
nodes and their linkages to optimize the network, minimize computations, and eliminate unnecessary inputs. This algorithm also employed a multi-level hierarchical
control system. The control system, on-line and in real-time, adjusted sampling
rates and queue lengths, governing when the background noise template is permitted to adapt to current noise inputs, and the rate at which it does so. Future versions
of the Connection Machine version will be able to effect the same procedures.
Efforts are now underway to:
1. Improve the temporal pattern matching capabilities.
2. Provide better heuristics for the computation of final figures of merit from the
massive amount of positive and negative correlation data resident within the
hidden layer nodes.
3. Adapt the algorithm to radar domains where time and spatial warping problems are prominent.
4. Simulate more realistic and complex sonar transients, with the expectation
the algorithm will perform better on those targets.
5. Apply the algorithm to information fusion tasks.
References
Malkoff, D.B., "The Application of Artificial Intelligence to the Handling of Real-Time Sensor Based Fault Detection and Diagnosis," Proceedings of the Eighth Ship Control Systems Symposium, Volume 3, Ministry of Defence, The Hague, pp 264-276. Also presented at the Hague, Netherlands, October 8, 1987.
Malkoff, D.B., "A Framework for Real-Time Fault Detection and Diagnosis Using
Temporal Data," The International Journal for Artificial Intelligence in Engineering,
Volume 2, No.2, pp 97-111, April 1987.
Malkoff, D.B. and L. Cohen, "A Neural Network Approach to the Detection Problem
Using Joint Time-Frequency Distributions," Proceedings of the IEEE 1990 International Conference on Acoustics, Speech, and Signal Processing, Albuquerque, New
Mexico, April 1990 (to appear).
PART III:
VISION
Active Bidirectional Coupling in a Cochlear Chip
Bo Wen and Kwabena Boahen
Department of Bioengineering
University of Pennsylvania
Philadelphia, PA 19104
{wenbo,boahen}@seas.upenn.edu
Abstract
We present a novel cochlear model implemented in analog very large
scale integration (VLSI) technology that emulates nonlinear active
cochlear behavior. This silicon cochlea includes outer hair cell (OHC)
electromotility through active bidirectional coupling (ABC), a mechanism we proposed in which OHC motile forces, through the microanatomical organization of the organ of Corti, realize the cochlear
amplifier. Our chip measurements demonstrate that frequency responses
become larger and more sharply tuned when ABC is turned on; the degree of the enhancement decreases with input intensity as ABC includes
saturation of OHC forces.
1 Silicon Cochleae
Cochlear models, mathematical and physical, with the shared goal of emulating nonlinear
active cochlear behavior, shed light on how the cochlea works if based on cochlear micromechanics. Among the modeling efforts, silicon cochleae have promise in meeting the
need for real-time performance and low power consumption. Lyon and Mead developed
the first analog electronic cochlea [1], which employed a cascade of second-order filters
with exponentially decreasing resonant frequencies. However, the cascade structure suffers from delay and noise accumulation and lacks fault-tolerance. Modeling the cochlea
more faithfully, Watts built a two-dimensional (2D) passive cochlea that addressed these
shortcomings by incorporating the cochlear fluid using a resistive network [2]. This parallel structure, however, has its own problem: response gain is diminished by interference
among the second-order sections' outputs due to the large phase change at resonance [3].
Listening more to biology, our silicon cochlea aims to overcome the shortcomings of existing architectures by mimicking the cochlear micromechanics while including outer hair cell
(OHC) electromotility. Although how exactly OHC motile forces boost the basilar membrane's (BM) vibration remains a mystery, cochlear microanatomy provides clues. Based
on these clues, we previously proposed a novel mechanism, active bidirectional coupling
(ABC), for the cochlear amplifier [4]. Here, we report an analog VLSI chip that implements
this mechanism. In essence, our implementation is the first silicon cochlea that employs
stimulus enhancement (i.e., active behavior) instead of undamping (i.e., high filter Q [5]).
The paper is organized as follows. In Section 2, we present the hypothesized mechanism
(ABC), first described in [4]. In Section 3, we provide a mathematical formulation of the
Figure 1: The inner ear. A Cutaway showing cochlear ducts (adapted from [6]). B Longitudinal view of cochlear partition (CP) (modified from [7]-[8]). Each outer hair cell (OHC)
tilts toward the base while the Deiter's cell (DC) on which it sits extends a phalangeal process (PhP) toward the apex. The OHCs' stereocilia and the PhPs' apical ends form the reticular lamina (RL). d is the tilt distance, and the segment size is also marked. IHC: inner hair cell.
model as the basis of cochlear circuit design. Then we proceed in Section 4 to synthesize
the circuit for the cochlear chip. Last, we present chip measurements in Section 5 that
demonstrate nonlinear active cochlear behavior.
2 Active Bidirectional Coupling
The cochlea actively amplifies acoustic signals as it performs spectral analysis. The movement of the stapes sets the cochlear fluid into motion, which passes the stimulus energy
onto a certain region of the BM, the main vibrating organ in the cochlea (Figure 1A). From
the base to the apex, BM fibers increase in width and decrease in thickness, resulting in an
exponential decrease in stiffness which, in turn, gives rise to the passive frequency tuning
of the cochlea. The OHCs' electromotility is widely thought to account for the cochlea's
exquisite sensitivity and discriminability. The exact way that OHC motile forces enhance
the BM's motion, however, remains unresolved.
We propose that the triangular mechanical unit formed by an OHC, a phalangeal process
(PhP) extended from the Deiter's cell (DC) on which the OHC sits, and a portion of the reticular lamina (RL), between the OHC's stereocilia end and the PhP's apical tip, plays
an active role in enhancing the BM's responses (Figure 1B). The cochlear partition (CP)
is divided into a number of segments longitudinally. Each segment includes one DC, one
PhP's apical tip and one OHC's stereocilia end, both attached to the RL. Approximating
the anatomy, we assume that when an OHC's stereocilia end lies in segment i − 1, its
basolateral end lies in the immediately apical segment i. Furthermore, the DC in segment
i extends a PhP that angles toward the apex of the cochlea, with its apical end inserted just
behind the stereocilia end of the OHC in segment i + 1.
Our hypothesis (ABC) includes both feedforward and feedbackward interactions. On one
hand, the feedforward mechanism, proposed in [9], hypothesized that the force resulting
from OHC contraction or elongation is exerted onto an adjacent downstream BM segment
due to the OHC's basal tilt. On the other hand, the novel insight of the feedbackward
mechanism is that the OHC force is delivered onto an adjacent upstream BM segment due
to the apical tilt of the PhP extending from the DC's main trunk.
In a nutshell, the OHC motile forces, through the microanatomy of the CP, feed forward
and backward, in harmony with each other, resulting in bidirectional coupling between
BM segments in the longitudinal direction. Specifically, due to the opposite action of OHC
Figure 2: Wave propagation (WP) and basilar membrane (BM) impedance in the active cochlear model with a 2 kHz pure tone (γ = 0.15, α = 0.3). A WP in fluid and BM. B BM impedance Zm (i.e., pressure divided by velocity), normalized by √(S(x)M(x)). Only the resistive component is shown; dot marks peak location.
forces on the BM and the RL, the motion of BM segment i − 1 reinforces that of segment i
while the motion of segment i + 1 opposes that of segment i, as described in detail in [4].
3 The 2D Nonlinear Active Model
To provide a blueprint for the cochlear circuit design, we formulate a 2D model of the
cochlea that includes ABC. Both the cochlea's length (BM) and height (cochlear ducts)
are discretized into a number of segments, with the original aspect ratio of the cochlea
maintained. In the following expressions, x represents the distance from the stapes along
the CP, with x = 0 at the base (or the stapes) and x = L (uncoiled cochlear duct length) at
the apex; y represents the vertical distance from the BM, with y = 0 at the BM and y = −h
(cochlear duct radius) at the bottom/top wall.
Providing that the assumption of fluid incompressibility holds, the velocity potential φ of the fluids is required to satisfy ∇²φ(x, y, t) = 0, where ∇² denotes the Laplacian operator. By definition, this potential is related to fluid velocities in the x and y directions: Vx = −∂φ/∂x and Vy = −∂φ/∂y.
The BM is driven by the fluid pressure difference across it. Hence, the BM's vertical motion
(with downward displacement being positive) can be described as follows.
Pd(x) + FOHC(x) = S(x) ξ(x) + β(x) ξ̇(x) + M(x) ξ̈(x),    (1)

where S(x) is the stiffness, β(x) is the damping, and M(x) is the mass, per unit area, of the BM; ξ is the BM's downward displacement. Pd = −ρ ∂(φSV(x, y, t) − φST(x, y, t))/∂t is the pressure difference between the two fluid ducts (the scala vestibuli (SV) and the scala tympani (ST)), evaluated at the BM (y = 0); ρ is the fluid density.
The FOHC(x) term combines feedforward and feedbackward OHC forces, described by

FOHC(x) = s0 [tanh(αγ S(x) ξ(x − d)/s0) − tanh(γ S(x) ξ(x + d)/s0)],    (2)

where γ denotes the OHC motility, expressed as a fraction of the BM stiffness, and α is the ratio of feedforward to feedbackward coupling, representing relative strengths of the OHC forces exerted on the BM segment through the DC, directly and via the tilted PhP. d denotes the tilt distance, which is the horizontal displacement between the source and the recipient of the OHC force, assumed to be equal for the forward and backward cases. We use the hyperbolic tangent function to model saturation of the OHC forces, the nonlinearity that is evident in physiological measurements [8]; s0 determines the saturation level.
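The saturating force in Equation 2 is easy to sanity-check numerically. The sketch below discretizes the BM into a handful of segments and applies the feedforward and feedbackward terms; the function name, the unit stiffness profile, and the parameter values (γ = 0.15, α = 0.3, s0 = 1) are illustrative assumptions, not the chip's calibrated values.

```python
import numpy as np

def f_ohc(xi, S, gamma=0.15, alpha=0.3, s0=1.0, d=1):
    """Saturating OHC force of Eq. 2 on a discretized BM (illustrative
    parameter values; `d` is the tilt distance in segments)."""
    xi_up = np.roll(xi, d)       # xi(x - d), feedforward source
    xi_dn = np.roll(xi, -d)      # xi(x + d), feedbackward source
    xi_up[:d] = 0.0              # zero-pad the edges (no wraparound)
    xi_dn[-d:] = 0.0
    return s0 * (np.tanh(alpha * gamma * S * xi_up / s0)
                 - np.tanh(gamma * S * xi_dn / s0))

xi = np.zeros(8)
xi[3] = 1.0                      # displace a single segment
S = np.ones(8)                   # unit stiffness profile (illustrative)
F = f_ohc(xi, S)
# F[4] > 0: the moving segment reinforces its downstream neighbor;
# F[2] < 0: it opposes its upstream neighbor; each tanh term is bounded,
# so the total force saturates for large displacements.
```

Displacing one segment produces a positive force one segment downstream and a negative force one segment upstream, matching the reinforcement/opposition pattern described above, and scaling the displacement up only grows the force until the tanh terms saturate.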
We observed wave propagation in the model and computed the BM's impedance (i.e., the
ratio of driving pressure to velocity). Following the semi-analytical approach in [2], we
simulated a linear version of the model (without saturation). The traveling wave transitions
from long-wave to short-wave before the BM vibration peaks; the wavelength around the
characteristic place is comparable to the tilt distance (Figure 2A). The BM impedance's
real part (i.e., the resistive component) becomes negative before the peak (Figure 2B). On
the whole, inclusion of OHC motility through ABC boosts the traveling wave by pumping
energy onto the BM when the wavelength matches the tilt of the OHC and PhP.
4 Analog VLSI Design and Implementation
Based on our mathematical model, which produces realistic responses, we implemented a
2D nonlinear active cochlear circuit in analog VLSI, taking advantage of the 2D nature of
silicon chips. We first synthesize a circuit analog of the mathematical model, and then we
implement the circuit in the log-domain. We start by synthesizing a passive model, and
then extend it to a nonlinear active one by including ABC with saturation.
4.1 Synthesizing the BM Circuit
The model consists of two fundamental parts: the cochlear fluid and the BM. First, we
design the fluid element and thus the fluid network. In discrete form, the fluids can be
viewed as a grid of elements with a specific resistance that corresponds to the fluid density
or mass. Since charge is conserved for a small sheet of resistance and so are particles for
a small volume of fluid, we use current to simulate fluid velocity. At the transistor level,
the current flowing through the channel of a MOS transistor, operating subthreshold as a
diffusive element, can be used for this purpose. Therefore, following the approach in [10],
we implement the cochlear fluid network using a diffusor network formed by a 2D grid of
nMOS transistors.
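As a software analogy for what the diffusor grid computes, the sketch below relaxes the discrete Laplace equation on a small rectangular grid: each interior node settles to the average of its four neighbors, just as each node voltage in the resistive network settles between its neighbors. The grid size, boundary values, and iteration count are arbitrary illustrations.

```python
import numpy as np

# Jacobi relaxation of the discrete Laplace equation: each interior node
# is repeatedly set to the average of its four neighbors (the same
# equilibrium the nMOS diffusor grid reaches in the analog domain).
phi = np.zeros((20, 20))
phi[0, :] = 1.0                            # one driven boundary
for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
# Residual of the discrete Laplace equation after relaxation:
residual = np.max(np.abs(phi[1:-1, 1:-1]
                         - 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                   + phi[1:-1, :-2] + phi[1:-1, 2:])))
```

After enough sweeps the residual is negligible and the interior potential interpolates smoothly between the driven and grounded boundaries.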
Second, we design the BM element and thus the BM. As current represents velocity, we
rewrite the BM boundary condition (Equation 1, without the FOHC term):
İin = S(x) ∫ Imem dt + β(x) Imem + M(x) İmem,    (3)
where Iin, obtained by applying the voltage from the diffusor network to the gate of a pMOS transistor, represents the velocity potential scaled by the fluid density. In turn, Imem drives the diffusor network to match the fluid velocity with the BM velocity, ξ̇. The FOHC term is dealt with in Section 4.2.
Implementing this second-order system requires two state-space variables, which we name
Is and Io. And with s = jω, our synthesized BM design (passive) is

τ1 Is s + Is = −Iin + Io,    (4)
τ2 Io s + Io = Iin − bIs,    (5)
Imem = Iin + Is − Io,    (6)
where the two first-order systems are both low-pass filters (LPFs), with time constants τ1 and τ2, respectively; b is a gain factor. Thus, Iin can be expressed in terms of Imem as: Iin s² = [(b + 1)/(τ1τ2) + ((τ1 + τ2)/(τ1τ2)) s + s²] Imem.
Comparing this expression with the design target (Equation 3) yields the circuit analogs: S(x) = (b + 1)/(τ1τ2), β(x) = (τ1 + τ2)/(τ1τ2), and M(x) = 1.
Note that the mass M(x) is a constant (i.e., 1), which was also the case in our mathematical model simulation. These analogies require that τ1 and τ2 increase exponentially to
Figure 3: Low-pass filter (LPF) and second-order section circuit design. A Half-LPF circuit. B Complete LPF circuit formed by two half-LPF circuits. C Basilar membrane (BM)
circuit. It consists of two LPFs and connects to its neighbors through Is and IT .
simulate the exponentially decreasing BM stiffness (and damping); b allows us to achieve a reasonable stiffness for a practical choice of τ1 and τ2 (capacitor size is limited by silicon area).
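The synthesis can be verified numerically: solving the passive state equations (Equations 4-6) for the state variables at a few frequencies and checking that Iin s² = [S(x) + β(x)s + s²] Imem with S = (b + 1)/(τ1τ2) and β = (τ1 + τ2)/(τ1τ2). The time constants and gain below are arbitrary test values, not the chip's.

```python
import numpy as np

tau1, tau2, b = 1e-3, 2e-3, 50.0           # arbitrary test values
S = (b + 1) / (tau1 * tau2)                # stiffness analog
beta = (tau1 + tau2) / (tau1 * tau2)       # damping analog

for w in (1e2, 1e3, 1e4):                  # a few test frequencies (rad/s)
    s = 1j * w
    Iin = 1.0
    # Eqs. 4 and 5 as a linear system in the state variables Is, Io:
    #   (1 + tau1*s) Is -            Io = -Iin
    #            b * Is + (1 + tau2*s) Io =  Iin
    A = np.array([[1 + tau1 * s, -1.0],
                  [b, 1 + tau2 * s]])
    Is_, Io_ = np.linalg.solve(A, np.array([-Iin, Iin]))
    Imem = Iin + Is_ - Io_                 # Eq. 6
    # Second-order form: Iin * s^2 = (S + beta*s + s^2) * Imem
    assert np.isclose(Iin * s**2, (S + beta * s + s**2) * Imem)
```

The assertion holds at every frequency, confirming that the two cascaded first-order filters plus the output combination reproduce the second-order boundary condition.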
4.2 Adding Active Bidirectional Coupling
To include ABC in the BM boundary condition, we replace ξ in Equation 2 with ∫ Imem dt to obtain

FOHC = rff S(x) T(∫ Imem(x − d) dt) − rfb S(x) T(∫ Imem(x + d) dt),

where rff = αγ and rfb = γ denote the feedforward and feedbackward OHC motility factors, and T denotes saturation. The saturation is applied to the displacement, instead of the force, as this simplifies the implementation. We obtain the integrals by observing that, in the passive design, the state variable Is = −Imem/(sτ1). Thus, ∫ Imem(x − d) dt = −τ1f Isf and ∫ Imem(x + d) dt = −τ1b Isb. Here, Isf and Isb represent the outputs of the first LPF in the upstream and downstream BM segments, respectively; τ1f and τ1b represent their respective time constants. To reduce complexity in implementation, we use τ1 to approximate both τ1f and τ1b as the longitudinal span is small.
We obtain the active BM design by replacing Equation 5 with the synthesis result:

τ2 Io s + Io = Iin − bIs + rfb (b + 1) T(−Isb) − rff (b + 1) T(−Isf).
Note that, to implement ABC, we only need to add two currents to the second LPF in
the passive system. These currents, Isf and Isb , come from the upstream and downstream
neighbors of each segment.
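A minimal discrete-time sketch of the active design, assuming a short chain of the synthesized sections driven by a common input and ignoring the fluid grid (which couples the sections in the real chip). A tanh soft limit stands in for the chip's one-sided current limiter, and all parameter values are illustrative, not measured.

```python
import numpy as np

def simulate(n_sec=5, coupling=True, t_end=0.02, dt=1e-6):
    """Euler-integrate a short chain of the synthesized BM sections.

    Each section follows Eqs. 4-6, with Eq. 5 augmented by the saturated
    neighbor currents Isf (upstream) and Isb (downstream).  The fluid
    grid is omitted, so every section sees the same drive."""
    tau1, tau2, b = 1e-3, 1e-3, 10.0
    gamma, alpha = 0.15, 0.3
    rff, rfb = (alpha * gamma, gamma) if coupling else (0.0, 0.0)
    Isat = 0.5
    T = lambda u: Isat * np.tanh(u / Isat)   # stand-in for the limiter

    Is = np.zeros(n_sec)
    Io = np.zeros(n_sec)
    t, Iin = 0.0, 0.0
    while t < t_end:
        Iin = np.sin(2 * np.pi * 1e3 * t)        # common 1 kHz drive
        Isf = np.roll(Is, 1);  Isf[0] = 0.0      # upstream neighbor's Is
        Isb = np.roll(Is, -1); Isb[-1] = 0.0     # downstream neighbor's Is
        dIs = (-Is - Iin + Io) / tau1                        # Eq. 4
        dIo = (-Io + Iin - b * Is                            # Eq. 5 + ABC
               + rfb * (b + 1) * T(-Isb)
               - rff * (b + 1) * T(-Isf)) / tau2
        Is, Io = Is + dt * dIs, Io + dt * dIo
        t += dt
    return Iin + Is - Io                         # Imem per section (Eq. 6)

passive = simulate(coupling=False)
active = simulate(coupling=True)
```

Because the coupling currents are saturated, the active chain stays bounded while still producing responses that differ measurably from the passive one.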
Figure 4: Cochlear chip. A Architecture: Two diffusive grids with embedded BM circuits
model the cochlea. B Detail. BM circuits exchange currents with their neighbors.
4.3 Class AB Log-domain Implementation
We employ the log-domain filtering technique [11] to realize current-mode operation. In
addition, following the approach proposed in [12], we implement the circuit in Class AB to
increase dynamic range, reduce the effect of mismatch and lower power consumption. This
differential signaling is inspired by the way the biological cochlea works: the vibration of the BM is driven by the pressure difference across it.
Taking a bottom-up strategy, we start by designing a Class AB LPF, a building block for
the BM circuit. It is described by
τ (Iout+ − Iout−) s + (Iout+ − Iout−) = Iin+ − Iin−   and   τ Iout+ Iout− s + Iout+ Iout− = Iq²,
where Iq sets the geometric mean of the positive and negative components of the output
current, and τ sets the time constant.
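These design targets can be exercised numerically. Assuming current-mode dynamics of the form dI±/dt = (I±/τ)[(Iin± − Iin∓) + (Iq²/I± − I±)]/(Iout+ + Iout−), which is one rewriting consistent with the differential LPF and geometric-mean constraint (all values below are arbitrary, not the chip's), a constant differential input should drive the differential output to that input at DC while the product of the two output currents relaxes to Iq²:

```python
# Positive path:
#   dIp/dt = (Ip/tau) * [(Iin_p - Iin_n) + (Iq**2/Ip - Ip)] / (Ip + In)
# and the negative path with + and - swapped (assumed illustrative form).
tau, Iq = 1e-3, 1.0
Iin_p, Iin_n = 1.5, 0.5        # constant (DC) differential input
Ip, In = 2.0, 0.2              # arbitrary initial output currents
dt = 1e-6
for _ in range(50000):         # integrate for 50 time constants
    denom = Ip + In
    dIp = Ip / tau * ((Iin_p - Iin_n) + (Iq**2 / Ip - Ip)) / denom
    dIn = In / tau * ((Iin_n - Iin_p) + (Iq**2 / In - In)) / denom
    Ip, In = Ip + dt * dIp, In + dt * dIn
# At DC the differential output settles to the differential input, and
# the geometric-mean constraint Ip*In = Iq**2 is restored.
```

Both output currents remain positive throughout, which is the point of the Class AB split: large differential swings are carried without either path cutting off.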
Combining the common-mode constraint with the differential design equation yields the nodal equation for the positive path (the negative path has superscripts + and − swapped):

C V̇out+ = Iτ [(Iin+ − Iin−) + (Iq²/Iout+ − Iout+)] / (Iout+ + Iout−).
This nodal equation suggests the half-LPF circuit shown in Figure 3A. Vout+, the voltage on the positive capacitor (C+), gates a pMOS transistor to produce the corresponding current signal, Iout+ (Vout− and Iout− are similarly related). The bias Vq sets the quiescent current Iq while Vτ determines the current Iτ, which is related to the time constant by τ = CuT/(κIτ) (κ is the subthreshold slope coefficient and uT is the thermal voltage). Two of these subcircuits, connected in push-pull, form a complete LPF (Figure 3B).
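To get a feel for the bias ranges involved, the relation τ = CuT/(κIτ) can be evaluated for typical subthreshold values. The numbers below (1 pF, 25.6 mV, κ = 0.7, 1 nA) are assumed, not taken from the chip, but they show that nanoamp bias currents place the filter corner in the audio range.

```python
import math

# Assumed, typical subthreshold values (not the chip's actual biases):
C = 1e-12                 # capacitor, 1 pF
uT = 0.0256               # thermal voltage at room temperature, ~25.6 mV
kappa = 0.7               # subthreshold slope coefficient
Itau = 1e-9               # bias current, 1 nA

tau = C * uT / (kappa * Itau)       # LPF time constant, ~37 microseconds
f_c = 1.0 / (2.0 * math.pi * tau)   # corner frequency, ~4.4 kHz
```

Sweeping Iτ exponentially (via the bias voltage) therefore sweeps the corner frequency exponentially, which is exactly what the cascade of sections with exponentially spaced resonances requires.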
The BM circuit is implemented using two LPFs interacting in accordance with the synthesized design equations (Figure 3C). Imem is the combination of three currents, Iin , Is , and
Io. Each BM sends out Is and receives IT, a saturated version of its neighbor's Is. The
saturation is accomplished by a current-limiting transistor (see Figure 4B), which yields
IT = T (Is ) = Is Isat /(Is + Isat ), where Isat is set by a bias voltage Vsat.
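The current limiter has the usual soft-saturation shape, which a two-line sketch makes explicit (Isat is normalized to 1 here purely for illustration):

```python
def soft_limit(Is, Isat=1.0):
    """Current limiter IT = T(Is) = Is*Isat/(Is + Isat):
    approximately Is for Is << Isat, saturating at Isat for Is >> Isat."""
    return Is * Isat / (Is + Isat)

small = soft_limit(0.01)   # nearly linear regime
large = soft_limit(100.0)  # pinned just below Isat
```

Raising Vsat (and hence Isat) pushes the onset of saturation to larger coupling currents, which is how the degree of active enhancement is controlled.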
4.4 Chip Architecture
We fabricated a version of our cochlear chip architecture (Figure 4) with 360 BM circuits
and two 4680-element fluid grids (360 × 13). This chip occupies 10.9 mm² of silicon area in
0.25?m CMOS technology. Differential input signals are applied at the base while the two
fluid grids are connected at the apex through a fluid element that represents the helicotrema.
5 Chip Measurements
We carried out two measurements that demonstrate the desired amplification by ABC, and
the compressive growth of BM responses due to saturation. To obtain sinusoidal current as
the input to the BM subcircuits, we set the voltages applied at the base to be the logarithm
of a half-wave rectified sinusoid.
We first investigated BM-velocity frequency responses at six linearly spaced cochlear positions (Figure 5). The frequency that maximally excites the first position (Stage 30), defined
as its characteristic frequency (CF), is 12.1kHz. The remaining five CFs, from early to later
stages, are 8.2k, 1.7k, 905, 366, and 218Hz, respectively. Phase accumulation at the CFs
ranges from 0.56 to 2.67π radians, comparable to 1.67π radians in the mammalian cochlea
[13]. Q10 factor (the ratio of the CF to the bandwidth 10dB below the peak) ranges from
1.25 to 2.73, comparable to 2.55 at mid-sound intensity in biology (computed from [13]).
The cutoff slope ranges from -20 to -54dB/octave, as compared to -85dB/octave in biology
(computed from [13]).
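The tuning statistics quoted above can be extracted from any sampled frequency response with a few lines. The response used below is a synthetic second-order resonance (illustrative only, not chip data), and Q10 is computed exactly as defined: CF divided by the bandwidth 10 dB below the peak.

```python
import numpy as np

def tuning_stats(freqs, gain_db):
    """Characteristic frequency (CF) and Q10 from a sampled response:
    Q10 = CF / (bandwidth measured 10 dB below the peak)."""
    i_pk = int(np.argmax(gain_db))
    cf = freqs[i_pk]
    above = np.where(gain_db >= gain_db[i_pk] - 10.0)[0]
    bandwidth = freqs[above[-1]] - freqs[above[0]]
    return cf, cf / bandwidth

# Synthetic second-order resonance near 5 kHz (illustrative, not chip data):
freqs = np.linspace(100.0, 20000.0, 2000)
u = freqs / 5000.0
gain_db = -20.0 * np.log10(np.abs(1 - u**2 + 1j * 0.3 * u))
cf, q10 = tuning_stats(freqs, gain_db)
```

Applied to the measured chip responses, the same computation yields the Q10 range of 1.25 to 2.73 reported above.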
Figure 5: Measured BM-velocity frequency responses at six locations. A Amplitude.
B Phase. Dashed lines: Biological data (adapted from [13]). Dots mark peaks.
We then explored the longitudinal pattern of BM-velocity responses and the effect of ABC.
Stimulating the chip using four different pure tones, we obtained responses in which a
4kHz input elicits a peak around Stage 85 while 500Hz sound travels all the way to Stage
178 and peaks there (Figure 6A). We varied the input voltage level and obtained frequency
responses at Stage 100 (Figure 6B). Input voltage level increases linearly such that the
current increases exponentially; the input current level (in dB) was estimated based on
the measured κ for this chip. As expected, we observed linearly increasing responses at
low frequencies in the logarithmic plot. In contrast, the responses around the CF increase
less and become broader with increasing input level as saturation takes effect in that region
(resembling a passive cochlea). We observed 24dB compression as compared to 27 to 47dB
in biology [13]. At the highest intensities, compression also occurs at low frequencies.
These chip measurements demonstrate that inclusion of ABC, simply through coupling
neighboring BM elements, transforms a passive cochlea into an active one. This active
cochlear model's nonlinear responses are qualitatively comparable to physiological data.
6 Conclusions
We presented an analog VLSI implementation of a 2D nonlinear cochlear model that utilizes a novel active mechanism, ABC, which we proposed to account for the cochlear amplifier. ABC was shown to pump energy into the traveling wave. Rather than detecting
the wave's amplitude and implementing an automatic-gain-control loop, our biomorphic
model accomplishes this simply by nonlinear interactions between adjacent neighbors. Im-
Figure 6: Measured BM-velocity responses (cont'd). A Longitudinal responses (20-stage
moving average). Peak shifts to earlier (basal) stages as input frequency increases from
500 to 4kHz. B Effects of increasing input intensity. Responses become broader and show
compressive growth.
plemented in the log-domain, with Class AB operation, our silicon cochlea shows enhanced
frequency responses, with compressive behavior around the CF, when ABC is turned on.
These features are desirable in prosthetic applications and automatic speech recognition
systems as they capture the properties of the biological cochlea.
References
[1] Lyon, R.F. & Mead, C.A. (1988) An analog electronic cochlea. IEEE Trans. Acoust. Speech
and Signal Proc., 36: 1119-1134.
[2] Watts, L. (1993) Cochlear Mechanics: Analysis and Analog VLSI . Ph.D. thesis, Pasadena, CA:
California Institute of Technology.
[3] Fragnière, E. (2005) A 100-Channel analog CMOS auditory filter bank for speech recognition.
IEEE International Solid-State Circuits Conference (ISSCC 2005) , pp. 140-141.
[4] Wen, B. & Boahen, K. (2003) A linear cochlear model with active bi-directional coupling.
The 25th Annual International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC 2003), pp. 2013-2016.
[5] Sarpeshkar, R., Lyon, R.F., & Mead, C.A. (1996) An analog VLSI cochlear model with new
transconductance amplifier and nonlinear gain control. Proceedings of the IEEE Symposium on
Circuits and Systems (ISCAS 1996) , 3: 292-295.
[6] Mead, C.A. (1989) Analog VLSI and Neural Systems . Reading, MA: Addison-Wesley.
[7] Russell, I.J. & Nilsen, K.E. (1997) The location of the cochlear amplifier: Spatial representation
of a single tone on the guinea pig basilar membrane. Proc. Natl. Acad. Sci. USA, 94: 2660-2664.
[8] Geisler, C.D. (1998) From sound to synapse: physiology of the mammalian ear . Oxford University Press.
[9] Geisler, C.D. & Sang, C. (1995) A cochlear model using feed-forward outer-hair-cell forces.
Hearing Research , 86: 132-146.
[10] Boahen, K.A. & Andreou, A.G. (1992) A contrast sensitive silicon retina with reciprocal
synapses. In Moody, J.E. and Lippmann, R.P. (eds.), Advances in Neural Information Processing Systems 4 (NIPS 1992) , pp. 764-772, Morgan Kaufmann, San Mateo, CA.
[11] Frey, D.R. (1993) Log-domain filtering: an approach to current-mode filtering. IEE Proc. G,
Circuits Devices Syst., 140 (6): 406-416.
[12] Zaghloul, K. & Boahen, K.A. (2005) An On-Off log-domain circuit that recreates adaptive
filtering in the retina. IEEE Transactions on Circuits and Systems I: Regular Papers , 52 (1):
99-107.
[13] Ruggero, M.A., Rich, N.C., Narayan, S.S., & Robles, L. (1997) Basilar membrane responses
to tones at the base of the chinchilla cochlea. J. Acoust. Soc. Am., 101 (4): 2151-2163.
Predicting EMG Data from M1 Neurons
with Variational Bayesian Least Squares
Jo-Anne Ting1, Aaron D'Souza1
Kenji Yamamoto3, Toshinori Yoshioka2, Donna Hoffman3
Shinji Kakei4 , Lauren Sergio6 , John Kalaska5
Mitsuo Kawato2 , Peter Strick3 , Stefan Schaal1,2
1
Comp. Science & Neuroscience, U. of S. California, Los Angeles, CA 90089, USA
2
ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan
3
University of Pittsburgh, Pittsburgh, PA 15261, USA
4
Tokyo Metropolitan Institute for Neuroscience, Tokyo 183-8526, Japan
5
University of Montreal, Montreal, Canada H3C-3J7
6
York University, Toronto, Ontario, Canada M3J1P3
Abstract
An increasing number of projects in neuroscience requires the statistical analysis of high dimensional data sets, as, for instance, in
predicting behavior from neural firing or in operating artificial devices from brain recordings in brain-machine interfaces. Linear
analysis techniques remain prevalent in such cases, but classical
linear regression approaches are often numerically too fragile in
high dimensions. In this paper, we address the question of whether
EMG data collected from arm movements of monkeys can be faithfully reconstructed with linear approaches from neural activity in
primary motor cortex (M1). To achieve robust data analysis, we
develop a full Bayesian approach to linear regression that automatically detects and excludes irrelevant features in the data, regularizing against overfitting. In comparison with ordinary least
squares, stepwise regression, partial least squares, LASSO regression and a brute force combinatorial search for the most predictive
input features in the data, we demonstrate that the new Bayesian
method offers a superior mixture of characteristics in terms of regularization against overfitting, computational efficiency and ease of
use, demonstrating its potential as a drop-in replacement for other
linear regression techniques. As neuroscientific results, our analyses demonstrate that EMG data can be well predicted from M1
neurons, further opening the path for possible real-time interfaces
between brains and machines.
1 Introduction
In recent years, there has been growing interest in large scale analyses of brain activity with respect to associated behavioral variables. For instance, projects can be
found in the area of brain-machine interfaces, where neural firing is directly used to control an artificial system like a robot [1, 2], to control a cursor on a computer
screen via non-invasive brain signals [3] or to classify visual stimuli presented to
a subject [4, 5]. In these projects, the brain signals to be processed are typically
high dimensional, on the order of hundreds or thousands of inputs, with large numbers of redundant and irrelevant signals. Linear modeling techniques like linear
regression are among the primary analysis tools [6, 7] for such data. However, the
computational problem of data analysis involves not only data fitting, but requires
that the model extracted from the data has good generalization properties. This is
crucial for predicting behavior from future neural recordings, e.g., for continual online interpretation of brain activity to control prosthetic devices or for longitudinal
scienti?c studies of information processing in the brain. Surprisingly, robust linear
modeling of high dimensional data is non-trivial as the danger of ?tting noise and
encountering numerical problems is high. Classical techniques like ridge regression,
stepwise regression or partial least squares regression are known to be prone to
over?tting and require careful human supervision to ensure useful results.
In this paper, we will focus on how to improve linear data analysis for the high dimensional scenarios described above, with a view towards developing a statistically robust "black box" approach that automatically detects the most relevant input dimensions for generalization and excludes other dimensions in a statistically sound way. For this purpose, we investigate a full Bayesian treatment of linear regression with automatic relevance detection [8]. Such an algorithm, called Variational Bayesian Least Squares (VBLS), can be formulated in closed form with the help of a variational Bayesian approximation and turns out to be computationally highly efficient. We apply VBLS to the reconstruction of EMG data from motor cortical firing, using data sets collected by [9] and [10, 11]. This data analysis addresses important neuroscientific questions in terms of whether M1 neurons can directly predict EMG traces [12], whether M1 has a muscle-based topological organization and whether information in M1 should be used to predict behavior in future brain-machine interfaces. Our main focus in this paper, however, will be on the robust statistical analysis of these kinds of data. Comparisons with classical linear analysis techniques and a brute force combinatorial model search on a cluster computer demonstrate that our VBLS algorithm achieves the "black box" quality of a robust statistical analysis technique without any tunable parameters.
In the following sections, we will first sketch the derivation of Variational Bayesian Least Squares and subsequently perform extensive comparative data analysis of this technique in the context of predicting EMG data from M1 neural firing.
2
High Dimensional Regression
Before developing our VBLS algorithm, let us briefly revisit classical linear regression techniques. The standard model for linear regression is:

$$y = \sum_{m=1}^{d} b_m x_m + \epsilon \qquad (1)$$
where $\mathbf{b}$ is the regression vector composed of $b_m$ components, $d$ is the number of input dimensions, $\epsilon$ is additive mean-zero noise, $\mathbf{x}$ are the inputs and $y$ are the outputs. The Ordinary Least Squares (OLS) estimate of the regression vector is $\mathbf{b} = \left(X^T X\right)^{-1} X^T \mathbf{y}$. The main problem with OLS regression in high dimensional input spaces is that the full rank assumption of $\left(X^T X\right)^{-1}$ is often violated due to underconstrained data sets. Ridge regression can "fix" such problems numerically, but introduces uncontrolled bias. Additionally, if the input dimensionality exceeds around 1000 dimensions, the matrix inversion can become prohibitively computationally expensive.
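To make this fragility concrete, the following sketch (NumPy, on made-up toy data, not the monkey recordings of this paper) shows the rank deficiency of $X^T X$ in an underconstrained problem and the numerical ridge "fix" just mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underconstrained toy problem: more input dimensions (d) than data points (N),
# so X^T X is rank-deficient and the plain OLS inverse does not exist.
N, d = 20, 50
X = rng.standard_normal((N, d))
b_true = np.zeros(d)
b_true[:3] = [1.0, -2.0, 0.5]
y = X @ b_true + 0.01 * rng.standard_normal(N)

XtX = X.T @ X
rank = np.linalg.matrix_rank(XtX)      # at most N = 20, far below d = 50

# Ridge regression "fixes" the inversion numerically by adding a small
# diagonal term, at the price of uncontrolled bias in the estimate.
ridge = 1e-6
b_ridge = np.linalg.solve(XtX + ridge * np.eye(d), X.T @ y)
fit_mse = np.mean((y - X @ b_ridge) ** 2)
```

With N < d the ridge solution interpolates the training data almost exactly, which is precisely the overfitting danger discussed above.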
Several ideas exist for how to improve over OLS. First, stepwise regression [13] can be employed. However, it has been strongly criticized for its potential for overfitting and its inconsistency in the presence of collinearity in the input data [14]. To
[Figure 1: three graphical models, panels (a) Linear regression, (b) Probabilistic backfitting, (c) VBLS; nodes x_i1..x_id, hidden z_i1..z_id, parameters b_1..b_d and precisions alpha_1..alpha_d, output y_i, plate i=1..N]
Figure 1: Graphical Models for Linear Regression. Random variables are in circular nodes, observed random variables are in double circles and point estimated parameters are in square nodes.
deal with such collinearity directly, dimensionality reduction techniques like Principal Components Regression (PCR) and Factor Regression (FR) [15] are useful. These methods retain components in input space with large variance, regardless of whether these components influence the prediction [16], and can even eliminate low variance inputs that may have high predictive power for the outputs [17]. Another class of linear regression methods are projection regression techniques, most notably Partial Least Squares Regression (PLS) [18]. PLS performs computationally inexpensive O(d) univariate regressions along projection directions, chosen according to the correlation between inputs and outputs. While slightly heuristic in nature, PLS is a surprisingly successful algorithm for ill-conditioned and high-dimensional regression problems, although it also has a tendency towards overfitting [16]. LASSO (Least Absolute Shrinkage and Selection Operator) regression [19] shrinks certain regression coefficients to 0, giving interpretable models that are sparse. However, a tuning parameter needs to be set, which can be done using n-fold cross-validation or manual hand-tuning. Finally, there are also more efficient methods for matrix inversion [20, 21], which, however, assume a well-conditioned regression problem a priori and degrade in the presence of collinearities in inputs.
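As a rough illustration of the LASSO tuning problem mentioned above, here is a minimal coordinate-descent LASSO with the tuning parameter chosen on a held-out split (a simple stand-in for n-fold cross-validation; synthetic data and helper names are illustrative, not from the paper):

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimal coordinate-descent LASSO via soft-thresholding updates."""
    d = X.shape[1]
    b = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for m in range(d):
            r = y - X @ b + X[:, m] * b[m]          # partial residual without feature m
            rho = X[:, m] @ r
            b[m] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[m]
    return b

rng = np.random.default_rng(1)
n, d = 100, 10
X = rng.standard_normal((n, d))
b_true = np.zeros(d)
b_true[:2] = [2.0, -3.0]                            # only 2 of 10 inputs relevant
y = X @ b_true + 0.1 * rng.standard_normal(n)

# Choose the tuning parameter on a held-out split (stand-in for n-fold CV).
X_tr, y_tr, X_va, y_va = X[:70], y[:70], X[70:], y[70:]
lams = [0.1, 1.0, 10.0, 100.0]
val_err = [np.mean((y_va - X_va @ lasso_cd(X_tr, y_tr, lam)) ** 2) for lam in lams]
best_lam = lams[int(np.argmin(val_err))]
b_hat = lasso_cd(X, y, best_lam)
```

The point is that a second, outer optimization over the tuning parameter is unavoidable, which is exactly the manual supervision the Bayesian treatment below removes.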
In the following section, we develop a linear regression algorithm in a Bayesian framework that automatically regularizes against problems of overfitting. Moreover, the iterative nature of the algorithm, due to its formulation as an Expectation-Maximization problem [22], avoids the computational cost and numerical problems of matrix inversions. Thus, it addresses the two major problems of high-dimensional OLS simultaneously. Conceptually, the algorithm can be interpreted as a Bayesian version of either backfitting or partial least squares regression.
3
Variational Bayesian Least Squares
Figure 1 illustrates the progression of graphical models that we need in order to
develop a robust Bayesian version of linear regression. Figure 1a depicts the standard linear regression model. In the spirit of PLS, if we knew an optimal projection
direction of the input data, then the entire regression problem could be solved by
a univariate regression between the projected data and the outputs. This optimal
projection direction is simply the true gradient between inputs and outputs. In the
tradition of EM algorithms [22], we encode this projection direction as a hidden
variable, as shown in Figure 1b. The unobservable variables zim (where i = 1..N
denotes the index into the data set of N data points) are the results of each input
being multiplied with its corresponding component of the projection vector (i.e.
bm ). Then, the zim are summed up to form a predicted output yi .
More formally, the linear regression model in Eq. (1) is modified to become:

$$z_{im} = b_m x_{im}, \qquad y_i = \sum_{m=1}^{d} z_{im} + \epsilon$$

For a probabilistic treatment with EM, we make a standard normal assumption of all distributions in the form of:

$$y_i \,|\, \mathbf{z}_i \sim \mathrm{Normal}\!\left(y_i;\, \mathbf{1}^T \mathbf{z}_i,\, \psi_y\right), \qquad z_{im} \,|\, x_i \sim \mathrm{Normal}\!\left(z_{im};\, b_m x_{im},\, \psi_{z_m}\right)$$

where $\mathbf{1} = [1, 1, \ldots, 1]^T$. While this model is still identical to OLS, notice that in the graphical model, the regression coefficients $b_m$ are behind the fan-in to the outputs $y_i$. Given the data $D = \{\mathbf{x}_i, y_i\}_{i=1}^{N}$, we can view this new regression model as an EM problem and maximize the incomplete log likelihood $\log p(\mathbf{y}|X)$ by maximizing the expected complete log likelihood $\log p(\mathbf{y}, Z|X)$:

$$\log p(\mathbf{y}, Z|X) = -\frac{N}{2}\log \psi_y - \frac{1}{2\psi_y}\sum_{i=1}^{N}\left(y_i - \mathbf{1}^T \mathbf{z}_i\right)^2 - \frac{N}{2}\sum_{m=1}^{d}\log \psi_{z_m} - \sum_{m=1}^{d}\frac{1}{2\psi_{z_m}}\sum_{i=1}^{N}\left(z_{im} - b_m x_{im}\right)^2 + \mathrm{const} \qquad (2)$$
where $Z$ denotes the $N \times d$ matrix of all $z_{im}$. The resulting EM updates require standard manipulations of normal distributions and result in:

M-step:
$$b_m = \frac{\sum_{i=1}^{N} \langle z_{im}\rangle\, x_{im}}{\sum_{i=1}^{N} x_{im}^2}$$
$$\psi_y = \frac{1}{N}\sum_{i=1}^{N}\left[\left(y_i - \mathbf{1}^T\langle \mathbf{z}_i\rangle\right)^2 + \mathbf{1}^T \Sigma_z \mathbf{1}\right]$$
$$\psi_{z_m} = \frac{1}{N}\sum_{i=1}^{N}\left[\left(\langle z_{im}\rangle - b_m x_{im}\right)^2 + \sigma_{z_m}^2\right]$$

E-step:
$$\mathbf{1}^T \Sigma_z \mathbf{1} = \left(\sum_{m=1}^{d}\psi_{z_m}\right)\left(1 - \frac{1}{s}\sum_{m=1}^{d}\psi_{z_m}\right)$$
$$\sigma_{z_m}^2 = \psi_{z_m}\left(1 - \frac{1}{s}\psi_{z_m}\right)$$
$$\langle z_{im}\rangle = b_m x_{im} + \frac{1}{s}\,\psi_{z_m}\left(y_i - \mathbf{b}^T \mathbf{x}_i\right)$$

where we define $s = \psi_y + \sum_{m=1}^{d}\psi_{z_m}$ and $\Sigma_z = \mathrm{Cov}(\mathbf{z}\,|\,y, \mathbf{x})$. It is very important to note that one EM update has a computational complexity of O(d), where d is the number of input dimensions, instead of the O(d^3) associated with OLS regression. This efficiency comes at the cost of an iterative solution, instead of a one-shot solution for b as in OLS. It can be proved that this EM version of least squares regression is guaranteed to converge to the same solution as OLS [23].
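The EM recursion above can be sketched in a few lines of NumPy. This simplified version holds the noise variances psi_y and psi_z fixed rather than re-estimating them (the full algorithm updates them too, and keeps the variance terms dropped here), but it shows the O(d) per-iteration cost and the convergence to the OLS solution:

```python
import numpy as np

def em_least_squares(X, y, n_iter=5000):
    """EM-style least squares ('probabilistic backfitting'): each sweep costs
    O(d) per data point and avoids the (X^T X)^{-1} inversion. Simplified
    sketch: psi_y and psi_z are held fixed instead of re-estimated."""
    N, d = X.shape
    b = np.zeros(d)
    psi_y, psi_z = 1.0, np.ones(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        s = psi_y + psi_z.sum()
        resid = y - X @ b                        # y_i - b^T x_i
        Z = X * b + np.outer(resid, psi_z / s)   # E-step: posterior means <z_im>
        b = (Z * X).sum(axis=0) / col_sq         # M-step: b_m update
    return b

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(100)

b_em = em_least_squares(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # one-shot OLS for comparison
```

At the fixed point the residual is uncorrelated with every input dimension, which is exactly the OLS normal-equation condition, hence the agreement with `lstsq`.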
This new EM algorithm appears to only replace the matrix inversion in OLS by an
iterative method, as others have done with alternative algorithms [20, 21], although
the convergence guarantees of EM are an improvement over previous approaches.
The true power of this probabilistic formulation, though, becomes apparent when we
add a Bayesian layer that achieves the desired robustness in face of ill-conditioned
data.
3.1
Automatic Relevance Determination
From a Bayesian point of view, the parameters $b_m$ should be treated probabilistically so that we can integrate them out to safeguard against overfitting. For this purpose, as shown in Figure 1c, we introduce precision variables $\alpha_m$ over each regression parameter $b_m$:

$$p(\mathbf{b}\,|\,\boldsymbol{\alpha}) = \prod_{m=1}^{d}\left(\frac{\alpha_m}{2\pi}\right)^{\!\frac{1}{2}}\exp\left\{-\frac{\alpha_m}{2}\, b_m^2\right\}$$
$$p(\boldsymbol{\alpha}) = \prod_{m=1}^{d}\frac{b_\alpha^{\,a_\alpha}}{\Gamma(a_\alpha)}\,\alpha_m^{(a_\alpha - 1)}\exp\left\{-b_\alpha\, \alpha_m\right\} \qquad (3)$$

where $\boldsymbol{\alpha}$ is the vector of all $\alpha_m$. In order to obtain a tractable posterior distribution over all hidden variables $\mathbf{b}$, $z_{im}$ and $\boldsymbol{\alpha}$, we use a factorial variational approximation of the true posterior $Q(\boldsymbol{\alpha}, \mathbf{b}, Z) = Q(\boldsymbol{\alpha}, \mathbf{b})\,Q(Z)$. Note that the connection from the $\alpha_m$ to the corresponding $z_{im}$ in Figure 1c is an intentional design. Under this graphical model, the marginal distribution of $b_m$ becomes a Student t-distribution that allows traditional hypothesis testing [24]. The minimal factorization of the posterior into $Q(\boldsymbol{\alpha}, \mathbf{b})\,Q(Z)$ would not be possible without this special design.
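A quick numerical check of why this prior induces sparsity: integrating the precision out of the Normal-Gamma pair in Eq. (3) yields a heavy-tailed (Student-t) marginal over each coefficient. A sampling sketch, with illustrative hyperparameter values not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative hyperparameters (a_alpha = 6, b_alpha = 6 gives a t marginal
# with 2 * a_alpha = 12 degrees of freedom and unit scale).
a_alpha, b_alpha = 6.0, 6.0
alpha = rng.gamma(shape=a_alpha, scale=1.0 / b_alpha, size=200_000)
b = rng.normal(0.0, 1.0 / np.sqrt(alpha))   # b_m | alpha_m ~ N(0, 1/alpha_m)

# A Gaussian has excess kurtosis 0; the Normal-Gamma marginal is leptokurtic,
# i.e. sharply peaked at zero with heavy tails -- the sparsity-inducing shape.
kurt = np.mean(b ** 4) / np.mean(b ** 2) ** 2 - 3.0
```

The positive excess kurtosis is the numerical signature of the Student-t marginal mentioned in the text.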
The resulting augmented model has the following distributions:

$$y_i \,|\, \mathbf{z}_i \sim N\!\left(y_i;\, \mathbf{1}^T\mathbf{z}_i,\, \psi_y\right)$$
$$z_{im} \,|\, b_m, \alpha_m, x_{im} \sim N\!\left(z_{im};\, b_m x_{im},\, \psi_{z_m}/\alpha_m\right)$$
$$b_m \,|\, \alpha_m \sim N\!\left(b_m;\, 0,\, 1/\alpha_m\right)$$
$$\alpha_m \sim \mathrm{Gamma}\!\left(\alpha_m;\, a_\alpha, b_\alpha\right)$$
We now have a mechanism that infers the significance of each dimension's contribution to the observed output $y$. Since $b_m$ is zero mean, a very large $\alpha_m$ (equivalent to a very small variance of $b_m$) suggests that $b_m$ is very close to 0 and has no contribution to the output. An EM-like algorithm [25] can be used to find the posterior updates of all distributions. We omit the EM update equations due to space constraints as they are similar to the EM update above and only focus on the posterior update for $b_m$ and $\boldsymbol{\alpha}$:
$$\langle b_m \,|\, \alpha_m\rangle = \left(\sum_{i=1}^{N} x_{im}^2 + \psi_{z_m}\right)^{-1}\sum_{i=1}^{N}\langle z_{im}\rangle\, x_{im}$$
$$\sigma_{b_m|\alpha_m}^2 = \frac{\psi_{z_m}}{\alpha_m}\left(\sum_{i=1}^{N} x_{im}^2 + \psi_{z_m}\right)^{-1}$$
$$\hat{a}_\alpha = a_\alpha + \frac{N}{2}$$
$$\hat{b}_\alpha^{(m)} = b_\alpha + \frac{1}{2\psi_{z_m}}\left[\sum_{i=1}^{N}\langle z_{im}^2\rangle - \left(\sum_{i=1}^{N} x_{im}^2 + \psi_{z_m}\right)^{-1}\left(\sum_{i=1}^{N}\langle z_{im}\rangle\, x_{im}\right)^{2}\right] \qquad (4)$$

Note that the update equation for $\langle b_m|\alpha_m\rangle$ can be rewritten as:

$$\langle b_m|\alpha_m\rangle^{(n+1)} = \frac{\sum_{i=1}^{N} x_{im}^2}{\sum_{i=1}^{N} x_{im}^2 + \psi_{z_m}}\,\langle b_m|\alpha_m\rangle^{(n)} + \frac{\psi_{z_m}}{s}\,\frac{\sum_{i=1}^{N}\left(y_i - \langle\mathbf{b}|\boldsymbol{\alpha}\rangle^{(n)T}\mathbf{x}_i\right)x_{im}}{\sum_{i=1}^{N} x_{im}^2 + \psi_{z_m}} \qquad (5)$$
Eq. (5) demonstrates that in the absence of a correlation between the current input dimension and the residual error, the first term causes the current regression coefficient to decay. The resulting regression solution regularizes over the number of retained inputs in the final regression vector, performing a functionality similar to Automatic Relevance Determination (ARD) [8]. The update equations' algorithmic complexity remains O(d). One can further show that the marginal distribution of all $b_m$ is a t-distribution with $t = \langle b_m|\alpha_m\rangle / \sigma_{b_m|\alpha_m}$ and $2\hat{a}_\alpha$ degrees of freedom, which allows a principled way of determining whether a regression coefficient was excluded by means of standard hypothesis testing. Thus, Variational Bayesian Least Squares (VBLS) regression is a full Bayesian treatment of the linear regression problem.
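The pruning behavior of automatic relevance determination can be illustrated with a generic evidence-based ARD loop (a MacKay/RVM-style iteration with an explicit matrix inversion; this is a hedged stand-in for, not the O(d) variational recursion of, VBLS, and the data and clipping constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 100, 20
X = rng.standard_normal((N, d))
b_true = np.zeros(d)
b_true[:3] = [1.5, -2.0, 1.0]          # only 3 of 20 inputs are relevant
y = X @ b_true + 0.1 * rng.standard_normal(N)

alpha = np.ones(d)                     # per-dimension precisions on b_m
sigma2 = 1.0                           # observation noise variance
for _ in range(50):
    S = np.linalg.inv(np.diag(alpha) + X.T @ X / sigma2)   # posterior covariance of b
    mu = S @ X.T @ y / sigma2                              # posterior mean of b
    gamma = np.clip(1.0 - alpha * np.diag(S), 1e-12, 1.0)  # effective dofs per input
    alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-6, 1e6)  # ARD re-estimation
    sigma2 = np.sum((y - X @ mu) ** 2) / (N - gamma.sum())

relevant = alpha < 1e3                 # huge alpha_m pins b_m to zero => pruned
```

Irrelevant dimensions see their precision diverge and their coefficient collapse to zero, mirroring the decay behavior of the first term in Eq. (5).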
4
Evaluation
We now turn to the application and evaluation of VBLS in the context of predicting EMG data from neural data recorded in M1 of monkeys. The key questions addressed in this application were i) whether EMG data can be reconstructed accurately with good generalization, ii) how many neurons contribute to the reconstruction of each muscle and iii) how well the VBLS algorithm compares to other analysis techniques. The underlying assumption of this analysis is that the relationship between neural firing and muscle activity is approximately linear.
4.1
Data sets
We investigated data from two different experiments. In the first experiment by Sergio & Kalaska [9], the monkey moved a manipulandum in a center-out task in eight different directions, equally spaced in a horizontal planar circle of 8cm radius. A variation of this experiment held the manipulandum rigidly in place, while the monkey applied isometric forces in the same eight directions. In both conditions, movement or force, feedback was given through visual display on a monitor. Neural activity for 71 M1 neurons was recorded in all conditions (2400 data points for each neuron), along with the EMG outputs of 11 muscles.
The second experiment by Kakei et al. [10] involved a monkey trained to perform eight different combinations of wrist flexion-extension and radial-ulnar movements while in three different arm postures (pronated, supinated and midway between the two). The data set consisted of neural data of 92 M1 neurons that were recorded
[Figure 2: bar plots of train and test nMSE for OLS, STEP, PLS, LASSO, VBLS and ModelSearch; panels (a) Sergio & Kalaska [9] data and (b) Kakei et al. [10] data]
Figure 2: Normalized mean squared error for Cross-validation Sets (6-fold for [10] and 8-fold for [9])
                            VBLS     PLS      STEP     LASSO
Sergio & Kalaska data set   93.6%    7.44%    8.71%    8.42%
Kakei et al. data set       87.1%    40.1%    72.3%    76.3%

Table 1: Percentage neuron matches between baseline and all other algorithms, averaged over all muscles in the data set
at all three wrist postures (producing 2664 data points for each neuron) and the EMG outputs of 7 contributing muscles. In all experiments, the neural data was represented as average firing rates and was time aligned with EMG data based on analyses that are outside of the scope of this paper.
4.2
Methods
For the Sergio & Kalaska data set, a baseline comparison of good EMG reconstruction was obtained through a limited combinatorial search over possible regression models. A particular model is characterized by a subset of neurons that is used to predict the EMG data. Given 71 neurons, theoretically 2^71 possible models exist. This value is too large for an exhaustive search. Therefore, we considered only possible combinations of up to 20 neurons, which required several weeks of computation on a 30-node cluster computer. The optimal predictive subset of neurons was determined from an 8-fold cross validation. This baseline study served as a comparison for PLS, stepwise regression, LASSO regression, OLS and VBLS. The five other algorithms used the same validation sets employed in the baseline study. The number of PLS projections for each data fit was found by leave-one-out cross-validation. Stepwise regression used Matlab's "stepwisefit" function. LASSO regression was implemented, manually choosing the optimal tuning parameter over all cross-validation sets. OLS was implemented using a small ridge regression parameter of 10^{-10} in order to avoid ill-conditioned matrix inversions.
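To see why the baseline search had to be truncated, a quick count of the model space (a short arithmetic sketch, not code from the study):

```python
from math import comb

# With 71 recorded neurons there are 2^71 possible neuron subsets; even
# restricting models to at most 20 neurons leaves an astronomically large
# space, which is why the search took weeks on a 30-node cluster.
all_models = 2 ** 71
up_to_20 = sum(comb(71, k) for k in range(1, 21))
```

Even the truncated space contains on the order of 10^17 candidate models, so only a limited combinatorial search is feasible.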
[Figure 3: bar plots of the average number of relevant neurons found per muscle by STEP, PLS, LASSO, VBLS and ModelSearch; panels (a) Sergio & Kalaska [9] data (11 muscles) and (b) Kakei et al. [10] data (7 muscles)]
Figure 3: Average Number of Relevant Neurons found over Cross-validation Sets (6-fold for [10] and 8-fold for [9])
The average number of relevant neurons was calculated over all 8 cross-validation sets and a final set of relevant neurons was reached for each algorithm by taking the common neurons found to be relevant over the 8 cross-validation sets. Inference of relevant neurons in PLS was based on the subspace spanned by the PLS projections, while relevant neurons in VBLS were inferred from t-tests on the regression parameters, using a significance of p < 0.05. Stepwise regression and LASSO regression determined the number of relevant neurons from the inputs that were included in the final model. Note that since OLS retained all input dimensions, this algorithm was omitted in relevant neuron comparisons.
Analogous to the first data set, a combinatorial analysis was performed on the Kakei et al. data set in order to determine the optimal set of neurons contributing to each muscle (i.e. producing the lowest possible prediction error) in a 6-fold cross-validation. PLS, stepwise regression, LASSO regression, OLS and VBLS were applied using the same cross-validation sets, employing the same procedure described for the first data set.
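The normalized mean squared error (nMSE) reported in Figure 2 can be computed with a simple k-fold scheme. A sketch on synthetic data (not the monkey recordings), with an OLS-style fit standing in for any of the compared algorithms:

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by the variance of the target."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def kfold_nmse(X, y, fit, k=6):
    """Average train/test nMSE over k contiguous cross-validation folds."""
    idx = np.arange(X.shape[0])
    train_err, test_err = [], []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        b = fit(X[tr], y[tr])
        train_err.append(nmse(y[tr], X[tr] @ b))
        test_err.append(nmse(y[fold], X[fold] @ b))
    return np.mean(train_err), np.mean(test_err)

# Synthetic example; the fit function is a placeholder for VBLS, PLS, etc.
rng = np.random.default_rng(4)
X = rng.standard_normal((120, 8))
y = X @ rng.standard_normal(8) + 0.2 * rng.standard_normal(120)
ols_fit = lambda A, t: np.linalg.lstsq(A, t, rcond=None)[0]
train_nmse, test_nmse = kfold_nmse(X, y, ols_fit, k=6)
```

A large gap between train and test nMSE is the overfitting signature that distinguishes the algorithms in Figure 2.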
4.3
Results
Figure 2 shows that, in general, EMG traces seem to be well predictable from M1 neural firing. VBLS resulted in a generalization error comparable to that produced by the baseline study. In the Kakei et al. dataset, all algorithms performed similarly, with LASSO regression performing a little better than the rest. However, OLS, stepwise regression, LASSO regression and PLS performed far worse on the Sergio & Kalaska dataset, with OLS regression attaining the worst error. Such performance is typical for traditional linear regression methods on ill-conditioned high dimensional data, motivating the development of VBLS. The average number of relevant neurons found by VBLS was slightly higher than the baseline study, as seen in Figure 3. This result is not surprising as the baseline study did not consider all possible combinations of neurons. Given the good generalization results of VBLS, it seems that the Bayesian approach regularized the participating neurons sufficiently so that no overfitting occurred. Note that the results for muscle 6 and 7 in Figure 3b seem to be due to some irregularities in the data and should be considered outliers. Table 1 demonstrates that the relevant neurons identified by VBLS coincided at a very high percentage with those of the baseline results, while PLS, stepwise regression and LASSO regression had inferior outcomes.
Thus, in general, VBLS achieved comparable performance with the baseline study when reconstructing EMG data from M1 neurons. While VBLS is an iterative statistical method, which performs slower than classical "one-shot" linear least squares methods (i.e., on the order of several minutes for the data sets in our analyses), it achieved comparable results with our combinatorial model search, which took weeks on a cluster computer.
5
Discussion
This paper addressed the problem of analyzing high dimensional data with linear regression techniques, as encountered in neuroscience and the new field of brain-machine interfaces. To achieve robust statistical results, we introduced a novel Bayesian technique for linear regression analysis with automatic feature detection, called Variational Bayesian Least Squares. Comparisons with classical linear regression methods and a "gold standard" obtained from a brute force search over all possible linear models demonstrate that VBLS performs very well without any manual parameter tuning, such that it has the quality of a "black box" statistical analysis technique.
A point of concern against the VBLS algorithm is how the variational approximation in this algorithm affects the quality of function approximation. It is known that factorial approximations to a joint distribution create more peaked distributions, such that one could potentially assume that VBLS might tend to overfit. However, in the case of VBLS, a more peaked distribution over $b_m$ pushes the regression parameter closer to zero. Thus, VBLS will be on the slightly pessimistic side of function fitting and is unlikely to overfit. Future evaluations and comparisons with Markov Chain Monte Carlo methods will reveal more details of the nature of the variational approximation. Regardless, it appears that VBLS could become a useful drop-in replacement for various classical regression methods. It lends itself to incremental implementation as would be needed in real-time analyses of brain information.
Acknowledgments
This research was supported in part by National Science Foundation grants ECS-0325383, IIS-0312802, IIS-0082995, ECS-0326095, ANI-0224419, a NASA grant AC#98-516, an AFOSR grant on Intelligent Control, the ERATO Kawato Dynamic Brain Project funded by the Japanese Science and Technology Agency, the ATR Computational Neuroscience Laboratories and by funds from the Veterans Administration Medical Research Service.
References
[1] M.A. Nicolelis. Actions from thoughts. Nature, 409:403–407, 2001.
[2] D.M. Taylor, S.I. Tillery, and A.B. Schwartz. Direct cortical control of 3d neuroprosthetic devices. Science, 296:1829–1832, 2002.
[3] J.R. Wolpaw and D.J. McFarland. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proceedings of the National Academy of Sciences, 101:17849–17854, 2004.
[4] Y. Kamitani and F. Tong. Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8:679, 2004.
[5] J.D. Haynes and G. Rees. Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nature Neuroscience, 8:686, 2005.
[6] J. Wessberg and M.A. Nicolelis. Optimizing a linear algorithm for real-time robotic control using chronic cortical ensemble recordings in monkeys. Journal of Cognitive Neuroscience, 16:1022–1035, 2004.
[7] S. Musallam, B.D. Corneil, B. Greger, H. Scherberger, and R.A. Andersen. Cognitive control signals for neural prosthetics. Science, 305:258–262, 2004.
[8] R.M. Neal. Bayesian learning for neural networks. PhD thesis, Dept. of Computer Science, University of Toronto, 1994.
[9] L.E. Sergio and J.F. Kalaska. Changes in the temporal pattern of primary motor cortex activity in a directional isometric force versus limb movement task. Journal of Neurophysiology, 80:1577–1583, 1998.
[10] S. Kakei, D.S. Hoffman, and P.L. Strick. Muscle and movement representations in the primary motor cortex. Science, 285:2136–2139, 1999.
[11] S. Kakei, D.S. Hoffman, and P.L. Strick. Direction of action is represented in the ventral premotor cortex. Nature Neuroscience, 4:1020–1025, 2001.
[12] E. Todorov. Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience, 3:391–398, 2000.
[13] N.R. Draper and H. Smith. Applied Regression Analysis. Wiley, 1981.
[14] S. Derksen and H.J. Keselman. Backward, forward and stepwise automated subset selection algorithms: Frequency of obtaining authentic and noise variables. British Journal of Mathematical and Statistical Psychology, 45:265–282, 1992.
[15] W.F. Massey. Principal component regression in exploratory statistical research. Journal of the American Statistical Association, 60:234–246, 1965.
[16] S. Schaal, S. Vijayakumar, and C.G. Atkeson. Local dimensionality reduction. In M.I. Jordan, M.J. Kearns, and S.A. Solla, editors, Advances in Neural Information Processing Systems. MIT Press, 1998.
[17] I.E. Frank and J.H. Friedman. A statistical view of some chemometric regression tools. Technometrics, 35:109–135, 1993.
[18] H. Wold. Soft modeling by latent variables: The nonlinear iterative partial least squares approach. In J. Gani, editor, Perspectives in Probability and Statistics, Papers in Honor of M. S. Bartlett. Academic Press, 1975.
[19] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[20] V. Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13:354–356, 1969.
[21] T.J. Hastie and R.J. Tibshirani. Generalized Additive Models. Number 43 in Monographs on Statistics and Applied Probability. Chapman and Hall, 1990.
[22] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[23] A. D'Souza, S. Vijayakumar, and S. Schaal. The Bayesian backfitting relevance vector machine. In Proceedings of the 21st International Conference on Machine Learning. ACM Press, 2004.
[24] A. Gelman, J. Carlin, H.S. Stern, and D.B. Rubin. Bayesian Data Analysis. Chapman and Hall, 2000.
[25] Z. Ghahramani and M.J. Beal. Graphical models and variational methods. In D. Saad and M. Opper, editors, Advanced Mean Field Methods - Theory and Practice. MIT Press, 2000.
How fast to work: Response vigor, motivation
and tonic dopamine
Yael Niv1,2, Nathaniel D. Daw2, Peter Dayan2
1 ICNC, Hebrew University, Jerusalem; 2 Gatsby Computational Neuroscience Unit, UCL
[email protected] {daw,dayan}@gatsby.ucl.ac.uk
Abstract
Reinforcement learning models have long promised to unify computational, psychological and neural accounts of appetitively conditioned behavior. However, the bulk of data on animal conditioning comes from
free-operant experiments measuring how fast animals will work for reinforcement. Existing reinforcement learning (RL) models are silent about
these tasks, because they lack any notion of vigor. They thus fail to address the simple observation that hungrier animals will work harder for
food, as well as stranger facts such as their sometimes greater productivity even when working for irrelevant outcomes such as water. Here,
we develop an RL framework for free-operant behavior, suggesting that
subjects choose how vigorously to perform selected actions by optimally
balancing the costs and benefits of quick responding. Motivational states
such as hunger shift these factors, skewing the tradeoff. This accounts
normatively for the effects of motivation on response rates, as well as
many other classic findings. Finally, we suggest that tonic levels of
dopamine may be involved in the computation linking motivational state
to optimal responding, thereby explaining the complex vigor-related effects of pharmacological manipulation of dopamine.
1 Introduction
A banal, but nonetheless valid, behaviorist observation is that hungry animals work harder
to get food [1]. However, associated with this observation are two stranger experimental
facts and a large theoretical failing. The first weird fact is that hungry animals will in some
circumstances work more vigorously even for motivationally irrelevant outcomes such as
water [2, 3], which seems highly counterintuitive. Second, contrary to the emphasis theoretical accounts have placed on the effects of dopamine (DA) on learning to choose between
actions, the most overt behavioral effects of DA interventions are similar swings in undirected vigor [4], at least part of which appear immediately, without learning [5]. Finally,
computational theories fail to deliver on the close link they trumpet between DA, behavior,
and reinforcement learning (RL; eg [6]), as they do not address the whole experimental
paradigm of free-operant tasks [7], whence hail those and many other results.
Rather than the standard RL problem of discrete choices between alternatives at prespecified timesteps [8], free-operant experiments investigate tasks in which subjects pace their
own responding (typically on a lever or other manipulandum). The primary choice in these
tasks is of how rapidly/vigorously to behave, rather than what behavior to choose (as typically only one relevant action is available). RL models are silent about these aspects, and
thus fail to offer a principled understanding of the policies selected by the animals.
[Figure 1 appears here: panels (a)-(c), with hungry vs. sated response curves; axes include rate per minute, responses/min, seconds since reinforcement, reinforcements per hour, LPs in 30 minutes, and FR schedule.]
Figure 1: (a) Leverpress (blue, right) and consummatory nose poke (red, left) response
rates of rats leverpressing for food on a modified RI30 schedule. Hungry rats (open circles) clearly press the lever at a higher rate than sated rats (filled circles). Data from [11],
averaged over 19 rats in each group. (b) The relationship between rate of responding and
rate of reinforcement (reciprocal of the interval) on an RI schedule, is hyperbolic (of the
form y = B·x/(x + x0)). This is an instantiation of Herrnstein's matching law for one
response (adapted from [9]). (c) Total number of leverpresses per session averaged over
five 30 minute sessions by rats pressing for food on different FR schedules. Rats with
nucleus accumbens 6-OHDA dopamine lesions (gray) press significantly less than control
rats (black), with the difference larger for higher ratio requirements. Adapted from [12].
Here, we address these issues by constructing an RL account of behavior rates in freeoperant settings (Sections 2,3). We consider optimal control in a continuous-time Markov
Decision Process (MDP), in which agents must choose both an action and the latency with
which to emit it (ie how vigorously, or at what instantaneous rate to perform it). Our model
treats response vigor as being determined normatively, as the outcome of a battle between
the cost of behaving more expeditiously and the benefit of achieving desirable outcomes
more quickly. We show that this simple, normative framework captures many classic features of animal behavior that are obscure in our and others' earlier treatments (Section 4).
These include the characteristic time-dependent profiles of response rates on tasks with different payoff scheduling [7], the hyperbolic relationship between response rate and payoff
[9], and the difference in response rates between tasks in which reinforcements are allocated based on the number of responses emitted and those allocating reinforcements based
on the passage of time [10].
A key feature of this model is that response rates are strongly dependent on the expected
average reward rate, because this determines the opportunity cost of sloth. By influencing
the value of reinforcers ? and through this, the average reward rate ? motivational states
such as hunger influence the output response latencies (and not only response choice).
Thus, in our model, hungry animals should optimally also work harder for water, since
in typical circumstances, this should allow them to return more quickly to working for
food. Further, we identify tonic levels of dopamine with the representation of average
reward rate, and thereby suggest an account of a wealth of experiments showing that DA
influences response vigor [4, 5], thus complementing existing ideas about the role of phasic
DA signals in learned action selection (Section 5).
2 Free-operant behavior
We consider the free-operant scenario common in experimental psychology, in which an
animal is placed in an experimental chamber, and can choose freely which actions to emit
and when. Most actions have no programmed consequences; however, one action (eg leverpressing; LP) is rewarded with food (which falls into a food magazine) according to an
experimenter-determined schedule of reinforcement. Food delivery makes a characteristic
sound, signalling its availability for harvesting via a nose poke (NP) into the magazine.
The schedule of reinforcement defines the (possibly stochastic) relationship between the
delivery of a reward and one or both of (a) the number of LPs, and (b) the time since the
last reward was delivered. In common use are fixed-ratio (FR) schedules, in which a fixed
number of LPs is required to obtain a reinforcer; random-ratio (RR) schedules, in which
each LP has a constant probability of being reinforced; and random interval (RI) schedules,
in which the first LP after an (exponentially distributed) interval of time has elapsed, is
reinforced. Schedules are often labelled by their type and a parameter, so RI30 is a random
interval schedule with the exponential waiting time having a mean of 30 seconds [7].
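The three schedule types can be sketched as small event generators. This is an illustrative sketch of the schedule logic only (the function names and the FR5 harness are ours, not part of the paper):

```python
import random

def fr_schedule(n):
    """Fixed ratio (FR n): every n-th leverpress is reinforced."""
    count = 0
    def press():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return press

def rr_schedule(p):
    """Random ratio (RR): each press is reinforced independently with probability p."""
    return lambda: random.random() < p

def ri_schedule(mean_interval):
    """Random interval (RI): the first press after an exponential waiting time collects.

    The overshoot past the arming time is discarded for simplicity.
    """
    state = {"t": 0.0, "armed_at": random.expovariate(1.0 / mean_interval)}
    def press(dt):
        state["t"] += dt            # dt: time elapsed since the previous press
        if state["t"] >= state["armed_at"]:
            state["t"] = 0.0
            state["armed_at"] = random.expovariate(1.0 / mean_interval)
            return True
        return False
    return press

# e.g. an FR5 schedule reinforces exactly every fifth press
press = fr_schedule(5)
outcomes = [press() for _ in range(10)]
```

Note the structural difference the model exploits below: under FR and RR the payoff of a press does not depend on how long the animal waited, whereas under RI waiting longer makes the next press more likely to be reinforced.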
Different schedules induce different patterns of responding [7]. Fig 1a shows response
metrics from rats leverpressing on an RI30 schedule. Leverpressing builds up to a relatively constant rate following a rather long pause after gaining each reward, during which
the food is consumed. Hungry rats leverpress more vigorously than sated ones. A similar overall pattern is also characteristic of responding on RR schedules. Figure 1b shows
the total number of LP responses in a 30 minute session for different interval schedules.
The hyperbolic relationship between the reward rate (the inverse of the interval) and the
response rate is a classic hallmark of free operant behavior [9].
3 The model
We model a free-operant task as a continuous MDP. Based on its state, the agent chooses
both an action (a), and a latency (? ) at which to emit it. After time ? has elapsed, the
action is completed, the agent receives rewards and incurs costs associated with its choice,
and then selects a new (a, ? ) pair based on its new state. We define three possible actions
a ∈ {LP, NP, other}, where we take a = other to include the various miscellaneous
behaviors such as grooming, rearing, and sniffing which animals typically perform during
the experiment. For simplicity we consider unit actions, with the latency ? related to the
vigor with which this unit is performed. To account for consumption time (which is nonnegligible [11, 13]), if the agent nose-pokes and food is available, a predefined time t eat
passes before the next decision point (and the next state) is reached.
Crucially, performing actions incurs costs as well as potentially gains rewards. Following
Staddon [14], we assume one part of the cost of an action to be proportional to the vigor of
its execution, ie inversely proportional to ? . The constant of proportionality Kv depends on
both the previous and the current action, since switching between different action types can
require travel between different parts of the experimental chamber (say, the magazine to
the lever), and can thus be more costly. Each action also incurs a fixed 'internal' reward or
cost of ρ(a) per unit, typically with other being rewarding. The reinforcement schedule
defines the probability of reward delivery for each state-action-latency triplet. An available
reward can be harvested by a = NP into the magazine, and we assume that the thereby
obtained subjective utility U (r) of the food reward is motivation-dependent, such that food
is worth more to a hungry animal than to a sated one.
We consider the simplified case of a state space comprised of all the parameters relevant
to the task. Specifically, the state space includes the identity of the previous action, an
indicator as to whether a reward is available in the food magazine, and, as necessary, the
number of LPs since the previous reinforcement (for FR) or the elapsed time since the previous LP (for RI). The transitions between the states P(S′|S, a, τ) and the reward function
Pr(S, a, τ) are defined by the dynamics of the schedule of reinforcement, and all rewards
and costs are harvested at state transitions and considered as point events. In the following
we treat the problem of optimising a policy (which action to take and with what latency,
given the state) in order to maximize the average rate of return (rewards minus costs per
time). An exponentially discounted model gives the same qualitative results.
In the average reward case [15, 16], the Bellman equation for the long-term differential (or
[Figure 2 appears here: panels (a)-(c); axes include rate per minute, seconds since reinforcement, LP latency (sec), #LPs in 5 min, and reinforcement rate/min.]
Figure 2: Data generated by the model captures the essence of the behavioral data: Leverpress (solid blue; circles) and nose poke (dashed red; stars) response rates on (a) an RR10
schedule and (b) a matched (yoked) RI schedule show constant LP rates which are higher
for the ratio schedule. (c) The relationship between the total number of responses (circles)
and rate of reinforcement is hyperbolic (solid line: hyperbolic curve fit). The mean latency
to leverpress (dashed line) decreases as the rate of reinforcement increases.
average-adjusted) value of state S is:
V*(S) = max_{a,τ} { ρ(a) − Kv(aprev, a)/τ + U(r)·Pr(S, a, τ) − τ·r + ∫ dS′ P(S′|S, a, τ)·V*(S′) }    (1)
where r is the long-term average reward rate (whose subtraction from the value quantifies
the opportunity cost of delay). Building on ideas from [16], we suggest that the average
reward rate is reported by tonic (baseline) levels of dopamine (and not serotonin [16]) in
basal ganglia structures relevant for action selection, and that changes in tonic DA (eg as a
result of pharmacological interventions) would thus alter the assumed average reward rate.
In this paper, we eschew learning, and examine the steady state behavior that arises when
actions are chosen stochastically (via the so-called softmax or Boltzmann distribution) from
the optimal one-step look-ahead model-based Q(S, a, τ) state-action-latency values. For
ratio schedules, the simple transition structure of the task allows the Bellman equation to
be solved analytically to determine the Q values. For interval schedules, we use average-reward value iteration [15] with time discretized at a resolution of 100ms. For simulations
(eg of dopaminergic manipulations) where r was assumed to change independent of any
change in the task contingencies, we used value iteration to find values approximately
satisfying the Bellman equation (which is no longer exactly solvable). Our overriding aim
is to replicate basic aspects of free operant behavior qualitatively, in order to understand
the normative foundations of response vigor. We do not fit the parameters of the model
to experimental data in a quantitative way, and the results we describe below are general,
robust characteristics of the model.
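For intuition about what the value iteration computes, the RI case admits a simple closed form for any fixed-latency policy: by the memorylessness of the exponential interval, each press collects with probability p = 1 − exp(−τ/T), so a policy that always presses with latency τ earns U − (1/p)·Kv/τ per reward cycle of expected length τ/p + t_eat. The sketch below brute-forces the τ maximizing this average rate; the parameter values are illustrative choices of ours, not the paper's fitted values:

```python
import math

def avg_reward_rate(tau, U=10.0, K=1.0, T=10.0, t_eat=2.0):
    """Average net reward per second for 'always press with latency tau' on RI(T).

    p = 1 - exp(-tau/T) is the chance the next press is reinforced (exact for
    exponential intervals, by memorylessness). A reward cycle therefore
    contains 1/p presses on average, each costing K/tau in vigor.
    """
    p = 1.0 - math.exp(-tau / T)
    net_reward = U - (1.0 / p) * (K / tau)   # utility minus summed vigor costs
    cycle_time = (1.0 / p) * tau + t_eat     # pressing time plus consumption
    return net_reward / cycle_time

# brute-force search over latencies: the optimum is interior --
# pressing faster wastes vigor, pressing slower forgoes reward
grid = [0.5 + 0.1 * i for i in range(200)]   # latencies 0.5s .. 20.4s
best_tau = max(grid, key=avg_reward_rate)
```

With these numbers best_tau comes out near 4 s; raising the vigor cost K slows the optimal policy and raising U speeds it up, anticipating the motivational effects discussed in Section 5.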
4 Results
Fig 2a depicts the behavior of our model on an RR10 schedule. In rough accordance with
the behavior displayed by animals (which is similar to that shown in Fig 1a), the LP rate
is constant over time, bar a pause for consumption. Fig 2b depicts the model's behavior
in a yoked random interval schedule, in which the intervals between rewards were set to
match exactly the intervals obtained by the agent trained on the ratio schedule in Fig 2a.
The response rate is again constant over time, but it is also considerably lower than that
in the corresponding RR schedule, although the external reward density is similar. This
phenomenon has also been observed experimentally, and although the apparent anomaly
has been much discussed in the associative learning literature, its explanation is not fully
resolved [10]. Our model suggests that it is the result of an optimal cost/benefit tradeoff.
We can analyse this difference by considering the Q values for leverpressing at different
latencies in random schedules
Q(Snr, LP, τ) = ρ(LP) − Kv(LP, LP)/τ − τ·r + P(Sr|τ)·V*(Sr) + [1 − P(Sr|τ)]·V*(Snr)    (2)
where we are looking at consecutive leverpresses in the absence of available reward, and
Sr and Snr designate the states in which a reward is or is not available in the magazine,
respectively.
In ratio schedules, since P(Sr|τ) is independent of τ, the optimizing latency is τ*LP = √(Kv(LP, LP)/r), its inverse defining the optimal rate of leverpressing. In interval schedules, however, P(Sr|τ) = 1 − exp{−τ/T}, where T is the schedule interval.
Taking the derivative of eq. (2), we find that the optimal latency to leverpress, τ*LP, satisfies Kv(LP, LP)/(τ*LP)² − r + (1/T)[V*(Sr) − V*(Snr)]·exp{−τ*LP/T} = 0. Although no longer
analytically solvable, it is easily seen that this latency will always be longer than that found
above for ratio schedules. Intuitively, since longer inter-response intervals increase the
probability of reward per press in interval schedules but not in ratio schedules, the optimal
leverpressing rate is lower in the former than in the latter.
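This claim can be checked numerically. The τ-dependent part of eq. (2) is −Kv/τ − τ·r + P(reward|τ)·ΔV, with ΔV = V*(Sr) − V*(Snr) held fixed; for a ratio schedule P is constant in τ, so the maximizer is √(Kv/r), while for an interval schedule P(reward|τ) = 1 − exp(−τ/T). A grid-search sketch with arbitrary illustrative constants (the values of Kv, r, T and ΔV are our choices):

```python
import math

K, r, T, dV = 1.0, 0.1, 10.0, 5.0   # vigor cost, avg reward rate, interval, V*(Sr)-V*(Snr)

def q_of_tau(tau, p_reward):
    # latency-dependent part of eq. (2): vigor cost, opportunity cost, reward chance
    return -K / tau - tau * r + p_reward(tau) * dV

grid = [0.1 * i for i in range(1, 1000)]   # latencies 0.1s .. 99.9s

# ratio schedule: reward probability per press does not depend on latency
tau_ratio = max(grid, key=lambda t: q_of_tau(t, lambda tau: 0.3))

# interval schedule: waiting longer makes the next press more likely to pay off
tau_interval = max(grid, key=lambda t: q_of_tau(t, lambda tau: 1 - math.exp(-tau / T)))
```

tau_ratio lands on the closed-form √(Kv/r) ≈ 3.2 s while tau_interval is substantially longer, reproducing the slower responding on yoked interval schedules (Fig 2a,b).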
Fig 2c shows the average number of LPs in a 5 minute session for different interval schedules. This 'molar' measure of rate shows the well documented hyperbolic relationship (cf
Fig 1b). On the 'molecular' level of single action choices, the mean latency ⟨τLP⟩ between
consecutive LPs decreases as the probability of reinforcement increases. This measure of
response vigor is actually more accurate than the overall response measure, as it is not contaminated by competition with other actions, or confounded with the number of reinforcers
per session for different schedules (and the time forgone when consuming them). For this
reason, although we (correctly; see [13]) predict that inter-response latency should slow for
higher ratio requirements, raw LP counts can actually increase, as in Fig. 1c, probably due
to fewer rewards and less time spent eating [13].
5 Drive and dopamine
Having provided a qualitative account of the basic patterns of free operant rates of behavior,
we turn to the main theoretical conundrum ? the effects of drive and DA manipulations
on response vigor. The key to understanding these is the role that the average reward r
plays in the tradeoffs determining optimal response vigor. In effect, the average expected
reward per unit time quantifies the opportunity cost for doing nothing (and receiving no
reward) for that time; its increase thus produces general pressure for faster work. A direct
consequence of making the agent hungrier is that the subjective utility of food is enhanced.
This will have interrelated effects on the optimal average reward r, the optimal values V ? ,
and the resultant optimal action choices and vigors. Notably, so long as the policy obtains
food, its average reward rate will increase.
Consider a fixed- or random-ratio schedule. The increase in r will increase the optimal
LP rate 1/τ*LP = √(r/Kv(LP, LP)), as the higher reward utility offsets higher procurement
costs. Importantly, because the optimal τ* has a similar dependence on r even for actions
irrelevant to obtaining food, they also become more vigorous. The explanation of this effect
is presented graphically in Fig 3e. The higher r increases the cost of sloth, since every
τ time without reward forgoes an expected (τ·r) mean reward. Higher average rewards
penalize late actions more than they do early ones, thus tilting action selection toward faster
behavior, for all pre-potent actions. Essentially, hunger encourages the agent to complete
irrelevant actions faster, in order to be able to resume leverpressing more quickly.
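The 'energizing' side of this argument reduces to one line of algebra: for any action whose only τ-dependent terms are the vigor cost Kv/τ and the opportunity cost τ·r, the optimal latency is τ* = √(Kv/r), which shrinks as r grows regardless of whether the action itself procures food. A toy check (the cost constants and reward rates are arbitrary illustrative numbers):

```python
import math

def optimal_latency(K_v, r):
    """Latency minimizing K_v/tau + tau*r (vigor cost plus opportunity cost)."""
    return math.sqrt(K_v / r)

K_lp, K_other = 1.0, 0.5          # vigor cost constants for the two actions
r_sated, r_hungry = 0.05, 0.10    # hunger raises the average reward rate r

# hunger speeds up BOTH the food-relevant and the food-irrelevant action,
# and by the same factor sqrt(r_hungry / r_sated)
speedup = {a: optimal_latency(K, r_sated) / optimal_latency(K, r_hungry)
           for a, K in [("LP", K_lp), ("other", K_other)]}
```

Both entries of speedup equal √2 here: doubling r speeds every pre-potent action by the same multiplicative factor, which is the model's account of generalized drive.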
For other schedules, the same effects generally hold (although the analytical reasoning is
complicated by the fact that the optimal latencies may in these cases depend not only on the
new average reward but also on the new values V*). Fig 3a shows simulated responding
on an RI25 schedule in which the internal reward for the food-irrelevant action other has
been set high enough to warrant non-negligible base responding. Fig 3b shows that when
[Figure 3 appears here: panels (a)-(f); axes include rate per minute, sec from reinforcement, mean latency (sec), Q value/prob, and LPs in 30 minutes.]
Figure 3: The effects of drive on response rates. (a) Responding on a RI25 schedule, with
high internal rewards (0.35) for a = other (open circles). (b) The effects of hunger: U (r)
was changed from 10 to 15. (c) The effect of an irrelevant drive (hungry animals leverpressing for water rewards): r was increased by 4% compared to (a). (d) Mean latencies
to responding ⟨τ⟩ for LP and other in baseline (a; black), increased hunger (b; white) and
irrelevant drive (c; gray). (e) Q values for leverpressing at different latencies τ. In black
(top) are the unadjusted Q values, before subtracting (τ·r). In red (middle, solid) and green
(bottom, solid) are the values adjusted for two different average reward rates. The higher
reward rate penalizes late actions more, thereby causing faster responding, as shown by
the corresponding softmaxed action probability curves (dashed). (f) Simulation of DA depletion: overall leverpress count over 30 minute sessions (each bar averaging 15 sessions),
for different FR requirements (bottom). In black is the control condition, and in gray is
simulated DA depletion, attained by lowering r by 60%. The effects of the depletion seem
more pronounced in higher schedules (compare to Fig 1c), but this actually results from the
interaction with the number of rewards attained (see text).
the utility of food is increased by 50%, the agent chooses to leverpress more, at the expense
of other actions. This illustrates the 'directing' effect of motivation, by which the agent is
directed more forcefully toward the motivationally relevant action [17]. Furthermore, the
second, 'driving' effect, by which motivation increases vigor globally [17], is illustrated in
Fig 3d which shows that, in fact, the latency to both actions has decreased. Thus, although
selected less often, when other is selected, it is performed more vigorously than it was
when the agent was sated.
This general drive effect can be better isolated if we examine hungry agents leverpressing
for water (rather than food), without competition from actions for food. We can view our
leverpressing MDP as a portion of a larger one, which also includes (for instance) occasional opportunities for visits to a home cage where food is available. Without explicitly
specifying all this extra structure, a good approximation is to take hunger as again causing
an increase in the global rate of reinforcement r, reflecting the increase in the utility of
food received elsewhere. Fig 3c shows the effects on responding on an interval schedule,
of estimating the average reward rate to be 4% higher than in Fig 3a, and deriving new Q
values from the previous V* with this new r, as illustrated in Fig 3e. As above, the adjusted
vigors of all behaviors are faster (Fig 3d, gray bars), as a result of the higher 'drive'.
How do these drive effects relate to dopamine? Pharmacological and lesion studies show
that enhancing DA levels (through agonists such as amphetamine) increases general activity
[5, 18, 19], while depleting or antagonising DA causes a general slowing of responding
(eg [4]). Fig. 1c is representative of a host of results from the lab of Salamone [4, 12]
which show that lower levels of DA in the nucleus accumbens (a structure in the basal
ganglia implicated in action selection) result in lower response rates. This effect seems
more pronounced in higher fixed-ratio schedules, those requiring more work per reinforcer.
As a result of this apparent dependence on the response requirement, Salamone and his
colleagues have hypothesized that DA enables animals to overcome higher work demands.
We suggest that tonic levels of DA represent the average reward rate (a role tentatively
proposed for serotonin in [16]). Thus a higher tonic level of DA represents a situation akin
to higher drive, in which behavior is more vigorous, and lower tonic levels of DA cause a
general slowing of behavior. Fig. 3f shows the simulated response counts for different FR
schedules in two conditions. The control condition is the standard model described above;
DA depletion was modeled by decreasing tonic DA levels (and therefore r) to 40% of their
original levels. The results match the data in Fig. 1c. Here, the apparently small effect on
the number of LPs for low ratio schedules actually arises because of the large amount of
time spent eating. Thus, according to the model DA is not really allowing animals to cope
with higher work requirements, but rather is important for optimal choice of vigor at any
work requirement, with the slowing effect of DA depletion more prominent (in the crude
measure of LPs per session) when more time is spent leverpressing.
6 Discussion
The present model brings the computational machinery and neural grounding of RL models
fully into contact with the vast reservoir of data from free-operant tasks. Classic quantitative accounts of operant behavior (such as Herrnstein's matching law [9], and variations
such as melioration) lack RL's normative grounding in sound control theory, and tend instead toward descriptive curve-fitting. Most of these theories do not address the fine-scale
(molecular) structure of behavior, and instead concentrate on fairly crude molar measures
such as total number of leverpresses over long durations. In addition to the normative starting point it offers for investigations of response vigor, our theory provides a relatively fine
scalpel for dissecting the temporal details of behavior, such as the distributions of interresponse intervals at particular state transitions. There is thus great scope for revealing
re-analyses of many existing data sets. In particular, the effects of generalized drive have
proved mixed and complex [17]. Our theory suggests that studies of inter-response intervals (eg Fig 3d) may reveal more robust changes in vigor, uncontaminated by shifts in
overall action propensity.
Response vigor and dopamine's role in controlling it have appeared in previous RL models of behavior [20, 21], but only as fairly ad-hoc bolt-ons: for instance, using repeated
choices between doing nothing versus something to capture response latency. Here, these
aspects are wholly integrated into the explanatory framework: optimizing response vigor
is treated as itself an RL problem, with a natural dopaminergic substrate. To account for
immediate (unlearned) effects of motivational or dopaminergic manipulations, the main
assumption we make is that tonic levels of DA can be sensitive to predicted changes in
the average reward occasioned by changes in the motivational state, and that behavioral
policies are in turn immediately affected. This sensitivity would be easy to embed in a
temporal-difference RL system, producing flexible adaptation of response vigor. By contrast, due to the way they cache outcome values, the action choices of such RL systems are
characteristically insensitive to the 'directing' effects of motivational manipulations [22].
In animal behavior, 'habitual actions' (the ones associated with the DA system) are indeed
motivationally insensitive for action choice, but show a direct effect of drive on vigor [23].
Our model is easy to accommodate within a framework of temporal difference (TD) learning. Thus, it naturally preserves the link between phasic DA signals and online learning
of optimal values [24]. We further elaborate this link by suggesting an additional role for
tonic levels of DA in online vigor selection. A major question remains as to whether phasic responses (which are known to correlate with response latency [25]) play an additional
role in determining response vigor. Further, it is pressing to reconcile the present account
with our previous suggestion (based on microdialysis findings) [16] that tonic levels of DA
might track average punishment.
The most critical avenues to develop this work will be an account of learning, and neurally and psychologically more plausible state and temporal representations. On-line value
learning should be a straightforward adaptation of existing TD models of phasic DA based
on the continuous-time semi-Markov setting [26]. The representation of state is more challenging: the assumption of a fully observable state space automatically appropriate for
the schedule of reinforcement is not realistic. Indeed, apparently sub-optimal actions emitted by animals, eg engaging in excessive nose-poking even when a reward has not audibly
dropped into the food magazine [11], may provide clues to this issue. Finally, it will be
crucial to consider the fact that animals' decisions about vigor may translate only noisily
into response times, due, for instance, to the variability of internal timing [27].
Acknowledgments
This work was funded by the Gatsby Charitable Foundation, a Dan David fellowship (YN), the Royal Society (ND) and
the EU BIBA project (ND and PD). We are grateful to Jonathan Williams for discussions on free operant behavior.
References
[1] Dickinson A. and Balleine B.W. The role of learning in the operation of motivational systems. Stevens' Handbook of Experimental Psychology, Volume 3, pages 497-533. John Wiley & Sons, New York, 2002.
[2] Hull C.L. Principles of Behavior: An Introduction to Behavior Theory. Appleton-Century-Crofts, New York, 1943.
[3] Bélanger D. and Tétreau B. L'influence d'une motivation inappropriée sur le comportement du rat et sa fréquence cardiaque. Can. J. of Psych., 15:6-14, 1961.
[4] Salamone J.D. and Correa M. Motivational views of reinforcement: implications for understanding the behavioral functions of nucleus accumbens dopamine. Behavioural Brain Research, 137:3-25, 2002.
[5] Ikemoto S. and Panksepp J. The role of nucleus accumbens dopamine in motivated behavior: a unifying interpretation with special reference to reward-seeking. Brain Res. Rev., 31:6-41, 1999.
[6] Schultz W. Predictive reward signal of dopamine neurons. J. Neurophys., 80:1-27, 1998.
[7] Domjan M. The Principles of Learning and Behavior. Brooks/Cole, Pacific Grove, California, 3rd edition, 1993.
[8] Sutton R.S. and Barto A.G. Reinforcement Learning: An Introduction. MIT Press, 1998.
[9] Herrnstein R.J. On the law of effect. J. of the Exp. Anal. of Behav., 13(2):243-266, 1970.
[10] Dawson G.R. and Dickinson A. Performance on ratio and interval schedules with matched reinforcement rates. Q. J. of Exp. Psych. B, 42:225-239, 1990.
[11] Niv Y., Daw N.D., Joel D., and Dayan P. Motivational effects on behavior: Towards a reinforcement learning model of rates of responding. In CoSyNe, Salt Lake City, Utah, 2005.
[12] Aberman J.E. and Salamone J.D. Nucleus accumbens dopamine depletions make rats more sensitive to high ratio requirements but do not impair primary food reinforcement. Neuroscience, 92(2):545-552, 1999.
[13] Foster T.M., Blackman K.A., and Temple W. Open versus closed economies: performance of domestic hens under fixed-ratio schedules. J. of the Exp. Anal. of Behav., 67:67-89, 1997.
[14] Staddon J.E.R. Adaptive Dynamics. MIT Press, Cambridge, Mass., 2001.
[15] Mahadevan S. Average reward reinforcement learning: Foundations, algorithms and empirical results. Machine Learning, 22:1-38, 1996.
[16] Daw N.D., Kakade S., and Dayan P. Opponent interactions between serotonin and dopamine. Neural Networks, 15(4-6):603-616, 2002.
[17] Bolles R.C. Theory of Motivation. Harper & Row, 1967.
[18] Carr G.D. and White N.M. Effects of systemic and intracranial amphetamine injections on behavior in the open field: a detailed analysis. Pharmacol. Biochem. Behav., 27:113-122, 1987.
[19] Jackson D.M., Anden N., and Dahlstrom A. A functional effect of dopamine in the nucleus accumbens and in some other dopamine-rich parts of the rat brain. Psychopharmacologia, 45:139-149, 1975.
[20] Dayan P. and Balleine B.W. Reward, motivation and reinforcement learning. Neuron, 36:285-298, 2002.
[21] McClure S.M., Daw N.D., and Montague P.R. A computational substrate for incentive salience. Trends in Neurosc., 26(8):423-428, 2003.
[22] Daw N.D., Niv Y., and Dayan P. Uncertainty based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12):1704-1711, 2005.
[23] Dickinson A., Balleine B., Watt A., Gonzalez F., and Boakes R.A. Motivational control after extended instrumental training. Anim. Learn. and Behav., 23(2):197-206, 1995.
[24] Montague P.R., Dayan P., and Sejnowski T.J. A framework for mesencephalic dopamine systems based on predictive hebbian learning. J. of Neurosci., 16(5):1936-1947, 1996.
[25] Satoh T., Nakai S., Sato T., and Kimura M. Correlated coding of motivation and outcome of decision by dopamine neurons. J. of Neurosci., 23(30):9913-9923, 2003.
[26] Daw N.D., Courville A.C., and Touretzky D.S. Timing and partial observability in the dopamine system. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, NIPS, volume 14, Cambridge, MA, 2002. MIT Press.
[27] Gallistel C.R. and Gibbon J. Time, rate and conditioning. Psych. Rev., 107:289-344, 2000.
latency:22 detailed:1 forgoes:1 staddon:2 amount:1 weird:1 documented:1 neuroscience:3 sated:5 per:15 correctly:1 bulk:1 pace:1 blue:2 track:1 discrete:1 incentive:1 waiting:1 affected:1 group:1 key:2 basal:2 promised:1 achieving:1 lowering:1 vast:1 inverse:2 prob:1 uncertainty:1 nakai:1 lake:1 home:1 delivery:3 decision:4 gonzalez:1 dorsolateral:1 courville:1 activity:1 sato:1 adapted:2 ahead:1 ri:4 aspect:3 min:3 performing:1 eat:1 injection:1 relatively:2 dopaminergic:3 pacific:1 according:2 watt:1 battle:1 son:1 lp:42 kakade:1 rev:2 making:1 intuitively:1 pr:2 operant:13 depletion:6 behavioural:1 equation:3 remains:1 turn:2 count:3 fail:3 phasic:4 nose:5 confounded:1 available:7 operation:1 yael:1 opponent:1 occasional:1 appropriate:1 chamber:2 alternative:1 original:1 responding:14 top:1 include:2 cf:1 completed:1 vigor:25 opportunity:4 unifying:1 neurosc:1 build:1 ghahramani:1 society:1 contact:1 seeking:1 question:1 primary:2 costly:1 dependence:2 link:3 simulated:3 consumption:2 water:5 reason:1 toward:3 characteristically:1 sur:1 modeled:1 relationship:6 ratio:16 hebrew:1 nc:1 potentially:1 relate:1 expense:1 striatal:1 anal:2 policy:4 boltzmann:1 perform:3 allowing:1 observation:3 neuron:3 markov:2 behave:1 displayed:1 immediate:1 payoff:2 defining:1 tonic:12 looking:1 situation:1 directing:2 variability:1 extended:1 david:1 pair:1 required:1 california:1 elapsed:3 learned:1 hour:1 daw:6 nip:1 brook:1 address:4 able:1 bar:3 impair:1 below:1 pattern:3 appeared:1 royal:1 gaining:1 max:1 eschew:1 explanation:2 green:1 event:1 icnc:1 treated:1 natural:1 critical:1 indicator:1 solvable:2 pause:2 inversely:1 tentatively:1 text:1 understanding:3 literature:1 hen:1 determining:2 law:3 harvested:2 fully:3 grooming:1 mixed:1 suggestion:1 proportional:2 versus:2 foundation:3 nucleus:6 contingency:1 agent:11 principle:2 foster:1 charitable:1 editor:1 balancing:1 obscure:1 row:1 elsewhere:1 changed:1 placed:2 last:1 free:12 implicated:1 salience:1 allow:1 understand:1 
explaining:1 fall:1 taking:1 benefit:3 distributed:1 curve:3 overcome:1 valid:1 transition:4 rich:1 qualitatively:1 reinforcement:32 clue:1 simplified:1 schultz:1 adaptive:1 cope:1 correlate:1 mesencephalic:1 obtains:1 observable:1 ons:1 global:1 instantiation:1 handbook:1 assumed:2 consuming:1 continuous:3 triplet:1 quantifies:2 nature:1 learn:1 robust:2 obtaining:1 du:1 complex:2 constructing:1 da:27 main:2 neurosci:2 motivation:9 whole:1 reconcile:1 profile:1 edition:1 nothing:2 lesion:2 repeated:1 fig:21 representative:1 reservoir:1 ff:1 depicts:2 elaborate:1 gatsby:3 slow:1 wiley:1 sub:1 exponential:1 crude:2 late:2 procurement:1 croft:1 minute:12 embed:1 showing:1 normative:4 offset:1 execution:1 conditioned:1 forgone:1 illustrates:1 demand:1 vigorous:2 interrelated:1 ganglion:2 determines:1 satisfies:1 ma:1 identity:1 towards:1 miscellaneous:1 labelled:1 absence:1 change:6 experimentally:1 determined:2 typical:1 specifically:1 averaging:1 anim:1 total:4 called:1 experimental:7 productivity:1 internal:4 latter:1 arises:2 jonathan:1 harper:1 phenomenon:1 correlated:1 |
Transfer learning for text classification
Chuong B. Do
Computer Science Department
Stanford University
Stanford, CA 94305
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
Abstract
Linear text classification algorithms work by computing an inner product between a test document vector and a parameter vector. In many such
algorithms, including naive Bayes and most TFIDF variants, the parameters are determined by some simple, closed-form, function of training set
statistics; we call this mapping from statistics to parameters the
parameter function. Much research in text classification over the last few
decades has consisted of manual efforts to identify better parameter functions. In this paper, we propose an algorithm for automatically learning
this function from related classification problems. The parameter function found by our algorithm then defines a new learning algorithm for
text classification, which we can apply to novel classification tasks. We
find that our learned classifier outperforms existing methods on a variety
of multiclass text classification tasks.
1 Introduction
In the multiclass text classification task, we are given a training set of documents, each
labeled as belonging to one of K disjoint classes, and a new unlabeled test document.
Using the training set as a guide, we must predict the most likely class for the test document. "Bag-of-words" linear text classifiers represent a document as a vector x of word counts, and predict the class whose score (a linear function of x) is highest, i.e., $\arg\max_{k \in \{1,\dots,K\}} \sum_{i=1}^n \theta_{ki} x_i$. Choosing parameters $\{\theta_{ki}\}$ which give high classification
accuracy on test data, thus, is the main challenge for linear text classification algorithms.
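The scoring rule above is just a matrix-vector product followed by an argmax. A minimal sketch (the vocabulary size, counts, and parameter values below are made up for illustration):

```python
import numpy as np

def predict(x, theta):
    """Return the class whose linear score sum_i theta_ki * x_i is highest.

    x     : (n,) vector of word counts for one document
    theta : (K, n) matrix of per-class parameters theta_ki
    """
    scores = theta @ x  # f_k(x) for each class k
    return int(np.argmax(scores))

# Toy example with K = 2 classes and an n = 3 word vocabulary.
x = np.array([2.0, 0.0, 1.0])
theta = np.array([[0.1, 0.3, -0.2],   # class 0
                  [0.4, -0.1, 0.5]])  # class 1
print(predict(x, theta))  # -> 1 (class 1 scores 1.3 vs. 0.0 for class 0)
```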
In this paper, we focus on linear text classification algorithms in which the parameters are
pre-specified functions of training set statistics; that is, each $\theta_{ki}$ is a function $\theta_{ki} := g(u_{ki})$ of some fixed statistics $u_{ki}$ of the training set. Unlike discriminative learning methods, such as logistic regression [1] or support vector machines (SVMs) [2], which use numerical optimization to pick parameters, the learners we consider perform no optimization. Rather, in our technique, parameter learning involves tabulating statistics vectors $\{u_{ki}\}$ and applying
to parameters, as the parameter function.
Many common text classification methods, including the multinomial and multivariate Bernoulli event models for naive Bayes [3], the vector space-based TFIDF classifier [4], and its probabilistic variant, PrTFIDF [5], belong to this class of algorithms. Here, picking
a good text classifier from this class is equivalent to finding the right parameter function
for the available statistics.
In practice, researchers often develop text classification algorithms by trial-and-error,
guided by empirical testing on real-world classification tasks (cf. [6, 7]). Indeed, one could
argue that much of the 30-year history of information retrieval has consisted of manually
trying TFIDF formula variants (i.e. adjusting the parameter function g) to optimize performance [8]. Even though this heuristic process can often lead to good parameter functions,
such a laborious task requires much human ingenuity, and risks failing to find algorithm
variations not considered by the designer.
In this paper, we consider the task of automatically learning a parameter function g for
text classification. Given a set of example text classification problems, we wish to "meta-learn" a new learning algorithm (as specified by the parameter function g), which may then be applied to new classification problems. The meta-learning technique we propose, which
leverages data from a variety of related classification tasks to obtain a good classifier for
new tasks, is thus an instance of transfer learning; specifically, our framework automates
the process of finding a good parameter function for text classifiers, replacing hours of
hand-tweaking with a straightforward, globally-convergent, convex optimization problem.
Our experiments demonstrate the effectiveness of learning classifier forms. In low training
data classification tasks, the learning algorithm given by our automatically learned parameter function consistently outperforms human-designed parameter functions based on naive
Bayes and TFIDF, as well as existing discriminative learning approaches.
2 Preliminaries
Let $V = \{w_1, \dots, w_n\}$ be a fixed vocabulary of words, and let $\mathcal{X} = \mathbb{Z}^n$ and $\mathcal{Y} = \{1, \dots, K\}$ be the input and output spaces for our classification problem. A labeled document is a pair $(x, y) \in \mathcal{X} \times \mathcal{Y}$, where x is an n-dimensional vector with $x_i$ indicating the number of occurrences of word $w_i$ in the document, and y is the document's class label. A classification problem is a tuple $\langle D, S, (x_{\mathrm{test}}, y_{\mathrm{test}}) \rangle$, where D is a distribution over $\mathcal{X} \times \mathcal{Y}$, $S = \{(x_i, y_i)\}_{i=1}^M$ is a set of M training examples, $(x_{\mathrm{test}}, y_{\mathrm{test}})$ is a single test example, and all M + 1 examples are drawn iid from D. Given a training set S and a test input vector $x_{\mathrm{test}}$, we must predict the value of the test class label $y_{\mathrm{test}}$.

In linear classification algorithms, we evaluate the score $f_k(x_{\mathrm{test}}) := \sum_i \theta_{ki} x_{\mathrm{test},i}$ for assigning $x_{\mathrm{test}}$ to each class $k \in \{1, \dots, K\}$ and pick the class $y = \arg\max_k f_k(x_{\mathrm{test}})$ with the highest score. In our meta-learning setting, we define each $\theta_{ki}$ as the component-wise evaluation of the parameter function g on some vector of training set statistics $u_{ki}$:

$$\begin{bmatrix} \theta_{k1} \\ \theta_{k2} \\ \vdots \\ \theta_{kn} \end{bmatrix} := \begin{bmatrix} g(u_{k1}) \\ g(u_{k2}) \\ \vdots \\ g(u_{kn}) \end{bmatrix}. \quad (1)$$

Here, each $u_{ki} \in \mathbb{R}^q$ ($k = 1, \dots, K$, $i = 1, \dots, n$) is a vector whose components are computed from the training set S (we will provide specific examples later). Furthermore, $g : \mathbb{R}^q \to \mathbb{R}$ is the parameter function mapping from $u_{ki}$ to its corresponding parameter $\theta_{ki}$. To illustrate these definitions, we show that two specific cases of the naive Bayes and TFIDF classification methods belong to the class of algorithms described above.
Naive Bayes: In the multinomial variant of the naive Bayes classification algorithm,¹ the score for assigning a document x to class k is

$$f_k^{\mathrm{NB}}(x) := \log \hat p(y = k) + \sum_{i=1}^n x_i \log \hat p(w_i \mid y = k). \quad (2)$$
The first term, $\hat p(y = k)$, corresponds to a "prior" over document classes, and the second term, $\hat p(w_i \mid y = k)$, is the (smoothed) relative frequency of word $w_i$ in training documents of class k. For balanced training sets, the first term is irrelevant. Therefore, we have $f_k^{\mathrm{NB}}(x) = \sum_i \theta_{ki} x_i$ where $\theta_{ki} = g_{\mathrm{NB}}(u_{ki})$,

$$u_{ki} := \begin{bmatrix} u_{ki1} \\ u_{ki2} \\ u_{ki3} \\ u_{ki4} \\ u_{ki5} \end{bmatrix} = \begin{bmatrix} \text{number of times } w_i \text{ appears in documents of class } k \\ \text{number of documents of class } k \text{ containing } w_i \\ \text{total number of words in documents of class } k \\ \text{total number of documents of class } k \\ \text{total number of documents} \end{bmatrix}, \quad (3)$$

and

$$g_{\mathrm{NB}}(u_{ki}) := \log \frac{u_{ki1} + \alpha}{u_{ki3} + n\alpha}, \quad (4)$$

where $\alpha$ is a smoothing parameter ($\alpha = 1$ gives Laplace smoothing).

Footnote 1: Despite naive Bayes' overly strong independence assumptions, and thus its shortcomings as a probabilistic model for text documents, we can nonetheless view naive Bayes as simply an algorithm which makes predictions by computing certain functions of the training set. This view has proved useful for analysis of naive Bayes even when none of its probabilistic assumptions hold [9]; here, we adopt this view, without attaching any particular probabilistic meaning to the empirical frequencies $\hat p(\cdot)$ that happen to be computed by the algorithm.
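Evaluating (4) from the statistics of (3) takes one line; the sketch below uses made-up counts and a tiny three-word vocabulary:

```python
import math

def g_nb(u, alpha=1.0, n_vocab=3):
    """Naive Bayes parameter function g_NB of eq. (4).

    u is the statistics vector of eq. (3); only u[0] (occurrences of w_i in
    class-k documents) and u[2] (total words in class-k documents) are used.
    alpha = 1 gives Laplace smoothing over an n_vocab-word vocabulary.
    """
    return math.log((u[0] + alpha) / (u[2] + n_vocab * alpha))

# A word seen 4 times among the 7 words of class k:
u = [4, 2, 7, 3, 10]
print(round(g_nb(u), 4))  # -> -0.6931, i.e. log((4+1)/(7+3)) = log(0.5)
```

Thanks to the smoothing, a word never seen in class k still gets a finite (negative) parameter rather than $\log 0$.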
TFIDF: In the unnormalized TFIDF classifier, the score for assigning x to class k is

$$f_k^{\mathrm{TFIDF}}(x) := \sum_{i=1}^n \left( \overline{x_i}\big|_{y=k} \cdot \log \tfrac{1}{\hat p(x_i > 0)} \right) \left( x_i \cdot \log \tfrac{1}{\hat p(x_i > 0)} \right), \quad (5)$$

where $\overline{x_i}|_{y=k}$ (sometimes called the average term frequency of $w_i$) is the average $i$th component of all document vectors of class k, and $\hat p(x_i > 0)$ (sometimes called the document frequency of $w_i$) is the proportion of all documents containing $w_i$.² As before, we write $f_k^{\mathrm{TFIDF}}(x) = \sum_i \theta_{ki} x_i$ with $\theta_{ki} = g_{\mathrm{TFIDF}}(u_{ki})$. The statistics vector is again defined as in (3), but this time,

$$g_{\mathrm{TFIDF}}(u_{ki}) := \frac{u_{ki1}}{u_{ki4}} \left( \log \frac{u_{ki5}}{u_{ki2}} \right)^2. \quad (6)$$
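The TFIDF parameter function can be sketched the same way. The statistic indices below follow the layout of eq. (3) as read from (6), and the counts are made up; treat both as illustrative assumptions:

```python
import math

def g_tfidf(u):
    """TFIDF parameter function of eq. (6).

    u[0]: occurrences of w_i in class-k documents
    u[1]: number of documents of class k containing w_i
    u[3]: number of documents of class k
    u[4]: total number of documents
    Returns average term frequency times squared log inverse doc frequency.
    """
    avg_tf = u[0] / u[3]
    idf = math.log(u[4] / u[1])
    return avg_tf * idf ** 2

print(round(g_tfidf([4, 2, 7, 2, 10]), 4))  # (4/2) * log(10/2)^2 -> 5.1806
```

Note that a word contained in every document gets an IDF of zero and therefore contributes nothing to any class score.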
Space constraints preclude a detailed discussion, but many other classification algorithms
can similarly be expressed in this framework, using other definitions of the statistics vectors {uki }. These include most other variants of TFIDF based on different TF and IDF
terms [7], PrTFIDF [5], and various heuristically modified versions of naive Bayes [6].
3 Learning the parameter function
In the last section, we gave two examples of algorithms that obtain their parameters $\theta_{ki}$
by applying a function g to a statistics vector uki . In each case, the parameter function
was hand-designed, either from probabilistic (in the case of naive Bayes [3]) or geometric
(in the case of TFIDF [4]) considerations. We now consider the problem of automatically
learning a parameter function from example classification tasks. In the sequel, we assume
fixed statistics vectors {uki } and focus on finding an optimal parameter function g.
In the standard supervised learning setting, we are given a training set of examples sampled from some unknown distribution D, and our goal is to use the training set to make a
prediction on a new test example also sampled from D. By using the training examples to
understand the statistical regularities in D, we hope to predict ytest from xtest with low error.
Analogously, the problem of meta-learning g is again a supervised learning task; here, however, the training "examples" are now classification problems sampled from a distribution $\mathcal{D}$ over classification problems.³ By seeing many instances of text classification problems drawn from $\mathcal{D}$, we hope to learn a parameter function g that exploits the statistical regularities in problems from $\mathcal{D}$. Formally, let $\mathcal{S} = \{\langle D^{(j)}, S^{(j)}, (x^{(j)}, y^{(j)})\rangle\}_{j=1}^m$ be a collection of m classification problems sampled iid from $\mathcal{D}$. For a new, test classification problem $\langle D_{\mathrm{test}}, S_{\mathrm{test}}, (x_{\mathrm{test}}, y_{\mathrm{test}})\rangle$ sampled independently from $\mathcal{D}$, we desire that our learned g correctly classify $x_{\mathrm{test}}$ with high probability.

Footnote 2: Note that (5) implicitly defines $f_k^{\mathrm{TFIDF}}(x)$ as a dot product of two vectors, each of whose components consists of a product of two terms. In the normalized TFIDF classifier, both vectors are normalized to unit length before computing the dot product, a modification that makes the algorithm more stable for documents of varying length. This too can be represented within our framework by considering appropriately normalized statistics vectors.

Footnote 3: Note that in our meta-learning problem, the output of our algorithm is a parameter function g mapping statistics to parameters. Our training data, however, do not explicitly indicate the best parameter function $g^*$ for each example classification problem. Effectively then, in the meta-learning task, the central problem is to fit g to some unseen $g^*$, based on test examples in each training classification problem.
To achieve our goal, we first restrict our attention to parameter functions g that are linear
in their inputs. Using the linearity assumption, we pose a convex optimization problem
for finding a parameter function g that achieves small loss on test examples in the training
collection. Finally, we generalize our method to the non-parametric setting via the "kernel trick," thus allowing us to learn complex, highly non-linear functions of the input statistics.
3.1 Softmax learning
Recall that in softmax regression, the class probabilities p(y | x) are modeled as

$$p(y = k \mid x; \{\theta_{ki}\}) := \frac{\exp\left(\sum_i \theta_{ki} x_i\right)}{\sum_{k'} \exp\left(\sum_i \theta_{k'i} x_i\right)}, \quad k = 1, \dots, K, \quad (7)$$
where the parameters $\{\theta_{ki}\}$ are learned from the training data S by maximizing the conditional log likelihood of the data. In this approach, a total of Kn parameters are trained jointly using numerical optimization. Here, we consider an alternative approach in which each of the Kn parameters is some function of the prespecified statistics vectors; in particular, $\theta_{ki} := g(u_{ki})$. Our goal is to learn an appropriate g.

To pose our optimization problem, we start by learning the linear form $g(u_{ki}) = \beta^T u_{ki}$. Under this parameterization, the conditional likelihood of an example (x, y) is

$$p(y = k \mid x; \beta) = \frac{\exp\left(\beta^T \sum_i u_{ki} x_i\right)}{\sum_{k'} \exp\left(\beta^T \sum_i u_{k'i} x_i\right)}, \quad k = 1, \dots, K. \quad (8)$$
In this setup, one natural approach for learning a linear function g is to maximize the (regularized) conditional log likelihood $\ell(\beta : \mathcal{S})$ for the entire collection $\mathcal{S}$:

$$\ell(\beta : \mathcal{S}) := \sum_{j=1}^m \log p(y^{(j)} \mid x^{(j)}; \beta) - C\|\beta\|^2 = \sum_{j=1}^m \log \left( \frac{\exp\left(\beta^T \sum_i u^{(j)}_{y^{(j)} i} x^{(j)}_i\right)}{\sum_k \exp\left(\beta^T \sum_i u^{(j)}_{ki} x^{(j)}_i\right)} \right) - C\|\beta\|^2. \quad (9)$$

In (9), the latter term corresponds to a Gaussian prior on the parameters $\beta$, which provides a means for controlling the complexity of the learned parameter function g. The maximization of (9) is similar to softmax regression training except that here, instead of optimizing over the parameters $\{\theta_{ki}\}$ directly, we optimize over the choice of $\beta$.
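For concreteness, the objective in (9) can be sketched as follows for a linear parameter function; the array shapes and symbol names are illustrative, not from the paper:

```python
import numpy as np

def meta_log_likelihood(beta, problems, C):
    """Regularized conditional log likelihood of eq. (9).

    beta     : (q,) weights of the linear parameter function g(u) = beta . u
    problems : list of (x, y, U) triples, where x is the (n,) test document,
               y its label, and U is a (K, n, q) array of statistics u_ki
    C        : regularization strength
    """
    total = 0.0
    for x, y, U in problems:
        # Score of class k: beta . sum_i u_ki * x_i
        scores = np.einsum('q,kiq,i->k', beta, U, x)
        total += scores[y] - np.log(np.sum(np.exp(scores)))
    return total - C * np.dot(beta, beta)
```

Maximizing this concave function with conjugate gradient or L-BFGS recovers the linear g; at beta = 0 every class is equally likely, so each problem contributes -log K.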
3.2 Nonparametric function learning
In this section, we generalize the technique of the previous section to nonlinear g. By the Representer Theorem [10], there exists a maximizing solution to (9) for which the optimal parameter vector $\beta^*$ is a linear combination of training set statistics:

$$\beta^* = \sum_{j=1}^m \sum_k \alpha^*_{jk} \sum_i u^{(j)}_{ki} x^{(j)}_i. \quad (10)$$

From this, we reparameterize the original optimization over $\beta$ in (9) as an equivalent optimization over training example weights $\{\alpha_{jk}\}$. For notational convenience, let

$$K(j, j', k, k') := \sum_i \sum_{i'} x^{(j)}_i x^{(j')}_{i'} \left(u^{(j)}_{ki}\right)^T u^{(j')}_{k'i'}. \quad (11)$$
Figure 1: Distribution of unnormalized uki vectors in dmoz data (a) with and (b) without
applying the log transformation in (15). In principle, one could alternatively use a feature
vector representation using these frequencies directly, as in (a). However, applying the log
transformation yields a feature space with fewer isolated points in R2 , as in (b). When using
the Gaussian kernel, a feature space with few isolated points is important as the topology
of the feature space establishes locality of influence for support vectors.
Substituting (10) and (11) into (9), we obtain

$$\ell(\{\alpha_{jk}\} : \mathcal{S}) := \sum_{j'=1}^m \log \left( \frac{\exp\left(\sum_{j=1}^m \sum_k \alpha_{jk} K(j, j', k, y^{(j')})\right)}{\sum_{k'} \exp\left(\sum_{j=1}^m \sum_k \alpha_{jk} K(j, j', k, k')\right)} \right) - C \sum_{j=1}^m \sum_{j'=1}^m \sum_k \sum_{k'} \alpha_{jk} \alpha_{j'k'} K(j, j', k, k'). \quad (12)$$
Note that (12) is concave and differentiable, so we can train the model using any standard
numerical gradient optimization procedure, such as conjugate gradient or L-BFGS [11].
The assumption that g is a linear function of $u_{ki}$, however, places a severe restriction on the class of learnable parameter functions. Noting that the statistics vectors appear only as an inner product in (11), we apply the "kernel trick" to obtain

$$K(j, j', k, k') := \sum_i \sum_{i'} x^{(j)}_i x^{(j')}_{i'} \mathcal{K}(u^{(j)}_{ki}, u^{(j')}_{k'i'}), \quad (13)$$

where the kernel function $\mathcal{K}(u, v) = \langle \phi(u), \phi(v) \rangle$ defines the inner product of some high-dimensional mapping $\phi(\cdot)$ of its inputs.⁴ In particular, choosing a Gaussian (RBF) kernel, $\mathcal{K}(u, v) := \exp(-\gamma \|u - v\|^2)$, gives a non-parametric representation for g:

$$g(u_{ki}) = \beta^T \phi(u_{ki}) = \sum_{j=1}^m \sum_k \sum_i \alpha_{jk} x^{(j)}_i \exp\left(-\gamma \|u_{ki} - u^{(j)}_{ki}\|^2\right). \quad (14)$$

Thus, $g(u_{ki})$ is a weighted combination of the values $\{\alpha_{jk} x^{(j)}_i\}$, where the weights depend exponentially on the squared $\ell_2$-distance of $u_{ki}$ to each of the statistics vectors $\{u^{(j)}_{ki}\}$. As a result, we can approximate any sufficiently smooth bounded function of u arbitrarily well, given sufficiently many training classification problems.
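A direct (unvectorized) sketch of evaluating the kernelized g of (14); the array layout is an assumption made for illustration:

```python
import numpy as np

def g_rbf(u, alphas, xs, us, gamma):
    """Evaluate the nonparametric g of eq. (14) at a statistics vector u.

    alphas : (m, K) dual weights alpha_jk
    xs     : (m, n) word counts x_i^(j) of the training problems
    us     : (m, K, n, q) statistics vectors u_ki^(j)
    gamma  : RBF kernel bandwidth
    """
    m, K, n, _ = us.shape
    total = 0.0
    for j in range(m):
        for k in range(K):
            for i in range(n):
                # Weight decays exponentially with distance to u_ki^(j).
                w = np.exp(-gamma * np.sum((u - us[j, k, i]) ** 2))
                total += alphas[j, k] * xs[j, i] * w
    return total
```

In practice the inner loops vectorize, but the triple sum makes the locality argument explicit: training statistics far from u contribute exponentially little.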
4 Experiments
To validate our method, we evaluated its ability to learn parameter functions on a variety
of email and webpage classification tasks in which the number of classes, K, was large
(K = 10), and the number of number of training examples per class, m/K, was small
(m/K = 2). We used the dmoz Open Directory Project hierarchy,5 the 20 Newsgroups
dataset,6 the Reuters-21578 dataset,7 and the Industry Sector dataset8 .
Footnote 4: Note also that as a consequence of our kernelization, K itself can be considered a "kernel" between all statistics vectors from two entire documents.
Footnote 5: http://www.dmoz.org
Footnote 6: http://kdd.ics.uci.edu/databases/20newsgroups/20newsgroups.tar.gz
Footnote 7: http://www.daviddlewis.com/resources/testcollections/reuters21578/reuters21578.tar.gz
Footnote 8: http://www.cs.umass.edu/~mccallum/data/sector.tar.gz
Table 1: Test set accuracy on dmoz categories. Columns 2-4 give the proportion of correct
classifications using non-discriminative methods: the learned g, Naive Bayes, and TFIDF,
respectively. Columns 5-7 give the corresponding values for the discriminative methods:
softmax regression, 1-vs-all SVMs, and multiclass SVMs. The best accuracy in each row
is shown in bold.
Category         g      gNB    gTFIDF  softmax  1VA-SVM  MC-SVM
Arts             0.421  0.296  0.286   0.352    0.203    0.367
Business         0.456  0.283  0.286   0.336    0.233    0.340
Computers        0.467  0.304  0.327   0.344    0.217    0.387
Games            0.411  0.288  0.240   0.279    0.240    0.330
Health           0.479  0.282  0.337   0.382    0.213    0.337
Home             0.640  0.470  0.454   0.501    0.333    0.440
Kids and Teens   0.252  0.205  0.142   0.202    0.173    0.167
News             0.349  0.222  0.212   0.382    0.270    0.397
Recreation       0.663  0.487  0.529   0.477    0.353    0.590
Reference        0.635  0.415  0.458   0.602    0.383    0.543
Regional         0.438  0.268  0.258   0.329    0.260    0.357
Science          0.363  0.256  0.246   0.353    0.223    0.340
Shopping         0.612  0.456  0.556   0.483    0.373    0.550
Society          0.435  0.308  0.285   0.379    0.213    0.377
Sports           0.619  0.432  0.285   0.507    0.267    0.527
World            0.531  0.491  0.352   0.329    0.277    0.303
Average          0.486  0.341  0.328   0.390    0.264    0.397
The dmoz project is a hierarchical collection of webpage links organized by subject matter.
The top level of the hierarchy consists of 16 major categories, each of which contains several subcategories. To perform cross-validated testing, we obtained classification problems
from each of the top-level categories by retrieving webpages from each of their respective subcategories. For the 20 Newsgroups, Reuters-21578, and Industry Sector datasets,
we performed similar preprocessing.9 Given a dataset of documents, we sampled 10-class
2-training-examples-per-class classification problems by randomly selecting 10 different
classes within the dataset, picking 2 training examples within each class, and choosing one
test example from a randomly chosen class.
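The sampling procedure just described can be sketched as follows (the function and its dict-based corpus layout are illustrative, not from the paper):

```python
import random

def sample_problem(docs_by_class, n_classes=10, n_train=2, seed=None):
    """Sample one 10-class, 2-training-examples-per-class problem.

    docs_by_class : dict mapping class label -> list of documents
    Returns (train, test): train is a list of (doc, label) pairs, and test
    is one held-out (doc, label) pair from a randomly chosen class.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(docs_by_class), n_classes)
    train = []
    for c in classes:
        for d in rng.sample(docs_by_class[c], n_train):
            train.append((d, c))
    test_class = rng.choice(classes)
    held_out = [d for d in docs_by_class[test_class]
                if (d, test_class) not in train]
    return train, (rng.choice(held_out), test_class)
```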
4.1 Choice of features
Theoretically, for the method described in this paper, any sufficiently rich set of features
could be used to learn a parameter function for classification. For simplicity, we reduced
the feature vector in (3) to the following two-dimensional representation:¹⁰

$$u_{ki} = \begin{bmatrix} \log(\text{proportion of } w_i \text{ among words from documents of class } k) \\ \log(\text{proportion of documents containing } w_i) \end{bmatrix}. \quad (15)$$
Note that up to the log transformation, the components of uki correspond to the relative
term frequency and document frequency of a word relative to class k (see Figure 1).
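Before the rescaling mentioned in footnote 10, the two features of (15) come directly from four counts (the argument names are made up for the sketch):

```python
import math

def features(count_in_class, words_in_class, docs_with_word, total_docs):
    """Two-dimensional statistics vector u_ki of eq. (15).

    count_in_class : occurrences of w_i among words of class-k documents
    words_in_class : total words in class-k documents
    docs_with_word : number of documents (any class) containing w_i
    total_docs     : total number of documents
    """
    return (math.log(count_in_class / words_in_class),
            math.log(docs_with_word / total_docs))

print(features(5, 50, 2, 100))  # (log 0.1, log 0.02)
```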
4.2 Generalization performance
We tested our meta-learning algorithm on classification problems taken from each of the
16 top-level dmoz categories. For each top-level category, we built a collection of 300
classification problems from that category; results reported here are averages over these
Footnote 9: For the Reuters data, we associated each article with its hand-annotated "topic" label and discarded any articles with more than one topic annotation. For each dataset, we discarded all categories with fewer than 50 examples, and selected a 500-word vocabulary based on information gain.
Footnote 10: Features were rescaled to have zero mean and unit variance over the training set.
Table 2: Cross corpora classification accuracy, using classifiers trained on each of the four
corpora. The best accuracy in each row is shown in bold.
Dataset          gdmoz  gnews  greut  gindu  gNB    gTFIDF  softmax  1VA-SVM  MC-SVM
dmoz             n/a    0.471  0.475  0.473  0.365  0.352   0.381    0.283    0.412
20 Newsgroups    0.369  n/a    0.371  0.369  0.223  0.184   0.217    0.206    0.248
Reuters-21578    0.567  0.567  n/a    0.619  0.463  0.475   0.463    0.308    0.481
Industry Sector  0.438  0.459  0.446  n/a    0.374  0.274   0.376    0.271    0.375
problems. To assess the accuracy of our meta-learning algorithm for a particular test category, we used the g learned from a set of 450 classification problems drawn from the other
15 top-level categories.11 This ensured no overlap of training and testing data. In 15 out
of 16 categories, the learned parameter function g outperforms naive Bayes and TFIDF in
addition to the discriminative methods we tested (softmax regression, 1-vs-all SVMs [12],
and multiclass SVMs [13]12 ; see Table 1).13
Next, we assessed the ability of g to transfer across even more dissimilar corpora. Here, for
each of the four corpora (dmoz, 20 Newsgroups, Reuters-21578, Industry Sector), we constructed independent training and testing datasets of 480 random classification problems.
After training separate classifiers (gdmoz , gnews , greut , and gindu ) using data from each of the
four corpora, we tested the performance of each learned classifier on the remaining three
corpora (see Table 2). Again, the learned parameter functions compare favorably to the
other methods. Moreover, these tests show that a single parameter function may give an accurate classification algorithm for many different corpora, demonstrating the effectiveness
of our approach for achieving transfer across related learning tasks.
5 Discussion and Related Work
In this paper, we presented an algorithm based on softmax regression for learning a parameter function g from example classification problems. Once learned, g defines a new
learning algorithm that can be applied to novel classification tasks.
Another approach for learning g is to modify the multiclass support vector machine formulation of Crammer and Singer [13] in a manner analogous to the modification of softmax regression in Section 3.1, giving the following quadratic program:

$$\begin{aligned} \min_{\beta \in \mathbb{R}^q,\ \xi \in \mathbb{R}^m} \quad & \tfrac{1}{2}\|\beta\|^2 + C \sum_j \xi_j \\ \text{subject to} \quad & \beta^T \sum_i x^{(j)}_i \left( u^{(j)}_{y^{(j)} i} - u^{(j)}_{ki} \right) \ge I_{\{k \ne y^{(j)}\}} - \xi_j, \quad \forall k, \forall j. \end{aligned}$$
As usual, taking the dual leads naturally to an SMO-like procedure for optimization. We
implemented this method and found that the learned g, like in the softmax formulation,
outperforms naive Bayes, TFIDF, and the other discriminative methods.
The techniques described in this paper give one approach for achieving inductive transfer in
classifier design: using labeled data from related example classification problems to solve
a particular classification problem [16, 17]. Bennett et al. [18] also consider the issue of
knowledge transfer in text classification in the context of ensemble classifiers, and propose
a system for using related classification problems to learn the reliability of individual classifiers within the ensemble. Unlike their approach, which attempts to meta-learn properties
Footnote 11: For each execution of the learning algorithm, (C, γ) parameters were determined via grid search using a small holdout set of 160 classification problems. The same holdout set was used to select regularization parameters for the discriminative learning algorithms.
Footnote 12: We used LIBSVM [14] to assess 1VA-SVMs and SVM-Light [15] for multiclass SVMs.
Footnote 13: For larger values of m/K (e.g. m/K = 10), softmax and multiclass SVMs consistently outperform naive Bayes and TFIDF; nevertheless, the learned g achieves a performance on par with discriminative methods, despite being constrained to parameters which are explicit functions of training data statistics. This result is consistent with a previous study in which a heuristically hand-tuned version of Naive Bayes attained near-SVM text classification performance for large datasets [6].
of algorithms, our method uses meta-learning to construct a new classification algorithm.
Though not directly applied to text classification, Teevan and Karger [19] consider the
problem of automatically learning term distributions for use in information retrieval.
Finally, Thrun and O'Sullivan [20] consider the task of classification in a mobile robot domain. In this work, the authors describe a task-clustering (TC) algorithm in which learning
tasks are grouped via a nearest neighbors algorithm, as a means of facilitating knowledge
transfer. A similar concept is implicit in the kernelized parameter function learned by our
algorithm, where the Gaussian kernel facilitates transfer between similar statistics vectors.
Acknowledgments
We thank David Vickrey and Pieter Abbeel for useful discussions, and the anonymous
referees for helpful comments. CBD was supported by an NDSEG fellowship. This work
was supported by DARPA under contract number FA8750-05-2-0249.
References
[1] K. Nigam, J. Lafferty, and A. McCallum. Using maximum entropy for text classification. In IJCAI-99 Workshop on Machine Learning for Information Filtering, pages 61-67, 1999.
[2] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Machine Learning: ECML-98, pages 137-142, 1998.
[3] A. McCallum and K. Nigam. A comparison of event models for Naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization, 1998.
[4] G. Salton and C. Buckley. Term weighting approaches in automatic text retrieval. Information Processing and Management, 29(5):513-523, 1988.
[5] T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In Proceedings of ICML-97, pages 143-151, 1997.
[6] J. D. Rennie, L. Shih, J. Teevan, and D. R. Karger. Tackling the poor assumptions of naive Bayes text classifiers. In ICML, pages 616-623, 2003.
[7] A. Moffat and J. Zobel. Exploring the similarity space. In ACM SIGIR Forum 32, 1998.
[8] C. Manning and H. Schutze. Foundations of Statistical Natural Language Processing, 1999.
[9] A. Ng and M. Jordan. On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In NIPS 14, 2002.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. J. Math. Anal. Appl., 33:82-95, 1971.
[11] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[12] R. Rifkin and A. Klautau. In defense of one-vs-all classification. J. Mach. Learn. Res., 5:101-141, 2004.
[13] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res., 2:265-292, 2001.
[14] C-C. Chang and C-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[15] T. Joachims. Making large-scale support vector machine learning practical. In Advances in Kernel Methods: Support Vector Machines. MIT Press, Cambridge, MA, 1998.
[16] S. Thrun. Lifelong learning: A case study. CMU tech report CS-95-208, 1995.
[17] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[18] P. N. Bennett, S. T. Dumais, and E. Horvitz. Inductive transfer for text classification using
generalized reliability indicators. In Proceedings of ICML Workshop on The Continuum from
Labeled to Unlabeled Data in Machine Learning and Data Mining, 2003.
[19] J. Teevan and D. R. Karger. Empirical development of an exponential probabilistic model for
text retrieval: Using textual analysis to build a better model. In SIGIR ?03, 2003.
[20] S. Thrun and J. O?Sullivan. Discovering structure in multiple learning tasks: The TC algorithm.
In International Conference on Machine Learning, pages 489?497, 1996.
A PAC-Bayes approach to the Set Covering Machine

François Laviolette, Mario Marchand
IFT-GLO, Université Laval
Sainte-Foy (QC) Canada, G1K-7P4
[email protected]

Mohak Shah
SITE, University of Ottawa
Ottawa, Ont. Canada, K1N-6N5
[email protected]
Abstract
We design a new learning algorithm for the Set Covering Machine from a PAC-Bayes perspective and propose a PAC-Bayes
risk bound which is minimized for classifiers achieving a non-trivial
margin-sparsity trade-off.
1 Introduction
Learning algorithms try to produce classifiers with small prediction error by trying
to optimize some function that can be computed from a training set of examples and
a classifier. We currently do not know exactly what function should be optimized
but several forms have been proposed. At one end of the spectrum, we have the
set covering machine (SCM), proposed by Marchand and Shawe-Taylor (2002), that
tries to find the sparsest classifier making few training errors. At the other end, we
have the support vector machine (SVM), proposed by Boser et al. (1992), that tries
to find the maximum soft-margin separating hyperplane on the training data. Since
both of these learning machines can produce classifiers having small prediction error,
we have recently investigated (Laviolette et al., 2005) if better classifiers could be
found by learning algorithms that try to optimize a non-trivial function that depends
on both the sparsity of a classifier and the magnitude of its separating margin. Our
main result was a general data-compression risk bound that applies to any algorithm
producing classifiers represented by two complementary sources of information: a
subset of the training set, called the compression set, and a message string of
additional information. In addition, we proposed a new algorithm for the SCM
where the information string was used to encode radius values for data-dependent
balls and, consequently, the location of the decision surface of the classifier. Since
a small message string is sufficient when large regions of equally good radius values
exist for balls, the data compression risk bound applied to this version of the SCM
exhibits, indirectly, a non-trivial margin-sparsity trade-off. Moreover, this version
of the SCM currently suffers from the fact that the radius values, used in the final
classifier, depend on an a priori chosen distance scale R. In this paper, we use a new
PAC-Bayes approach, that applies to the sample-compression setting, and present a
new learning algorithm for the SCM that does not suffer from this scaling problem.
Moreover, we propose a risk bound that depends more explicitly on the margin
and which is also minimized by classifiers achieving a non-trivial margin-sparsity
trade-off.
2 Definitions
We consider binary classification problems where the input space X consists of an
arbitrary subset of R^n and the output space Y = {0, 1}. An example z = (x, y)
is an input-output pair where x ∈ X and y ∈ Y. In the probably approximately
correct (PAC) setting, we assume that each example z is generated independently
according to the same (but unknown) distribution D. The (true) risk R(f) of a
classifier f : X → Y is defined to be the probability that f misclassifies z on a
random draw according to D:

    R(f) \stackrel{def}{=} \Pr_{(x,y)\sim D}(f(x) \neq y) = \mathbf{E}_{(x,y)\sim D}\, I(f(x) \neq y)

where I(a) = 1 if predicate a is true and 0 otherwise. Given a training set
S = (z1 , . . . , zm ) of m examples, the task of a learning algorithm is to construct
a classifier with the smallest possible risk without any information about D. To
achieve this goal, the learner can compute the empirical risk RS (f ) of any given
classifier f according to:
    R_S(f) \stackrel{def}{=} \frac{1}{m}\sum_{i=1}^{m} I(f(x_i) \neq y_i) = \mathbf{E}_{(x,y)\sim S}\, I(f(x) \neq y)
We focus on learning algorithms that construct a conjunction (or disjunction) of
features called data-dependent balls from a training set. Each data-dependent ball
is defined by a center and a radius value. The center is an input example x_i chosen
from the training set S. For any test example x, the output of a ball h of radius
ρ, centered on example x_i, is given by

    h_{i,\rho}(x) \stackrel{def}{=} \begin{cases} y_i & \text{if } d(x, x_i) \leq \rho \\ \bar{y}_i & \text{otherwise,} \end{cases}

where \bar{y}_i denotes the boolean complement of y_i and d(x, x_i) denotes the distance
between the two points. Note that any metric can be used for the distance here.
To specify a conjunction of balls we first need to list all the examples that participate
as centers for the balls in the conjunction. For this purpose, we use a vector
i \stackrel{def}{=} (i_1, \ldots, i_{|i|}) of indices i_j ∈ {1, \ldots, m} such that i_1 < i_2 < \ldots < i_{|i|}, where |i| is the
number of indices present in i (and thus the number of balls in the conjunction).
To complete the specification of a conjunction of balls, we need a vector
ρ = (ρ_{i_1}, ρ_{i_2}, \ldots, ρ_{i_{|i|}}) of radius values, where i_j ∈ {1, \ldots, m} for j ∈ {1, \ldots, |i|}.
On any input example x, the output C_{i,ρ}(x) of a conjunction of balls is given by:

    C_{i,\rho}(x) \stackrel{def}{=} \begin{cases} 1 & \text{if } h_{j,\rho_j}(x) = 1\ \forall j \in i \\ 0 & \text{if } \exists j \in i : h_{j,\rho_j}(x) = 0 \end{cases}
Finally, any algorithm that builds a conjunction can be used to build a disjunction
just by exchanging the role of the positive and negative labelled examples. Due to
lack of space, we describe here only the case of a conjunction.
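To make these definitions concrete, the following is a minimal Python sketch of a data-dependent ball and of a conjunction of balls. The function names and the choice of the Euclidean metric are our own illustration, not prescribed by the paper (which allows any metric).

```python
import numpy as np

def ball_output(x, center, center_label, rho):
    """Data-dependent ball h_{i,rho}: predict the center's label y_i when x
    falls within radius rho of the center, and its complement otherwise.
    Any metric could be used; we pick the Euclidean distance here."""
    if np.linalg.norm(x - center) <= rho:
        return center_label
    return 1 - center_label

def conjunction_output(x, centers, labels, rhos):
    """Conjunction of balls C_{i,rho}: output 1 iff every ball outputs 1."""
    for center, label, rho in zip(centers, labels, rhos):
        if ball_output(x, center, label, rho) == 0:
            return 0
    return 1
```

As the paper notes, exchanging the roles of the positive and negative examples turns the same construction into a disjunction.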
3 A PAC-Bayes Risk Bound
The PAC-Bayes approach, initiated by McAllester (1999a), aims at providing PAC
guarantees to "Bayesian" learning algorithms. These algorithms are specified in
terms of a prior distribution P over a space of classifiers that characterizes our
prior belief about good classifiers (before the observation of the data) and a posterior distribution Q (over the same space of classifiers) that takes into account
the additional information provided by the training data. A remarkable result that
came out from this line of research, known as the "PAC-Bayes theorem", provides
a tight upper bound on the risk of a stochastic classifier called the Gibbs classifier.
Given an input example x, the label GQ (x) assigned to x by the Gibbs classifier
is defined by the following process. We first choose a classifier h according to the
posterior distribution Q and then use h to assign the label h(x) to x. The PACBayes theorem was first proposed by McAllester (1999b) and later improved by
others (see Langford (2005) for a survey). However, for all these versions of the
PAC-Bayes theorem, the prior P must be defined without reference to the training
data. Consequently, these theorems cannot be applied to the sample-compression
setting where classifiers are partly described by a subset of the training data (as for
the case of the SCM).
In the sample compression setting, each classifier is described by a subset S_i of the
training data, called the compression set, and a message string σ that represents
the additional information needed to obtain a classifier. In other words, in this
setting, there exists a reconstruction function R that outputs a classifier R(σ, S_i)
when given an arbitrary compression set S_i and a message string σ.
Given a training set S, the compression set S_i ⊆ S is defined by a vector of indices
i \stackrel{def}{=} (i_1, \ldots, i_{|i|}) that points to individual examples in S. For the case of a conjunction of balls, each j ∈ i will point to a training example that is used for a ball center,
and the message string σ will be the vector ρ of radius values (defined above) that
are used for the balls. Hence, given S_i and ρ, the classifier obtained from R(ρ, S_i)
is just the conjunction C_{i,ρ} defined previously.¹
Recently, Laviolette and Marchand (2005) have extended the PAC-Bayes theorem
to the sample-compression setting. Their proposed risk bound depends on a data-independent prior P and a data-dependent posterior Q that are both defined on
I × M, where I denotes the set of the 2^m possible index vectors i and M denotes,
in our case, the set of possible radius vectors ρ. The posterior Q is used by a
stochastic classifier, called the sample-compressed Gibbs classifier GQ , defined as
follows. Given a training set S and given a new (testing) input example x, a sample-compressed Gibbs classifier G_Q chooses (i, ρ) randomly according to Q to obtain
the classifier R(ρ, S_i), which is then used to determine the class label of x.
In this paper we focus on the case where, given any training set S, the learner returns
a Gibbs classifier defined with a posterior distribution Q having all its weight on a
single vector i. Hence, a single compression set Si will be used for the final classifier.
However, the radius ρ_i for each i ∈ i will be chosen stochastically according to the
posterior Q. Hence we consider posteriors Q such that Q(i′, ρ) = I(i = i′) Q_i(ρ),
where i is the vector of indices chosen by the learner. Hence, given a training set
S, the true risk R(GQi ) of GQi and its empirical risk RS (GQi ) are defined by
    R(G_{Q_i}) \stackrel{def}{=} \mathbf{E}_{\rho\sim Q_i} R(R(\rho, S_i)) \quad ; \quad R_S(G_{Q_i}) \stackrel{def}{=} \mathbf{E}_{\rho\sim Q_i} R_{S_{\bar{i}}}(R(\rho, S_i)),

where ī denotes the set of indices not present in i. Thus, i ∩ ī = ∅ and i ∪ ī =
(1, \ldots, m).
In contrast with the posterior Q, the prior P assigns a non-zero weight to several
vectors i. Let P_I(i) denote the prior probability assigned to vector i and let P_i(ρ)
denote the probability density function associated with prior P given i.¹ The risk
bound depends on the Kullback-Leibler divergence KL(Q‖P) between the posterior
Q and the prior P which, in our case, gives

    KL(Q_i \,\|\, P) = \mathbf{E}_{\rho\sim Q_i} \ln \frac{Q_i(\rho)}{P_I(i)\, P_i(\rho)}.

¹We assume that the examples in S_i are ordered as in S so that the kth radius value
in ρ is assigned to the kth example in S_i.
For these classes of posteriors Q and priors P , the PAC-Bayes theorem of Laviolette
and Marchand (2005) reduces to the following simpler version.
Theorem 1 (Laviolette and Marchand (2005)) Given all our previous definitions, for any prior P and for any δ ∈ (0, 1]:

    \Pr_{S\sim D^m}\left( \forall Q_i : \mathrm{kl}(R_S(G_{Q_i}) \,\|\, R(G_{Q_i})) \leq \frac{1}{m-|i|}\left[ KL(Q_i \,\|\, P) + \ln\frac{m+1}{\delta} \right] \right) \geq 1-\delta,

where

    \mathrm{kl}(q\|p) \stackrel{def}{=} q \ln\frac{q}{p} + (1-q) \ln\frac{1-q}{1-p}.
To obtain a bound for R(G_{Q_i}) we need to specify Q_i(ρ), P_I(i), and P_i(ρ).
Since all vectors i having the same size |i| are, a priori, equally "good", we choose

    P_I(i) = \binom{m}{|i|}^{-1} p(|i|)

for any p(·) such that \sum_{d=0}^{m} p(d) = 1. We could choose p(d) = 1/(m + 1) for d ∈
{0, 1, . . . , m} if we have complete ignorance about the size |i| of the final classifier.
But since the risk bound will deteriorate for large |i|, it is generally preferable to
choose, for p(d), a slowly decreasing function of d.
For the specification of P_i(ρ), we assume that each radius value, in some predefined
interval [0, R], is equally likely to be chosen for each ρ_i such that i ∈ i. Here R is
some "large" distance specified a priori. For Q_i(ρ), a margin interval [a_i, b_i] ⊆ [0, R]
of equally good radius values is chosen by the learner for each i ∈ i. Hence, we choose

    P_i(\rho) = \prod_{i\in i} \frac{1}{R} = \left(\frac{1}{R}\right)^{|i|} \quad ; \quad Q_i(\rho) = \prod_{i\in i} \frac{1}{b_i - a_i}.
Therefore, the Gibbs classifier returned by the learner will draw each radius ρ_i
uniformly in [a_i, b_i]. A deterministic classifier is then specified by fixing each radius
value ρ_i ∈ [a_i, b_i]. It is tempting at this point to choose ρ_i = (a_i + b_i)/2 ∀i ∈ i (i.e.,
in the middle of each interval). However, we will see shortly that the PAC-Bayes
theorem offers a better guarantee for another type of deterministic classifier.
Consequently, with these choices for Q_i(ρ), P_I(i), and P_i(ρ), the KL divergence
between Q_i and P is given by

    KL(Q_i \,\|\, P) = \ln\binom{m}{|i|} + \ln\frac{1}{p(|i|)} + \sum_{i\in i} \ln\frac{R}{b_i - a_i}.
Notice that the KL divergence is small for small values of |i| (whenever p(|i|) is not
too small) and for large margin values (b_i − a_i). Hence, the KL divergence term in
Theorem 1 favors both sparsity (small |i|) and large margins. In practice, then,
the minimum might occur for some G_{Q_i} that sacrifices sparsity whenever larger
margins can be found.
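As a numerical sanity check of the expression above, a small Python function can evaluate this KL term for given interval choices. The function and argument names are ours; `p` is any prior over classifier sizes summing to one.

```python
import math

def kl_term(m, intervals, R, p):
    """KL(Q_i || P) for the choices above:
    ln C(m, |i|) + ln(1 / p(|i|)) + sum over balls of ln(R / (b_i - a_i)).
    `intervals` is a list of (a_i, b_i) pairs, one per ball."""
    size = len(intervals)
    value = math.log(math.comb(m, size)) + math.log(1.0 / p(size))
    for a, b in intervals:
        value += math.log(R / (b - a))
    return value
```

With a single ball whose interval spans all of [0, R], the margin term vanishes and only the combinatorial and p(|i|) terms remain; narrowing any interval (shrinking the margin) increases the KL term, as the discussion above indicates.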
Since the posterior Q is identified by i and by the intervals [a_i, b_i] ∀i ∈ i, we will
now refer to the Gibbs classifier G_{Q_i} by G^i_{ab}, where a and b are the vectors formed
by the unions of the a_i's and b_i's respectively. To obtain a risk bound for G^i_{ab}, we need
to find a closed-form expression for R_S(G^i_{ab}). For this task, let U[a, b] denote the
uniform distribution over [a, b] and let ξ^i_{a,b}(x) be the probability that a ball with
center x_i assigns to x the class label y_i when its radius ρ is drawn according to
U[a, b]:

    \xi^i_{a,b}(x) \stackrel{def}{=} \Pr_{\rho\sim U[a,b]}(h_{i,\rho}(x) = y_i) = \begin{cases} 1 & \text{if } d(x, x_i) \leq a \\ \frac{b - d(x,x_i)}{b-a} & \text{if } a \leq d(x, x_i) \leq b \\ 0 & \text{if } d(x, x_i) \geq b. \end{cases}

Therefore,

    \zeta^i_{a,b}(x) \stackrel{def}{=} \Pr_{\rho\sim U[a,b]}(h_{i,\rho}(x) = 1) = \begin{cases} \xi^i_{a,b}(x) & \text{if } y_i = 1 \\ 1 - \xi^i_{a,b}(x) & \text{if } y_i = 0. \end{cases}

Now let G^i_{ab}(x) denote the probability that C_{i,ρ}(x) = 1 when each ρ_i ∈ ρ is drawn
according to U[a_i, b_i]. We then have

    G^i_{ab}(x) = \prod_{i\in i} \zeta^i_{a_i,b_i}(x).
Consequently, the risk R_{(x,y)}(G^i_{ab}) on a single example (x, y) is given by G^i_{ab}(x) if
y = 0 and by 1 − G^i_{ab}(x) otherwise. Therefore

    R_{(x,y)}(G^i_{ab}) = y(1 - G^i_{ab}(x)) + (1-y)\,G^i_{ab}(x) = (1-2y)(G^i_{ab}(x) - y).

Hence, the empirical risk R_S(G^i_{ab}) of the Gibbs classifier G^i_{ab} is given by

    R_S(G^i_{ab}) = \frac{1}{m-|i|} \sum_{j\in\bar{i}} (1-2y_j)(G^i_{ab}(x_j) - y_j).

From this expression we see that R_S(G^i_{ab}) is small when G^i_{ab}(x_j) ≈ y_j ∀j ∈ ī.
Training points where G^i_{ab}(x_j) ≈ 1/2 should therefore be avoided.
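These closed forms are straightforward to evaluate numerically. The sketch below uses our own naming and assumes a Euclidean metric; for simplicity its risk function averages over all supplied examples rather than only over those outside the compression set.

```python
import numpy as np

def xi_prob(dist, a, b):
    """Probability that a radius rho ~ U[a, b] covers a point at distance
    `dist` from the ball center (the piecewise-linear form above)."""
    if dist <= a:
        return 1.0
    if dist >= b:
        return 0.0
    return (b - dist) / (b - a)

def zeta_prob(x, center, center_label, a, b):
    """Probability that the stochastic ball outputs class 1 on x."""
    covered = xi_prob(np.linalg.norm(x - center), a, b)
    return covered if center_label == 1 else 1.0 - covered

def gibbs_positive_prob(x, centers, labels, intervals):
    """G^i_ab(x): probability that the stochastic conjunction outputs 1."""
    g = 1.0
    for center, label, (a, b) in zip(centers, labels, intervals):
        g *= zeta_prob(x, center, label, a, b)
    return g

def gibbs_empirical_risk(X, Y, centers, labels, intervals):
    """Average of (1 - 2y)(G(x) - y) over the given examples."""
    terms = [(1 - 2 * y) * (gibbs_positive_prob(x, centers, labels, intervals) - y)
             for x, y in zip(X, Y)]
    return float(np.mean(terms))
```

A point inside a ball's inner radius contributes with certainty, a point beyond the outer radius contributes nothing, and a point inside the margin contributes linearly, which is what makes the empirical Gibbs risk a "soft" quantity.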
The PAC-Bayes theorem below provides a risk bound for the Gibbs classifier G^i_{ab}.
Since the Bayes classifier B^i_{ab} just performs a majority vote under the same posterior
distribution as the one used by G^i_{ab}, we have that B^i_{ab}(x) = 1 iff G^i_{ab}(x) > 1/2.
From the above definitions, note that the decision surface of the Bayes classifier,
given by G^i_{ab}(x) = 1/2, differs from the decision surface of classifier C_{i,ρ} when
ρ_i = (a_i + b_i)/2 ∀i ∈ i. In fact, there does not exist any classifier C_{i,ρ} that has the
same decision surface as the Bayes classifier B^i_{ab}. From the relation between B^i_{ab} and
G^i_{ab}, it also follows that R_{(x,y)}(B^i_{ab}) ≤ 2R_{(x,y)}(G^i_{ab}) for any (x, y). Consequently,
R(B^i_{ab}) ≤ 2R(G^i_{ab}). Hence, we have the following theorem.
Theorem 2 Given all our previous definitions, for any δ ∈ (0, 1], for any p satisfying \sum_{d=0}^{m} p(d) = 1, and for any fixed distance value R, we have:

    \Pr_{S\sim D^m}\Bigg( \forall i, a, b : R(G^i_{ab}) \leq \sup\bigg\{ \epsilon : \mathrm{kl}(R_S(G^i_{ab}) \,\|\, \epsilon) \leq \frac{1}{m-|i|}\bigg[ \ln\binom{m}{|i|} + \ln\frac{1}{p(|i|)} + \sum_{i\in i} \ln\frac{R}{b_i - a_i} + \ln\frac{m+1}{\delta} \bigg] \bigg\} \Bigg) \geq 1-\delta.

Furthermore:

    R(B^i_{ab}) \leq 2R(G^i_{ab}) \quad \forall i, a, b.
Recall that the KL divergence is small for small values of |i| (whenever p(|i|) is not
too small) and for large margin values (b_i − a_i). Furthermore, the Gibbs empirical
risk R_S(G^i_{ab}) is small when the training points are located far away from the Bayes
decision surface G^i_{ab}(x) = 1/2 (with G^i_{ab}(x_j) ≈ y_j ∀j ∈ ī). Consequently, the
Gibbs classifier with the smallest guarantee of risk should perform a non-trivial
margin-sparsity trade-off.
4 A Soft Greedy Learning Algorithm
Theorem 2 suggests that the learner should try to find the Bayes classifier B^i_{ab}
that uses a small number of balls (i.e., a small |i|), each with a large separating
margin (b_i − a_i), while keeping the empirical Gibbs risk R_S(G^i_{ab}) at a low value. To
achieve this goal, we have adapted the greedy algorithm for the set covering machine
(SCM) proposed by Marchand and Shawe-Taylor (2002). It consists of choosing the
(Boolean-valued) feature i with the largest utility U_i, defined as U_i = |N_i| − p|P_i|,
where N_i is the set of negative examples covered (classified as 0) by feature i, P_i
is the set of positive examples misclassified by this feature, and p is a learning
parameter that gives a penalty p for each misclassified positive example. Once the
feature with the largest U_i is found, we remove N_i and P_i from the training set S
and then repeat (on the remaining examples) until either no more negative examples
are present or a maximum number of features has been reached.
of a deterministic classifier. Since the Gibbs risk is a ?soft measure? that uses the
i
piece-wise linear functions ?a,b
instead of ?hard? indicator functions, we need a
?softer? version of the utility function Ui . Indeed, a negative example that falls in
i
the linear region of a ?a,b
is in fact partly covered. Following this observation, let
k be the vector of indices of the examples that we have used as ball centers so far
for the construction of the classifier. Let us first define the covering value C(Gkab )
k
of Gkab by the ?amount? of negative examples assigned to class 0 by Gab
:
X
?
?
def
(1 ? yj ) 1 ? Gkab (xj ) .
C(Gkab ) =
j?k
We also define the positive-side error E(Gkab ) of Gkab as the ?amount? of positive
examples assigned to class 0 :
X ?
?
def
yj 1 ? Gkab (xj ) .
E(Gkab ) =
j?k
We now want to add another ball, centered on an example with index i, to obtain
a new vector k′ containing this new index in addition to those present in k. Hence,
we now introduce the covering contribution of ball i (centered on x_i) as

    C^k_{ab}(i) \stackrel{def}{=} C(G^{k'}_{a'b'}) - C(G^k_{ab}) = (1-y_i)\left(1 - \zeta^i_{a_i,b_i}(x_i)\right)G^k_{ab}(x_i) + \sum_{j\in\bar{k}'} (1-y_j)\left(1 - \zeta^i_{a_i,b_i}(x_j)\right)G^k_{ab}(x_j),

and the positive-side error contribution of ball i as

    E^k_{ab}(i) \stackrel{def}{=} E(G^{k'}_{a'b'}) - E(G^k_{ab}) = y_i\left(1 - \zeta^i_{a_i,b_i}(x_i)\right)G^k_{ab}(x_i) + \sum_{j\in\bar{k}'} y_j\left(1 - \zeta^i_{a_i,b_i}(x_j)\right)G^k_{ab}(x_j).

Typically, the covering contribution of ball i should increase its "utility" and its
positive-side error should decrease it. Hence, we define the utility U^k_{ab}(i) of adding
ball i to G^k_{ab} as

    U^k_{ab}(i) \stackrel{def}{=} C^k_{ab}(i) - p\,E^k_{ab}(i),
where parameter p represents the penalty of misclassifying a positive example. For
a fixed value of p, the "soft greedy" algorithm simply consists of adding, to the
current Gibbs classifier, a ball with maximum added utility until either the maximum number of possible features (balls) has been reached or all the negative
examples have been (totally) covered. It is understood that, during this soft greedy
algorithm, we can remove an example (x_j, y_j) from S whenever it is totally covered.
This occurs whenever G^k_{ab}(x_j) = 0.
The term \sum_{i\in i} \ln(R/(b_i - a_i)), present in the risk bound of Theorem 2, favors "soft
balls" having large margins b_i − a_i. Hence, we introduce a margin parameter γ ≥ 0
that we use as follows. At each greedy step, we first search among balls having
b_i − a_i = γ. Once such a ball, of center x_i, having maximum utility has been found,
we try to increase its utility further by searching among all possible values of a_i
and b_i > a_i while keeping its center x_i fixed². Both p and γ will be chosen by cross
validation on the training set.
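One step of the soft greedy algorithm can be sketched as follows. This is our own simplified version: it scores a candidate ball against all examples rather than maintaining the k̄′ bookkeeping of the formulas above, and it assumes a Euclidean metric.

```python
import numpy as np

def soft_utility(cand, X, Y, g_current, a, b, penalty):
    """Soft utility of adding a ball of margin interval [a, b] centered on
    X[cand]: covering contribution minus `penalty` times the positive-side
    error contribution. g_current[j] is the current probability G^k_ab(x_j)
    that example j is still assigned class 1 (all ones initially)."""
    center, center_label = X[cand], Y[cand]
    cover, pos_err = 0.0, 0.0
    for j in range(len(X)):
        d = np.linalg.norm(X[j] - center)
        covered = 1.0 if d <= a else (0.0 if d >= b else (b - d) / (b - a))
        zeta = covered if center_label == 1 else 1.0 - covered
        shift = (1.0 - zeta) * g_current[j]  # mass newly pushed toward class 0
        if Y[j] == 0:
            cover += shift
        else:
            pos_err += shift
    return cover - penalty * pos_err

def best_ball(X, Y, g_current, a, b, penalty):
    """Greedy step: index of the candidate center with maximum utility."""
    utilities = [soft_utility(i, X, Y, g_current, a, b, penalty)
                 for i in range(len(X))]
    return int(np.argmax(utilities))
```

After the best ball is added, g_current would be updated by multiplying each entry by that ball's ζ value, and totally covered examples (those with G^k_{ab}(x_j) = 0) could be dropped, as described above.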
We conclude this section with an analysis of the running time of this soft greedy
learning algorithm for fixed p and γ. For each potential ball center, we first sort the
m − 1 other examples with respect to their distances from the center in O(m log m)
time. Then, for this center x_i, the set of a_i values that we examine are those
specified by the distances (from x_i) of the m − 1 sorted examples³. Since the
examples are sorted, it takes time O(km) to compute the covering contributions
and the positive-side error for all the m − 1 values of a_i. Here k is the largest
number of examples falling into the margin. We always use small enough γ
values to have k ∈ O(log m) since, otherwise, the results are terrible. It therefore
takes time O(m log m) to compute the utility values of all the m − 1 different
balls of a given center. This gives a time of O(m² log m) to compute the utilities
for all the possible m centers. Once a ball with the largest utility value has been
chosen, we then try to increase its utility further by searching among O(m²) pair
values for (a_i, b_i). We then remove the examples covered by this ball and repeat
the algorithm on the remaining examples. It is well known that greedy algorithms
of this kind have the following guarantee: if there exist r balls that cover all the
m examples, the greedy algorithm will find at most r ln(m) balls. Since we almost
always have r ∈ O(1), the running time of the whole algorithm will almost always
be in O(m² log²(m)).
5 Empirical Results on Natural Data
We have compared the new PAC-Bayes learning algorithm (called here SCM-PB)
with the old algorithm (called here SCM). Both of these algorithms were also compared with the SVM equipped with an RBF kernel of variance σ² and a soft margin
parameter C. Each SCM algorithm used the L2 metric since this is the metric
present in the argument of the RBF kernel. However, in contrast with Laviolette
et al. (2005), each SCM was constrained to use only balls having centers of the same
class (negative for conjunctions and positive for disjunctions).
²The possible values for a_i and b_i are defined by the location of the training points.
³Recall that for each value of a_i, the value of b_i is set to a_i + γ at this stage.
Table 1: SVM and SCM results on UCI data sets.

Data Set                    SVM results             SCM          SCM-PB
Name      train  test     C    σ²   SVs  errs     b   errs     b    γ    errs
breastw    343   340      1    5     38   15      1    12      4   .08    10
bupa       170   175      2   .17   169   66      5    62      6   .1     67
credit     353   300    100    2    282   51      3    58     11   .09    55
glass      107   107     10   .17    51   29      5    22     16   .04    19
heart      150   147      1   .17    64   26      1    23      1    0     28
haberman   144   150      2    1     81   39      1    39      1   .2     38
USvotes    235   200      1   25     53   13     10    27     18   .14    12
Each algorithm was tested on the UCI data sets of Table 1. Each data set was randomly split into two parts. About half of the examples were used for training and
the remaining set of examples was used for testing. The corresponding values for
these numbers of examples are given in the ?train? and ?test? columns of Table 1.
The learning parameters of all algorithms were determined from the training set
only. The parameters C and ? for the SVM were determined by the 5-fold cross
validation (CV) method performed on the training set. The parameters that gave
the smallest 5-fold CV error were then used to train the SVM on the whole training
set and the resulting classifier was then run on the testing set. Exactly the same
method (with the same 5-fold split) was used to determine the learning parameters
of both SCM and SCM-PB.
The SVM results are reported in Table 1, where the "SVs" column refers to the
number of support vectors present in the final classifier and the "errs" column refers
to the number of classification errors obtained on the testing set. This notation is
used also for all the SCM results reported in Table 1. In addition to this, the
"b" and "γ" columns refer, respectively, to the number of balls and the margin
parameter (divided by the average distance between the positive and the negative
examples). The results reported for SCM-PB refer to the Bayes classifier only. The
results for the Gibbs classifier are similar. We observe that, except for bupa and
heart, the generalization error of SCM-PB was always smaller than that of SCM. However,
the only significant difference occurs on USvotes. We also observe that SCM-PB
generally sacrifices sparsity (compared to SCM) to obtain some margin ? > 0.
References
B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin
classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning
Theory, pages 144-152. ACM Press, 1992.
John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.
François Laviolette and Mario Marchand. PAC-Bayes risk bounds for sample-compressed
Gibbs classifiers. Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 481-488, 2005.
François Laviolette, Mario Marchand, and Mohak Shah. Margin-sparsity trade-off for the
set covering machine. Proceedings of the 16th European Conference on Machine Learning
(ECML 2005); Lecture Notes in Artificial Intelligence, 3720:206-217, 2005.
Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine
Learning Research, 3:723-746, 2002.
David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999a.
David A. McAllester. PAC-Bayesian model averaging. In COLT, pages 164-170, 1999b.
Response Analysis of Neuronal Population with
Synaptic Depression
Wentao Huang
Institute of Intelligent Information
Processing, Xidian University,
Xi'an 710071, China
[email protected]
Licheng Jiao
Institute of Intelligent Information
Processing, Xidian University,
Xi'an 710071, China
[email protected]
Shan Tan
Institute of Intelligent Information
Processing, Xidian University,
Xi'an 710071, China
[email protected]
Maoguo Gong
Institute of Intelligent Information
Processing, Xidian University,
Xi'an 710071, China
[email protected]
Abstract
In this paper, we aim at analyzing the characteristic of neuronal population responses to instantaneous or time-dependent inputs and the role of
synapses in neural information processing. We have derived an evolution equation of the membrane potential density function with synaptic
depression, and obtain the formulas for analytic computing the response
of instantaneous re rate. Through a technical analysis, we arrive at several signi cant conclusions: The background inputs play an important
role in information processing and act as a switch betwee temporal integration and coincidence detection. the role of synapses can be regarded
as a spatio-temporal lter; it is important in neural information processing for the spatial distribution of synapses and the spatial and temporal
relation of inputs. The instantaneous input frequency can affect the response amplitude and phase delay.
1
Introduction
Noise has an important impact on information processing of the nervous system in vivo. It
is significant for us to study the stimulus-and-response behavior of neuronal populations,
especially to transient or time-dependent inputs in a noisy environment, viz. given this stochastic environment, the neuronal output is typically characterized by the instantaneous
firing rate. It has come in for a great deal of attention in recent years [1-4]. Moreover, it
has been revealed recently that synapses have a more active role in information processing [5-7].
The synapses are highly dynamic and show use-dependent plasticity over a wide range
of time scales. Synaptic short-term depression is one of the most common expressions
of plasticity. At synapses with this type of modulation, pre-synaptic activity produces a
decrease in synaptic efficacy. The present work is concerned with investigating the collective dynamics of a neuronal population with synaptic depression and
the instantaneous response to time-dependent inputs. First, we deduce a one-dimensional
Fokker-Planck (FP) equation by reducing the high-dimensional FP equation. Then, we derive the stationary solution and the response of the instantaneous firing rate from it. Finally, the
models are analyzed and discussed in theory and some conclusions are presented.
2
Models and Methods
2.1
Single Neuron Models and Density Evolution Equations
Our approach is based on integrate-and-fire (IF) neurons. The population density based
on the integrate-and-fire neuronal model is low-dimensional and thus can be computed
efficiently, although the approach could be generalized to other neuron models. It is completely characterized by its membrane potential below threshold. Details of the generation
of an action potential above the threshold are ignored. Synaptic and external inputs are
summed until the potential reaches a threshold where a spike is emitted. The general form of the
dynamics of the membrane potential v in the IF model can be written as
  τ_v dv(t)/dt = −v(t) + S_e(t) + τ_v Σ_{k=1}^{N} J_k(t) δ(t − t_k^sp),   (1)
where 0 ≤ v ≤ 1, τ_v is the membrane time constant, S_e(t) is an external current directly
injected in the neuron, N is the number of synaptic connections, t_k^sp is the firing time of
a presynaptic neuron k and obeys a Poisson distribution with mean rate λ_k, and J_k(t)
is the efficacy of synapse k. The transmembrane potential, v, has been normalized so that
v = 0 marks the rest state, and v = 1 the threshold for firing. When the latter is achieved, v
is reset to zero. J_k(t) = A·D_k(t), where A is a constant representing the absolute synaptic
efficacy corresponding to the maximal postsynaptic response obtained if all the synaptic
resources are released at once, and D_k(t) evolves in accordance with a complex dynamics rule.
We use the phenomenological model by Tsodyks & Markram [7] to simulate short-term
synaptic depression:
  dD_k(t)/dt = (1 − D_k(t))/τ_d − U_k D_k(t) δ(t − t_k^sp),   (2)
where D_k is a 'depression' variable, D_k ∈ [0, 1], τ_d is the recovery time constant, and U_k is a
constant determining the step decrease in D_k. Using the diffusion approximation, we can
get from (1) and (2)
  τ_v dv(t)/dt = −v(t) + S_e(t) + τ_v Σ_{k=1}^{N} A D_k (λ_k + √λ_k ξ_k(t)),
  dD_k(t)/dt = (1 − D_k(t))/τ_d − U_k D_k (λ_k + √λ_k ξ_k(t)),   (3)

where ξ_k(t) is Gaussian white noise. The Fokker-Planck equation of equations (3) is
  ∂p(t, v, D)/∂t = ∂/∂v[((v − K_v)/τ_v) p] − Σ_{k=1}^{N} ∂/∂D_k (K_{D_k} p)
      − Σ_{k=1}^{N} ∂²/(∂v ∂D_k) (λ_k A U_k D_k² p)
      + (1/2) { ∂²/∂v² ( Σ_{k=1}^{N} λ_k A² D_k² p ) + Σ_{k=1}^{N} ∂²/∂D_k² ( λ_k U_k² D_k² p ) },   (4)

in which

  K_v = S_e + Σ_{k=1}^{N} τ_v λ_k A D_k,   K_{D_k} = (1 − D_k)/τ_d − λ_k U_k D_k.
where D = (D_1, D_2, …, D_N), and

  p(t, v, D) = p_d(t, D|v) p_v(t, v),   ∫ p_d(t, D|v) dD = 1.   (5)

We assume that D_1, D_2, …, D_N are uncorrelated; then we have

  p_d(t, D|v) = ∏_{k=1}^{N} p̃_k^d(t, D_k|v),   (6)

where p̃_k^d(t, D_k|v) is the conditional probability density. Moreover, we can assume

  p̃_k^d(t, D_k|v) ≈ p_k^d(t, D_k).   (7)

Substituting (5) into (4), we get

  p_d ∂p_v/∂t + p_v ∂p_d/∂t = ∂/∂v[((v − K_v)/τ_v) p_v p_d] − p_v Σ_{k=1}^{N} ∂/∂D_k (K_{D_k} p_d)
      − Σ_{k=1}^{N} ∂²/(∂v ∂D_k) (λ_k A U_k D_k² p_v p_d)
      + (1/2) { ∂²/∂v² ( Σ_{k=1}^{N} λ_k A² D_k² p_v p_d ) + Σ_{k=1}^{N} ∂²/∂D_k² ( λ_k U_k² D_k² p_v p_d ) }.   (8)

Integrating equation (8) over D, we get

  ∂p_v(t, v)/∂t = ∂/∂v[((v − K̃_v)/τ_v) p_v(t, v)] + (Q_v/(2τ_v)) ∂²p_v(t, v)/∂v²,   (9)
where

  K̃_v = ∫ K_v p_d dD = S_e + Σ_{k=1}^{N} τ_v λ_k A m_k,   Q_v = Σ_{k=1}^{N} τ_v A² λ_k κ_k,
  m_k = ∫ D_k p_k^d(t, D_k) dD_k,   κ_k = ∫ D_k² p_k^d(t, D_k) dD_k,   (10)

and p_k^d(t, D_k) satisfies the following Fokker-Planck equation:

  ∂p_k^d/∂t = −∂/∂D_k (K_{D_k} p_k^d) + (1/2) ∂²/∂D_k² ( λ_k U_k² D_k² p_k^d ).   (11)
From (10) and (11), we can get

  dm_k/dt = −(1/τ_d + U_k λ_k) m_k + 1/τ_d,
  dκ_k/dt = −(2/τ_d + (2U_k − U_k²) λ_k) κ_k + 2m_k/τ_d.   (12)
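The moment equations (12) are plain linear ODEs and can be integrated directly. The sketch below uses forward Euler with illustrative parameter values (they are assumptions, not taken from the paper) and checks that, for a constant input rate, the moments relax to the stationary values given later in equation (15):

```python
# Forward-Euler integration of the moment equations (12) for a constant
# input rate lambda_k. Parameter values (U, tau_d, lam) are illustrative.

def integrate_moments(lam, U=0.5, tau_d=1.0, dt=1e-3, T=20.0):
    m, kappa = 1.0, 1.0                 # start from the fully recovered state
    for _ in range(int(T / dt)):
        dm = -(1.0 / tau_d + U * lam) * m + 1.0 / tau_d
        dk = -(2.0 / tau_d + (2.0 * U - U * U) * lam) * kappa + 2.0 * m / tau_d
        m += dt * dm
        kappa += dt * dk
    return m, kappa

lam = 10.0                               # input rate in Hz, illustrative
m, kappa = integrate_moments(lam)

# Stationary values predicted by equation (15).
U, tau_d = 0.5, 1.0
m0 = 1.0 / (1.0 + U * tau_d * lam)
k0 = 2.0 * m0 / (2.0 + tau_d * (2.0 * U - U * U) * lam)
print(abs(m - m0) < 1e-6, abs(kappa - k0) < 1e-6)
```

Since the system is linear, the Euler fixed point coincides with the stationary solution, so both printed comparisons hold once the transient has decayed.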
Let

  J_v(t, v) = ((−v + K̃_v)/τ_v) p_v(t, v) − (Q_v/(2τ_v)) ∂p_v(t, v)/∂v,   r(t) = J_v(t, 1),   (13)

where J_v(t, v) is the probability flux of p_v and r(t) is the firing rate. The boundary conditions of
equation (9) are

  p_v(t, 1) = 0,   ∫_0^1 p_v(t, v) dv = 1,   r(t) = J_v(t, 0).   (14)
2.2
Stationary Solution and Response Analysis
When the system is in the stationary state, ∂p_v/∂t = 0, dm_k/dt = 0, dκ_k/dt = 0;
p_v(t, v) = p_v⁰(v), r(t) = r₀, m_k(t) = m_k⁰, κ_k(t) = κ_k⁰ and λ_k(t) = λ_k⁰ are time-independent. From (9), (12), (13) and (14), we get

  p_v⁰(v) = (2τ_v r₀/Q_v⁰) exp[−(v − K̃_v⁰)²/Q_v⁰] ∫_v^1 exp[(v′ − K̃_v⁰)²/Q_v⁰] dv′,   0 ≤ v ≤ 1,

  r₀ = ( τ_v √π ∫_{−K̃_v⁰/√Q_v⁰}^{(1−K̃_v⁰)/√Q_v⁰} exp(u²) [erf(K̃_v⁰/√Q_v⁰) + erf(u)] du )⁻¹,

  K̃_v⁰ = S_e + Σ_{k=1}^{N} τ_v A λ_k⁰ m_k⁰,   Q_v⁰ = Σ_{k=1}^{N} τ_v A² λ_k⁰ κ_k⁰,

  m_k⁰ = 1/(1 + U_k τ_d λ_k⁰),   κ_k⁰ = 2m_k⁰/(2 + τ_d (2U_k − U_k²) λ_k⁰).   (15)
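The stationary rate formula in (15) can be evaluated with a one-dimensional quadrature. The sketch below uses the trapezoidal rule and math.erf; the values of K̃_v⁰, Q_v⁰ and τ_v are illustrative assumptions, not fitted to the paper's figures:

```python
# Numerical evaluation of the stationary firing rate r0 of equation (15):
# r0 = [ tau_v * sqrt(pi) * Int exp(u^2) (erf(K/sqrt(Q)) + erf(u)) du ]^(-1),
# integrated over u from -K/sqrt(Q) to (1 - K)/sqrt(Q).

import math

def stationary_rate(K, Q, tau_v=0.015, n=20000):
    a = -K / math.sqrt(Q)              # lower integration limit
    b = (1.0 - K) / math.sqrt(Q)       # upper integration limit
    h = (b - a) / n
    erf_k = math.erf(K / math.sqrt(Q))
    total = 0.0
    for i in range(n + 1):             # trapezoidal rule
        u = a + i * h
        f = math.exp(u * u) * (erf_k + math.erf(u))
        total += f if 0 < i < n else 0.5 * f
    integral = total * h
    return 1.0 / (tau_v * math.sqrt(math.pi) * integral)

# Suprathreshold mean drive (K > 1) yields a higher rate than subthreshold
# drive (K < 1) at the same noise level, as expected.
r_sub = stationary_rate(K=0.8, Q=0.05)
r_supra = stationary_rate(K=1.2, Q=0.05)
print(r_sub < r_supra)
```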
Sometimes, we are more interested in the instantaneous response to time-dependence random uctuation inputs. The inputs take the form:
k
where "k
1. Then mk and
k
0
k (1
=
+ "k
1
k (t));
(16)
have the forms, i.e.,
mk = m0k (1 + "k m1k (t) + O("2k ));
=
k
0
k (1
1
k (t)
+ "k
+ O("2k ));
(17)
~ v and Qv are
and K
~ v = Se +
K
N
X
0 0
v A k mk
+
k=1
Qv =
N
X
N
X
0 0
1
v A k mk ( k
"k
+ m1k )) + O("2k );
k=1
2 0 0
vA k k
+
N
X
"k
1
2 0 0
vA k k( k
1
k)
+
+ O("2k ):
(18)
k=1
k=1
Substituting (17) into (12), and ignoring the high order item, it yields:
  dm_k¹/dt = −(1/τ_d + U_k λ_k⁰) m_k¹ − U_k λ_k⁰ λ_k¹(t),
  dκ_k¹/dt = −(2/τ_d + (2U_k − U_k²) λ_k⁰) κ_k¹ + 2m_k¹/τ_d − (2U_k − U_k²) λ_k⁰ λ_k¹(t).   (19)
With the definitions

  K̃_v = K̃_v⁰ + ε K̃_v¹(t) + O(ε²),   Q_v = Q_v⁰ + ε Q_v¹(t) + O(ε²),
  p_v = p_v⁰ + ε p₁(t) + O(ε²),   r = r₀ + ε r₁(t) + O(ε²),   (20)

where ε ≪ 1, and boundary conditions of p₁

  p₁(t, 1) = 0,   ∫_0^1 p₁(t, v) dv = 0,   (21)
using the perturbative expansion in powers of ε, we can get

  0 = ∂/∂v[((v − K̃_v⁰)/τ_v) p_v⁰(v)] + (Q_v⁰/(2τ_v)) ∂²p_v⁰(v)/∂v²,
  ∂p₁/∂t = ∂/∂v[((v − K̃_v⁰)/τ_v) p₁] + (Q_v⁰/(2τ_v)) ∂²p₁/∂v² − (1/τ_v) ∂f₀(t, v)/∂v,
  f₀(t, v) = K̃_v¹(t) p_v⁰ − (Q_v¹(t)/2) ∂p_v⁰/∂v,
  r₁ = −(Q_v⁰/(2τ_v)) ∂p₁(t, 1)/∂v − (Q_v¹(t)/(2τ_v)) ∂p_v⁰(1)/∂v.   (22)
For oscillatory inputs K̃_v¹(t) = k(ω)e^{jωt}, Q_v¹(t) = q(ω)e^{jωt}, the output has the same
frequency and takes the forms p₁(t, v) = p_ω(ω, v)e^{jωt}, ∂p₁/∂t = jω p₁.

For inputs that vary on a slow enough time scale, satisfying τ_l = τ_v ω ≪ 1, we define

  p₁ = p₁⁰ + τ_l p₁¹ + O(τ_l²),   r₁ = r₁⁰ + τ_l r₁¹ + O(τ_l²).   (23)
Using the perturbative expansion in powers of τ_l, we get

  ∂f₀(t, v)/∂v = ∂/∂v[(v − K̃_v⁰) p₁⁰] + (Q_v⁰/2) ∂²p₁⁰/∂v²,
  j p₁⁰ = ∂/∂v[(v − K̃_v⁰) p₁¹] + (Q_v⁰/2) ∂²p₁¹/∂v².   (24)
The solutions of equations (24) are

  p₁ⁿ = (2/Q_v⁰) exp[−(v − K̃_v⁰)²/Q_v⁰] ∫_v^1 (τ_v r₁ⁿ − F_n(v′)) exp[(v′ − K̃_v⁰)²/Q_v⁰] dv′,

  r₁ⁿ = (2r₀/Q_v⁰) ∫_0^1 exp[−(v − K̃_v⁰)²/Q_v⁰] ∫_v^1 F_n(v′) exp[(v′ − K̃_v⁰)²/Q_v⁰] dv′ dv,

  F₀ = f₀(t, v),   F₁ = j ∫_0^v p₁⁰(v′) dv′,   n = 0, 1.   (25)

In general, Q_v¹(t) ≪ K̃_v¹(t); then we have

  F₀ = f₀(t, v) ≈ K̃_v¹(t) p_v⁰.   (26)

From (23), (25) and (26), we can get
  r₁ ≈ (2r₀/Q_v⁰) K̃_v¹(t) ∫_0^1 exp[−(v − K̃_v⁰)²/Q_v⁰] ∫_v^1 p_v⁰(v′) exp[(v′ − K̃_v⁰)²/Q_v⁰] dv′ dv
     + jωτ_v (2r₀/Q_v⁰) ∫_0^1 exp[−(v − K̃_v⁰)²/Q_v⁰] ∫_v^1 [∫_0^{v′} p₁⁰(v″) dv″] exp[(v′ − K̃_v⁰)²/Q_v⁰] dv′ dv.   (27)

In the limit of high frequency inputs, i.e. 1/(τ_v ω) ≪ 1, with the definitions

  τ_h = 1/(τ_v ω),   p₁ = p_h⁰ + τ_h p_h¹ + O(τ_h²),   (28)
we obtain

  p_h⁰ = 0,   p_h¹ = j ∂f₀(t, v)/∂v,
  r₁ = −(Q_v¹(t)/(2τ_v)) ∂p_v⁰(1)/∂v − j τ_h (Q_v⁰/(2τ_v)) ∂²f₀(t, 1)/∂v² + O(τ_h²)
     ≈ (Q_v¹(t)/Q_v⁰) r₀ − j τ_h (Q_v⁰/(2τ_v)) ( K̃_v¹(t) ∂²p_v⁰(1)/∂v² − (Q_v¹(t)/2) ∂³p_v⁰(1)/∂v³ )
     = (Q_v¹(t)/Q_v⁰) r₀ − (2j τ_h K̃_v¹(t) r₀/Q_v⁰) (1 − K̃_v⁰) (1 − Q_v¹(t)/(K̃_v¹(t) Q_v⁰)).   (29)
When Q_v¹(t) ≪ K̃_v¹(t), we have

  r₁ ≈ Q_v¹(t) r₀/Q_v⁰ − (2j K̃_v¹(t) r₀/(τ_v ω Q_v⁰)) (1 − K̃_v⁰) (1 − Q_v¹(t)/(K̃_v¹(t) Q_v⁰)).   (30)

3
Discussion
In equation (15), K̃_v⁰ reflects the average intensity of the background inputs and Q_v⁰ reflects the
intensity of the background noise. When 1 ≪ τ_d U_k λ_k⁰, we have

  K̃_v⁰ ≈ S_e + Σ_{k=1}^{N} τ_v A/(τ_d U_k),   Q_v⁰ ≈ Σ_{k=1}^{N} τ_v A²/(τ_d² U_k² λ_k⁰ (1 − U_k/2)).   (31)
From (31), we can see that a change of the background inputs λ_k⁰ has little influence on K̃_v⁰,
which is dominated by the parameter τ_v A/(τ_d U_k), but more influence on Q_v⁰, which decreases
as λ_k⁰ increases.
In the low input frequency regime, from (27), we can see that increasing the input frequency ω
will result in the response amplitude and the phase delay increasing. However,
in the high input frequency limit regime, from (30), we can see that increasing the input frequency ω
will result in the response amplitude and the phase delay decreasing. Moreover, from (27) and (30), we know the stationary background firing rate r₀ plays an important
part in the response to changes in fluctuating inputs. The instantaneous response r₁ increases
monotonically with the background firing rate r₀. But the background firing rate r₀ is a function
of the background noise Q_v⁰. In equation (27), r₁/K̃_v¹ reflects the response amplitude,
and in equation (30), r₀/Q_v⁰ reflects the response amplitude. Figure 1 (A) and (B) show
how r₁/K̃_v¹ and r₀/Q_v⁰ change with the variables Q_v⁰ and K̃_v⁰ respectively. We can see that,
for the subthreshold regime (K̃_v⁰ < 1), they increase monotonically with Q_v⁰ when K̃_v⁰ is a
constant. However, for the suprathreshold regime (K̃_v⁰ > 1), they decrease monotonically
with Q_v⁰ when K̃_v⁰ is a constant. With the inputs held fixed, if the instantaneous response amplitude increases, then we can consider the role of the neurons to be more like coincidence detection
than temporal integration. And from this viewpoint, it suggests that the background inputs play an important role in information processing and act as a switch between temporal
integration and coincidence detection.
In equation (16), if the inputs take the oscillatory form λ_k¹(t) = e^{jωt}, according to (19),

Figure 1: Response amplitude versus Q_v⁰ and K̃_v⁰. (A) r₁/K̃_v¹ (for equation (27))
changes with Q_v⁰ and K̃_v⁰. (B) r₀/Q_v⁰ (for equation (30)) changes with Q_v⁰ and K̃_v⁰.

we get
  m_k¹ = −( τ_d U_k λ_k⁰ / √((τ_d ω)² + (1 + τ_d U_k λ_k⁰)²) ) e^{j(ωt − φ_m)},   (32)
where φ_m = arctan(τ_d ω/(1 + τ_d U_k λ_k⁰)) is the phase delay and
τ_d U_k λ_k⁰/√((τ_d ω)² + (1 + τ_d U_k λ_k⁰)²) is the amplitude. The minus sign shows it is a 'depression' response. The phase delay
increases with the input frequency ω and decreases with the background input λ_k⁰. The
'depression' response amplitude decreases with the input frequency ω and increases with the
background input λ_k⁰. Equations (15)-(18), (12), (19), (27), (30) and (32) show us a
point of view in which the synapses can be regarded as a time-dependent external field which
acts on the neuronal population through a time-dependent mean and variance. We
assume the inputs are composed of two parts, viz. λ_{k1}¹(t) = λ_{k2}¹(t) = (1/2)e^{jωt}; then we can
get m_{k1}¹ and m_{k2}¹. However, in general m_k¹ ≠ m_{k1}¹ + m_{k2}¹; this suggests that the
spatial distribution of synapses and inputs is important in neural information processing.
In conclusion, the role of synapses can be regarded as a spatio-temporal filter. Figure 2 shows
the results of a simulation of a network of 2000 neurons and the analytic solution of equation
(15) and equation (27) in different conditions.
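The closed form (32) gives the amplitude and phase delay of the moment response directly; a minimal sketch (τ_d and U_k as in Figure 2, input rates and frequencies illustrative assumptions):

```python
# Amplitude and phase delay of the first-order moment response m_k^1 from
# the closed form (32). tau_d = 1 s and U = 0.5 follow Figure 2; the rates
# and frequencies below are illustrative.

import math

def depression_response(omega, lam0, U=0.5, tau_d=1.0):
    denom = math.hypot(tau_d * omega, 1.0 + tau_d * U * lam0)
    amplitude = tau_d * U * lam0 / denom
    phase = math.atan2(tau_d * omega, 1.0 + tau_d * U * lam0)
    return amplitude, phase

a_slow, p_slow = depression_response(omega=2.0, lam0=70.0)
a_fast, p_fast = depression_response(omega=20.0, lam0=70.0)

# As stated in the text: the phase delay grows and the amplitude shrinks
# as the input frequency increases.
print(p_slow < p_fast, a_slow > a_fast)
```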
4
Summary
In this paper, we deal with the model of the integrate-and- re neurons with synaptic current dynamics and synaptic depression. In Section 2, rst, using the membrane potential
equation (1) and combining the synaptic depression equation (2), we derive the evolution
equation (4) of the joint distribution density function. Then, we give an approach to cut
the evolution equation of the high dimensional function down to one dimension, and get
equation (9). Finally, we give the stationary solution and the response of instantaneous re
rate to time-dependence random uctuation inputs. In Section 3, the analysis and discussion of the model is given and several signi cant conclusions are presented. This paper can
only investigate the IF neuronal model without internal connection. We can also extend to
other models, such as the non-linear IF neuronal models of sparsely connected networks of
excitatory and inhibitory neurons.
Figure 2: Simulation of a network of 2000 neurons (thin solid line) and the analytic solution
(thick solid line) for equation (15) and equation (27), with τ_v = 15 ms, τ_d = 1 s,
A = 0.5, U_k = 0.5, N = 30, ω = 6.28 Hz; λ_k¹ = sin(ωt), ε_k λ_k⁰ = 10 Hz, λ_k⁰ = 70 Hz
(A and C) and 100 Hz (B and D), S_e = 0.5 (A and B) and 0.8 (C and D). The horizontal
axis is time (0-2 s), and the vertical axis is the firing rate.
References
[1] Fourcaud N. & Brunel, N. (2005) Dynamics of the Instantaneous Firing Rate in Response to
Changes in Input Statistics. Journal of Computational Neuroscience 18(3):311-321.
[2] Fourcaud, N. & Brunel, N. (2002) Dynamics of the Firing Probability of Noisy Integrate-and-Fire
Neurons. Neural Computation 14(9):2057-2110.
[3] Gerstner, W. (2000) Population Dynamics of Spiking Neurons: Fast Transients, Asynchronous
States, and Locking. Neural Computation 12(1):43-89.
[4] Silberberg, G., Bethge, M., Markram, H., Pawelzik, K. & Tsodyks, M. (2004) Dynamics of
Population Rate Codes in Ensembles of Neocortical Neurons. J Neurophysiol 91(2):704-709.
[5] Abbott, L.F. & Regehr, W.G. (2004) Synaptic Computation. Nature 431(7010):796-803.
[6] Destexhe, A. & Marder, E. (2004) Plasticity in Single Neuron and Circuit Computations. Nature
431(7010):789-795.
[7] Markram, H., Wang, Y. & Tsodyks, M. (1998) Differential Signaling Via the Same Axon of
Neocortical Pyramidal Neurons. Proc Natl Acad Sci USA 95(9):5323-5328.
Benchmarking Non-Parametric Statistical Tests
Mikaela Keller*
IDIAP Research Institute
1920 Martigny
Switzerland
[email protected]
Samy Bengio
IDIAP Research Institute
1920 Martigny
Switzerland
[email protected]
Siew Yeung Wong
IDIAP Research Institute
1920 Martigny
Switzerland
[email protected]
Abstract
Although non-parametric tests have already been proposed for that purpose, statistical significance tests for non-standard measures (different
from the classification error) are less often used in the literature. This
paper is an attempt at empirically verifying how these tests compare with
more classical tests, on various conditions. More precisely, using a very
large dataset to estimate the whole ?population?, we analyzed the behavior of several statistical test, varying the class unbalance, the compared
models, the performance measure, and the sample size. The main result is that providing big enough evaluation sets non-parametric tests are
relatively reliable in all conditions.
1
Introduction
Statistical tests are often used in machine learning in order to assess the performance of
a new learning algorithm or model over a set of benchmark datasets, with respect to the
state-of-the-art solutions. Several researchers (see for instance [4] and [9]) have proposed
statistical tests suited for 2-class classification tasks where the performance is measured in
terms of the classification error (ratio of the number of errors and the number of examples),
which enables the use of assumptions based on the fact that the error can be seen as a sum
of random variables over the evaluation examples. On the other hand, various research domains prefer to measure the performance of their models using different indicators, such as
the F1 measure, used in information retrieval [11], described in Section 2.1. Most classical
statistical tests cannot cope directly with such measure as the usual necessary assumptions
are no longer correct, and non-parametric bootstrap-based methods are then used [5].
Since several papers already use these non-parametric tests [2, 1], we were interested in
verifying empirically how reliable they were. For this purpose, we used a very large text
categorization database (the extended Reuters dataset [10]), composed of more than 800000
examples, and concerning more than 100 categories (each document was labelled with one
or more of these categories). We purposely set aside the largest part of the dataset and
considered it as the whole population, while a much smaller part of it was used as a training
set for the models. Using the large set aside dataset part, we tested the statistical test in the
* This work was supported in part by the Swiss NSF through the NCCR on IM2 and in part by the
European PASCAL Network of Excellence, IST-2002-506778, through the Swiss OFES.
same spirit as was done in [4], by sampling evaluation sets over which we observed the
performance of the models and the behavior of the significance test.
Following the taxonomy of questions of interest defined by Dietterich in [4], we can differentiate between statistical tests that analyze learning algorithms and statistical tests that
analyze classifiers. In the first case, one intends to be robust to possible variations of the
train and evaluation sets, while in the latter, one intends to only be robust to variations of
the evaluation set. While the methods discussed in this paper can be applied alternatively
to both approaches, we concentrate here on the second one, as it is more tractable (for the
empirical section) while still corresponding to real life situations where the training set is
fixed and one wants to compare two solutions (such as during a competition).
In order to conduct a thorough analysis, we tried to vary the evaluation set size, the class
unbalance, the error measure, the statistical test itself (with its associated assumptions),
and even the closeness of the compared learning algorithms. This paper, and more precisely
Section 3, is a detailed account of this analysis. As it will be seen empirically, the closeness
of the compared learning algorithms seems to have an effect on the resulting quality of the
statistical tests: comparing an MLP and an SVM yields less reliable statistical tests than
comparing two SVMs with a different kernel. To the best of our knowledge, this has never
been considered in the literature of statistical tests for machine learning.
2
A Statistical Significance Test for the Difference of F1
Let us first remind the basic classification framework in which statistical significance tests
are used in machine learning. We consider comparing two models A and B on a two-class
classification task where the goal is to classify input examples xi into the corresponding
class yi ∈ {−1, 1}, using already trained models fA(xi) or fB(xi). One can estimate their
respective performance on some test data by counting the number of occurrences of each
possible outcome: either the obtained class corresponds to the desired class, or not. Let
Ne,A (resp. Ne,B ) be the number of errors of model A (resp. B) and N the total number
of test examples; The difference between models A and B can then be written as
  D = (Ne,A − Ne,B)/N.   (1)
The usual starting point of most statistical tests is to define the so-called null hypothesis
H0, which considers that the two models are equivalent, and then to verify how probable this
hypothesis is. Hence, assuming that D is an instance of some random variable 𝒟 which
follows some distribution, we are interested in whether

  P(|D| < |𝒟|) < α,   (2)

where α represents the risk of selecting the alternate hypothesis (the two models are different) while the null hypothesis is in fact true. This can in general be estimated easily
when the distribution of D is known. In the simplest case, known as the proportion test,
one assumes (reasonably) that the decision taken by each model on each example can be
modeled by a Bernoulli, and further assumes that the errors of the models are independent.
This is in general wrong in machine learning since the evaluation sets are the same for both
models. When N is large, this leads to estimating 𝒟 as a Normal distribution with zero mean
and standard deviation σ_D:

  σ_D = √( 2C̄(1 − C̄)/N ),   (3)

where C̄ = (Ne,A + Ne,B)/(2N) is the average classification error. In order to get rid of the wrong
independence assumption between the errors of the models, the McNemar test [6] concentrates on examples which were differently classified by the two compared models. Following the notation of [4], let N01 be the number of examples misclassified by model A but not
by model B and N10 the number of examples misclassified by model B but not by model
A. It can be shown that the following statistic is approximately distributed as a χ² with
1 degree of freedom:

  z = (|N01 − N10| − 1)² / (N01 + N10).   (4)
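The McNemar statistic of equation (4) is a one-liner on the two disagreement counts; a small sketch with made-up counts (under H0 the statistic is approximately χ² with 1 degree of freedom, so values above 3.84 reject H0 at the 5% level):

```python
# McNemar statistic of equation (4), computed from the disagreement counts.
# The counts below are made up for illustration.

def mcnemar(n01, n10):
    """Continuity-corrected McNemar statistic, chi-squared(1) under H0."""
    return (abs(n01 - n10) - 1.0) ** 2 / (n01 + n10)

z = mcnemar(n01=48, n10=25)
print(z > 3.84)  # H0 rejected at the 5% level for these counts
```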
More recently, several other statistical tests have been proposed, such as the 5x2cv
method [4] or the variance estimate proposed in [9], which both claim to better estimate
the distribution of the errors (and hence the confidence on the statistical significance of
the results). Note however that these solutions assume that the error of one model is the
average of some random variable (the error) estimated on each example. Intuitively, it will
thus tend to be Normally distributed as N grows, following the central limit theorem.
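Returning to the proportion test: equation (3) turns the observed difference into a z-score under the Normal approximation. A minimal sketch, with made-up error counts:

```python
# Proportion test of equations (1)-(3): under H0 the difference D is treated
# as Normal(0, sigma_D) with the pooled error estimate C-bar. Counts are
# made up for illustration.

import math

def proportion_test(ne_a, ne_b, n):
    c_bar = (ne_a + ne_b) / (2.0 * n)
    sigma = math.sqrt(2.0 * c_bar * (1.0 - c_bar) / n)
    d = (ne_a - ne_b) / n
    z = d / sigma                      # standard Normal under H0
    # two-sided p-value via the Normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = proportion_test(ne_a=120, ne_b=90, n=1000)
print(round(z, 2), p < 0.05)
```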
2.1
The F1 Measure
Text categorization is the task of assigning one or several categories, among a predefined set
of K categories, to textual documents. As explained in [11], text categorization is usually
solved as K 2-class classification problems, in a one-against-the-others approach. In this
field two measures are considered of importance:
  Precision = Ntp/(Ntp + Nfp),   and   Recall = Ntp/(Ntp + Nfn),

where for each category Ntp is the number of true positives (documents belonging to the
category that were classified as such), Nfp the number of false positives (documents out
of this category but classified as being part of it) and Nfn the number of false negatives
(documents from the category classified as out of it). Precision and Recall are effectiveness measures, i.e. inside the [0, 1] interval, the closer to 1 the better. For each category k,
Precision_k measures the proportion of documents of the class among the ones considered
as such by the classifier and Recall_k the proportion of documents of the class correctly
classified.
To summarize these two values, it is common to consider the so-called F1 measure [12], often used in domains such as information retrieval, text categorization, or vision processing.
F1 can be described as the inverse of the harmonic mean of Precision and Recall:
F1 = [ (1/2) (1/Recall + 1/Precision) ]^(−1) = 2 · Precision · Recall / (Precision + Recall) = 2·Ntp / (2·Ntp + Nfn + Nfp).    (5)
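Concretely, the harmonic-mean form and the count-based form of F1 coincide; a small helper of ours (not from the paper) checks this:

```python
def f1_from_counts(n_tp, n_fp, n_fn):
    """F1 of Eq. (5), computed directly from the contingency counts."""
    precision = n_tp / (n_tp + n_fp)
    recall = n_tp / (n_tp + n_fn)
    f1_harmonic = 2.0 * precision * recall / (precision + recall)
    f1_counts = 2.0 * n_tp / (2.0 * n_tp + n_fn + n_fp)
    assert abs(f1_harmonic - f1_counts) < 1e-12  # the two forms coincide
    return f1_counts

# 80 true positives, 20 false positives, 40 false negatives:
# Precision = 0.8, Recall = 2/3, so F1 = 160/220.
print(f1_from_counts(80, 20, 40))
```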
Let us consider two models A and B, which achieve a performance measured by F1,A and
F1,B respectively. The difference dF1 = F1,A − F1,B does not fit the assumptions of the
tests presented earlier. Indeed, it cannot be decomposed into a sum over the documents of
independent random variables, since the numerator and the denominator of dF1 are non-constant sums over documents of independent random variables. For the same reason F1,
while being a proportion, cannot be considered as a random variable following a Normal
distribution for which we could easily estimate the variance.
An alternative solution to measure the statistical significance of dF1 is based on the Bootstrap Percentile Test proposed in [5]. The idea of this test is to approximate the unknown
distribution of dF1 by an estimate based on bootstrap replicates of the data.
2.2 Bootstrap Percentile Test
Given an evaluation set of size N , one draws, with replacement, N samples from it. This
gives the first bootstrap replicate B1 , over which one can compute the statistics of interest,
dF1,B1 . Similarly, one can create as many bootstrap replicates Bn as needed, and for
each, compute dF1,Bn . The higher n is, the more precise should be the statistical test.
Literature [3] suggests creating at least 50/α replicates, where α is the level of the test; for
the smallest α we considered (0.01), this amounts to 5000 replicates. These 5000 estimates
dF1,Bi represent the non-parametric distribution of the random variable dF1 . From it, one
can for instance consider an interval [a, b] such that p(a < dF1 < b) = 1 − α, centered
around the mean of p(dF1). If 0 lies outside this interval, one can say that dF1 = 0 is not
among the most probable results, and thus reject the null hypothesis.
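A sketch of this procedure for dF1 (ours, not the authors' code; it uses the standard percentile interval and assumes that per-document true-positive/false-positive/false-negative indicators are kept for both models, so F1 can be recomputed on each replicate):

```python
import random

def bootstrap_percentile_test(outcomes, alpha=0.05, n_replicates=5000, seed=0):
    """Return True if H0: dF1 = 0 is rejected at level alpha.

    outcomes: one tuple per document,
        ((tp_a, fp_a, fn_a), (tp_b, fp_b, fn_b)) with 0/1 entries.
    """
    rng = random.Random(seed)
    n = len(outcomes)

    def f1(counts):
        tp = sum(c[0] for c in counts)
        fp = sum(c[1] for c in counts)
        fn = sum(c[2] for c in counts)
        return 2.0 * tp / (2.0 * tp + fn + fp)  # assumes tp + fp + fn > 0

    diffs = []
    for _ in range(n_replicates):
        # One bootstrap replicate: n documents drawn with replacement.
        sample = [outcomes[rng.randrange(n)] for _ in range(n)]
        diffs.append(f1([s[0] for s in sample]) - f1([s[1] for s in sample]))
    diffs.sort()
    # Central (1 - alpha) interval of the bootstrap distribution of dF1.
    lo = diffs[int(alpha / 2 * n_replicates)]
    hi = diffs[int((1 - alpha / 2) * n_replicates) - 1]
    return not (lo <= 0.0 <= hi)
```

Rejecting when 0 falls outside the central interval mirrors the test described above; 5000 replicates correspond to the 50/α rule for α = 0.01.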
3 Analysis of Statistical Tests
We report in this section an analysis of the bootstrap percentile test, as well as other more
classical statistical tests, based on a real large database. We first describe the database itself
and the protocol we used for this analysis, and then provide results and comments.
3.1 Database, Models and Protocol
All the experiments detailed in this paper are based on the very large RCV1 Reuters
dataset [10], which contains up to 806,791 documents. We divided it as follows: 798,809
documents were kept aside and any statistics computed over this set Dtrue was considered
as being the truth (ie a very good estimate of the actual value); the remaining 7982 documents were used as a training set Dtr (to train models A and B). There was a total of 101
categories and each document was labeled with one or more of these categories.
We first extracted the dictionary from the training set, removed stop-words and applied
stemming to it, as normally done in text categorization. Each document was then represented as a bag-of-words using the usual tf-idf coding. We trained three different models:
a linear Support Vector Machine (SVM), a Gaussian kernel SVM, and a multi-layer perceptron (MLP). There was one model for each category for the SVMs, and a single MLP for
the 101 categories. All models were properly tuned using cross-validation on the training
set.
Using the notation introduced earlier, we define the following competing hypotheses:
H0 : |dF1| = 0 and H1 : |dF1| > 0. We further define the level of the test
α = p(Reject H0 | H0), where α takes on values 0.01, 0.05 and 0.1. Table 1 summarizes
the possible outcomes of a statistical test. In this respect, rejecting H0 means that one is
confident with (1 − α) × 100% that H0 is really false.
Table 1: Various outcomes of a statistical test, with α = p(Type I error).

                            Truth
    Decision         H0                H1
    Reject H0    Type I error          OK
    Accept H0        OK           Type II error
In order to assess the performance of the statistical tests on their Type I error, also called
Size of the test, and on their Power = 1 − Type II error, we used the following protocol.
For each category Ci, we sampled over Dtrue S (500) evaluation sets D_te^s of N documents,
ran the significance test over each D_te^s, and computed the proportion of sets for which H0
was rejected given that H0 was true over Dtrue (resp. H0 was false over Dtrue), which we
note αtrue (resp. π).
We used αtrue as an estimate of the significance test's probability of making a Type I error
and π as an estimate of the significance test's Power. When αtrue is higher than the α fixed
by the statistical test, the test underestimates its Type I error, which means we should not rely
on its decision regarding the superiority of one model over the other. Thus, we consider
that the significance test fails. On the contrary, αtrue < α yields a pessimistic statistical
test that decides correctly H0 more often than predicted.
Furthermore, we would like to favor significance tests with a high π, since the Power of the
test reflects its ability to reject H0 when H0 is false.
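This protocol can be illustrated end to end on synthetic data, here with the McNemar test as the significance test. Everything below — the population construction, set sizes, and hard-coded critical values — is ours for illustration, not the paper's setup:

```python
import random

def estimate_size_of_test(population, s_sets=500, n_docs=300, alpha=0.05, seed=0):
    """Estimate alpha_true: the fraction of evaluation sets, sampled from a
    'truth' population on which H0 holds, where the McNemar test rejects H0.

    population: list of (correct_a, correct_b) booleans, one per document.
    """
    rng = random.Random(seed)
    chi2_critical = {0.1: 2.71, 0.05: 3.84, 0.01: 6.63}[alpha]
    rejections = 0
    for _ in range(s_sets):
        sample = [population[rng.randrange(len(population))]
                  for _ in range(n_docs)]
        n01 = sum(1 for a, b in sample if not a and b)   # A wrong, B right
        n10 = sum(1 for a, b in sample if a and not b)   # B wrong, A right
        if n01 + n10 > 0:
            z = (abs(n01 - n10) - 1.0) ** 2 / (n01 + n10)
            if z > chi2_critical:
                rejections += 1
    return rejections / s_sets

# Synthetic population: A and B each err on disjoint 10% subsets, so the
# two error rates are exactly equal and H0 is true by construction.
pop = [(False, True)] * 1000 + [(True, False)] * 1000 + [(True, True)] * 8000
alpha_true = estimate_size_of_test(pop)
print(alpha_true)  # expected to stay around or below the nominal 0.05
```

Replacing the McNemar statistic by any other test in the inner loop gives the corresponding Size estimate; the Power estimate π is obtained the same way on a population where H0 is false.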
3.2 Summary of Conditions
In order to verify the sensitivity of the analyzed statistical tests to several conditions, we
varied the following parameters:
• the value of α: it took on values in {0.1, 0.05, 0.01};
• the two compared models: there were three models; two of them were of the same
family (SVMs), hence optimizing the same criterion, while the third one was an
MLP. Most of the time the two SVMs gave very similar results (probably because
the optimal capacity for this problem was near linear), while the MLP gave poorer
results on average. The point here was to verify whether the test was sensitive to
the closeness of the tested models (although a more formal definition of closeness
should certainly be devised);
• the evaluation sample size: we varied it from small sizes (100) up to larger sizes
(6000) to see the robustness of the statistical test to it;
• the class unbalance: out of the 101 categories of the problem, most of them resulted
in highly unbalanced tasks, often with a ratio of 10 to 100 between the
two classes. In order to experiment with more balanced tasks, we artificially created
meta-categories, which were random aggregations of normal categories that
tended to be more balanced;
• the tested measure: our initial interest was to directly test dF1, the difference of
F1, but given poor initial results, we also decided to assess dCerr, the difference
of classification errors, in order to see whether the tests were sensitive to the
measure itself;
• the statistical test: on top of the bootstrap percentile test, we also analyzed the
more classical proportion test and McNemar test, both of them only on dCerr
(since they were not adapted to dF1).
3.3 Results
Figure 1 summarizes the results for the Size of the test estimates. All graphs show αtrue,
the proportion of times the test rejected H0 while H0 was true, for a fixed α = 0.05, with
respect to the sample size, for various statistical tests and tested measures.
Figure 2 shows the obtained results for the Power of the test estimates. The proportion of
evaluation sets over which the significance test (with α = 0.05) rejected H0 when indeed
H0 was false, is plotted against the evaluation set size.
Figures 1(a) and 2(a) show the results for balanced data (where the positive and negative
examples were approximatively equally present in the evaluation set) when comparing two
different models (an SVM and an MLP).
Figures 1(b) and 2(b) show the results for unbalanced data when comparing two different
models.
Figures 1(c) and 2(c) show the results for balanced data when comparing two similar models (a linear SVM and a Gaussian SVM) for balanced data, and finally Figures 1(d) and 2(d)
show the results for unbalanced data and two similar models.
Note that each point in the graphs was computed over a different number of samples, since
eg over the (500 evaluation sets × 101 categories) experiments only those for which H0
was true in Dtrue were taken into account in the computation of αtrue.
When the proportion of H0 true in Dtrue equals 0 (resp. the proportion of H0 false in Dtrue
equals 0), αtrue (resp. π) is set to -1. Hence, for instance the first points ({100, . . . , 1000})
of Figures 2(c) and 2(d) were computed over only 500 evaluation sets on which respectively
the same categorization task was performed. This makes these points unreliable. See [8]
for more details.
For each of the Size's graphs, when the curves are over the 0.05 line, we can state that the
statistical test is optimistic, while when it is below the line, the statistical test is pessimistic.
As already explained, a pessimistic test should be favored whenever possible.
Several interesting conclusions can be drawn from the analysis of these graphs. First of
all, as expected, most of the statistical tests are positively influenced by the size of the
evaluation set, in the sense that their αtrue value converges to α for large sample sizes.1
On the available results, the McNemar test and the bootstrap test over dCerr have a similar
performance. They are always pessimistic even for small evaluation set sizes, and tend to
the expected α values when the models compared on balanced tasks are dissimilar. They
have also a similar performance in Power over all the different conditions, higher in general
when comparing very different models.
When the compared models are similar, the bootstrap test over dF1 has a pessimistic behavior even on quite small evaluation sets. However, when the models are really different
the bootstrap test over dF1 is on average always optimistic. Note nevertheless that most
of the points in Figures 1(a) and 1(b) have a standard deviation std, over the categories,
such that αtrue − std < α (see [8] for more details). Another interesting point is that in
the available results for the Power, the dF1 bootstrap test has relatively high values with
respect to the other tests.
The proportion test has in general, on the available results, a more conservative behavior
than the McNemar test and the dCerr bootstrap test. It has more pessimistic results and
less Power. It is too often prone to "Accept H0", ie to conclude that the compared models
have an equivalent performance, whether it is true or not. These results seem to be consistent
with those of [4] and [9]. However, when comparing close models in a small unbalanced
evaluation set (Figure 1(d)), this conservative behavior is not present.
To summarize the findings, the bootstrap-based statistical test over dCerr obtained a good
performance in Size, comparable to that of the McNemar test, in all conditions. However,
both significance tests' performances in Power are low even for big evaluation sets, in particular when the compared models are close. The bootstrap-based statistical test over dF1
has higher Power than the other compared tests, however it must be emphasized that it is
slightly over-optimistic in particular for small evaluation sets. Finally, when applying the
proportion test over unbalanced data for close models we obtained an optimistic behavior,
untypical of this usually conservative test.
4 Conclusion
In this paper, we have analyzed several parametric and non-parametric statistical tests for
various conditions often present in machine learning tasks, including the class balancing,
the performance measure, the size of the test sets, and the closeness of the compared models.1

1 Note that the same is true for the variance of αtrue (≈ 0), and this for any of the α values tested.

[Figure 1 plots, in four panels, the proportion of Type I error (y-axis, 0.0 to 0.6, with the nominal level 0.05 marked) against the evaluation set size (x-axis, 0 to 6000) for four tests: bootstrap test over dF1, McNemar test, proportion test, and bootstrap test over dCerr. Panels: (a) Linear SVM vs MLP - Balanced data; (b) Linear SVM vs MLP - Unbalanced data; (c) Linear vs RBF SVMs - Balanced data; (d) Linear vs RBF SVMs - Unbalanced data.]

Figure 1: Several statistical tests comparing Linear SVM vs MLP or vs RBF SVM. The
proportion of Type I error equals -1, in Figure 1(b), when there was no data to compute the
proportion (ie H0 was always false).

More particularly, we were concerned by the quality of non-parametric tests since in
some cases (when using more complex performance measures such as F1 ), they are the
only available statistical tests.
Fortunately, most statistical tests performed reasonably well (in the sense that they were
more often pessimistic than optimistic in their decisions) and larger test sets always improved their performance. Note however that for dF1 the only available statistical test was
too optimistic, although consistent for different levels. An unexpected result was that the
rather conservative proportion test used over unbalanced data for close models yielded an
optimistic behavior.
It has to be noted that recently, a probabilistic interpretation of F1 was suggested in [7],
and a comparison with bootstrap-based tests should be worthwhile.
References
[1] M. Bisani and H. Ney. Bootstrap estimates for confidence intervals in ASR performance evaluation. In Proceedings of ICASSP, 2004.
[2] R. M. Bolle, N. K. Ratha, and S. Pankanti. Error analysis of pattern recognition
systems - the subsets bootstrap. Computer Vision and Image Understanding, 93:1–33, 2004.
[Figure 2 plots, in four panels, the Power of the test (y-axis, 0.0 to 1.0) against the evaluation set size (x-axis, 0 to 6000) for the same four tests: bootstrap test over dF1, McNemar test, proportion test, and bootstrap test over dCerr. Panels: (a) Linear SVM vs MLP - Balanced data; (b) Linear SVM vs MLP - Unbalanced data; (c) Linear vs RBF SVMs - Balanced data; (d) Linear vs RBF SVMs - Unbalanced data.]

Figure 2: Power of several statistical tests comparing Linear SVM vs MLP or vs RBF
SVM. The power equals -1, in Figures 2(c) and 2(d), when there was no data to compute
the proportion (ie H1 was never true).
[3] A. C. Davison and D. V. Hinkley. Bootstrap methods and their application. Cambridge University Press, 1997.
[4] T.G. Dietterich. Approximate statistical tests for comparing supervised classification
learning algorithms. Neural Computation, 10(7):1895–1924, 1998.
[5] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall,
1993.
[6] B. S. Everitt. The analysis of contingency tables. Chapman and Hall, 1977.
[7] C. Goutte and E. Gaussier. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In Proceedings of ECIR, pages 345–359, 2005.
[8] M. Keller, S. Bengio, and S. Y. Wong. Surprising Outcome While Benchmarking
Statistical Tests. IDIAP-RR 38, IDIAP, 2005.
[9] Claude Nadeau and Yoshua Bengio. Inference for the generalization error. Machine
Learning, 52(3):239–281, 2003.
[10] T.G. Rose, M. Stevenson, and M. Whitehead. The Reuters Corpus Volume 1 - from
yesterday's news to tomorrow's language resources. In Proceedings of the 3rd Int.
Conf. on Language Resources and Evaluation, 2002.
[11] F. Sebastiani. Machine learning in automated text categorization. ACM Computing
Surveys, 34(1):1–47, 2002.
[12] C. J. van Rijsbergen. Information Retrieval. Butterworths, London, UK, 1975.
| 2846 |@word seems:1 replicate:1 proportion:27 tried:1 bn:2 initial:2 contains:1 selecting:1 tuned:1 document:15 comparing:11 surprising:1 assigning:1 written:1 must:1 stemming:1 enables:1 aside:3 v:12 ecir:1 davison:1 tomorrow:1 inside:1 excellence:1 expected:2 indeed:2 behavior:7 multi:1 decomposed:1 actual:1 notation:2 null:3 finding:1 thorough:1 nf:6 wrong:2 classifier:2 uk:1 normally:2 superiority:1 positive:3 limit:1 suggests:1 bi:1 decided:1 swiss:2 bootstrap:36 empirical:1 reject:4 confidence:2 word:2 get:1 cannot:3 close:4 risk:1 applying:1 wong:2 equivalent:2 mikaela:1 starting:1 keller:2 survey:1 population:2 variation:2 resp:6 samy:1 hypothesis:6 recognition:1 particularly:1 std:2 database:4 labeled:1 observed:1 solved:1 verifying:2 news:1 intends:2 removed:1 ran:1 balanced:10 rose:1 trained:2 easily:2 icassp:1 differently:1 various:5 represented:1 train:2 describe:1 london:1 outcome:4 h0:24 outside:1 quite:1 larger:2 say:1 favor:1 statistic:3 ability:1 itself:3 differentiate:1 rr:1 claude:1 took:1 a2n:1 achieve:1 competition:1 categorization:7 converges:1 measured:2 idiap:8 predicted:1 switzerland:3 concentrate:2 correct:1 centered:1 mkeller:1 f1:14 generalization:1 really:2 probable:2 pessimistic:7 around:1 considered:7 hall:2 normal:3 claim:1 vary:1 dictionary:1 smallest:1 purpose:2 bag:1 sensitive:2 largest:1 create:2 tf:1 reflects:1 gaussian:2 always:4 rather:1 varying:1 properly:1 bernoulli:1 sense:2 inference:1 el:1 accept:2 misclassified:2 interested:2 classification:9 among:3 pascal:1 favored:1 art:1 field:1 equal:4 never:2 asr:1 sampling:1 chapman:2 represents:1 others:1 report:1 yoshua:1 composed:1 resulted:1 replacement:1 attempt:1 freedom:1 interest:3 mlp:12 highly:1 evaluation:32 certainly:1 replicates:4 analyzed:4 bolle:1 predefined:1 implication:1 poorer:1 closer:1 necessary:1 respective:1 conduct:1 desired:1 plotted:1 instance:4 classify:1 earlier:2 deviation:2 subset:1 too:2 confident:1 sensitivity:1 ie:4 probabilistic:2 central:1 
conf:1 nccr:1 account:2 stevenson:1 coding:1 int:1 performed:2 h1:3 optimistic:7 analyze:2 aggregation:1 ass:3 variance:3 yield:2 rejecting:1 researcher:1 classified:5 n10:3 influenced:1 tended:1 whenever:1 definition:1 against:2 underestimate:1 associated:1 stop:1 sampled:1 dataset:5 recall:7 knowledge:1 efron:1 ok:2 higher:4 supervised:1 improved:1 done:2 furthermore:1 rejected:3 hand:1 quality:2 grows:1 dietterich:2 effect:1 verify:2 true:19 hence:4 consistant:1 eg:1 during:1 numerator:1 yesterday:1 noted:1 percentile:4 criterion:1 image:1 harmonic:1 purposely:1 recently:2 common:1 empirically:3 volume:1 discussed:1 interpretation:2 im2:1 cambridge:1 sebastiani:1 everitt:1 rd:1 similarly:1 language:2 longer:1 optimizing:1 ntp:7 meta:1 mcnemar:13 life:1 yi:1 dtrue:7 seen:2 fortunately:1 ii:2 cross:1 retrieval:3 divided:1 concerning:1 devised:1 equally:1 basic:1 n01:3 denominator:1 vision:2 yeung:1 kernel:2 represent:1 want:1 interval:4 probably:1 comment:1 tend:2 contrary:1 spirit:1 effectiveness:1 seem:1 near:1 counting:1 bengio:4 enough:1 concerned:1 automated:1 independence:1 fit:1 gave:2 competing:1 idea:1 regarding:1 whether:3 detailed:2 amount:1 svms:8 category:19 simplest:1 df1:28 nsf:1 estimated:2 correctly:2 tibshirani:1 ist:1 nevertheless:1 drawn:1 kept:1 graph:4 sum:3 inverse:1 family:1 draw:1 decision:4 prefer:1 summarizes:2 comparable:1 layer:1 yielded:1 adapted:1 precisely:2 idf:1 rcv1:1 relatively:2 hinkley:1 alternate:1 poor:1 belonging:1 smaller:1 slightly:1 making:1 intuitively:1 explained:2 taken:2 resource:2 goutte:1 pankanti:1 needed:1 tractable:1 whitehead:1 available:5 worthwhile:1 ney:1 alternative:1 robustness:1 assumes:2 remaining:1 top:1 unbalance:3 classical:4 already:4 question:1 parametric:9 fa:1 usual:3 capacity:1 considers:1 reason:1 assuming:1 modeled:1 remind:1 rijsbergen:1 providing:1 ratio:2 gaussier:1 taxonomy:1 dtr:1 negative:2 martigny:3 unknown:1 datasets:1 benchmark:1 situation:1 extended:1 precise:1 varied:2 introduced:1 
textual:1 suggested:1 usually:2 below:1 pattern:1 summarize:2 reliable:3 including:1 power:15 rely:1 indicator:1 ne:4 created:1 nadeau:1 utterance:1 text:6 literature:3 understanding:1 interesting:2 validation:1 contingency:1 degree:1 consistent:1 balancing:1 prone:1 summary:1 supported:1 formal:1 perceptron:1 institute:3 distributed:2 van:1 curve:1 fb:1 cope:1 approximate:2 unreliable:1 decides:1 rid:1 b1:2 corpus:1 conclude:1 butterworths:1 xi:3 alternatively:1 table:3 reasonably:2 robust:2 dte:2 european:1 artificially:1 complex:1 domain:2 protocol:3 significance:13 main:1 whole:2 big:2 reuters:3 verifies:1 positively:1 benchmarking:2 precision:7 fails:1 lie:1 third:1 untypical:1 theorem:1 emphasized:1 svm:14 closeness:5 false:8 importance:1 ci:1 fscore:1 suited:1 unexpected:1 approximatively:2 ch:3 corresponds:1 truth:2 extracted:1 acm:1 goal:1 rbf:6 labelled:1 conservative:4 total:2 called:3 support:1 latter:1 unbalanced:10 dissimilar:1 tested:5 |
2,033 | 2,847 | Off-Road Obstacle Avoidance through
End-to-End Learning
Yann LeCun
Courant Institute of Mathematical Sciences
New York University,
New York, NY 10004, USA
http://yann.lecun.com
Jan Ben
Net-Scale Technologies
Morganville, NJ 07751, USA
Eric Cosatto
NEC Laboratories,
Princeton, NJ 08540
Urs Muller
Net-Scale Technologies
Morganville, NJ 07751, USA
[email protected]
Beat Flepp
Net-Scale Technologies
Morganville, NJ 07751, USA
Abstract
We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input
images to steering angles. It is trained in supervised mode to predict the
steering angles provided by a human driver during training runs collected
in a wide variety of terrains, weather conditions, lighting conditions, and
obstacle types. The robot is a 50cm off-road truck, with two forwardpointing wireless color cameras. A remote computer processes the video
and controls the robot via radio. The learning system is a large 6-layer
convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to
detect obstacles and navigate around them in real time at speeds of 2 m/s.
1 Introduction
Autonomous off-road vehicles have vast potential applications in a wide spectrum of domains such as exploration, search and rescue, transport of supplies, environmental management, and reconnaissance. Building a fully autonomous off-road vehicle that can reliably
navigate and avoid obstacles at high speed is a major challenge for robotics, and a new
domain of application for machine learning research.
The last few years have seen considerable progress toward that goal, particularly in areas
such as mapping the environment from active range sensors and stereo cameras [11, 7],
simultaneously navigating and building maps [6, 15], and classifying obstacle types.
Among the various sub-problems of off-road vehicle navigation, obstacle detection and
avoidance is a subject of prime importance. The wide diversity of appearance of potential
obstacles, and the variability of the surroundings, lighting conditions, and other factors,
make the problem very challenging.
Many recent efforts have attacked the problem by relying on a multiplicity of sensors,
including laser range finder and radar [11]. While active sensors make the problem considerably simpler, there seems to be an interest from potential users for purely passive
systems that rely exclusively on camera input. Cameras are considerably less expensive,
bulky, power hungry, and detectable than active sensors, allowing levels of miniaturization
that are not otherwise possible. More importantly, active sensors can be slow, limited in
range, and easily confused by vegetation, despite rapid progress in the area [2].
Avoiding obstacles by relying solely on camera input requires solving a highly complex
vision problem. A time-honored approach is to derive range maps from multiple images
through multiple cameras or through motion [6, 5]. Deriving steering angles to avoid obstacles from the range maps is a simple matter. A large number of techniques have been
proposed in the literature to construct range maps from stereo images. Such methods have
been used successfully for many years for navigation in indoor environments where edge
features can be reliably detected and matched [1], but navigation in outdoors environment,
despite a long history, is still a challenge [14, 3]: real-time stereo algorithms are considerably less reliable in unconstrained outdoors environments. The extreme variability of
lighting conditions, and the highly unstructured nature of natural objects such as tall grass,
bushes and other vegetation, water surfaces, and objects with repeating textures, conspire
to limit the reliability of this approach. In addition, stereo-based methods have a rather
limited range, which dramatically limits the maximum driving speed.
2 End-To-End Learning for Obstacle Avoidance
In general, computing depth from stereo images is an ill-posed problem, but the depth map
is only a means to an end. Ultimately, the output of an obstacle avoidance system is a set
of possible steering angles that direct the robot toward traversible regions.
Our approach is to view the entire problem of mapping input stereo images to possible
steering angles as a single indivisible task to be learned from end to end. Our learning
system takes raw color images from two forward-pointing cameras mounted on the robot,
and maps them to a set of possible steering angles through a single trained function.
The training data was collected by recording the actions of a human driver together with the
video data. The human driver remotely drives the robot straight ahead until the robot encounters a non-traversible obstacle. The human driver then avoids the obstacle by steering
the robot in the appropriate direction. The learning system is trained in supervised mode.
It takes a single pair of heavily-subsampled images from the two cameras, and is trained to
predict the steering angle produced by the human driver at that time.
The learning architecture is a 6-layer convolutional network [9]. The network takes the
left and right 149?58 color images and produces two outputs. A large value on the first
output is interpreted as a left steering command while a large value on the second output
indicates a right steering command. Each layer in a convolutional network can be viewed as
a set of trainable, shift-invariant linear filters with local support, followed by a point-wise
non-linear saturation function. All the parameters of all the filters in the various layers
are trained simultaneously. The learning algorithm minimizes the discrepancy between the
desired output vector and the output vector produced by the output layer.
The approach is somewhat reminiscent of the ALVINN and MANIAC systems [13, 4]. The
main differences with ALVINN are: (1) our system uses stereo cameras; (2) it is trained
for off-road obstacle avoidance rather than road following; (3) our trainable system uses a
convolutional network rather than a traditional fully-connected neural net.
Convolutional networks have two considerable advantages for this applications. Their local and sparse connection scheme allows us to handle images of higher resolution than
ALVINN while keeping the size of the network within reasonnable limits. Convolutional
nets are particularly well suited for our task because local feature detectors that combine
inputs from the left and right images can be useful for estimating distances to obstacles
(possibly by estimating disparities). Furthermore, the local and shift-invariant property of
the filters allows the system to learn relevant local features with a limited amount of training
data.
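As a toy illustration of this kind of architecture (entirely ours: a single shared filter bank standing in for the paper's six trained layers, with random weights, grayscale input, and reduced image sizes), the stereo pair is filtered with shared local filters, passed through a point-wise saturation, and mapped to two steering scores:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation (one channel, one filter)."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

class TinyStereoNet:
    """Toy stand-in for the paper's 6-layer convolutional network.

    One bank of 5x5 filters is shared between the left and right images
    (trainable, shift-invariant local features); the feature maps pass
    through tanh (the point-wise saturation) and a linear readout produces
    two scores, interpreted as left / right steering commands.  All sizes
    and weights here are illustrative, not the authors'.
    """
    def __init__(self, n_filters=4, in_shape=(20, 30), seed=0):
        rng = np.random.default_rng(seed)
        self.filters = rng.standard_normal((n_filters, 5, 5)) * 0.1
        h, w = in_shape[0] - 4, in_shape[1] - 4
        self.readout = rng.standard_normal((2, 2 * n_filters * h * w)) * 0.01

    def forward(self, left, right):
        feats = [np.tanh(conv2d_valid(img, k))
                 for img in (left, right) for k in self.filters]
        return self.readout @ np.concatenate([f.ravel() for f in feats])

net = TinyStereoNet()
scores = net.forward(np.zeros((20, 30)), np.zeros((20, 30)))
# A large scores[0] would mean steer left, a large scores[1] steer right;
# in the paper, all weights are trained end to end on the driver's angles.
print(scores.shape)
```

In the real system, filters operating jointly on the left and right images can pick up disparity-like cues, which is one motivation for feeding the raw stereo pair to a single network.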
The key advantage of the approach is that the entire function from raw pixels to steering
angles is trained from data, which completely eliminates the need for feature design and
selection, geometry, camera calibration, and hand-tuning of parameters. The main motivation for the use of end-to-end learning is, in fact, to eliminate the need for hand-crafted
heuristics. Relying on automatic global optimization of an objective function from massive
amounts for data may produce systems that are more robust to the unpredictable variability
of the real world. Another potential benefit of a pure learning-based approach is that the
system may use other cues than stereo disparity to detect obstacles, possibly alleviating the
short-sightedness of methods based purely on stereo matching.
3 Vehicle Hardware
We built a small and light-weight vehicle which can be carried by a single person so as
to facilitate data collection and testing in a wide variety of environments. Using a small,
rugged and low-cost robot allowed us to drive at relatively high speed without fear of causing damage to people, property or the robot itself. The downside of this approach is the
limited payload, too limited for holding the computing power necessary for the visual processing. Therefore, the robot has no significant on-board computing power. It is remotely
controlled by an off-board computer. A wireless link is used to transmit video and sensor
readings to the remote computer. Throttle and steering controls are sent from the computer
to the robot through a regular radio control channel.
The robot chassis was built around a customized 1/10-th scale remote-controlled, electric-powered, four-wheel-drive truck which was roughly 50 cm in length. The typical speed of
the robot during data collection and testing sessions was roughly 2 meters per second. Two
forward-pointing low-cost 1/3-inch CCD cameras were mounted 110mm apart behind a
clear lexan window. With 2.5mm lenses, the horizontal field of view of each camera was
about 100 degrees.
A pair of 900MHz analog video transmitters was used to send the camera outputs to the
remote computer. The analog video links were subject to high signal noise, color shifts,
frequent interferences, and occasional video drop-outs. But the small size, light weight,
and low cost provided clear advantages. The vehicle is shown in Figure 1. The remote
control station consisted of a 1.4GHz Athlon PC running Linux with video capture cards,
and an interface to an R/C transmitter.
Figure 1: Left: The robot is a modified 50 cm-long truck platform controlled by a remote
computer. Middle: sample images images from the training data. Right: poor reception
occasionally caused bad quality images.
4 Data Collection
During a data collection session, the human operator wears video goggles fed with the
video signal from one the robot?s cameras (no stereo), and controls the robot through a
joystick connected to the PC. During each run, the PC records the output of the two video
cameras at 15 frames per second, together with the steering angle and throttle setting from
the operator.
A crucially important requirement of the data collection process was to collect large
amounts of data with enough diversity of terrain, obstacles, and lighting conditions. It
was necessary for the human driver to adopt a consistent obstacle avoidance behaviour. To
ensure this, the human driver was to drive the vehicle straight ahead whenever no obstacle
was present within a threatening distance. Whenever the robot approached an obstacle, the
human driver had to steer left or right so as to avoid the obstacle. The general strategy
for collecting training data was as follows: (a) Collecting data from as large a variety of
off-road training grounds as possible. Data was collected from a large number of parks,
playgrounds, front yards and backyards of a number of suburban homes, and heavily cluttered construction areas; (b) Collecting data with various lighting conditions, i.e., different
weather conditions and different times of day; (c) Collecting sequences where the vehicle
starts driving straight and then is steered left or right as the robot approaches an obstacle;
(d) Avoiding turns when no obstacles were present; (e) Including straight runs with no obstacles and no turns as part of the training set; (f) Trying to be consistent in the turning
behavior, i.e., always turning at approximately the same distance from an obstacle.
Even though great care was taken in collecting the highest quality training data, there were
a number of imperfections in the training data that could not be avoided: (a) The small-form-factor, low-cost cameras presented significant differences in their default settings. In
particular, the white balance of the two cameras was somewhat different; (b) To maximize
image quality, the automatic gain control and automatic exposure were activated. Because
of differences in fabrication, the left and right images had slightly different brightness and
contrast characteristics. In particular, the AGC adjustments seem to react at different speeds
and amplitudes; (c) Because of AGC, driving into the sunlight caused the images to become
very dark and obstacles to become hard to detect; (d) The wireless video connection caused
dropouts and distortions of some frames. Approximately 5 % of the frames were affected.
An example is shown in Figure 1; (e) The cameras were mounted rigidly on the vehicle
and were exposed to vibration, despite the suspension. Despite these difficult conditions,
the system managed to learn the task quite well as will be shown later.
The data was recorded and archived at a resolution of 320×240 pixels at 15 frames per
second. The data was collected on 17 different days during the Winter of 2003/2004 (the
sun was very low on the horizon). A total of 1,500 clips were collected with an average
length of about 85 frames each. This resulted in a total of about 127,000 individual pairs of
frames. Segments during which the robot was driven into position in preparation for a run
were edited out. No other manual data cleaning took place. In the end, 95,000 frame pairs
were used for training and 32,000 for validation/testing. The training pairs and testing pairs
came from different sequences (and often different locations).
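This sequence-level split can be sketched as follows; the clip IDs, the split ratio, and the function name are our own illustrative assumptions, not the authors' tooling. Assigning whole clips to one side keeps near-duplicate successive frames from the same run out of the test set:

```python
import random

def split_by_sequence(frames, train_frac=0.75, seed=0):
    """Split (clip_id, frame) pairs so that train and test share no clip.

    Assigning whole clips to one side prevents near-identical successive
    frames from leaking across the train/test boundary.
    """
    clips = sorted({clip_id for clip_id, _ in frames})
    rng = random.Random(seed)
    rng.shuffle(clips)
    n_train = int(len(clips) * train_frac)
    train_clips = set(clips[:n_train])
    train = [f for f in frames if f[0] in train_clips]
    test = [f for f in frames if f[0] not in train_clips]
    return train, test

# Toy usage: 4 clips of 3 frame pairs each.
frames = [(clip, i) for clip in range(4) for i in range(3)]
train, test = split_by_sequence(frames)
assert not ({c for c, _ in train} & {c for c, _ in test})
```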
Figure 1 shows example snapshots from the training data, including an image with poor
reception. Note that only one of the two (stereo) images is shown. High noise and frame
dropouts occurred in approximately 5 % of the frames. It was decided to leave them in the
training set and test set so as to train the system under realistic conditions.
5 The Learning System
The entire processing consists of a single convolutional network. The architecture of convolutional nets is somewhat inspired by the structure of biological visual systems. Convolutional nets have been used successfully in a number of vision applications such as
handwriting recognition [9], object recognition [10], and face detection [12].
The input to the convolutional net consists of 6 planes of size 149×58 pixels. The six
planes respectively contain the Y, U and V components for the left camera and the right
camera. The input images were obtained by cropping the 320×240 images, and through
2× horizontal low-pass filtering and subsampling, and 4× vertical low-pass filtering and
subsampling. The horizontal resolution was set higher so as to preserve more accurate
image disparity information.
Each layer in a convolutional net is composed of units organized in planes called feature
maps. Each unit in a feature map takes inputs from a small neighborhood within the feature
maps of the previous layer. Neighboring units in a feature map are connected to neighboring (possibly overlapping) windows. Each unit computes a weighted sum of its inputs and
passes the result through a sigmoid saturation function. All units within a feature map share
the same weights. Therefore, each feature map can be seen as convolving the feature maps
of the previous layers with small-size kernels, and passing the sum of those convolutions
through sigmoid functions. Units in a feature map detect local features at all locations on
the previous layer.
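The per-unit computation described above (a shared-weight "valid" convolution plus a saturating nonlinearity) can be sketched as follows; using tanh as the sigmoid and the toy kernel below are our assumptions:

```python
import math

def feature_map(plane, kernel, bias):
    """One feature map: slide a shared kernel over every window of the
    input plane, add a bias, and pass the sum through a sigmoid-type
    saturation (tanh here). Output size is (H-kh+1) x (W-kw+1).
    """
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(plane), len(plane[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = bias
            for a in range(kh):
                for b in range(kw):
                    s += kernel[a][b] * plane[i + a][j + b]
            row.append(math.tanh(s))
        out.append(row)
    return out

# A 3x3 kernel over a 5x5 plane yields a 3x3 map (as 149x58 -> 147x56).
plane = [[0.1 * (i + j) for j in range(5)] for i in range(5)]
fmap = feature_map(plane, [[0.1] * 3 for _ in range(3)], 0.0)
assert len(fmap) == 3 and len(fmap[0]) == 3
```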
The first layer contains 6 feature maps of size 147×56 connected to various combinations
of the input maps through 3×3 kernels. The first feature map is connected to the YUV
planes of the left image, the second feature map to the YUV planes of the right image, and
the other 4 feature maps to all 6 input planes. Those 4 feature maps are binocular, and
can learn filters that compare the location of features in the left and right images. Because
of the weight sharing, the first layer merely has 276 free parameters (30 kernels of size
3×3 plus 6 biases). The next layer is an averaging/subsampling layer of size 49×14 whose
purpose is to reduce the spatial resolution of the feature maps so as to build invariances
to small geometric distortions of the input. The subsampling ratios are 3 horizontally and
4 vertically. The 3rd layer contains 24 feature maps of size 45×12. Each feature map is
connected to various subsets of maps in the previous layer through a total of 96 kernels of
size 5×3. The 4th layer is an averaging/subsampling layer of size 9×4 with 5×3 subsampling ratios. The 5th layer contains 100 feature maps of size 1×1 connected to the 4th
layer through 2400 kernels of size 9×4 (full connection). Finally, the output layer contains
two units fully-connected to the 100 units in the 5th layer. The two outputs respectively
code for "turn left" and "turn right" commands. The network has 3.15 million connections
and about 72,000 trainable parameters.
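The layer sizes quoted above follow from simple "valid"-convolution and subsampling arithmetic, and can be checked mechanically (the helper names below are ours):

```python
def conv(size, kernel):
    # 'valid' convolution shrinks each dimension by (kernel - 1)
    return (size[0] - kernel[0] + 1, size[1] - kernel[1] + 1)

def subsample(size, ratio):
    # averaging/subsampling divides each dimension by its ratio
    return (size[0] // ratio[0], size[1] // ratio[1])

x = (149, 58)                                   # input planes (width x height)
x = conv(x, (3, 3)); assert x == (147, 56)      # layer 1: 3x3 kernels
x = subsample(x, (3, 4)); assert x == (49, 14)  # layer 2: 3 horiz., 4 vert.
x = conv(x, (5, 3)); assert x == (45, 12)       # layer 3: 5x3 kernels
x = subsample(x, (5, 3)); assert x == (9, 4)    # layer 4
x = conv(x, (9, 4)); assert x == (1, 1)         # layer 5: full connection

# Weight sharing keeps the first layer small:
assert 30 * 3 * 3 + 6 == 276  # 30 kernels of 3x3 plus 6 biases
```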
The bottom half of Figure 2 shows the states of the six layers of the convolutional net. The
size of the input, 149×58, was essentially limited by the computing power of the remote
computer (a 1.4GHz Athlon). The network as shown runs in about 60ms per image pair on
the remote computer. Including all the processing, the driving system ran at a rate of 10
cycles per second.
The system's output is computed on a frame-by-frame basis with no memory of the past
and no time window. Using multiple successive frames as input would seem like a good
idea since the multiple views resulting from ego-motion facilitate the segmentation and
detection of nearby obstacles. Unfortunately, the supervised learning approach precludes
the use of multiple frames. The reason is that since the steering is fairly smooth in time
(with long, stable periods), the current rate of turn is an excellent predictor of the next
desired steering angle. But the current rate of turn is easily derived from multiple successive
frames. Hence, a system trained with multiple frames would merely predict a steering
angle equal to the current rate of turn as observed through the camera. This would lead to
catastrophic behavior in test mode. The robot would simply turn in circles.
The system was trained with a stochastic gradient-based method that automatically sets the
relative step sizes of the parameters based on the local curvature of the loss surface [8]. Gradients were computed using the variant of back-propagation appropriate for convolutional
nets.
6 Results
Two performance measurements were recorded: the average loss and the percentage of
"correctly classified" steering angles. The average loss is the sum of squared differences
between outputs produced by the system and the target outputs, averaged over all samples. The percentage of correctly classified steering angles measures the number of times
the predicted steering angle, quantized into three bins (left, straight, right), agrees with
steering angle provided by the human driver. Since the thresholds for deciding whether an
angle counted as left, center, or right were somewhat arbitrary, the percentages cannot be
interpreted in absolute terms, but merely as a relative figure of merit for comparing runs and
architectures.
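Both measurements can be sketched as follows; the ±0.1 bin threshold is an illustrative assumption, since the text notes the actual thresholds were somewhat arbitrary:

```python
def quantize(angle, threshold=0.1):
    """Map a steering angle into one of three bins."""
    if angle < -threshold:
        return "left"
    if angle > threshold:
        return "right"
    return "straight"

def evaluate(predicted, target, threshold=0.1):
    """Return (average squared loss, fraction of correctly binned angles)."""
    n = len(predicted)
    loss = sum((p - t) ** 2 for p, t in zip(predicted, target)) / n
    agree = sum(
        quantize(p, threshold) == quantize(t, threshold)
        for p, t in zip(predicted, target)
    ) / n
    return loss, agree

# Toy run: the third prediction steers to the other side of the obstacle,
# so it counts as misclassified even though it might still avoid it.
loss, acc = evaluate([0.0, -0.5, 0.4], [0.05, -0.6, -0.4])
assert quantize(0.0) == "straight" and quantize(-0.5) == "left"
```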
Figure 2: Internal state of the convolutional net for two sample frames. The top row shows
left/right image pairs extracted from the test set. The light-blue bars below show the steering angle produced by the system. The bottom halves show the state of the layers of the
network, where each column is a layer (the penultimate layer is not shown). Each rectangular image is a feature map in which each pixel represents a unit activation. The YUV
components of the left and right input images are in the leftmost column.
With 95,000 training image pairs, training took 18 epochs through the training set. No
significant improvements in the error rate occurred thereafter. After training, the error rate
was 25.1% on the training set, and 35.8% on the test set. The average loss (mean-squared
error) was 0.88 on the training set and 1.24 on the test set. A complete training session
required about four days of CPU time on a 3.0GHz Pentium/Xeon-based server. Naturally,
a classification error rate of 35.8% doesn't mean that the vehicle crashes into obstacles
35.8% of the time, but merely that the prediction of the system was in a different bin
than that of the human driver for 35.8% of the frames. The seemingly high error rate is
not an accurate reflection of the actual effectiveness of the robot in the field. There are
several reasons for this. First, there may be several legitimate steering angles for a given
image pair: turning left or right around an obstacle may both be valid options, but our
performance measure would record one of those options as incorrect. In addition, many
illegitimate errors are recorded when the system starts turning at a different time than the
human driver, or when the precise values of the steering angles are different enough to be
in different bins, but close enough to cause the robot to avoid the obstacle. Perhaps more
informative is the diagram in Figure 3. It shows the steering angle produced by the system and
the steering angle provided by the human driver for 8000 frames from the test set. It is
clear from the plot that only a small number of obstacles would not have been avoided by the
robot.
The best performance measure is a set of actual runs through representative testing grounds.
Videos of typical test runs are available at
http://www.cs.nyu.edu/~yann/research/dave/index.html.
Figure 2 shows a snapshot of the trained system in action. The network was presented with
a scene that was not present in the training set. This figure shows that the system can detect
obstacles and predict appropriate steering angles in the presence of back-lighting and with
wild differences between the automatic gain settings of the left and right cameras.
Another visualization of the results can be seen in Figure 4. It shows snapshots of
video clips recorded from the vehicle's cameras while the vehicle was driving itself autonomously. Only one of the two camera outputs is shown here. Each picture also shows
Figure 3: The steering angle produced by the system (black) compared to the steering
angle provided by the human operator (red line) for 8000 frames from the test set. Very
few obstacles would not have been avoided by the system.
the steering angle produced by the system for that particular input.
7 Conclusion
We have demonstrated the applicability of end-to-end learning methods to the task of obstacle avoidance for off-road robots.
A 6-layer convolutional network was trained with massive amounts of data to emulate the
obstacle avoidance behavior of a human driver. The architecture of the system allowed it
to learn low-level and high-level features that reliably predicted the bearing of traversable
areas in the visual field.
The main advantage of the system is its robustness to the extreme diversity of situations
in off-road environments. Its main design advantage is that it is trained from raw pixels to
directly produce steering angles. The approach essentially eliminates the need for manual
calibration, adjustments, parameter tuning etc. Furthermore, the method gets around the
need to design and select an appropriate set of feature detectors, as well as the need to
design robust and fast stereo algorithms.
The construction of a fully autonomous driving system for ground robots will require several other components besides the purely-reactive obstacle detection and avoidance system
described here. The present work is merely one component of a future system that will
include map building, visual odometry, spatial reasoning, path finding, and other strategies
for the identification of traversable areas.
Acknowledgment
This project was a preliminary study for the DARPA project ?Learning Applied to Ground Robots?
(LAGR). The material presented is based upon work supported by the Defense Advanced Research
Project Agency Information Processing Technology Office, ARPA Order No. Q458, Program Code
No. 3D10, Issued by DARPA/CMO under Contract #MDA972-03-C-0111.
References
[1] N. Ayache and O. Faugeras. Maintaining representations of the environment of a mobile robot.
IEEE Trans. Robotics and Automation, 5(6):804–819, 1989.
[2] C. Bergh, B. Kennedy, L. Matthies, and A. Johnson. A compact, low power two-axis scanning
laser rangefinder for mobile robots. In The 7th Mechatronics Forum International Conference,
2000.
[3] S. B. Goldberg, M. Maimone, and L. Matthies. Stereo vision and rover navigation software for
planetary exploration. In IEEE Aerospace Conference Proceedings, March 2002.
[4] T. Jochem, D. Pomerleau, and C. Thorpe. Vision-based neural network road and intersection
detection and traversal. In Proc. IEEE Conf. Intelligent Robots and Systems, volume 3, pages
344–349, August 1995.
Figure 4: Snapshots from the left camera while the robot drives itself through various environments. The black bar beneath each image indicates the steering angle produced by the system. Top row: four successive snapshots showing the robot navigating
through a narrow passageway between a trailer, a backhoe, and some construction material. Bottom row, left: narrow obstacles such as table legs and poles (left), and solid
obstacles such as fences (center-left) are easily detected and avoided. Highly textured objects on the ground do not distract the system from the correct response (center-right).
One scenario where the vehicle occasionally made wrong decisions is when the sun is
in the field of view: the system seems to systematically drive towards the sun, whenever the sun is low on the horizon (right). Videos of these sequences are available at
http://www.cs.nyu.edu/~yann/research/dave/index.html.
[5] A. Kelly and A. Stentz. Stereo vision enhancements for low-cost outdoor autonomous vehicles. In International Conference on Robotics and Automation, Workshop WS-7, Navigation of
Outdoor Autonomous Vehicles (ICRA '98), May 1998.
[6] D.J. Kriegman, E. Triendl, and T.O. Binford. Stereo vision and navigation in buildings for
mobile robots. IEEE Trans. Robotics and Automation, 5(6):792–803, 1989.
[7] E. Krotkov and M. Hebert. Mapping and positioning for a prototype lunar rover. In Proc. IEEE
Int'l Conf. Robotics and Automation, pages 2913–2919, May 1995.
[8] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In G. Orr and Muller K.,
editors, Neural Networks: Tricks of the trade. Springer, 1998.
[9] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[10] Yann LeCun, Fu-Jie Huang, and Leon Bottou. Learning methods for generic object recognition
with invariance to pose and lighting. In Proceedings of CVPR'04. IEEE Press, 2004.
[11] L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, and T. Litwin. Mars microrover navigation: Performance evaluation and enhancement. In Proc. IEEE Int'l Conf. Intelligent Robots
and Systems, volume 1, pages 433–440, August 1995.
[12] R. Osadchy, M. Miller, and Y. LeCun. Synergistic face detection and pose estimation with
energy-based model. In Advances in Neural Information Processing Systems (NIPS 2004).
MIT Press, 2005.
[13] Dean A. Pomerleau. Knowledge-based training of artificial neural netowrks for autonomous
robot driving. In J. Connell and S. Mahadevan, editors, Robot Learning. Kluwer Academic
Publishing, 1993.
[14] C. Thorpe, M. Herbert, T. Kanade, and S Shafer. Vision and navigation for the carnegie-mellon
navlab. IEEE Trans. Pattern Analysis and Machine Intelligence, 10(3):362–372, May 1988.
[15] S. Thrun. Learning metric-topological maps for indoor mobile robot navigation. Artificial
Intelligence, 99(1):21–71, February 1998.
Nearest Neighbor Based Feature Selection for
Regression and its Application to Neural Activity
Amir Navot 1,2   Lavi Shpigelman 1,2   Naftali Tishby 1,2   Eilon Vaadia 2,3
1 School of Computer Science and Engineering
2 Interdisciplinary Center for Neural Computation
3 Dept. of Physiology, Hadassah Medical School
The Hebrew University Jerusalem, 91904, Israel
Email for correspondence: {anavot,shpigi}@cs.huji.ac.il
Abstract
We present a non-linear, simple, yet effective, feature subset selection
method for regression and use it in analyzing cortical neural activity. Our
algorithm involves a feature-weighted version of the k-nearest-neighbor
algorithm. It is able to capture complex dependency of the target function on its input and makes use of the leave-one-out error as a natural
regularization. We explain the characteristics of our algorithm on synthetic problems and use it in the context of predicting hand velocity from
spikes recorded in motor cortex of a behaving monkey. By applying feature selection we are able to improve prediction quality and suggest a
novel way of exploring neural data.
1 Introduction
In many supervised learning tasks the input is represented by a very large number of features, many of which are not needed for predicting the labels. Feature selection is the task
of choosing a small subset of features that is sufficient to predict the target labels well. Feature selection reduces the computational complexity of learning and prediction algorithms
and saves on the cost of measuring non-selected features. In many situations, feature selection can also enhance the prediction accuracy by improving the signal-to-noise ratio.
Another benefit of feature selection is that the identity of the selected features can provide
insights into the nature of the problem at hand. Therefore feature selection is an important
step in efficient learning of large multi-featured data sets.
Feature selection (variously known as subset selection, attribute selection or variable selection) has been studied extensively both in statistics and by the machine learning community
over the last few decades. In the most common selection paradigm an evaluation function
is used to assign scores to subsets of features and a search algorithm is used to search for
a subset with a high score. The evaluation function can be based on the performance of a
specific predictor (wrapper model, [1]) or on some general (typically cheaper to compute)
relevance measure of the features to the prediction (filter model). In any case, an exhaustive
search over all feature sets is generally intractable due to the exponentially large number of
possible sets. Therefore, search methods are employed which apply a variety of heuristics,
such as hill climbing and genetic algorithms. Other methods simply rank individual features, assigning a score to each feature independently. These methods are usually very fast,
but inevitably fail in situations where only a combined set of features is predictive of the
target function. See [2] for a comprehensive overview of feature selection and [3] which
discusses selection methods for linear regression.
A possible choice of evaluation function is the leave-one-out (LOO) mean square error
(MSE) of the k-Nearest-Neighbor (kNN) estimator ([4, 5]). This evaluation function has
the advantage that it both gives a good approximation of the expected generalization error
and can be computed quickly. [6] used this criterion on small synthetic problems (up to 12
features). They searched for good subsets using forward selection, backward elimination
and an algorithm (called schemata) that races feature sets against each other (eliminating
poor sets, keeping the fittest) in order to find a subset with a good score. All these algorithms perform a local search by flipping one or more features at a time. Since the space
is discrete the direction of improvement is found by trial and error, which slows the search
and makes it impractical for large scale real world problems involving many features.
In this paper we develop a novel selection algorithm. We extend the LOO-kNN-MSE
evaluation function to assign scores to weight vectors over the features, instead of just to
feature subsets. This results in a smooth (?almost everywhere?) function over a continuous
domain, which allows us to compute the gradient analytically and to employ a stochastic
gradient ascent to find a locally optimal weight vector. The resulting weights provide a
ranking of the features, which we can then threshold in order to produce a subset. In this
way we can apply an easy-to-compute, gradient directed search, without relearning of a
regression model at each step but while employing a strong non-linear function estimate
(kNN) that can capture complex dependency of the function on its features1 .
Our motivation for developing this method is to address a major computational neuroscience question: which features of the neural code are relevant to the observed behavior.
This is an important element of enabling interpretability of neural activity. Feature selection is a promising tool for this task. Here, we apply our feature selection method to the
task of reconstructing hand movements from neural activity, which is one of the main challenges in implementing brain computer interfaces [8]. We look at neural population spike
counts, recorded in motor cortex of a monkey while it performed hand movements and locate the most informative subset of neural features. We show that it is possible to improve
prediction results by wisely selecting a subset of cortical units and their time lags, relative
to the movement. Our algorithm, which considers feature subsets, outperforms methods
that consider features on an individual basis, suggesting that complex dependency on a set
of features exists in the code.
The remainder of the paper is organized as follows: we describe the problem setting in
section 2. Our method is presented in section 3. Next, we demonstrate its ability to cope
with a complicated dependency of the target function on groups of features using synthetic
data (section 4). The results of applying our method to the hand movement reconstruction
problem is presented in section 5.
2 Problem Setting
First, let us introduce some notation. Vectors in R^n are denoted by boldface small letters
(e.g. x, w). Scalars are denoted by small letters (e.g. x, y). The i'th element of a vector x
is denoted by x_i. Let f(x), f : R^n -> R be a function that we wish to estimate. Given
a set S ⊂ R^n, the empirical mean square error (MSE) of an estimator f^ for f is defined as

MSE_S(\hat{f}) = \frac{1}{|S|} \sum_{x \in S} \left( f(x) - \hat{f}(x) \right)^2 .
1. The design of this algorithm was inspired by work done by Gilad-Bachrach et al. [7], which
used a large-margin based evaluation function to derive feature selection algorithms for classification.
kNN Regression. k-Nearest-Neighbor (kNN) is a simple, intuitive and efficient way to estimate the value of an unknown function at a given point using its values at other (training)
points. Let S = {x_1, ..., x_m} be a set of training points. The kNN estimator is defined
as the mean function value of the nearest neighbors: f̂(x) = (1/k) Σ_{x'∈N(x)} f(x'), where
N(x) ⊂ S is the set of k nearest points to x in S and k is a parameter ([4, 5]). A softer
version takes a weighted average, where the weight of each neighbor is proportional to its
proximity. One specific way of doing this is

    \hat{f}(x) = \frac{1}{Z} \sum_{x' \in N(x)} f(x')\, e^{-d(x,x')/\beta}    (1)

where d(x, x') = ‖x − x'‖₂² is the squared ℓ₂ norm, Z = Σ_{x'∈N(x)} e^{−d(x,x')/β} is a normalization
factor and β is a parameter. The soft kNN version will be used in the remainder of this
paper. This regression method is a special form of locally weighted regression (see [5] for
an overview of the literature on this subject). It has the desirable property that no learning
(other than storage of the training set) is required for the regression. Also note that the
Gaussian Radial Basis Function has the form of a kernel ([9]) and can be replaced with any
operator on two data points that decays as a function of the difference between them (e.g.
kernel-induced distances). As will be seen in the next section, we use the MSE of a modified
kNN regressor to guide the search for a set of features F ⊆ {1, ..., n} that achieves a low
MSE. However, the MSE and the Gaussian kernel can be replaced by other loss measures
and kernels (respectively), as long as they are differentiable almost everywhere.
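A minimal sketch of this soft kNN estimator (equation 1); the function and variable names are mine, and `beta` and `k` correspond to the parameters in the text:

```python
import numpy as np

def soft_knn_predict(X_train, y_train, x, k=5, beta=0.5):
    """Soft kNN estimate of f(x): a weighted mean of the k nearest
    training labels, with weights exp(-d(x, x')/beta) as in equation (1).
    (No leave-one-out handling here; x is assumed to be a query point.)"""
    d = np.sum((X_train - x) ** 2, axis=1)      # squared l2 distances
    nn = np.argsort(d)[:k]                      # indices of the k nearest points
    w = np.exp(-d[nn] / beta)                   # Gaussian decay weights
    return np.dot(w, y_train[nn]) / np.sum(w)   # normalized weighted mean
```

Because the weights are normalized by Z, the estimate is a convex combination of the neighbors' labels.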
3 The Feature Selection Algorithm
In this section we present our selection algorithm called RGS (Regression, Gradient guided,
feature Selection). It can be seen as a filter method for general regression algorithms or as
a wrapper for estimation by the kNN algorithm.
Our goal is to find subsets of features that induce a small estimation error. As in most supervised learning problems, we wish to find subsets that induce a small generalization error,
but since it is not known, we use an evaluation function on the training set. This evaluation
function is defined not only for subsets but for any weight vector over the features. This
is more general because a feature subset can be represented by a binary weight vector that
assigns a value of one to features in the set and zero to the rest of the features.
For a given weight vector over the features, w ∈ Rⁿ, we consider the weighted squared ℓ₂
norm induced by w, defined as ‖z‖²_w = Σ_i z_i² w_i². Given a training set S, we denote by
f̂_w(x) the value assigned to x by the weighted kNN estimator of equation (1), using
the weighted squared ℓ₂ norm as the distance d(x, x'), with the nearest neighbors found
among the points of S excluding x. The evaluation function is defined as the negative
(halved) squared error of the weighted kNN estimator:

    e(w) = -\frac{1}{2} \sum_{x \in S} \left( f(x) - \hat{f}_w(x) \right)^2 .    (2)
This evaluation function scores weight vectors (w). A change of weights will cause a
change in the distances and, possibly, the identity of each point?s nearest neighbors, which
will change the function estimates. A weight vector that induces a distance measure in
which neighbors have similar labels would receive a high score. The mean, 1/|S| is replaced with a 1/2 to ease later differentiation. Note that there is no explicit regularization
term in e(w). This is justified by the fact that for each point, the estimate of its function
value does not include that point as part of the training set. Thus, equation 2 is a leave-oneout cross validation error. Clearly, it is impossible to go over all the weight vectors (or even
over all the feature subsets), and therefore some search technique is required.
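The leave-one-out score e(w) can be computed directly from its definition; a short, unoptimized Python sketch (names are mine, `beta` as in the text):

```python
import numpy as np

def loo_eval(X, y, w, k=5, beta=0.5):
    """e(w) of equation (2): the negative halved leave-one-out squared
    error of the soft kNN estimator under the w-weighted distance
    ||z||_w^2 = sum_i z_i^2 w_i^2."""
    err = 0.0
    for i in range(len(X)):
        d = np.sum(((X - X[i]) * w) ** 2, axis=1)  # weighted squared l2
        d[i] = np.inf                              # leave point i out
        nn = np.argsort(d)[:k]
        a = np.exp(-d[nn] / beta)
        f_hat = np.dot(a, y[nn]) / np.sum(a)       # soft kNN estimate
        err += (y[i] - f_hat) ** 2
    return -0.5 * err
```

A weight vector that puts its mass on the truly relevant features should score higher (closer to zero) than one that does not.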
Algorithm 1 RGS(S, k, β, T)
1. initialize w = (1, 1, ..., 1)
2. for t = 1 ... T
   (a) pick randomly an instance x from S
   (b) calculate the gradient of e(w):

       \nabla e(w) = \sum_{x \in S} \left( f(x) - \hat{f}_w(x) \right) \nabla_w \hat{f}_w(x)

       \nabla_w \hat{f}_w(x) = \frac{-4}{\beta} \cdot \frac{\sum_{x'', x' \in N(x)} f(x')\, a(x', x'')\, u(x', x'')}{\sum_{x'', x' \in N(x)} a(x', x'')}

       where a(x', x'') = e^{-(\|x - x'\|_w^2 + \|x - x''\|_w^2)/\beta} and u(x', x'') ∈ Rⁿ is a vector with u_i = w_i ((x_i - x'_i)^2 + (x_i - x''_i)^2).
   (c) w = w + η_t ∇e(w), where η_t is a decay factor.
Our method finds a weight vector w that locally maximizes e(w) as defined in (2) and
then uses a threshold in order to obtain a feature subset. The threshold can be set either
by cross validation or by finding a natural cutoff in the weight values. However, we later
show that using the distance measure induced by w in the regression stage compensates for
taking too many features. Since e(w) is defined over a continuous domain and is smooth
almost everywhere we can use gradient ascent in order to maximize it. RGS (algorithm 1)
is a stochastic gradient ascent over e(w). In each step the gradient is evaluated using one
sample point and is added to the current weight vector. RGS considers the weights of all
the features at the same time and thus it can handle dependency on a group of features.
This is demonstrated in section 4. In this respect, it is superior to selection algorithms that
score each feature independently. It is also faster than methods that try to find a good
subset directly by trial and error. Note, however, that convergence to global optima is not
guaranteed and standard techniques to avoid local optima can be used.
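To make the search loop concrete without committing to the closed-form gradient above, here is a sketch that ascends the same evaluation function using a numerical (central-difference) gradient; it is far slower than RGS's analytic update, the helper names are mine, and the best-scoring weight vector seen is returned:

```python
import numpy as np

def loo_error(X, y, w, k=5, beta=0.5):
    # leave-one-out squared error of the w-weighted soft kNN (i.e. -2*e(w))
    err = 0.0
    for i in range(len(X)):
        d = np.sum(((X - X[i]) * w) ** 2, axis=1)
        d[i] = np.inf
        nn = np.argsort(d)[:k]
        a = np.exp(-d[nn] / beta)
        err += (y[i] - np.dot(a, y[nn]) / np.sum(a)) ** 2
    return err

def weight_search(X, y, steps=30, eta=0.5, eps=1e-3, k=5, beta=0.5):
    """Didactic stand-in for RGS: gradient ascent on e(w) with a
    finite-difference gradient, keeping the best weights found."""
    w = np.ones(X.shape[1])
    best_w, best_err = w.copy(), loo_error(X, y, w, k, beta)
    for _ in range(steps):
        g = np.zeros_like(w)
        for j in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[j] += eps
            wm[j] -= eps
            # descending the LOO error is ascending e(w)
            g[j] = -(loo_error(X, y, wp, k, beta) -
                     loo_error(X, y, wm, k, beta)) / (2 * eps)
        w = w + eta * g
        err = loo_error(X, y, w, k, beta)
        if err < best_err:
            best_err, best_w = err, w.copy()
    return best_w
```

Each step costs 2n + 1 leave-one-out evaluations for n features, which is exactly the cost the analytic gradient of Algorithm 1 avoids.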
The parameters of the algorithm are k (number of neighbors), β (Gaussian decay factor),
T (number of iterations) and {η_t}_{t=1}^T (step size decay scheme). The value of k can be
tuned by cross validation; however, a proper choice of β can compensate for a k that is too
large. It makes sense to tune β to a value that places most neighbors in an active zone of
the Gaussian. In our experiments, we set β to half of the mean distance between points
and their k neighbors. It usually makes sense to use an η_t that decays over time to ensure
convergence; however, on our data, convergence was also achieved with η_t = 1.
The computational complexity of RGS is Θ(T N m), where T is the number of iterations,
N is the number of features and m is the size of the training set S. This is correct for a
naive implementation which finds the nearest neighbors and their distances from scratch at
each step, by measuring the distances between the current point and all the other points. RGS
is basically an online method which can be used in batch mode by running it in epochs
on the training set. When it is run for only one epoch, T = m and the complexity is
Θ(m²N). Matlab code for this algorithm (and those that we compare with) is available at
http://www.cs.huji.ac.il/labs/learning/code/fsr/
4 Testing on synthetic data
The use of synthetic data, where we can control the importance of each feature, allows us
to illustrate the properties of our algorithm. We compare our algorithm with other common
Figure 1: (a)-(d): Illustration of the four synthetic target functions. The plots show the function
value as a function of the first two features. (e),(f): demonstration of the effect of feature selection on
estimating the second function using kNN regression (k = 5, β = 0.05). (e) using both features
(mse = 0.03), (f) using the relevant feature only (mse = 0.004)
selection methods: infoGain [10], correlation coefficients (corrcoef) and forward selection
(see [2]). infoGain and corrcoef simply rank features according to the mutual information²
or the correlation coefficient (respectively) between each feature and the labels (i.e. the
target function value). Forward selection (fwdSel) is a greedy method in which features
are iteratively added into a growing subset. In each step, the feature showing the greatest
improvement (given the previously selected subset) is added. This is a search method that
can be applied to any evaluation function and we use our criterion (equation 2 on feature
subsets). This well known method has the advantages of considering feature subsets and
that it can be used with non linear predictors. Another algorithm we compare with scores
each feature independently using our evaluation function (2). This helps us in analyzing
RGS, as it may help single out the respective contributions to performance of the properties
of the evaluation function and the search method. We refer to this algorithm as SKS (Single
feature, kNN regression, feature Selection).
We look at four different target functions over R⁵⁰. The training sets include 20 to 100
points that were chosen randomly from the [−1, 1]⁵⁰ cube. The target functions are given
in the top row of figure 2 and are illustrated in figure 1(a-d). Random Gaussian noise with
zero mean and a variance of 1/7 was added to the function value of the training points.
Clearly, only the first feature is relevant for the first two target functions, and only the first
two features are relevant for the last two target functions. Note also that the last function
is a smoothed version of parity function learning and is considered hard for many feature
selection algorithms [2].
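Reading the four targets off Figure 2's panel titles (x₁², sin(2πx₁ + π/2), sin(2πx₁ + π/2) + x₂, sin(2πx₁)sin(2πx₂)), the benchmark can be regenerated as follows; the noise variance of 1/7 is from the text, while the seed and names are arbitrary choices of mine:

```python
import numpy as np

TARGETS = {
    "a": lambda X: X[:, 0] ** 2,
    "b": lambda X: np.sin(2 * np.pi * X[:, 0] + np.pi / 2),
    "c": lambda X: np.sin(2 * np.pi * X[:, 0] + np.pi / 2) + X[:, 1],
    "d": lambda X: np.sin(2 * np.pi * X[:, 0]) * np.sin(2 * np.pi * X[:, 1]),
}

def make_synthetic(target, m, n=50, seed=0):
    """m training points drawn uniformly from the [-1, 1]^n cube, labeled
    by one of the four targets plus N(0, 1/7) Gaussian label noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(m, n))
    y = TARGETS[target](X) + rng.normal(0.0, np.sqrt(1.0 / 7.0), size=m)
    return X, y
```

Only the first one or two columns of X carry signal; the remaining 48-49 are pure distractors, which is what makes the benchmark informative for feature selection.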
First, to illustrate the importance of feature selection on regression quality we use kNN to
estimate the second target function. Figure 1(e-f) shows the regression results for target
(b), using either only the relevant feature or both the relevant and an irrelevant feature.
The addition of one irrelevant feature degrades the MSE tenfold. Next, to demonstrate
the capabilities of the various algorithms, we run them on each of the above problems with
varying training set size. We measure their success by counting the number of times that
the relevant features were assigned the highest rank (repeating the experiment 250 times by
re-sampling the training set). Figure 2 presents success rate as function of training set size.
We can see that all the algorithms succeeded on the first function which is monotonic and
depends on one feature alone. infoGain and corrcoef fail on the second, non-monotonic
function. The three kNN based algorithms succeed because they only depend on local
properties of the target function. We see, however, that RGS needs a larger training set to
achieve a high success rate. The third target function depends on two features but the dependency is simple as each of them alone is highly correlated with the function value. The
fourth, XOR-like function exhibits a complicated dependency that requires consideration
of the two relevant features simultaneously. SKS which considers features separately sees
the effect of all other features as noise and, therefore, has only marginal success on the third
2. Feature and function values were "binarized" by comparing them to the median value.
[Figure 2 panels: success rate (%) vs. number of training examples (20-100) for each target function — (a) x₁², (b) sin(2πx₁ + π/2), (c) sin(2πx₁ + π/2) + x₂, (d) sin(2πx₁) sin(2πx₂) — with one curve per method: corrcoef, infoGain, SKS, fwdSel, RGS.]
Figure 2: Success rate of the different algorithms on 4 synthetic regression tasks (averaged over 250
repetitions) as a function of the number of training examples. Success is measured by the percent of
the repetitions in which the relevant feature(s) received first place(s).
function and fails on the fourth altogether. RGS and fwdSel apply different search methods.
fwdSel considers subsets but can evaluate only one additional feature in each step, giving
it some advantage over RGS on the third function but causing it to fail on the fourth. RGS
takes a step in all features simultaneously. Only such an approach can succeed on the fourth
function.
5 Hand Movements Reconstruction from Neural Activity
To suggest an interpretation of neural coding, we apply RGS and compare it with the alternatives presented in the previous section³ on the hand movement reconstruction task. The
data sets were collected while a monkey performed a planar center-out reaching task with
one or both hands [11]. 16 electrodes, inserted daily into novel positions in primary motor
cortex were used to detect and sort spikes in up to 64 channels (4 per electrode). Most
of the channels detected isolated neuronal spikes by template matching. Some, however,
had templates that were not tuned, producing spikes during only a fraction of the session.
Others (about 25%) contained unused templates (resulting in a constant zero producing
channel or, possibly, a few random spikes). The rest of the channels (one per electrode)
produced spikes by threshold passing. We construct a labeled regression data set as follows. Each example corresponds to one time point in a trial. It consists of the spike counts
that occurred in the 10 previous consecutive 100 ms long time bins from all 64 channels
(64 × 10 = 640 features), and the label is the X or Y component of the instantaneous hand
velocity. We analyze data collected over 8 days. Each data set has an average of 5050
examples collected during the movement periods of the successful trials.
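Concretely, stacking the 10 preceding 100 ms bins of all channels into one example is a simple lag embedding; a sketch (array names are mine):

```python
import numpy as np

def lagged_features(counts, n_lags=10):
    """counts: array of shape (T, C) of per-bin spike counts for C channels.
    Returns an array of shape (T - n_lags, C * n_lags): example t holds the
    counts of the n_lags bins preceding time t, for every channel
    (64 channels * 10 lags = 640 features in the setting above)."""
    T, C = counts.shape
    rows = [counts[t - n_lags:t].ravel() for t in range(n_lags, T)]
    return np.asarray(rows)
```

Each column of the result corresponds to one (channel, lag) pair, which is exactly the granularity at which the selection algorithms rank features below.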
In order to evaluate the different feature selection methods we separate the data into training
and test sets. Each selection method is used to produce a ranking of the features. We then
apply kNN (based on the training set) using different size groups of top ranking features to
the test set. We use the resulting MSE (or correlation coefficient between true and estimated
movement) as our measure of quality. To test the significance of the results we apply 5-fold cross validation and repeat the process 5 times on different permutations of the trial
ordering. Figure 3 shows the average (over permutations, folds and velocity components)
MSE as a function of the number of selected features on four of the different data sets
(results on the rest are similar and omitted due to lack of space)⁴. It is clear that RGS
achieves better results than the other methods throughout the range of feature numbers.
To test whether the performance of RGS was consistently better than the other methods
we counted winning percentages (the percent of the times in which RGS achieved lower
MSE than another algorithm) in all folds of all data sets and as a function of the number of
3. fwdSel was not applied due to its intractably high run time complexity. Note that its run time is
at least r times that of RGS, where r is the size of the optimal set, and is longer in practice.
4. We use k = 50 (approximately 1% of the data points). β is set automatically as described in
section 3. These parameters were manually tuned for good kNN results and were not optimized for
any of the feature selection algorithms. The number of epochs for RGS was set to 1 (i.e. T = m).
Figure 3: MSE results for the different feature selection methods on the neural activity data sets. Each
sub figure is a different recording day. MSEs are presented as a function of the number of features
used. Each point is a mean over all 5 cross validation folds, 5 permutations on the data and the two
velocity component targets. Note that some of the data sets are harder than others.
features used. Figure 4 shows the winning percentages of RGS versus the other methods.
For a very low number of features, while the error is still high, RGS winning scores are
only slightly better than chance but once there are enough features for good predictions
the winning percentages are higher than 90%. In figure 3 we see that the MSE achieved
when using only approximately 100 features selected by RGS is better than when using all
the features. This difference is indeed statistically significant (win score of 92%). If the
MSE is replaced by correlation coefficient as the measure of quality, the average results
(not shown due to lack of space) are qualitatively unchanged.
RGS not only ranks the features but also gives them weights that achieve locally optimal
results when using kNN regression. It therefore makes sense not only to select the features
but to weigh them accordingly. Figure 5 shows the winning percentages of RGS using
the weighted features versus RGS using uniformly weighted features. The corresponding
MSEs (with and without weights) on the first data set are also displayed. It is clear that
using the weights improves the results in a manner that becomes increasingly significant as
the number of features grows, especially when the number of features is greater than the
optimal number. Thus, using weighted features can compensate for choosing too many by
diminishing the effect of the surplus features.
To take a closer look at what features are selected, figure 6 shows the 100 highest ranking
features for all algorithms on one data set. Similar selection results were obtained in the
rest of the folds. One would expect to find that well isolated cells (template matching) are
more informative than threshold based spikes. Indeed, all the algorithms select isolated
cells more frequently within the top 100 features (RGS does so in 95% of the time and the
rest in 70%-80%). A human selection of channels, based only on looking at raster plots
and selecting channels with stable firing rates was also available to us. This selection was
independent of the template/threshold categorisation. Once again, the algorithms selected
the humanly preferred channels more frequently than the other channels. Another and more
interesting observation that can also be seen in the figure is that while corrcoef, SKS and
infoGain tend to select all time lags of a channel, RGS's selections are more scattered (more
channels and only a few time bins per channel). Since RGS achieves best results, we
Figure 4: Winning percentages of RGS over the other algorithms. RGS achieves better MSEs consistently.

Figure 5: Winning percentages of RGS with and without weighting of features (black). Gray lines are corresponding MSEs of these methods on the first data set.
Figure 6: 100 highest ranking features (grayed out) selected by the algorithms. Results are for one
fold of one data set. In each sub figure the bottom row is the (100ms) time bin with least delay and
the higher rows correspond to longer delays. Each column is a channel (silent channels omitted).
conclude that this selection pattern is useful. Apparently RGS found these patterns thanks
to its ability to evaluate complex dependency on feature subsets. This suggests that such
dependency of the behavior on the neural activity does exist.
6 Summary
In this paper we present a new method of selecting features for function estimation and use
it to analyze neural activity during a motor control task. We use the leave-one-out mean
squared error of the kNN estimator and minimize it using gradient ascent on an "almost"
smooth function. This yields a selection method which can handle a complicated dependency of the target function on groups of features, yet can be applied to large-scale problems.
This is valuable since many common selection methods lack one of these properties. By
comparing the result of our method to other selection methods on the motor control task,
we show that consideration of complex dependency helps to achieve better performance.
These results suggest that this is an important property of the code.
Our future work is aimed at a better understanding of neural activity through the use of
feature selection. One possibility is to perform feature selection on other kinds of neural
data such as local field potentials or retinal activity. Another promising option is to explore
the temporally changing properties of neural activity. Motor control is a dynamic process
in which the input output relation has a temporally varying structure. RGS can be used in
on line (rather than batch) mode to identify these structures in the code.
References
[1] R. Kohavi and G. H. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[2] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 2003.
[3] A. J. Miller. Subset Selection in Regression. Chapman and Hall, 1990.
[4] L. Devroye. The uniform convergence of nearest neighbor regression function estimators and their application in optimization. IEEE Transactions on Information Theory, 24(2), 1978.
[5] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. AI Review, 11.
[6] O. Maron and A. Moore. The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review, 11:193-225, April 1997.
[7] R. Gilad-Bachrach, A. Navot, and N. Tishby. Margin based feature selection - theory and algorithms. In Proc. 21st International Conference on Machine Learning (ICML), pages 337-344, 2004.
[8] D. M. Taylor, S. I. Tillery, and A. B. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296:1829-1832, 2002.
[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[10] J. R. Quinlan. Induction of decision trees. In Jude W. Shavlik and Thomas G. Dietterich, editors, Readings in Machine Learning. Morgan Kaufmann, 1990. Originally published in Machine Learning, 1:81-106, 1986.
[11] R. Paz, T. Boraud, C. Natan, H. Bergman, and E. Vaadia. Preparatory activity in motor cortex reflects learning of local visuomotor skills. Nature Neuroscience, 6(8):882-890, August 2003.
Phase Synchrony Rate for the Recognition of
Motor Imagery in Brain-Computer Interface
Le Song
National ICT Australia
School of Information Technologies
The University of Sydney
NSW 2006, Australia
[email protected]
Evian Gordon
Brain Resource Company
Scientific Chair, Brain Dynamics Center
Westmead Hospital
NSW 2006, Australia
[email protected]
Elly Gysels
Swiss Center for Electronics and Microtechnology
Neuchâtel, CH-2007 Switzerland
[email protected]
Abstract
Motor imagery attenuates EEG μ and β rhythms over sensorimotor cortices. These amplitude changes are most successfully captured by the method of Common Spatial Patterns (CSP) and widely used in brain-computer interfaces (BCI). BCI methods based on amplitude information, however, have not incorporated the rich phase dynamics in the EEG
method of Common Spatial Patterns (CSP) and widely used in braincomputer interfaces (BCI). BCI methods based on amplitude information, however, have not incoporated the rich phase dynamics in the EEG
rhythm. This study reports on a BCI method based on phase synchrony
rate (SR). SR, computed from binarized phase locking value, describes
the number of discrete synchronization events within a window. Statistical nonparametric tests show that SRs contain significant differences between 2 types of motor imageries. Classifiers trained on SRs consistently
demonstrate satisfactory results for all 5 subjects. It is further observed
that, for 3 subjects, phase is more discriminative than amplitude in the
first 1.5-2.0 s, which suggests that phase has the potential to boost the
information transfer rate in BCIs.
1
Introduction
A brain-computer interface (BCI) is a communication system that relies on the brain rather
than the body for control and feedback. Such an interface offers hope not only for those
severely paralyzed to control wheelchairs but also to enhance normal performance. Current
BCI research is still in its infancy. Most studies focus on finding useful brain signals and
designing algorithms to interpret them [1, 2].
The most exploited signal in BCI is the scalp-recorded electroencephalogram (EEG). EEG
is a noninvasive measurement of the brain's electrical activities and has a temporal resolution of milliseconds. It is well known that motor imagery attenuates EEG μ and β rhythms
over sensorimotor cortices. Depending on the part of the body imagined moving, the
amplitude of multichannel EEG recordings exhibits distinctive spatial patterns. Classification
of these patterns is used to control computer applications. Currently, the most successful
method for BCI is called Common Spatial Patterns (CSP). The CSP method constructs a
few new time series whose variances contain the most discriminative information. For the
problem of classifying 2 types of motor imageries, the CSP method is able to correctly
recognize 90% of the single trials in many studies [3, 4]. Ongoing research on the CSP
method mainly focuses on its extension to the multi-class problem [5] and its integration
with other forms of EEG amplitude information [4].
EEG signals contain both amplitude and phase information. Phase, however, has been
largely ignored in BCI studies. Literature from neuroscience suggests, instead, that phase
can be more discriminative than amplitude [6, 7]. For example, compared to a stimulus in
which no face is present, face perception induces significant changes in γ synchrony, but
not in amplitude [6]. Phase synchrony has been proposed as a mechanism for dynamic
integration of distributed neural networks in the brain. Decreased synchrony, on the other
hand, is associated with active unbinding of the neural assemblies and preparation of the
brain for the next mental state (see [7] for a review). Accumulating evidence from both
micro-electrode recordings [8,9] and EEG measurements [6] provides support to the notion
that phase dynamics subserve all mental processes, including motor planning and imagery.
In the BCI community, only a paucity of results has demonstrated the relevance of phase
information [10-12]. Still fewer studies have directly compared amplitude and phase information for BCI. To address these deficits, this paper focuses on three
issues:
- Does binarized phase locking value (PLV) contain relevant information for the
classification of motor imageries?
- How does the performance of binarized PLV compare to that of non-binarized
PLV?
- How does the performance of methods based on phase information compare to
that of the CSP method?
In the remainder of the paper, the experimental paradigm will be described first. The details
of the method based on binarized PLV are presented in Section 3. Comparisons between
PLV, binarized PLV and CSP are then made in Section 4. Finally, conclusions are provided
in Section 5.
2 Recording paradigm
Data set IVa provided by the Berlin BCI group [5] is investigated in this paper (available
from the BCI competition III web site). Five healthy subjects (labeled 'aa', 'al', 'av', 'aw'
and 'ay' respectively) participated in the EEG recordings. Based on the visual cues, they
were required to imagine for 3.5 s either right hand (type 1) or right foot movements (type
2). Each type of motor imagery was carried out 140 times, which results in 280 labeled
trials for each subject. Furthermore, the down-sampled data (at 100 Hz) is used. For
the convenience of explanation, the length of the data is also referred to as time points.
Therefore, the window for the full length of a trial is [1, 350].
3 Feature from phase
3.1 Phase locking value
Two EEG signals x_i(t) and x_j(t) are said to be synchronized if their instantaneous phase
difference φ_ij(t) (complex-valued with unit modulus) stays constant for a period of time
τ_φ. The phase locking value (PLV) is commonly used to quantify the degree of synchrony, i.e.

    \mathrm{PLV}_{ij}(t) = \frac{1}{\tau_\phi} \left| \sum_{s=t-\tau_\phi}^{t} \phi_{ij}(s) \right| \in [0, 1],        (1)

where 1 represents perfect synchrony. The instantaneous phase difference φ_ij(t) can be
computed using either wavelet analysis or the Hilbert transformation. Studies show that these
two approaches are equivalent for the analysis of EEG signals [13]. In this study, the Hilbert
transformation is employed in a similar manner to [10].
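To make (1) concrete, the computation can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' implementation: the analytic signal is obtained with an FFT-based Hilbert transform, the test signal is synthetic, and the band-pass filtering and Laplacian referencing described later are omitted.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: returns the analytic signal x + j*H{x}."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def plv(xi, xj, tau):
    """PLV_ij over trailing windows of tau samples, as in eq. (1).

    phi is the complex-valued, unit-modulus instantaneous phase difference;
    averaging it over a window and taking the modulus gives a value in [0, 1].
    """
    phi = np.exp(1j * (np.angle(analytic_signal(xi)) - np.angle(analytic_signal(xj))))
    return np.array([np.abs(phi[t - tau:t].mean()) for t in range(tau, len(phi) + 1)])
```

For two sinusoids of the same frequency with a constant lag, every window gives a PLV close to 1; for unrelated signals the values fall toward 0.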
3.2 Synchrony rate
Neuroscientists usually threshold the phase locking value and focus on statistically significant periods of strong synchrony. Only recently have researchers begun to study the transition
between high and low levels of synchrony [6, 14, 15]. Most notably, the author of [15]
transformed PLV into discrete values called link rates and showed that link rates can be
a sensitive measure of relevant changes in synchrony. To investigate the usefulness of discretization for BCIs, we binarize the time series of PLV and define the synchrony rate based
on them.
The threshold chosen to binarize PLV minimizes the quantization error. Suppose that the
distribution of PLV is p(x); then the threshold th_0 is determined by

    th_0 = \arg\min_{th} \int_0^1 \left( x - g(x - th) \right)^2 p(x) \, dx,        (2)

where g(·) is the hard-limit transfer function, which assumes 1 for non-negative numbers
and 0 otherwise. In practice, p(x) is computed at discrete locations and the integral is
replaced by a summation. For the data set investigated, the th_0 values are similar across the 5 subjects
(approximately 0.5) when the EEG signals are filtered between 4 and 40 Hz and τ_φ is 0.25 s (these parameters
are used in the Result section for all 5 subjects). The thresholded sequences are binary and
denoted by b_ij(t).
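In discretized form, the minimization in (2) can be carried out by an exhaustive search over a grid of candidate thresholds, with p(x) estimated from a histogram of observed PLVs. The following is a minimal sketch of that procedure; the grid size and the histogram estimate are choices of this example, not the paper's.

```python
import numpy as np

def quantization_threshold(plv_samples, n_bins=100):
    """Grid search for th_0 minimizing E[(x - g(x - th))^2], cf. eq. (2)."""
    # estimate p(x) on [0, 1] with a normalized histogram
    p, edges = np.histogram(plv_samples, bins=n_bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    best_th, best_err = centers[0], np.inf
    for th in centers:
        g = (centers >= th).astype(float)          # hard-limit transfer function
        err = np.sum((centers - g) ** 2 * p) * width
        if err < best_err:
            best_th, best_err = th, err
    return best_th
```

For a bimodal PLV distribution the search lands between the two modes, which is consistent with the near-0.5 thresholds reported for this data set.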
The ones in b_ij(t) can be viewed as discrete events of strong synchrony, while zeros are
those of weak synchrony. The resemblance of b_ij(t) to the spike trains of neurons prompts
us to define the synchrony rate (SR): the number of discrete events of strong synchrony per
second. Formally, given a window τ_b, the synchrony rate r_ij(t) at time t is

    r_{ij}(t) = \frac{1}{\tau_b} \sum_{s=t-\tau_b}^{t} b_{ij}(s).        (3)
SR describes the average level of synchrony between a pair of electrodes in a given window.
The size of the window will affect the value of the SR. In the next section, we will detail
the choice of the windows and the selection of features from SRs.
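Given a binarized PLV sequence, (3) is simply a trailing moving average. A small sketch (the toy numbers are illustrative only):

```python
import numpy as np

def synchrony_rate(plv_series, th0, tau_b):
    """Binarize a PLV series at th0 to get b_ij, then average it over
    trailing windows of tau_b samples (eq. 3)."""
    b = (np.asarray(plv_series) >= th0).astype(float)   # b_ij(t) = g(PLV - th0)
    return np.array([b[t - tau_b:t].mean() for t in range(tau_b, len(b) + 1)])
```

For example, `synchrony_rate([0.6, 0.7, 0.2, 0.3, 0.8, 0.9], 0.5, 3)` binarizes to [1, 1, 0, 0, 1, 1] and yields the four window averages 2/3, 1/3, 1/3, 2/3.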
3.3 Feature extraction
Before computing synchrony rates, a circular Laplacian [16] is applied to boost the spatial resolution of the raw EEG. This method first interpolates the scalp EEG, and then
re-references EEG using interpolated values on a circle around an electrode. Varying the
radius of the circles achieves different spatial filtering effects, and the best radius is tuned
for each subject individually.
Spatially filtered EEG is split into 6 sliding windows of length 100, namely [1, 100], [51,
150], [101, 200], [151, 250], [201, 300] and [251, 350]. Each window is further divided
Figure 1: Overall scheme of window division for (a) the synchrony rate (SR) method and
(b) the phase locking value (PLV) method. τ_φ for the SR method covers the length of
a micro-window, while that for the PLV method corresponds to the length of a sliding
window. τ_b is equal to 100 − τ_φ + 1. (Note: time axis is NOT uniformly scaled.)
into 76 micro-windows (each of size 25, overlapping by 24). PLVs are then computed and
binarized for each micro-window (according to (1)). Averaging the 76 binarized PLVs
results in the SR (according to (3)). As a whole, 6 SRs will be computed for each electrode
pair in a trial. SRs from all electrode pairs will be passed to statistical tests and further used
as features for classification. The overall scheme of this window division is illustrated in
Fig 1(a). In order to compare PLV and SR, PLVs are also computed for the full length of
each sliding window (Fig. 1(b)), which results in 6 PLVs for each electrode pair. These
PLVs will go through the same statistical tests and classification stage.
3.4 Statistical test
A key observation is that both PLVs and SRs contain many statistically significant differences between the 2 types of motor imagery in almost every sliding window. Statistical
nonparametric tests [17] are employed to locate these differences. For each electrode pair, a
null hypothesis (H0: the difference of the mean SR/PLV between the 2 types of motor imagery
is zero) is formulated for each sliding window. Then the distribution of the difference is
obtained by 1000 randomizations. The hypothesis is rejected if the difference of the original
data is larger than 99.5% or smaller than 0.5% of the differences from the randomized data (equivalent
to p < 0.01).
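The randomization test can be sketched as follows. This is a generic permutation test on the mean difference; only the 1000 randomizations and the 0.5%/99.5% cut-offs come from the text, the rest is an assumed standard implementation.

```python
import numpy as np

def permutation_test(a, b, n_perm=1000, alpha=0.01, rng=None):
    """Two-sided randomization test of H0: mean(a) - mean(b) = 0.

    The null distribution is built by shuffling the pooled samples n_perm
    times; H0 is rejected when the observed difference lies outside the
    central 1 - alpha of the null (for alpha = 0.01, the 0.5%/99.5% tails).
    """
    rng = np.random.default_rng(rng)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[:len(a)].mean() - perm[len(a):].mean()
    low, high = np.percentile(null, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return bool(observed < low or observed > high)   # True -> reject H0
```

Applied per electrode pair and per sliding window, the rejections mark the SR/PLV features whose means differ significantly between the two imagery types.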
Fig. 2 illustrates the test results with data from subject 'av'. For simplicity, only those
SRs with a significant increase are displayed. Although the exact locations of these increases
differ from window to window, some general patterns can be observed. Roughly speaking, windows 2, 3 and 4 can be grouped as similar, while windows 1, 5 and 6 differ
from each other. Window 1 reflects changes in the early stage of a motor imagery, consisting of increased couplings mainly within the visual cortices and between the visual and motor
cortices. Then (windows 2, 3 and 4) increased couplings occur between the motor cortices of
both hemispheres and between lateral and mesial areas of the motor cortices. During the
last stage, these couplings first (window 5) shift to the left hemisphere and then (window 6)
reduce to some sparse distant interactions. Similar patterns can also be discovered from the
PLVs (not illustrated). Although the exact functional interpretation of these patterns awaits
further investigation, they can be treated as potential features for classification.
Figure 2: Significantly increased synchrony rates in right hand motor imagery. Data are
from subject 'av'. (A: anterior; L: left; P: posterior; R: right.)
4 Classification strategy
To evaluate the usefulness of the synchrony rate for the classification of motor imagery, 50×2-fold cross-validation is employed to compute the generalization error. This scheme randomizes the order of the trials 50 times. Each randomization further splits the trials into
two equal halves (of 70 trials), each serving as training data once. There are four steps in
each fold. Averaging the prediction errors from each fold results in the generalization error.
- Compute SRs for each trial (including both training and test data). As illustrated in
Fig. 1(a), this results in a 6-dimensional (one for each window) feature vector for each
electrode pair (6903 = 118 × (118 − 1)/2 pairs in total). Alternatively, it can be viewed as a
6903-dimensional feature vector for each window.
- Filter features using the Fisher ratio. The Fisher ratio (a variant, |μ+ − μ−|/(σ+ + σ−),
is used in the actual computation) measures the discriminability of an individual feature for the
classification task. It is computed using the training data only, and then compared to a threshold (0.3), below
which a feature is discarded. The indices of the selected features are further used to filter
the test data. The selected features are not necessarily all those located by the statistical
tests. Generally, they are only a subset of the most significant SRs.
- Train a linear SVM for each window and use a meta-training scheme to combine them. The
evolving nature of the SRs (illustrated in Fig. 2) suggests that the information in the 6 windows
may be complementary. Similar to [4], a second-level linear SVM is trained
on the outputs of the SVMs for the individual windows. This meta-training scheme allows us to
exploit the inter-window relations. (Note that this step is carried out strictly on the training
data.)
- Predict the label of the test data. Test data are fed into the two-level SVM, and the
prediction error is measured as the proportion of the misclassified trials.
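The Fisher-ratio filtering in the second step can be sketched as follows. The function names and the toy data are illustrative; only the ratio variant |μ+ − μ−|/(σ+ + σ−) and the 0.3 threshold come from the text.

```python
import numpy as np

def fisher_ratio(x_pos, x_neg):
    """Per-feature variant |mu+ - mu-| / (sigma+ + sigma-)."""
    mu_p, mu_n = x_pos.mean(axis=0), x_neg.mean(axis=0)
    sd_p, sd_n = x_pos.std(axis=0), x_neg.std(axis=0)
    return np.abs(mu_p - mu_n) / (sd_p + sd_n + 1e-12)

def filter_features(x_train_pos, x_train_neg, x_test, threshold=0.3):
    """Select feature indices on the training data only, then apply the
    same indices to the test data (as required for honest cross-validation)."""
    keep = np.where(fisher_ratio(x_train_pos, x_train_neg) > threshold)[0]
    return keep, x_test[:, keep]
```

Computing the ratio on the training fold only, and reusing the resulting indices on the held-out fold, is what keeps the feature selection from leaking test information into the generalization error.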
The above four steps are also used to compute the generalization errors for the PLV method.
The only modification is in step one, where PLVs are computed instead of SRs (Fig. 1(b)).
In the next section, we will present the generalization errors for both SR and PLV method,
and compare them to those of the CSP method.
5 Result and comparison
5.1 Generalization error
Table 1 shows the generalization errors in percentage (with standard deviations) for both
synchrony rate and PLV method. For comparison, we also computed the generalization
errors of the CSP method [3] using a linear SVM and 50×2-fold cross-validation. The parameters of the CSP method (including the filtering frequency, the number of channels used
and the number of spatial patterns selected) are individually tuned for each subject according to the competition-winning entry for data set IVa [18]. Note that all errors in Table 1 are
computed using the full length (3.5 s) of a trial.
Generally, the errors of the SR method are higher than those of the PLV method. This is
because SR is, by definition, an approximation of PLV. Recall that during the computation of SRs, the PLVs in the micro-windows are first binarized with a threshold th_0. This
threshold is chosen so that the approximation is as close to the original as possible. It works
especially well for two of the subjects ('al' and 'ay'), with the difference between the two
methods less than 1%. Although the SR method produces higher errors, it may have advantages in practice, especially for hardware-implemented BCI systems: the smaller window
for PLV computation means a smaller buffer, and binarized PLV makes further processing
easier and faster.
The errors of the CSP method are lowest for most of the subjects. For subjects 'aa' and
'aw', it is better than the other two methods by 10-20%, but the gaps narrow for
subjects 'al' and 'av' (less than 2.5%). Most notably, for subject 'ay', the SR method even
outperforms the CSP method by about 5%. Remember that the CSP method is implemented
using individually optimized parameters, while those for the SR and PLV methods are the
same across the 5 subjects. Fine-tuning the parameters has the potential to further improve
the performance of the latter two methods. The errors computed above, however, reveal
only part of the difference between the three methods. In the next subsection, a more thorough
investigation will be carried out using information transfer rates.
5.2 Information transfer rate
Information transfer rate (ITR) [1] is the amount of information (measured in bits) generated by a BCI system within a second. It takes both the error and the length of a trial into
account. If two BCI systems produce the same error, the one with a short trial will have
higher information transfer rate. To investigate the performance of the three methods in this
context, we shortened the trials into 5 different lengths, namely 1.0 s, 1.5 s, 2.0 s, 2.5 s and
3.0 s. The generalization errors are computed for these shortened trials and then converted
into information transfer rates, as shown in Fig. 3.
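The conversion from error rate and trial length to ITR follows Wolpaw's definition [1]; for reference, a small sketch (the function name and edge-case handling are choices of this example):

```python
import math

def itr_bits_per_second(p_err, trial_s, n_classes=2):
    """Wolpaw information transfer rate: bits per trial divided by trial length.

    Bits per trial: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    where P = 1 - p_err is the probability of a correct decision.
    """
    p = 1.0 - p_err
    if p >= 1.0:
        bits = math.log2(n_classes)                    # error-free decisions
    elif p <= 0.0:
        bits = math.log2(n_classes / (n_classes - 1))  # limit as P -> 0
    else:
        bits = (math.log2(n_classes) + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1)))
    return bits / trial_s
```

A chance-level two-class classifier (p_err = 0.5) transfers 0 bits regardless of trial length, while an error-free 2 s trial yields 0.5 bit/s; this is why shortening the trials can raise the ITR even as the error grows.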
Interesting results emerge from the curves in Fig. 3. Most subjects (except subject 'aw')
achieve the highest information transfer rates within the first 1.5-2.0 s. Although longer
trials usually decrease the errors, they do not necessarily result in increased information
transfer rates. Furthermore, for subjects 'al', 'av' and 'ay', the highest information transfer
Table 1: Generalization errors (%) of the synchrony rate (SR), PLV and CSP methods

Subject      aa             al            av             aw             ay
SR           29.34 ± 3.97   4.05 ± 1.28   32.67 ± 3.41   22.96 ± 4.39    5.93 ± 1.75
PLV          23.05 ± 3.39   3.59 ± 1.28   29.91 ± 3.23   18.65 ± 3.48    5.41 ± 1.53
CSP          12.58 ± 2.56   2.65 ± 1.35   30.30 ± 3.02    3.16 ± 1.32   11.43 ± 2.34
Figure 3: Information transfer rates (ITR) for synchrony rate (SR), PLV and CSP method.
Horizontal axis is time T (in seconds). Vertical axis on the left measures information transfer rate (in bit/second) and that on the right shows the generalization error (GE) in decimals. The three lines of Greek characters under each subplot code the results of statistical
comparisons (Student's t-test, significance level 0.01) of different methods. Line 1 is the
comparison between SR and CSP methods; Line 2 is between SR and PLV method; and
Line 3 between PLV and CSP method.
rates are achieved by methods based on phase. Especially for subject 'ay', phase generates
about 0.2 bits more information per second. The qualitative similarity between SR and
PLV method suggests that phase can be more discriminative than amplitude within the
first 1.5-2.0 s. Common to the three methods, however, the near zero information transfer
rates within the first second virtually pose a limit for BCIs. In the case where real-time
application is of high priority, such as navigating wheelchairs, this problem is even more
pronounced. Incorporating phase information and continuing the search for new features
have the potential to overcome this limit.
6 Conclusion
EEG phase contains complex dynamics. Changes of phase synchrony provide complementary information to EEG amplitude. Our results show that within the first 1.5-2.0 s of a
motor imagery, phase can be more useful for classification and can be exploited by our
synchrony rate method. Although methods based on phase have achieved good results in
some subjects, the subject-wise difference and the exact functional interpretation of the
selected features need further investigation. Solving these problems has the potential to
boost information transfer rates in BCIs.
Acknowledgments
The author would like to thank Ms. Yingxin Wu and Dr. Julien Epps from NICTA, and Dr.
Michael Breakspear from Brain Dynamics Center for discussion.
References
[1] J.R. Wolpaw et al., "Brain-computer interface technology: a review of the first international
meeting," IEEE Trans. Rehab. Eng., vol. 8, pp. 164-173, 2000.
[2] T.M. Vaughan et al., "Brain-computer interface technology: a review of the second international
meeting," IEEE Trans. Rehab. Eng., vol. 11, pp. 94-109, 2003.
[3] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial
EEG during imagined hand movement," IEEE Trans. Rehab. Eng., vol. 8, pp. 441-446, 2000.
[4] G. Dornhege, B. Blankertz, G. Curio, and K.R. Müller, "Combining features for BCI," Advances
in Neural Inf. Proc. Systems (NIPS 02), vol. 15, pp. 1115-1122, 2003.
[5] G. Dornhege, B. Blankertz, G. Curio, and K.R. Müller, "Boosting bit rates in non-invasive
EEG single-trial classifications by feature combination and multi-class paradigms," IEEE Trans.
Biomed. Eng., vol. 51, pp. 993-1002, 2004.
[6] E. Rodriguez et al., "Perception's shadow: long distance synchronization of human brain activity," Nature, vol. 397, pp. 430-433, 1999.
[7] F. Varela, J.P. Lachaux, E. Rodriguez, and J. Martinerie, "The brainweb: phase synchronization
and large-scale integration," Nature Reviews Neuroscience, vol. 2, pp. 229-239, 2001.
[8] W. Singer and C.M. Gray, "Visual feature integration and the temporal correlation hypothesis,"
Annu. Rev. Neurosci., vol. 18, pp. 555-586, 1995.
[9] P.R. Roelfsema, A.K. Engel, P. König, and W. Singer, "Visuomotor integration is associated with
zero time-lag synchronization among cortical areas," Nature, vol. 385, pp. 157-161, 1997.
[10] E. Gysels and P. Celka, "Phase synchronization for the recognition of mental tasks in a brain-computer interface," IEEE Trans. Neural Syst. Rehab. Eng., vol. 12, pp. 406-415, 2004.
[11] C. Brunner, B. Graimann, J.E. Huggins, S.P. Levine, and G. Pfurtscheller, "Phase relationships
between different subdural electrode recordings in man," Neurosci. Lett., vol. 275, pp. 69-74, 2005.
[12] L. Song, "Desynchronization network analysis for the recognition of imagined movement in
BCIs," Proc. of 27th IEEE EMBS conference, Shanghai, China, September 2005.
[13] M. Le Van Quyen et al., "Comparison of Hilbert transform and wavelet methods for the analysis
of neuronal synchrony," J. Neurosci. Methods, vol. 111, pp. 83-98, 2001.
[14] M. Breakspear, L. Williams, and C.J. Stam, "A novel method for the topographic analysis of
neural activity reveals formation and dissolution of 'dynamic cell assemblies'," J. Comput. Neurosci., vol. 16, pp. 49-68, 2004.
[15] M.J.A.M. van Putten, "Proposed link rates in the human brain," J. Neurosci. Methods, vol.
127, pp. 1-10, 2003.
[16] L. Song and J. Epps, "Improving separability of EEG signals during motor imagery with an
efficient circular Laplacian," in preparation.
[17] T.E. Nichols and A.P. Holmes, "Nonparametric permutation tests for functional neuroimaging:
a primer with examples," Human Brain Mapping, vol. 15, pp. 1-25, 2001.
[18] Y.J. Wang, X.R. Gao, Z.G. Zhang, B. Hong, and S.K. Gao, "BCI competition III, data set IVa:
classifying single-trial EEG during motor imagery with a small training set," IEEE Trans. Neural
Syst. Rehab. Eng., submitted.
Kammen, Koch and Holmes
Collective Oscillations in the
Visual Cortex
Daniel Kammen & Christof Koch
Computation and Neural Systems
Caltech 216-76
Pasadena, CA 91125

Philip J. Holmes
Dept. of Theor. & Applied Mechanics
Cornell University
Ithaca, NY 14853
ABSTRACT
The firing patterns of populations of cells in the cat visual cortex can exhibit oscillatory responses in the range of 35 - 85 Hz.
Furthermore, groups of neurons many mm's apart can be highly
synchronized as long as the cells have similar orientation tuning.
We investigate two basic network architectures that incorporate either nearest-neighbor or global feedback interactions and conclude
that non-local feedback plays a fundamental role in the initial synchronization and dynamic stability of the oscillations.
1 INTRODUCTION
40 - 60 Hz oscillations have long been reported in the rat and rabbit olfactory
bulb and cortex on the basis of single- and multi-unit recordings as well as EEG
activity (Freeman, 1972; Wilson & Bower, 1990). Recently, two groups (Eckhorn et
al., 1988 and Gray et al., 1989) have reported highly synchronized, stimulus-specific
oscillations in the 35 - 85 Hz range in areas 17, 18 and PMLS of anesthetized as
well as awake cats. Neurons with similar orientation tuning up to 7 mm apart show
phase-locked oscillations, with a phase shift of less than 3 msec. We address here
the computational architecture necessary to subserve this process by investigating
to what extent two neuronal architectures, nearest-neighbor coupling and feedback
from a central "comparator", can synchronize neuronal oscillations in a robust and
rapid manner.
It was argued in earlier work on central pattern generators (Cohen et al., 1982), that
in studying coupling effects among large populations of oscillating neurons, one can
ignore the details of individual oscillators and represent each one by a single periodic
variable: its phase. Our approach assumes a population of neuronal oscillators,
firing repetitively in response to synaptic input. Each cell (or group of tightly
electrically coupled cells) has an associated variable representing the membrane
potential. In particular, when θ_i = π, an action potential is generated and the
phase is reset to its initial value (in our case to −π). The number of times per unit
time θ_i passes through π, i.e. dθ_i/dt, is then proportional to the firing frequency of
the neuron. For a network of n + 1 such oscillators, our basic model is

    \frac{d\theta_i}{dt} = \omega_i + f_i(\theta_0, \ldots, \theta_n),        (1)

where ω_i represents the synaptic input to neuron i and f_i, a function of the phases,
represents the coupling within the network. Each oscillator i in isolation (i.e. with
f_i = 0) exhibits asymptotically stable periodic oscillations; that is, if the input
is changed the oscillator will rapidly adjust to a new firing rate. In our model ω_i
is assumed to derive from neurons in the lateral geniculate nucleus (LGN) and is
purely excitatory.
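A single oscillator of this kind can be sketched directly from the description above. This is an illustrative Euler integration with an assumed step size, not the authors' code:

```python
import numpy as np

def simulate_phase_oscillator(omega, t_max, dt=1e-3):
    """Integrate d(theta)/dt = omega for one uncoupled oscillator (f_i = 0);
    a 'spike' is emitted and theta is reset by 2*pi whenever it reaches pi."""
    theta, spikes = -np.pi, []
    for step in range(int(round(t_max / dt))):
        theta += omega * dt
        if theta >= np.pi:
            spikes.append(step * dt)   # record the spike time
            theta -= 2.0 * np.pi       # reset to the start of the cycle
    return spikes
```

The firing rate is ω/(2π), so doubling the synaptic input roughly doubles the number of spikes per unit time, which is the sense in which dθ/dt is proportional to the firing frequency.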
2 FREQUENCY AND PHASE LOCKING
Any realistic model of the observed, highly synchronized, oscillations must account
for the fact that the individual neurons oscillate at different frequencies in isolation.
This is due to variations in the synaptic input, ω_i, as well as in the intrinsic properties of the cells. We will contrast the abilities of two markedly different network
architectures to synchronize these oscillations. The "chain" model (Fig. 1 top) consists of a one-dimensional array of oscillators connected to their nearest neighbors,
while in the alternative "comparator" model (Fig. 1 middle), an array of neurons
projects to a single unit, where the phases are averaged (i.e. (1/n) Σ_{i=0}^{n} θ_i(t)). This
average is then fed back to every neuron in the network. In the continuum limit
(on the unit interval) with all f_i = f being identical, the two models are
(Chain Model)        \frac{\partial\theta(x,t)}{\partial t} = \omega(x) + \frac{1}{n} \frac{\partial f(\phi)}{\partial x},        (2)

(Comparator Model)   \frac{\partial\theta(x,t)}{\partial t} = \omega(x) + f\left( \theta(x,t) - \int_0^1 \theta(s,t)\, ds \right),        (3)

where 0 < x < 1 and φ is the phase gradient, φ = ∂θ/∂x. In the chain model, we
require that f be an odd function (for simplicity of analysis only) while our analysis
of the comparator model holds for any continuous function f. We use two spatially
separated "spots" of width δ and amplitude α as visual input (Fig. 1 bottom). This
pattern was chosen as a simple version of the double-bar stimulus that (Gray et al.
1989) found to evoke coherent oscillatory activity in widely separated populations
of visual cortical cells.
Figure 1: The linear chain (top) and comparator (middle) architectures. The
spatial pattern of inputs is indicated by ω(x). See eqs. 2 & 3 for a mathematical
description of the models. The "two spot" input is shown at bottom and represents
two parts of a perceptually extended figure.
We determine under what circumstances the chain model will develop frequency-locked solutions, such that every oscillator fires at the same frequency (but not
necessarily at the same time), i.e. ∂²θ/∂x∂t = 0. We prove (Kammen et al., 1990)
that frequency-locked solutions exist as long as |n(ω(x) − ∫₀¹ ω(s) ds)| does not exceed
the maximal value of f, f_max (with ω̄ = ∫₀¹ ω(s) ds the mean excitation level).
Thus, if the excitation is too irregular or the chain too long (n ≫ 1), we will not
find frequency-locked solutions. Phase coherence between the excited regions is not
generally maintained and is, in fact, strongly a function of the initial conditions.
Another feature of the chain model is that the onset of frequency locking is slow
and takes time of order √n.
The location of the stimulus has no effect on phase relationships in the comparator
model due to the global nature of the feedback. The comparator model exhibits
two distinct regimes of behavior depending on the amplitude of the input, α. In the
case of the two spot input (Fig. 1 bottom), if α is small, all neurons will frequency-lock regardless of location, that is units responding to both the "figure" and the
background ("ground") will oscillate at the same frequency. They will, however,
fire at different times, with θ_fig ≠ θ_gnd. If α is above a critical threshold, the units
responding to the "figure" will decouple in frequency as well as phase from the
background while still maintaining internal phase coherency. Phase gradients never
exist within the excited groups, no matter what the input amplitude.
We numerically simulated the chain and comparator models with the two spot input
for the coupling function f(θ) = sin(θ). Additive Gaussian noise was included in the
input, ωᵢ. Our analytical results were confirmed; frequency and phase gradients were
always present in the chain model (Fig. 2A) even though the coupling strength was
ten times greater than that of the comparator model. In the comparator network
small excitation levels led to frequency-locking along the entire array and to phase-coupled activity within the illuminated areas (Fig. 2B), while large excitation levels
led to phase and frequency decoupling between the "figure" and the "background"
(Fig. 2C). The excited regions in the comparator settle very rapidly - within 2 to
3 cycles - into phase-locked activity with small phase-delays. The chain model, on
the other hand, exhibits strong sensitivity to initial conditions as well as a very slow
approach to coherence that is still not complete even after 50 cycles (See Fig. 2).
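The simulation just described can be sketched in a few lines. Equations 2 & 3 are not reproduced in this excerpt, so the sketch below is a minimal stand-in under stated assumptions: it keeps the coupling function f(θ) = sin(θ) used in the text, couples nearest neighbours in the chain, and feeds the mean phase back to every unit in the comparator; the normalizations, parameter values, and attractive sign of the feedback are illustrative choices, and the additive noise is omitted so the run is deterministic.

```python
import math

def simulate(omega, theta0, coupling, dt=0.01, t_max=10.0):
    """Forward-Euler integration of dtheta_i/dt = omega_i + coupling(theta, i, mean)."""
    theta = list(theta0)
    for _ in range(int(t_max / dt)):
        mean = sum(theta) / len(theta)
        theta = [th + dt * (w + coupling(theta, i, mean))
                 for i, (th, w) in enumerate(zip(theta, omega))]
    return theta

def chain(theta, i, mean):
    """Nearest-neighbour sine coupling with free ends (linear chain architecture)."""
    s = 0.0
    if i > 0:
        s += math.sin(theta[i - 1] - theta[i])
    if i < len(theta) - 1:
        s += math.sin(theta[i + 1] - theta[i])
    return s

def comparator(theta, i, mean):
    """Global feedback from the population-average phase (comparator architecture)."""
    return math.sin(mean - theta[i])

n = 20
omega = [1.0] * n                      # uniform excitation
theta0 = [i / n for i in range(n)]     # initial phase gradient along the array
spread = lambda th: max(th) - min(th)
th_chain = simulate(omega, theta0, chain)
th_comp = simulate(omega, theta0, comparator)
print(spread(th_comp) < spread(th_chain))  # comparator synchronizes much faster -> True
```

Measured by the phase spread across the array, the comparator collapses the initial gradient within a few cycles, while the chain still carries a large gradient after the same run time, in line with the contrast shown in Fig. 2.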
Figure 2: The phase portrait of the chain (A), weak (B) and strongly (C) excited
comparator networks after 50 cycles. The input, indicated by the horizontal lines,
is the two spot pattern. Note that the central, unstimulated, region in the chain
model has been "dragged along" by the flanking excited regions.
3 STABILITY ANALYSIS
Perhaps the most intriguing aspect of the oscillations concerns the role that they
may play in cortical information processing and the labeling of cells responding to a
single perceptual object. To be useful in object coding, the oscillations must exhibit
some degree of noise tolerance both in the input signal and in the stability of the
population to variation in the firing times of individual cells.
The degree to which input noise to individual neurons disrupts the synchronization of the population is determined by the ratio coupling strength / input noise = f/ε. For small perturbations, ω(t) = ω₀ + ε(t), the action of the feedback, from the nearest neighbors
in the chain and from the entire network in the comparator, will compensate for
the noise and the neuron will maintain coherence with the excited population. As
ε is increased, first phase and then frequency coherence will be lost.
In Fig. 3 we compare the dynamical stability of the chain and comparator models.
In each case the phase, θ, of a unit receiving perturbed input is plotted as the
deviation from the average phase, θ₀, of all the excited units receiving input ω₀. The
chain is highly sensitive to noise: even 10% stochastic noise significantly perturbs
the phase of the neuron. In the comparator model (Fig. 3B) noise must reach the
Kammen, Koch and Holmes
40% level to have a similar effect on the phase. As the noise increases above 0.30ω₀
even frequency coherence is lost in the chain model (broken error bars). Frequency
coherence is maintained in the comparator for ε = 0.60ω₀.
Figure 3: The result of a perturbation on the phase, θ, for the chain (A) and
comparator (B) models. The terminus of the error bars gives the resulting deviation from the unperturbed value. Broken bars indicate both phase and frequency
decoupling.
The stability of the solutions of the comparator model to variability in the activity
of individual neurons can easily be demonstrated. For simplicity consider the case
of a single input of amplitude ω₁ superposed on a background of amplitude ω₀. The
solutions in each region are:
dθ₀/dt = ω₀ + f( (θ₀ − θ₁)/2 )    (4)

dθ₁/dt = ω₁ + f( (θ₁ − θ₀)/2 )    (5)

We define the difference of the solutions to be φ(t) = θ₁(t) − θ₀(t) and Δω = ω₁ − ω₀.
We then have an equation for the rate at which the solutions converge or diverge:

dφ/dt = Δω + f(φ/2) − f(−φ/2).    (6)

If the solutions are stable (of constant velocity) then dθ₁/dt = dθ₀/dt and θ₁ = θ₀ + c
with c a constant. We then have the stable solution φ* = c, with dφ*/dt = Δω + f(c/2) −
f(−c/2) = 0. Stability of the solutions can be seen by perturbing θ₁ to θ₁ = θ₀ + c + ε
with |ε| < 1. The perturbed solution, φ = φ* + ε, has the derivative dφ/dt = dε/dt.
Developing f(φ) into a Taylor series around φ* and neglecting terms of order ε² and
higher, we arrive at

dε/dt = (ε/2) [ f′(c/2) + f′(−c/2) ].    (7)
If f(φ) is odd then f′(φ) is even, and eq. (7) reduces to

dε/dt = ε f′(c/2).    (8)
Thus, if f'(c/2) < 0 the perturbations will decay to zero and the system will maintain phase locking within the excited regions.
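This stability claim is easy to check numerically. Assuming the two-population difference equation dφ/dt = Δω + f(φ/2) − f(−φ/2) with the odd coupling f = sin, a root φ* with f′(φ*/2) < 0 should absorb small perturbations; the integration scheme and parameter values below are illustrative choices, not taken from the paper.

```python
import math

def phi_dot(phi, dw, f):
    # dphi/dt = dw + f(phi/2) - f(-phi/2); equals dw + 2*f(phi/2) for odd f
    return dw + f(phi / 2.0) - f(-phi / 2.0)

def integrate(phi0, dw, f, dt=0.01, t_max=50.0):
    """Forward-Euler integration of the phase-difference equation."""
    phi = phi0
    for _ in range(int(t_max / dt)):
        phi += dt * phi_dot(phi, dw, f)
    return phi

dw, f = 1.0, math.sin
# root of dw + 2*sin(phi/2) = 0 chosen so that f'(phi/2) = cos(7*pi/6) < 0
phi_star = 2 * (math.pi + math.pi / 6)
print(abs(integrate(phi_star + 0.3, dw, f) - phi_star) < 1e-3)  # perturbation decays -> True
```

At the other root of the same equation (φ/2 = −π/6), f′ is positive and the same perturbation grows, matching the sign condition derived above.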
4 THE FREQUENCY MODEL
The model discussed so far assumes that the feedback is only a function of the
phases. In particular, this implies that the comparator computes the average phase
across the population. Consider, however, a model where the feedback is proportional to the average firing frequency of a group of neurons. Let us therefore replace
phase in the feedback function with firing frequency,
∂θ(x,t)/∂t = ω(x) + f( ∂θ(x,t)/∂t − ∂θ̄(t)/∂t )    (9)

with θ̄(t) = (1/n) ∫₀ⁿ θ(s,t) ds. This is a very special differential equation, as can be
seen by setting v(x,t) = ∂θ(x,t)/∂t. This yields an algebraic equation for v with
no explicit time dependency:

v(x) = ω(x) + f( v(x) − v̄ )    (10)

and, after an integration, we have

θ(x,t) = ∫₀ᵗ v(x) dt = v(x)t + θ₀(x).    (11)
Thus, the phase relationships depend on the initial conditions, θ₀(x), and no phase
locking occurs. While frequency locking only occurs for ∂ω/∂x = 0, the feedback can
lead to tight frequency coupling among the excited neurons.
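Equation (10) is an algebraic self-consistency condition on the firing rates, and it can be solved by fixed-point iteration. In the sketch below, the weak odd coupling with f′(0) < 0 (chosen to agree with the stability condition of the phase analysis) and the excitation values are illustrative assumptions.

```python
import math

def solve_frequency_model(omega, f, iters=200):
    """Fixed-point iteration for eq. (10): v(x) = omega(x) + f(v(x) - vbar)."""
    v = list(omega)
    for _ in range(iters):
        vbar = sum(v) / len(v)
        v = [w + f(vi - vbar) for w, vi in zip(omega, v)]
    return v

omega = [1.0, 1.2, 0.9, 1.1]             # inhomogeneous excitation levels
f = lambda u: -0.3 * math.sin(u)         # weak odd coupling with f'(0) < 0 (an assumption)
v = solve_frequency_model(omega, f)
vbar = sum(v) / len(v)
residual = max(abs(vi - w - f(vi - vbar)) for vi, w in zip(v, omega))
print(residual < 1e-8, max(v) - min(v) < max(omega) - min(omega))
# self-consistent solution, frequencies pulled together -> True True
```

The residual confirms a self-consistent solution, and the spread of the resulting rates is smaller than the spread of the inputs: the feedback couples the frequencies tightly without locking them.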
Reformulating the chain model in terms of firing frequencies, we have

∂²θ(x,t)/∂x∂t = (1/n) ( ∂ω(x)/∂x + (1/n) (∂²/∂x²) f(∂θ(x,t)/∂t) )    (12)

under the assumption that f(−x) = −f(x). With γ(x,t) = ∂φ(x,t)/∂t, where φ(x,t) = ∂θ(x,t)/∂x denotes the phase gradient, we again arrive
at a stationary algebraic equation

γ(x) = (1/n) ( ∂ω/∂x + (1/n) (∂²/∂x²) f(γ(x)) )    (13)

and

φ(x,t) = ∫₀ᵗ γ(x) dt = γ(x)t + φ₀(x).    (14)
In other words, the system will develop a time-dependent phase gradient. Frequency-locked solutions of the sort ∂φ/∂t = 0 everywhere only occur if ∂ω/∂x = 0 everywhere.
Thus, the chain architecture leads to very static behavior, with little ability to either
phase- or frequency-lock.
5 DISCUSSION
We have investigated the ability of two networks of relaxation oscillators with different connectivity patterns to synchronize their oscillations. Our investigation
has been prompted by recent experimental results pertaining to the existence of
frequency- and phase-locked oscillations in the mammalian visual cortex (Gray et
al., 1989; Eckhorn et al., 1988). While these 35 - 85 Hz oscillations are induced by
the visual stimulus, usually a flashing or moving bar, they are not locked to the frequency of the stimulus. Most surprising is the finding that cells tuned to the same
orientation, but separated by up to 7 mm, not only exhibit coherent oscillatory
activity, but do so with a phase-shift of less than 3 msec (Gray et al., 1989).1
We have assumed the existence of a population of cortical oscillators, such as those
reported in cortical slice preparations (Llinás, 1988; Chagnac-Amitai and Connors,
1989). The issue is then how such a population of oscillators can rapidly begin to
fire in near total synchrony. Two neuronal architectures suggest themselves.
As a mechanism for establishing coherent oscillatory activity the comparator model
is far superior to a nearest-neighbor model. The comparator rapidly (within 1 - 3
cycles) achieves phase coherence, while the chain model exhibits a far slower onset
of synchronization and is highly sensitive to the initial conditions. Once initiated,
the oscillations in the two models exhibit markedly different stability characteristics. The diffusive nature of communication in the chain results in little ability to
regulate the firing of individual units and consequently only highly homogeneous
inputs will result in collective oscillations. The long-range connections present in
the comparator, however, result in stable collective oscillations even in the presence
of significant noise levels. Noise uniformly distributed about the mean firing level
will have little effect due to the averaging performed by the comparator unit.
A more realistic model of the interconnection architecture of the cortex will certainly have to take both local as well as global neuronal pathways into account and
the ever-present delays in cellular and network signal propagation (Kammen, et
al., 1990). Long range (up to 6 mm) lateral excitatory connections have been
reported (Gilbert and Wiesel, 1983). However, their low conduction velocities
(~ 1 mm/msec) would lead to significant phase-shifts in contrast to the data.
While the cortical circuitry contains both local as well as global connection, our
results imply that a cortical architecture with one or more "comparator" neurons
driven by the averaged activity of the hypercolumnar cell populations is an attractive mechanism for synchronizing the observed oscillations.
We have also developed a model where the firing frequency, and not the phase is
involved in the dynamics. Coding based on phase information requires that the
cells track the time interval between incident spikes whereas the firing frequency
is available as the raw spike rate. This computation can be readily implemented
¹Note that this result is obtained by averaging over many trials. The phase-shift for an individual
trial may possibly be larger, but could be randomly distributed from trial to trial around the
origin.
neurobiologically and is entirely consistent with the known biophysics of cortical
cells.
Von der Malsburg (1985) has argued that the temporal synchronization of groups of
neurons labels perceptually distinct objects, subserving figure-ground segregation.
Both firing frequency and inter-cell phase (timing) relationships of ensembles of
neurons are potential channels to encode the signatures of various objects in the
visual field. Perceptually distinct objects could be coded by groups of synchronized
neurons, all locked to the same frequency with the groups only distinguished by
their phase relationships. We do not believe, however, that phase is a robust enough
variable to code this information across the cortex. A more robust scheme is one in
which groups of synchronized neurons are locked at different firing frequencies.
Acknowledgement
D.K. is a recipient of a Weizmann Postdoctoral Fellowship. P.H. acknowledges support from the Sherman Fairchild Foundation and C.K. from the Air Force Office
of Scientific Research, a NSF Presidential Young Investigator Award and from the
James S. McDonnell Foundation. We would like to thank Francis Crick for useful
comments and discussions.
References
Chagnac-Amitai, Y. & Connors, B. W. (1989) J. Neurophys., 62, 1149.
Cohen, A. H., Holmes, P. J. & Rand, R. H. (1982) J. Math. Biol., 3, 345.
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M. & Reitboeck,
H. J. (1988) Biol. Cybern., 60, 121.
Freeman, W. J. (1972) J. Neurophysiol., 35, 762.
Gilbert, C. D. & Wiesel, T. N. (1983) J. Neurosci., 3, 1116.
Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature, 338, 334.
Kammen, D. M., Koch, C. & Holmes, P. J. (1990) Proc. Natl. Acad. Sci. USA,
submitted.
Kopell, N. & Ermentrout, G. B. (1986) Comm. Pure Appl. Math., 39, 623.
Llinás, R. R. (1988) Science, 242, 1654.
von der Malsburg, C. (1985) Ber. Bunsenges. Phys. Chem., 89, 703.
Wilson, M. A. & Bower, J. (1990) J. Neurophysiol., in press.
Maximum Margin Semi-Supervised
Learning for Structured Variables
Y. Altun, D. McAllester
TTI at Chicago
Chicago, IL 60637
altun,[email protected]
M. Belkin
Department of Computer Science
University of Chicago
Chicago, IL 60637
[email protected]
Abstract
Many real-world classification problems involve the prediction of
multiple inter-dependent variables forming some structural dependency. Recent progress in machine learning has mainly focused on
supervised classification of such structured variables. In this paper,
we investigate structured classification in a semi-supervised setting.
We present a discriminative approach that utilizes the intrinsic geometry of input patterns revealed by unlabeled data points and we
derive a maximum-margin formulation of semi-supervised learning
for structured variables. Unlike transductive algorithms, our formulation naturally extends to new test points.
1 Introduction
Discriminative methods, such as Boosting and Support Vector Machines have significantly advanced the state of the art for classification. However, traditionally
these methods do not exploit dependencies between class labels where more than
one label is predicted. Many real-world classification problems, on the other hand,
involve sequential or structural dependencies between multiple labels. For example
labeling the words in a sentence with their part-of-speech tags involves sequential
dependency between part-of-speech tags; finding the parse tree of a sentence involves a structural dependency among the labels in the parse tree. Recently, there
has been a growing interest in generalizing kernel methods to predict structured
and inter-dependent variables in a supervised learning setting, such as dual perceptron [7], SVMs [2, 15, 14] and kernel logistic regression [1, 11]. These techniques
combine the efficiency of dynamic programming methods with the advantages of
the state-of-the-art learning methods. In this paper, we investigate classification of
structured objects in a semi-supervised setting.
The goal of semi-supervised learning is to leverage the learning process from a
small sample of labeled inputs with a large sample of unlabeled data. This idea has
recently attracted a considerable amount of interest due to ubiquity of unlabeled
data. In many applications from data mining to speech recognition it is easy to
produce large amounts of unlabeled data, while labeling is often manual and expensive. This is also the case for many structured classification problems. A variety
of methods ranging from Naive Bayes [12], Cotraining [4], to Transductive SVM [9]
to Cluster Kernels [6] and graph-based approaches [3] and references therein, have
been proposed. The intuition behind many of these methods is that the classification/regression function should be smooth with respect to the geometry of the data,
i.e. the labels of two inputs x and x′ are likely to be the same if x and x′ are similar.
This idea is often represented as the cluster assumption or the manifold assumption.
The unlabeled points reveal the intrinsic structure, which is then utilized by the
classification algorithm. A discriminative approach to semi-supervised learning was
developed by Belkin, Sindhwani and Niyogi [3, 13], where the Laplacian operator
associated with unlabeled data is used as an additional penalty (regularizer) on the
space of functions in a Reproducing Kernel Hilbert Space. The additional regularization from the unlabeled data can be represented as a new kernel, a "graph-regularized" kernel.
In this paper, building on [3, 13], we present a discriminative semi-supervised learning formulation for problems that involve structured and inter-dependent outputs
and give experimental results on max-margin semi-supervised structured classification using graph-regularized kernels. The solution of the optimization problem
that utilizes both labeled and unlabeled data is a linear combination of the graph
regularized kernel evaluated at the parts of the labeled inputs only, leading to a
large reduction in the number of parameters. It is important to note that our
classification function is defined on all input points whereas some previous work is
only defined for the input points in the (labeled and unlabeled) training sample, as
they use standard graph kernels, which are restricted to in-sample data points by
definition.
There is an the extensive literature on semi-supervised learning and the growing
number of studies on learning structured and inter-dependent variables. Delalleau
et al. [8] propose a semi-supervised learning method for standard classification that
extends to out-of-sample points. Brefeld et al. [5] is one of the first studies investigating the semi-supervised structured learning problem in a discriminative framework.
The most relevant previous work is the transductive structured learning proposed
by Lafferty et al. [11].
2 Supervised Learning for Structured Variables
In structured learning, the goal is to learn a mapping h : X → Y from structured
inputs to structured response values, where the inputs and response values form a
dependency structure. For each input x, there is a set of feasible outputs, Y(x) ⊆ Y.
For simplicity, let us assume that Y(x) is finite for all x ∈ X, which is the case in
many real-world problems and in all our examples. We denote the set of feasible
input-output pairs by Z ⊆ X × Y.
It is common to construct a discriminant function F : Z → ℝ which maps the
feasible input-output pairs to a compatibility score of the pair. To make a prediction
for x, this score is maximized over the set of feasible outputs,
h(x) = argmax_{y∈Y(x)} F(x, y).    (1)
The score of an ⟨x, y⟩ pair is computed from local fragments, or "parts", of ⟨x, y⟩.
In Markov random fields, x is a graph, y is a labeling of the nodes of x, and a
local fragment (a part) of ⟨x, y⟩ is a clique in x and its labeling in y. In parsing with
probabilistic context-free grammars, a local fragment (a part) of ⟨x, y⟩ consists of
a branch of the tree y, where a branch is an internal node in y together with its
children, plus all pairs of a leaf node in y with the word in x labeled by that node.
Note that a given branch structure, such as NP → Det N, can occur more than
once in a given parse tree.

In general, we let P be a set of (all possible) parts. We assume a "counting function"
c, such that for p ∈ P and ⟨x, y⟩ ∈ Z, c(p, ⟨x, y⟩) gives the number of times that
the part p occurs in the pair ⟨x, y⟩ (the count of p in ⟨x, y⟩). For a Mercer kernel
k : P × P → ℝ on P, there is an associated RKHS H_k of functions f : P → ℝ,
where f measures the goodness of a part p. For any f ∈ H_k, we define a function
F_f on Z as

F_f(x, y) = Σ_{p∈P} c(p, ⟨x, y⟩) f(p).    (2)
Consider a simple chain example. Let Σ be a set of possible observations and Ω
be a set of possible hidden states. We take the input x to be a sequence x₁, . . . , x_ℓ
with xᵢ ∈ Σ and we take Y(x) to be the set of all sequences y₁, . . . , y_ℓ with the same
length as x and with yᵢ ∈ Ω. We can take P to be the set of all pairs ⟨s, s̄⟩ plus all
pairs ⟨s, u⟩ with s, s̄ ∈ Ω and u ∈ Σ. Often Ω is taken to be a finite set of "states"
and Σ = ℝ^d is a set of possible feature vectors. k(p, p′) is commonly defined as

k(⟨s, s̄⟩, ⟨s′, s̄′⟩) = δ(s, s′) δ(s̄, s̄′),    (3)

k(⟨s, u⟩, ⟨s′, u′⟩) = δ(s, s′) k_o(u, u′),    (4)

where δ(w, w′) denotes the Kronecker δ. Note that in this example there are two
types of parts: pairs of hidden states and pairs of a hidden state and an observation. Here we take k(p, p′) to be 0 if p and p′ are of different types.
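The parts and the kernel of this chain example can be written out directly. In the sketch below, a transition part ⟨s, s̄⟩ is tagged "T" and an emission part ⟨s, u⟩ is tagged "E"; the discrete base kernel k_o (a Kronecker delta on observations) is an illustrative assumption — for real-valued feature vectors one would substitute, e.g., a Gaussian kernel.

```python
from collections import Counter

def parts(x, y):
    """Multiset of parts of <x, y> for the chain example: state-state
    transitions ("T", s, s') and state-observation emissions ("E", s, u)."""
    trans = Counter(("T", y[i - 1], y[i]) for i in range(1, len(y)))
    emit = Counter(("E", y[i], x[i]) for i in range(len(x)))
    return trans + emit   # gives the count c(p, <x, y>) for every part p

def k(p, q, ko=lambda u, v: 1.0 if u == v else 0.0):
    """Part kernel of eqs. (3)-(4): Kronecker deltas on states, base kernel ko
    on observations; parts of different types are orthogonal."""
    if p[0] != q[0]:
        return 0.0
    if p[0] == "T":                                    # eq. (3)
        return float(p[1] == q[1]) * float(p[2] == q[2])
    return float(p[1] == q[1]) * ko(p[2], q[2])        # eq. (4)

c = parts(["the", "dog", "barks"], ["DET", "N", "V"])
print(c[("T", "DET", "N")], c[("E", "N", "dog")])  # 1 1
```

The counter implements the counting function c, so F_f(x, y) = Σ_p c(p, ⟨x, y⟩) f(p) is simply a weighted sum over this multiset.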
In the supervised learning scenario, we are given a sample S of ℓ pairs (⟨x¹, y¹⟩, . . . ,
⟨x^ℓ, y^ℓ⟩) drawn i.i.d. from an unknown but fixed probability distribution P on
Z. The goal is to learn a function f on the local parts P with small expected loss
E_P[L(x, y, f)], where L is a prescribed loss function. This is commonly realized by
learning the f that minimizes the regularized loss functional
f* = argmin_{f∈H_k} Σ_{i=1}^ℓ L(x^i, y^i, f) + λ‖f‖²_k,    (5)

where ‖·‖_k is the norm corresponding to H_k, measuring the complexity of f. A variety of loss functions L have been considered in the literature. In kernel conditional
random fields (CRFs) [11], the loss function is given by
L(x, y, f) = −F_f(x, y) + log Σ_{ŷ∈Y(x)} exp(F_f(x, ŷ))
In structured Support Vector Machines (SVMs), the loss function is given by

L(x, y, f) = max_{ŷ∈Y(x)} Δ(x, y, ŷ) + F_f(x, ŷ) − F_f(x, y),    (6)

where Δ(x, y, ŷ) is some measure of the distance between y and ŷ for a given observation
x. A natural choice is to take Δ(x, y, ŷ) to be the indicator 1[ŷ ≠ y] [2]. Another
choice is to take Δ(x, y, ŷ) to be the size of the symmetric difference between the
sets P(⟨x, y⟩) and P(⟨x, ŷ⟩) [14].
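For chain-structured outputs, the inner maximization in (6) decomposes over parts, so it can be computed by a loss-augmented Viterbi recursion. The sketch below assumes the Hamming-type distance Δ(x, y, ŷ) = Σᵢ 1[ŷᵢ ≠ yᵢ] and toy scoring functions; the names and values are illustrative, not from the paper.

```python
import itertools

def score(x, yhat, f_em, f_trans):
    """F_f(x, yhat) for a chain: emission parts <s, u> plus transition parts <s, s'>."""
    s = sum(f_em(yhat[i], x[i]) for i in range(len(x)))
    s += sum(f_trans(yhat[i - 1], yhat[i]) for i in range(1, len(x)))
    return s

def loss_augmented_max(x, y, f_em, f_trans, states):
    """max over yhat of Hamming(y, yhat) + F_f(x, yhat), by dynamic programming."""
    best = {s: (s != y[0]) + f_em(s, x[0]) for s in states}
    for i in range(1, len(x)):
        best = {s: (s != y[i]) + f_em(s, x[i]) +
                   max(best[sp] + f_trans(sp, s) for sp in states)
                for s in states}
    return max(best.values())

states = ("A", "B")
f_em = lambda s, u: 0.5 if s == u else -0.2     # toy part goodness scores
f_trans = lambda sp, s: 0.3 if sp == s else 0.0
x, y = ("A", "B", "B", "A"), ("A", "B", "B", "A")
dp = loss_augmented_max(x, y, f_em, f_trans, states)
brute = max(sum(a != b for a, b in zip(y, yh)) + score(x, yh, f_em, f_trans)
            for yh in itertools.product(states, repeat=len(x)))
print(abs(dp - brute) < 1e-12)  # DP agrees with brute-force enumeration -> True
```

The recursion runs in time linear in the sequence length (times the squared number of states), even though the set Y(x) it maximizes over is exponentially large.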
Let P(x) ⊆ P be the set of parts having nonzero count in some pair ⟨x, y⟩ for
y ∈ Y(x). Let P(S) be the union of all sets P(x^i) for x^i in the sample. Then we
have the following straightforward variant of the Representer Theorem [10], which was
also presented in [11].

Definition: A loss L is local if L(x, y, f) is determined by the value of f on the
set P(x), i.e., for f, g : P → ℝ we have that if f(p) = g(p) for all p ∈ P(x) then
L(x, y, f) = L(x, y, g).
Theorem 1. For any local loss function L and sample S there exist weights α_p for
p ∈ P(S) such that f* as defined by (5) can be written as follows.

f*(p) = Σ_{p′∈P(S)} α_{p′} k(p′, p)    (7)
Thus, even though the set of feasible outputs for x generally scales exponentially
with the size of the output, the solution can be represented in terms of the parts of
the sample, whose number commonly scales polynomially. This is true for any loss function
that partitions into parts, which is the case for the loss functions discussed above.
3 A Semi-Supervised Learning Approach to Structured Variables
In semi-supervised learning, we are given a sample S consisting of ℓ input-output
pairs {(x¹, y¹), . . . , (x^ℓ, y^ℓ)} drawn i.i.d. from the probability distribution P on
Z and u unlabeled input patterns {x^{ℓ+1}, . . . , x^{ℓ+u}} drawn i.i.d. from the marginal
distribution P_X, where usually ℓ < u. Let X(S) be the set {x¹, . . . , x^{ℓ+u}} and let
Z(S) be the set of all pairs ⟨x, y⟩ with x ∈ X(S) and y ∈ Y(x).
If the true classification function is smooth wrt the underlying marginal distribution,
one can utilize unlabeled data points to favor functions that are smooth in this sense.
Belkin et al. [3] implement this assumption by introducing a new regularizer into the
standard RHKS optimization framework (as opposed to introducing a new kernel
as discussed in Section 5)
f* = argmin_{f∈H_k} Σ_{i=1}^ℓ L(x^i, y^i, f) + λ₁‖f‖²_k + λ₂‖f‖²_{k_S},    (8)
where kS is a kernel representing the intrinsic measure of the marginal distribution.
Sindhwani et al. [13] prove that the minimizer of (8) is in the span of a new kernel
function (details below) evaluated at labeled data only. Here, we generalize this
framework to structured variables and give a simplified derivation of the new kernel.
The smoothness assumption in the structured setting states that f should be smooth
with respect to the underlying density on the parts P; thus we enforce f to assign similar
goodness scores to two parts p and p′ if p and p′ are similar, for all parts of Z(S).
Let P(S) be the union of all sets P(z) for z ∈ Z(S) and let W be a symmetric matrix
where W_{p,p′} represents the similarity of p and p′ for p, p′ ∈ P(S).
f* = argmin_{f∈H_k} Σ_{i=1}^ℓ L(x^i, y^i, f) + λ₁‖f‖²_k + λ₂ Σ_{p,p′∈P(S)} W_{p,p′} (f(p) − f(p′))²

   = argmin_{f∈H_k} Σ_{i=1}^ℓ L(x^i, y^i, f) + λ₁‖f‖²_k + λ₂ fᵀLf    (9)
Here W is a similarity matrix (like a nearest-neighbor graph) and L is the Laplacian
of W, L = D − W, where D is a diagonal matrix defined by D_{p,p} = Σ_{p′} W_{p,p′}. f
denotes the vector of f(p) for all p ∈ P(S). Note that the last term depends only
on the value of f on the parts in the set P(S). Then, for any local loss L(x, y, f),
we immediately have the following Representer Theorem for the semi-supervised
structured case, where S includes the labeled and the unlabeled data:

f̃*(p) = Σ_{p′∈P(S)} α_{p′} k(p′, p)    (10)
Substituting (10) into (9) leads to the following optimization problem:

α* = argmin_α Σ_{i=1}^ℓ L(x^i, y^i, f_α) + αᵀQα,    (11)

where Q = λ₁K + λ₂KLK, K is the matrix of k(p, p′) for all p, p′ ∈ P(S), and f_α,
as a vector in the space H_k, is a linear function of the vector α. Note that (11)
applies to any local loss function, and if L(x, y, f) is convex in f, as is the case for
the logistic or hinge loss, then (11) is convex in α.
We now have a loss function over labeled data regularized by the L2 norm (wrt
the inner product Q), for which we can re-invoke the Representer Theorem. Let S^ℓ
be the set of labeled inputs {x¹, . . . , x^ℓ}, Z(S^ℓ) be the set of all pairs ⟨x, y⟩ with
x ∈ X(S^ℓ) and y ∈ Y(x), and P(S^ℓ) be the set of all parts having nonzero count for
some pair in Z(S^ℓ). Let δ_p be the vector whose pth component is 1 and 0 elsewhere.
Using the standard orthogonality argument, let α* decompose into two components: the vector
in the span of ν_p = δ_p KQ⁻¹ for all p ∈ P(S^ℓ), and the vector α^⊥ in the orthogonal
complement (under the inner product Q):

α = Σ_{p∈P(S^ℓ)} β_p ν_p + α^⊥
α^⊥ can only increase the quadratic term in the optimization problem. Notice that
the first term in (11) depends only on f_α(p) for p ∈ P(S^ℓ),

f_α(p) = δ_p Kα = (δ_p KQ⁻¹) Qα = ν_p Qα.

Since ν_p Qα^⊥ = 0, we conclude that the optimal solution to (11) is given by

α* = Σ_{p∈P(S^ℓ)} β_p ν_p = βKQ⁻¹,    (12)
where β is required to be sparse, such that only the entries for parts from the labeled data are
nonzero. Plugging this into the original equations we get

k̃(p, p′) = k_p Q⁻¹ k_{p′}    (13)

f_β(p′) = Σ_{p∈P(S^ℓ)} β_p k̃(p, p′)    (14)

β* = argmin_β L(S^ℓ, f_β) + βᵀ K̃ β    (15)

where k_p is the vector of k(p, p′) for all p′ ∈ P(S) and K̃ is the matrix of k̃(p, p′)
for all p, p′ in P(S^ℓ). k̃ is the same as in [13].
We call k̃ the graph-regularized kernel, in which unlabeled data points are used to
augment the base kernel k wrt the standard graph kernel to take the underlying
density on parts into account. This kernel is defined over the complete part space,
whereas standard graph kernels are restricted to P(S) only.
Given the graph-regularized kernel, the semi-supervised structured learning problem
is reduced to supervised structured learning. Since in semi-supervised learning
problems, in general, labeled data points are far fewer than unlabeled data, the
dimensionality of the optimization problems is greatly reduced by this reduction.
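This reduction can be sketched numerically: given the base kernel matrix K on P(S) and the Laplacian L, the matrix of k̃ values is K Q⁻¹ K with Q = λ₁K + λ₂KLK (eqs. 11 and 13), and one then keeps only the rows and columns for the labeled parts. The random matrices below are placeholders; with λ₂ = 0 the graph term vanishes and k̃ reduces to k/λ₁, which serves as a quick sanity check.

```python
import numpy as np

def graph_regularized_kernel(K, L, g1, g2):
    """Matrix of ktilde(p, p') = k_p Q^{-1} k_{p'} with Q = g1*K + g2*K L K.
    Restrict rows/columns to the labeled parts P(S^l) for learning."""
    Q = g1 * K + g2 * K @ L @ K
    return K @ np.linalg.solve(Q, K)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5 * np.eye(5)            # a positive-definite base kernel matrix
W = np.abs(rng.standard_normal((5, 5)))
W = (W + W.T) / 2                      # symmetric similarities
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W              # graph Laplacian L = D - W
Kt = graph_regularized_kernel(K, L, g1=1.0, g2=0.0)
print(np.allclose(Kt, K))              # with g2 = 0, ktilde reduces to K / g1 -> True
```

Using a linear solve instead of an explicit inverse keeps the computation stable; note that Q is positive definite whenever K is, since KLK is positive semi-definite.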
4 Structured Max-Margin Learning

We now investigate optimizing the hinge loss as defined by (6) using the graph-regularized kernel k̃. Defining Φ^{x,y} to be the vector whose entries Φ^{x,y}_p = c(p, ⟨x, y⟩) are
the counts of the parts p in ⟨x, y⟩, the linear discriminant can be written in matrix notation
for x ∈ S^ℓ as

F_{f_β}(x, y) = βᵀ K̃ Φ^{x,y}.
Then, the optimization problem for margin maximization is

β* = argmin_β min_ξ Σ_{i=1}^ℓ ξᵢ + βᵀ K̃ β

ξᵢ ≥ max_{ŷ∈Y(x^i)} Δ(ŷ, y^i) − βᵀ K̃ (Φ^{x^i,y^i} − Φ^{x^i,ŷ}),    ∀i ≤ ℓ.
This gives a convex quadratic program over the vectors indexed by P(S), a polynomial size problem in terms of the size of the structures. Following [2], we replace
the convex constraints by linear constraints for all y ? Y(x) and using Lagrangian
duality techniques, we get the following dual Quadratic program:
??
=
argmin ?T dR ? ? ?T ?
?
X
?(xi ,y) ? 0,
?(xi ,y) = 1,
(16)
?y ? Y(xi ),
?i ? l,
y?Y(x)
where ? is a vector of 4(y, y?) for all y ? Y(x) of all labeled observations x, d? is a
i i
i
?
matrix whose (xi , y)th column d?., (xi , y) = ? x ,y ? ? x ,y and dR = d? T Kd?.
Due
to the sparse structure of the constraint matrix, even though this is an exponential
sized QP, the algorithm proposed in [2] is proven to solve (16) to ? proximity in
polynomial time in P(S l ) and ?1 [15].
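To make the pieces of (16) concrete, the following sketch assembles ΔΦ and D^{K̃} = ΔΦ^T K̃ ΔΦ from toy part-count vectors. The counts and the kernel are fabricated for illustration, but the construction guarantees that D^{K̃} is positive semidefinite whenever K̃ is, which is what makes the dual a convex QP.

```python
import numpy as np

def dual_quadratic_term(Phi_true, Phi_alt, K_tilde):
    """Build DeltaPhi and D^{K~} = DeltaPhi^T K~ DeltaPhi for one example.

    Phi_true : (P,) count vector of the correct labeling <x_i, y_i>
    Phi_alt  : (P, m) count vectors of competing labelings y in Y(x_i)
    K_tilde  : (P, P) PSD graph-regularized Gram matrix over parts
    """
    DeltaPhi = Phi_true[:, None] - Phi_alt        # one column per competing y
    D = DeltaPhi.T @ K_tilde @ DeltaPhi
    return DeltaPhi, 0.5 * (D + D.T)              # symmetrize against round-off
```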
5 Semi-Supervised vs. Transductive Learning

Since one major contribution of this paper is learning a classifier for structured objects
that is defined over the complete part space P, we now examine the differences
between semi-supervised and transductive learning in more detail. The most common
approach to realizing the smoothness assumption is to construct a data-dependent kernel
k_S derived from the graph Laplacian of a nearest neighbor graph on the labeled and
unlabeled input patterns in the sample S. Thus, k_S is not defined on observations
that are out of the sample. Given k_S, one can construct a function f̂* on S as

    f̂* = argmin_{f ∈ H_{k_S}} ∑_{i=1}^ℓ L(x_i, y_i, f) + γ ||f||²_{k_S}.        (17)

It is well known that kernels can be combined linearly to yield new kernels. In the
transductive setting, this observation leads to the following optimization problem,
in which the kernel is taken to be a linear combination of a graph kernel k_S and a
standard kernel k restricted to P(S):

    f̂* = argmin_{f ∈ H_{(γ₁ k + γ₂ k_S)}} ∑_{i=1}^ℓ L(x_i, y_i, f) + γ ||f||²_{(γ₁ k + γ₂ k_S)}.        (18)

A structured semi-supervised algorithm based on (18) has been evaluated in [11].
The kernel in (18) is the weighted mean of k and k_S, whereas the graph-regularized
kernel, resulting from a weighted mean of two regularizers, is the harmonic mean of k
and k_S [16]. An important distinction between f̂* and f* in (8), the optimization
performed in this paper, is that f̂* is only defined on P(S) (only on observations
in the training data) while f* is defined on all of P and can be used for novel (out-of-sample)
inputs x. We note that in general P is infinite. Out-of-sample extension
is already a serious limitation for transductive learning, but it is even more severe
in the structured case, where parts of P can be composed of multiple observation
tokens.
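A minimal sketch of the combined-kernel construction in (18): a nonnegative linear combination γ₁k + γ₂k_S of two Gram matrices over the sampled parts is again a valid (PSD) Gram matrix, but it is only defined on P(S). The 1:9 ratio used as a default below mirrors the setting used later in the experiments and is otherwise arbitrary.

```python
import numpy as np

def combined_gram(K_base, K_graph, gamma1=1.0, gamma2=9.0):
    """Linear combination gamma1*k + gamma2*k_S as in (18).

    Both inputs are Gram matrices over the *sampled* parts P(S);
    the result is likewise defined only in-sample, unlike k~.
    """
    assert K_base.shape == K_graph.shape
    return gamma1 * K_base + gamma2 * K_graph
```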
6 Experiments

Similarity Graph: We build the similarity matrix W over P(S) using the K-nearest-neighborhood
relationship. W_{p,p′} is 0 if p and p′ are not in the K-nearest neighborhood
of each other, or if p and p′ are of different types. Otherwise, the similarity is
given by a heat kernel. In our applications the structure is a simple chain, so the
cliques involve single observation-label pairs,

    W_{p,p′} = δ(y(u_p), y(u′_{p′})) exp(−||u_p − u′_{p′}||² / t),        (19)

where u_p denotes the observation part of p and y(u) denotes the labeling of u.¹ In
cases where k(p, p′) = W_{p,p′} = 0 for p, p′ of different types, as in our experiments, the
Gram matrix K and the Laplacian L can be represented as block-diagonal matrices,
which significantly reduces the computational complexity, in particular the computation
of Q^{-1}.

Applications: We performed experiments using a simple chain model for pitch
accent (PA) prediction and OCR. In PA prediction, Y(x) = {0, 1}^T with T = |x|
and x_t ∈ ℝ^31 for all t. In OCR, x_t ∈ {0, 1}^128 and |Σ| = 15.
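The similarity graph of (19) can be sketched as follows. The mutual K-nearest-neighbor restriction and the heat-kernel width t are the quantities described above, while the per-part labels are passed in as a flat array (an illustrative simplification of the clique structure).

```python
import numpy as np

def similarity_graph(U, labels, k=5, t=1.0):
    """W_{p,p'} = delta(y_p, y_p') * exp(-||u_p - u_p'||^2 / t),
    restricted to mutual k-nearest neighbors, as in (19)."""
    n = len(U)
    sq = np.sum(U**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * U @ U.T
    np.fill_diagonal(d2, np.inf)                # no self-edges
    # mutual k-NN mask: p in knn(p') AND p' in knn(p)
    order = np.argsort(d2, axis=1)[:, :k]
    knn = np.zeros((n, n), dtype=bool)
    rows = np.repeat(np.arange(n), k)
    knn[rows, order.ravel()] = True
    mutual = knn & knn.T
    same_label = labels[:, None] == labels[None, :]
    return np.where(mutual & same_label, np.exp(-d2 / t), 0.0)

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W used in the graph regularizer."""
    return np.diag(W.sum(axis=1)) - W
```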
We ran experiments comparing semi-supervised structured (referred to as STR) and
unstructured (referred to as SVM) max-margin optimization. For both SVM and STR,
we used the RBF kernel as the base kernel k_o in (4) and a 5-nearest-neighbor graph
to construct the Laplacian. We chose the width of the RBF kernel by cross-validation
on SVM and used the same value for STR. Following [3], we fixed the γ₁ : γ₂ ratio
at 1 : 9. We report the average results of experiments with 5 random selections of
labeled sequences in Tables 1 and 2, with the number of labeled sequences 4 on the
left side of Table 1, 40 on the right side, and 10 in Table 2. We varied the number of
unlabeled sequences and report the per-label accuracy on test sequences (top entry
of each cell) and on unlabeled sequences (bottom entry, when U > 0).

    PA    | U:0    U:80   | U:0    U:80   U:200
    ------+---------------+---------------------
    SVM   | 65.92  68.83  | 70.34  71.27  73.68
          |        69.94  |        72.00  73.11
    STR   | 65.81  70.28  | 72.15  74.92  76.37
          |        70.72  |        75.66  77.45

    Table 1: Per-label accuracy for Pitch Accent.

The results in pitch accent prediction show the advantage of a sequence model over a non-structured
¹ For more complicated parts, different measures can apply. For example, in sequence
classification, if the classifier is evaluated wrt the correctly classified individual labels in
the sequence, W can be such that W_{p,p′} = ∑_{u∈p, u′∈p′} δ(y(u), y(u′)) s̄(u, u′), where s̄
denotes some similarity measure such as the heat kernel. If the evaluation is over segments
of the sequence, the similarity can be W_{p,p′} = δ(y(p), y(p′)) ∑_{u∈p, u′∈p′} s̄(u, u′), where
y(p) denotes all the label nodes in the part p.
model, where STR consistently performs better than SVM. We also observe the
usefulness of unlabeled data in both the structured and unstructured models: as
U increases, so does the accuracy. The improvements from unlabeled data and
from structured classification can be considered additive. The small difference
between the accuracy on in-sample unlabeled data and on the test data indicates
that our framework extends naturally to new data points.
In OCR, on the other hand, STR does not improve over SVM. Even though unlabeled
data improves accuracy, performing sequence classification is not helpful, due to the
sparsity of structural information. Since |Σ| = 15 and there are only 10 labeled
sequences with average length 8.3, the statistics of the label-label dependencies are
quite noisy.
    OCR  | U:0   | U:412
    -----+-------+------
    SVM  | 43.62 | 49.96
         |   -   | 47.56
    STR  | 49.25 | 49.91
         |   -   | 49.65

    Table 2: OCR (per-label accuracy; top entry of each cell: test sequences; bottom: unlabeled sequences).

7 Conclusions
We presented a discriminative approach to semi-supervised learning of structured
and inter-dependent response variables. Within this framework, we derived a
maximum-margin formulation and presented experiments for a simple chain model. Our
approach naturally extends to the classification of unobserved structured inputs,
and this is supported by our empirical results, which showed similar accuracy on
in-sample unlabeled data and on out-of-sample test data.
References
[1] Y. Altun, T. Hofmann, and A. Smola. Gaussian process classification for segmenting
and annotating sequences. In ICML, 2004.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden markov support vector machines. In ICML, 2003.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from examples. Technical Report 06, UChicago CS, 2004.
[4] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.
[5] U. Brefeld, C. Büscher, and T. Scheffer. Multi-view discriminative sequential learning.
In (ECML), 2005.
[6] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In (NIPS), 2002.
[7] M. Collins and N. Duffy. Convolution kernels for natural language. In (NIPS), 2001.
[8] Olivier Delalleau, Yoshua Bengio, and Nicolas Le Roux. Efficient non-parametric
function induction in semi-supervised learning. In Proceedings of AISTATS, 2005.
[9] Thorsten Joachims. Transductive inference for text classification using support vector
machines. In (ICML), pages 200–209, 1999.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal
of Mathematical Analysis and Applications, 33:82–95, 1971.
[11] John Lafferty, Yan Liu, and Xiaojin Zhu. Kernel conditional random fields: Representation, clique selection, and semi-supervised learning. In (ICML), 2004.
[12] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text
from labeled and unlabeled documents. In Proceedings of AAAI-98, pages 792–799,
Madison, US, 1998.
[13] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive
to semi-supervised learning. In (ICML), 2005.
[14] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2004.
[15] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine
learning for interdependent and structured output spaces. In (ICML), 2004.
[16] T. Zhang. Personal communication.
Learning Minimum Volume Sets
Clayton Scott
Statistics Department
Rice University
Houston, TX 77005
[email protected]
Robert Nowak
Electrical and Computer Engineering
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
Given a probability measure P and a reference measure μ, one is
often interested in the minimum μ-measure set with P-measure at
least α. Minimum volume sets of this type summarize the regions of
greatest probability mass of P , and are useful for detecting anomalies and constructing confidence regions. This paper addresses the
problem of estimating minimum volume sets based on independent
samples distributed according to P . Other than these samples, no
other information is available regarding P, but the reference measure μ is assumed
minimum volume sets that parallel the empirical risk minimization
and structural risk minimization principles in classification. As
in classification, we show that the performances of our estimators
are controlled by the rate of uniform convergence of empirical to
true probabilities over the class from which the estimator is drawn.
Thus we obtain finite sample size performance bounds in terms of
VC dimension and related quantities. We also demonstrate strong
universal consistency and an oracle inequality. Estimators based
on histograms and dyadic partitions illustrate the proposed rules.
1 Introduction
Given a probability measure P and a reference measure μ, the minimum volume
set (MV-set) with mass at least α, 0 < α < 1, is

    G*_α = arg min{μ(G) : P(G) ≥ α, G measurable}.
MV-sets summarize regions where the mass of P is most concentrated. For example,
if P is a multivariate Gaussian distribution and μ is the Lebesgue measure, then the
MV-sets are ellipsoids (see also Figure 1). Applications of minimum volume sets
include outlier/anomaly detection, determining highest posterior density or multivariate confidence regions, tests for multimodality, and clustering. In comparison
to the closely related problem of density level set estimation [1, 2], the minimum
volume approach seems preferable in practice because the mass α is more easily
specified than a level of a density. See [3, 4, 5] for further discussion of MV-sets.
This paper considers the problem of MV-set estimation using a training sample
drawn from P , which in most practical settings is the only information one has
Figure 1: Gaussian mixture data, 500 samples, α = 0.9. (Left and Middle) Minimum
volume set estimates based on recursive dyadic partitions, discussed in Section
6. (Right) True MV-set.
about P. The inputs to the estimation process are the significance level α,
the reference measure μ, and a collection of candidate sets G. All proofs, as well as
additional results and discussion, may be found in [6]. To our knowledge, ours is
the first work to establish finite sample bounds, an oracle inequality, and universal
consistency for the MV-set estimation problem.
The methods proposed herein are primarily of theoretical interest, although they
may be implemented efficiently for certain partition-based estimators, as discussed
later. As a more practical alternative, the MV-set problem may be reduced to
Neyman-Pearson classification [7, 8] by simulating realizations from μ.
1.1 Notation

Let (X, B) be a measure space with X ⊆ ℝ^d. Let X be a random variable taking
values in X with distribution P. Let S = (X₁, ..., X_n) be an independent and
identically distributed (IID) sample drawn according to P. Let G denote a subset
of X, and let G be a collection of such subsets. Let P̂ denote the empirical measure
based on S: P̂(G) = (1/n) ∑_{i=1}^n I(X_i ∈ G). Here I(·) is the indicator function. Set

    μ*_α = inf{μ(G) : P(G) ≥ α},        (1)

where the inf is over all measurable sets. A minimum volume set, G*_α, is a minimizer
of (1), when it exists. Let G be a class of sets. Given α ∈ (0, 1), denote G_α = {G ∈
G : P(G) ≥ α}, the collection of all sets in G with mass at least α. Define
μ_{G,α} = inf{μ(G) : G ∈ G_α} and G_{G,α} = arg min{μ(G) : G ∈ G_α} when it exists.
Thus G_{G,α} is the best approximation to the MV-set G*_α from G. Existence and
uniqueness of these and related quantities are discussed in [6].
2 Minimum Volume Sets and Empirical Risk Minimization
In this section we introduce a procedure inspired by the empirical risk minimization
(ERM) principle for classification. In classification, ERM selects a classifier from a
fixed set of classifiers by minimizing the empirical error (risk) of a training sample.
Vapnik and Chervonenkis established the basic theoretical properties of ERM (see
[9, 10]), and we find similar properties in the minimum volume setting. In this and
the next section we do not assume P has a density with respect to μ.
Let φ(G, S, δ) be a function of G ∈ G, the training sample S, and a confidence
parameter δ ∈ (0, 1). Set Ĝ_α = {G ∈ G : P̂(G) ≥ α − φ(G, S, δ)} and

    Ĝ_{G,α} = arg min{μ(G) : G ∈ Ĝ_α}.        (2)

We refer to the rule in (2) as MV-ERM because of the analogy with empirical risk
minimization in classification. The quantity φ acts as a kind of 'tolerance' by which
the empirical mass estimate may deviate from the targeted value of α. Throughout
this paper we assume that φ satisfies the following.
Definition 1. We say φ is a (distribution-free) complexity penalty for G if and
only if for all distributions P and all δ ∈ (0, 1),

    Pⁿ( S : sup_{G∈G} ( P(G) − P̂(G) − φ(G, S, δ) ) > 0 ) ≤ δ.
Thus, φ controls the rate of uniform convergence of P̂(G) to P(G) for G ∈ G. It
is well known that the performance of ERM (for binary classification) relative to
the performance of the best classifier in the given class is controlled by the uniform
convergence of true to empirical probabilities. A similar result holds for MV-ERM.
Theorem 1. If φ is a complexity penalty for G, then

    Pⁿ( P(Ĝ_{G,α}) < α − 2φ(Ĝ_{G,α}, S, δ)  or  μ(Ĝ_{G,α}) > μ_{G,α} ) ≤ δ.
Proof. Consider the sets

    Θ_P = {S : P(Ĝ_{G,α}) < α − 2φ(Ĝ_{G,α}, S, δ)},
    Θ_μ = {S : μ(Ĝ_{G,α}) > μ(G_{G,α})},
    Ω_P = { S : sup_{G∈G} ( P(G) − P̂(G) − φ(G, S, δ) ) > 0 }.

The result follows easily from the following lemma.

Lemma 1. With Θ_P, Θ_μ, and Ω_P defined as above and Ĝ_{G,α} as defined in (2), we
have Θ_P ∪ Θ_μ ⊆ Ω_P.
The proof of this lemma (see [6]) follows closely the proof of Lemma 1 in [7]. This
result may be understood by analogy with the result from classification that says
R(f̂) − inf_{f∈F} R(f) ≤ 2 sup_{f∈F} |R(f) − R̂(f)| (see [10], Ch. 8). Here R and R̂ are
the true and empirical risks, f̂ is the empirical risk minimizer, and F is a set of
classifiers. Just as this result relates uniform convergence bounds to empirical risk
minimization in classification, so does Lemma 1 relate uniform convergence to the
performance of MV-ERM.
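When the class G is finite, the MV-ERM rule (2) amounts to a feasibility check followed by a minimum-volume selection, as in the sketch below; representing candidate sets as boolean membership masks over the sample is an illustrative assumption.

```python
import numpy as np

def mv_erm(samples_in_G, volumes, alpha, phi):
    """MV-ERM over a finite class (rule (2)).

    samples_in_G : (m, n) boolean; entry [j, i] is True iff sample i lies in set G_j
    volumes      : (m,) reference measure mu(G_j) of each candidate set
    alpha        : target mass
    phi          : (m,) penalty phi(G_j, S, delta) for each candidate
    Returns the index of the minimum-volume candidate whose empirical mass
    satisfies P_hat(G) >= alpha - phi(G, S, delta), or None if none does.
    """
    p_hat = samples_in_G.mean(axis=1)          # empirical mass of each set
    feasible = p_hat >= alpha - phi
    if not np.any(feasible):
        return None
    idx = np.where(feasible)[0]
    return idx[np.argmin(volumes[idx])]
```

Note how the penalty relaxes the mass constraint: a set slightly below the target mass α can still be selected if its tolerance φ covers the shortfall.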
The theorem above allows direct translation of uniform convergence results into
performance guarantees for MV-ERM. Fortunately, many penalties (uniform convergence results) are known. We now give two important examples, although many
others, such as the Rademacher penalty, are possible.
2.1 Example: VC Classes

Let G be a class of sets with VC dimension V, and define

    φ(G, S, δ) = sqrt( 32 (V log n + log(8/δ)) / n ).        (3)

By a version of the VC inequality [10], we know that φ is a complexity penalty for G,
and therefore Theorem 1 applies. To view this result in perhaps a more recognizable
way, let ε > 0 and choose δ such that 2φ(G, S, δ) = ε. By inverting the relationship
between δ and ε, we have the following.
Corollary 1. With the notation defined above,

    Pⁿ( P(Ĝ_{G,α}) < α − ε  or  μ(Ĝ_{G,α}) > μ_{G,α} ) ≤ 8 n^V e^{−nε²/128}.

Thus, for any fixed ε > 0, the probability of being within ε of the target mass α
and being less than the target volume μ_{G,α} approaches one exponentially fast as
the sample size increases. This result may also be used to calculate a distribution-free
upper bound on the sample size needed to be within a given tolerance ε of α
with a given confidence 1 − δ. In particular, the sample size will grow no faster
than a polynomial in 1/ε and 1/δ, paralleling results for classification.
2.2 Example: Countable Classes

Suppose G is a countable class of sets. Assume that to every G ∈ G a number ⟦G⟧
is assigned such that ∑_{G∈G} 2^{−⟦G⟧} ≤ 1. In light of the Kraft inequality for prefix
codes, ⟦G⟧ may be defined as the codelength of a codeword for G in a prefix code
for G. Let δ > 0 and define

    φ(G, S, δ) = sqrt( (⟦G⟧ log 2 + log(2/δ)) / (2n) ).        (4)

By Chernoff's bound together with the union bound, φ is a penalty for G. Therefore
Theorem 1 applies and we have obtained a result analogous to the Occam's Razor
bound for classification.

As a special case, suppose G is finite and take ⟦G⟧ = log₂ |G|. Setting 2φ(G, S, δ) = ε
and inverting the relationship between δ and ε, we have
Corollary 2. For the MV-ERM estimate Ĝ_{G,α} from a finite class G,

    Pⁿ( P(Ĝ_{G,α}) < α − ε  or  μ(Ĝ_{G,α}) > μ_{G,α} ) ≤ 2|G| e^{−nε²/2}.
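The countable-class penalty (4), and its finite-class special case ⟦G⟧ = log₂|G|, are simple enough to compute directly; the sketch below evaluates them, with all parameter values illustrative.

```python
import math

def occam_penalty(codelength, n, delta):
    """Penalty (4): phi = sqrt((codelength*log 2 + log(2/delta)) / (2n))."""
    return math.sqrt((codelength * math.log(2) + math.log(2.0 / delta)) / (2.0 * n))

def finite_class_penalty(num_sets, n, delta):
    """Special case [G] = log2|G| for a finite class of num_sets candidates."""
    return occam_penalty(math.log2(num_sets), n, delta)
```

The codelengths must satisfy the Kraft-style condition ∑ 2^{−⟦G⟧} ≤ 1, which holds with equality under the uniform assignment ⟦G⟧ = log₂|G|; the penalty then shrinks at the 1/√n rate that drives Corollary 2.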
3 Consistency

A minimum volume set estimator is consistent if its volume and mass tend to the
optimal values μ*_α and α as n → ∞. Formally, define the error quantity

    M(G) := (μ(G) − μ*_α)₊ + (α − P(G))₊,

where (x)₊ = max(x, 0). (Note that without the (·)₊ operator, this would not be
a meaningful error, since one term could be negative and cause M to tend to zero
even if the other error term does not go to zero.) We are interested in MV-set
estimators such that M(Ĝ_{G,α}) tends to zero as n → ∞.

Definition 2. A learning rule Ĝ_{G,α} is strongly consistent if lim_{n→∞} M(Ĝ_{G,α}) = 0
with probability 1. If Ĝ_{G,α} is strongly consistent for every possible distribution of
X, then Ĝ_{G,α} is strongly universally consistent.
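The error quantity M(G) is straightforward to evaluate once the four scalars involved are known; a short sketch for reference:

```python
def mv_error(mu_G, mu_star, P_G, alpha):
    """M(G) = (mu(G) - mu*_alpha)_+ + (alpha - P(G))_+ ."""
    return max(mu_G - mu_star, 0.0) + max(alpha - P_G, 0.0)
```

Both the excess volume and the missing mass contribute only when positive, so an estimator cannot hide a mass deficit behind a volume surplus, or vice versa.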
To see how consistency might result from MV-ERM, it helps to rewrite Theorem
1 as follows. Let G be fixed and let φ(G, S, δ) be a penalty for G. Then with
probability at least 1 − δ, both

    μ(Ĝ_{G,α}) − μ*_α ≤ μ(G_{G,α}) − μ*_α        (5)

and

    α − P(Ĝ_{G,α}) ≤ 2φ(Ĝ_{G,α}, S, δ)        (6)

hold. We refer to the left-hand side of (5) as the excess volume of the class G and
the left-hand side of (6) as the missing mass of Ĝ_{G,α}. The upper bounds on the
right-hand sides are an approximation error and a stochastic error, respectively.
The idea is to let G grow with n so that both errors tend to zero as n → ∞. If G
does not change with n, universal consistency is impossible.
To have both stochastic and approximation errors tend to zero, we apply MV-ERM
to a class G^k from a sequence of classes G¹, G², ..., where k = k(n) grows with the
sample size. Consider the estimator Ĝ_{G^k,α}.

Theorem 2. Choose k = k(n) and δ = δ(n) such that k(n) → ∞ as n → ∞ and
∑_{n=1}^∞ δ(n) < ∞. Assume the sequence of sets G^k and penalties φ_k satisfy

    lim_{k→∞} inf_{G ∈ G^k_α} μ(G) = μ*_α        (7)

and

    lim_{n→∞} sup_{G ∈ G^k} φ_k(G, S, δ(n)) = 0.        (8)

Then Ĝ_{G^k,α} is strongly universally consistent.

The proof combines the Borel-Cantelli lemma and the distribution-free result of
Theorem 1 with the stated assumptions. Examples satisfying the hypotheses of the
theorem include families of VC classes with arbitrary approximating power (e.g.,
generalized linear discriminant rules with appropriately chosen basis functions and
neural networks), and histogram rules. See [6] for further discussion.
4 Structural Risk Minimization and an Oracle Inequality

In the previous section the rate of convergence of the two errors to zero is determined
by the choice of k = k(n), which must be chosen a priori. Hence it is possible that
the excess volume decays much more quickly than the missing mass, or vice versa.
In this section we introduce a new rule called MV-SRM, inspired by the principle of
structural risk minimization (SRM) from the theory of classification [11, 12], that
automatically balances the two errors.

The result in this section is not distribution free. We assume

A1 P has a density f with respect to μ.
A2 G*_α exists and P(G*_α) = α.

Under these assumptions (see [6]) there exists γ_α > 0 such that for any MV-set
G*_α, {x : f(x) > γ_α} ⊆ G*_α ⊆ {x : f(x) ≥ γ_α}.

Let G be a class of sets. Conceptualize G as a collection of sets of varying capacities,
such as a union of VC classes or a union of finite classes. Let φ(G, S, δ) be a penalty
for G. The MV-SRM principle selects the set

    Ĝ_{G,α} = arg min_{G∈G} { μ(G) + φ(G, S, δ) : P̂(G) ≥ α − φ(G, S, δ) }.        (9)

Note that MV-SRM is different from MV-ERM because it minimizes a complexity-penalized
volume instead of simply the volume. We have the following.¹

¹ Although the value of 1/γ_α is in practice unknown, it can be bounded by 1/γ_α ≤
(1 − μ*_α)/(1 − α) ≤ 1/(1 − α). This follows from the bound 1 − α ≤ γ_α (1 − μ*_α) on the
mass outside the minimum volume set.
Theorem 3. Let Ĝ_{G,α} be the MV-set estimator in (9). With probability at least
1 − δ over the training sample S,

    M(Ĝ_{G,α}) ≤ (1 + 1/γ_α) inf_{G ∈ G_α} { μ(G) − μ*_α + 2φ(G, S, δ) }.        (10)

Sketch of proof: The proof is similar in some respects to oracle inequalities for
classification. The key difference is in the form of the error term M(G) =
(μ(G) − μ*_α)₊ + (α − P(G))₊. In classification both approximation and stochastic
errors are positive, whereas with MV-sets the excess volume μ(G) − μ*_α or missing
mass α − P(G) could be negative. This necessitates the (·)₊ operators, without
which the error would not be meaningful, as mentioned earlier. The proof considers
three cases separately: (1) μ(Ĝ_{G,α}) ≥ μ*_α and P(Ĝ_{G,α}) < α; (2) μ(Ĝ_{G,α}) ≥ μ*_α
and P(Ĝ_{G,α}) ≥ α; and (3) μ(Ĝ_{G,α}) < μ*_α and P(Ĝ_{G,α}) < α. In the first case, both
volume and mass errors are positive and the argument follows standard lines. The
second case can be seen to follow easily from the first. The third case (which occurs
most frequently in practice) is most involved and requires use of the fact that
μ*_α − μ*_{α−ε} ≤ ε/γ_α for ε > 0, which can be deduced from basic properties of MV and
density level sets.
The oracle inequality says that MV-SRM performs about as well as the set chosen
by an oracle to optimize the tradeoff between the stochastic and approximation
errors. To illustrate the power of the oracle inequality, in [6] we demonstrate that
MV-SRM applied to recursive dyadic partition-based estimators adapts optimally
to the number of relevant features (unknown a priori).
5 Damping the Penalty

In Theorem 1, the reader may have noticed that MV-ERM does not equitably balance
the volume error with the mass error. Indeed, with high probability, μ(Ĝ_{G,α})
is less than μ(G_{G,α}), while P(Ĝ_{G,α}) is only guaranteed to be within 2φ(Ĝ_{G,α}, S, δ)
of α. The net effect is that MV-ERM (and MV-SRM) underestimates the MV-set.
Experimental comparisons have confirmed this to be the case [6].

A minor modification of MV-ERM and MV-SRM leads to a more equitable distribution
of error between the volume and mass, instead of having all the error reside in
the mass term. The idea is simple: scale the penalty in the constraint by a damping
factor ν < 1. In the case of MV-SRM, the penalty in the objective function also
needs to be scaled by 1 + ν. Moreover, the theoretical properties of the estimators
stated above are retained (the statements, omitted here, are slightly more involved
[6]). Notice that in the case ν = 1 we recover the original estimators. Also note
that the above theorem encompasses the generalized quantile estimate of [3], which
corresponds to ν = 0. Thus we have finite sample size guarantees for that estimator
to match Polonik's asymptotic analysis.
6 Experiments: Histograms and Trees

To gain some insight into the basic properties of our estimators, we devised some
simple numerical experiments. In the case of histograms, MV-SRM can be implemented
in a two-step process. First, compute the MV-ERM estimate (a very simple
procedure) for each G^k, k = 1, ..., K, where 1/k is the bin width. Second, choose
the final estimate by minimizing the penalized volume of the MV-ERM estimates.
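The two-step histogram procedure can be sketched as follows. The penalty is passed in as a function of the bin count and sample size (for instance, the finite-class penalty (4)), and restricting to one dimension on [0, 1] is an illustrative simplification.

```python
import numpy as np

def histogram_mv_erm(x, k, alpha, phi):
    """MV-ERM over the class G^k of unions of bins of width 1/k on [0, 1].

    Greedily keep the fullest bins until P_hat >= alpha - phi; since all
    bins have equal volume, this attains the minimum-volume feasible union.
    Returns (kept_bin_indices, volume, empirical_mass).
    """
    n = len(x)
    counts = np.histogram(x, bins=k, range=(0.0, 1.0))[0]
    order = np.argsort(counts)[::-1]           # fullest bins first
    mass, kept = 0.0, []
    for b in order:
        if mass >= alpha - phi:
            break
        kept.append(b)
        mass += counts[b] / n
    return kept, len(kept) / k, mass

def histogram_mv_srm(x, alpha, K, phi_fn):
    """MV-SRM: run MV-ERM for each bin width 1/k, then pick the estimate
    minimizing penalized volume mu(G) + phi(G, S, delta)."""
    best = None
    for k in range(1, K + 1):
        phi = phi_fn(k, len(x))
        kept, vol, mass = histogram_mv_erm(x, k, alpha, phi)
        score = vol + phi
        if best is None or score < best[0]:
            best = (score, k, kept, vol, mass)
    return best
```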
[Figure 2 panels: left, a typical MV-ERM estimate (n = 10000, k = 20, ν = 0); right, error versus sample size (100 to 1000000), legend: occam, rademacher.]
Figure 2: Results for histograms. (Left) A typical MV-ERM estimate with bin width
1/20, ν = 0, based on 10000 points. The true MV-set is indicated by the solid line.
(Right) The error M(Ĝ_{G,α}) of the MV-SRM estimate as a function of sample size
when ν = 0. The results indicate that the Occam's Razor bound is tighter and
yields better performance than the Rademacher bound.
We consider two penalties: one based on an Occam-style bound, the other on the
(conditional) Rademacher average. As a data set we consider X = [0, 1]², the unit
square, and data generated by a two-dimensional truncated Gaussian distribution,
centered at the point (1/2, 1/2) and having spherical variance with parameter σ =
0.15. Other parameter settings are α = 0.8, K = 40, and δ = 0.05. All experiments
were conducted at nine different sample sizes, logarithmically spaced from 100 to
1000000, and repeated 100 times. Results are summarized in Figure 2.
To illustrate the potential improvement offered by spatially adaptive partitioning
methods, we consider a minimum volume set estimator based on recursive dyadic
(quadsplit) partitions. We employ a penalty that is additive over the cells A of the
partition. The precise form of the penalty φ(A) for each cell is given in [6], but
loosely speaking it is proportional to the square root of the ratio of the empirical
mass of the cell to the sample size n. In this case, MV-SRM with ν = 0 is

    min_{G ∈ G_L} ∑_A [ μ(A) ℓ(A) + φ(A) ]   subject to   ∑_A P̂(A) ℓ(A) ≥ α,        (11)

where G_L is the collection of all partitions with dyadic cell sidelengths no smaller
than 2^{-L}, and ℓ(A) = 1 if A belongs to the candidate set and ℓ(A) = 0 otherwise
(see [6] for further details). Although direct optimization appears formidable, an
efficient alternative is to consider the Lagrangian and conduct a bisection search over
the Lagrange multiplier until the mass constraint is nearly achieved with equality
(10 iterations is sufficient in practice). For each iteration, minimization of the
Lagrangian can be performed very rapidly using standard tree-pruning techniques.
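A sketch of the bisection strategy for the partition-based program (11): for a fixed multiplier λ the Lagrangian decouples over cells (the penalty term enters independently of ℓ(A)), so a cell is kept exactly when λ P̂(A) exceeds its volume μ(A), and bisection then tunes λ until the mass constraint is met nearly with equality. Selecting cells of one fixed partition is an illustrative simplification of the full tree-pruning step.

```python
import numpy as np

def select_cells(mu, p_hat, lam):
    """Minimize the Lagrangian of (11) for fixed multiplier lam:
    cell A is kept iff lam * P_hat(A) > mu(A)."""
    return lam * p_hat > mu

def bisection_mv(mu, p_hat, alpha, iters=30):
    """Bisection over the Lagrange multiplier until the kept cells'
    empirical mass meets alpha nearly with equality.
    Assumes alpha does not exceed the total empirical mass."""
    lo, hi = 0.0, 1.0
    while np.sum(p_hat[select_cells(mu, p_hat, hi)]) < alpha:
        hi *= 2.0                       # grow hi until feasible
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(p_hat[select_cells(mu, p_hat, mid)]) >= alpha:
            hi = mid                    # keep the feasible endpoint at hi
        else:
            lo = mid
    return select_cells(mu, p_hat, hi)
```

Because the kept set only grows as λ increases, the kept mass is monotone in λ and bisection converges to the smallest multiplier satisfying the constraint.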
An experimental demonstration of the dyadic partition estimator is depicted in Figure 1. In the experiments we employed a dyadic quadtree structure with L = 8 (i.e.,
cell sidelengths no smaller than 2^{-8}) and pruned according to the theoretical penalty
φ(A) formally defined in [6], weighted by a factor of 1/30 (in practice the optimal
weight could be found via cross-validation or other techniques). Figure 1 shows
the results with data distributed according to a two-component Gaussian mixture
distribution. This figure (middle image) additionally illustrates the improvement
possible by "voting" over shifted partitions, which in principle is equivalent to constructing 2^L × 2^L different trees, each based on a partition offset by an integer
multiple of the base sidelength 2^{-L}, and taking a majority vote over all the resulting set estimates to form the final estimate. This strategy mitigates the "blocky"
structure due to the underlying dyadic partitions, and can be computed almost as
rapidly as a single tree estimate (within a factor of L) due to the large amount of
redundancy among trees. The actual running time was one to two seconds.
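The voting idea can likewise be sketched in a simplified form. The snippet below is an illustrative stand-in, not the paper's 2^L × 2^L tree construction: each of a few shifts of a coarse grid yields the greedy histogram minimum volume set (cells added in order of decreasing empirical mass until the constraint α is met), and a query point is kept when a majority of the shifted estimates contain it. Grid size, shifts, and names are assumptions of the sketch.

```python
# Illustrative stand-in for the shifted-partition vote (not the paper's
# full 2^L x 2^L tree construction): each shift of a coarse grid yields
# the greedy histogram MV-set; a query point is kept when a majority of
# the shifted estimates contain it.
def greedy_mv_cells(points, k, shift, alpha):
    n = len(points)
    counts = {}
    for x, y in points:
        cell = (int(((x + shift) % 1.0) * k), int(((y + shift) % 1.0) * k))
        counts[cell] = counts.get(cell, 0) + 1
    cells, mass = set(), 0.0
    # add cells by decreasing empirical mass until the constraint is met
    for cell, c in sorted(counts.items(), key=lambda kv: -kv[1]):
        cells.add(cell)
        mass += c / n
        if mass >= alpha:
            break
    return cells

def voted_indicator(points, query, k=8, alpha=0.8,
                    shifts=(0.0, 0.25, 0.5, 0.75)):
    votes = 0
    for s in shifts:
        cells = greedy_mv_cells(points, k, s, alpha)
        q = (int(((query[0] + s) % 1.0) * k), int(((query[1] + s) % 1.0) * k))
        votes += q in cells
    return votes > len(shifts) // 2      # majority vote
```

Points near the mode of the data collect votes under every shift, while low-density points are excluded under every shift, smoothing the blocky cell boundaries of any single partition.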
7
Conclusions
In this paper we propose two rules, MV-ERM and MV-SRM, for estimation of
minimum volume sets. Our theoretical analysis is made possible by relating the
performance of these rules to the uniform convergence properties of the class of sets
from which the estimate is taken. Ours are the first known results to feature finite
sample bounds, an oracle inequality, and universal consistency.
Acknowledgements
The authors thank Ercan Yildiz and Rebecca Willett for their assistance with the experiments involving dyadic trees.
References
[1] I. Steinwart, D. Hush, and C. Scovel, "A classification framework for anomaly detection," J. Machine Learning Research, vol. 6, pp. 211–232, 2005.
[2] S. Ben-David and M. Lindenbaum, "Learning distributions by their density levels – a
paradigm for learning without a teacher," Journal of Computer and Systems Sciences,
vol. 55, no. 1, pp. 171–182, 1997.
[3] W. Polonik, "Minimum volume sets and generalized quantile processes," Stochastic
Processes and their Applications, vol. 69, pp. 1–24, 1997.
[4] G. Walther, "Granulometric smoothing," Ann. Stat., vol. 25, pp. 2273–2299, 1997.
[5] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. Smola, and R. Williamson, "Estimating
the support of a high-dimensional distribution," Neural Computation, vol. 13, no. 7,
pp. 1443–1472, 2001.
[6] C. Scott and R. Nowak, "Learning minimum volume sets," UW-Madison, Tech. Rep.
ECE-05-2, 2005. [Online]. Available: http://www.stat.rice.edu/~cscott
[7] A. Cannon, J. Howse, D. Hush, and C. Scovel, "Learning with the Neyman-Pearson
and min-max criteria," Los Alamos National Laboratory, Tech. Rep. LA-UR 02-2951,
2002. [Online]. Available: http://www.c3.lanl.gov/~kelly/ml/pubs/2002_minmax/paper.pdf
[8] C. Scott and R. Nowak, "A Neyman-Pearson approach to statistical learning," IEEE
Trans. Inform. Theory, 2005, (in press).
[9] V. Vapnik, Statistical Learning Theory. New York: Wiley, 1998.
[10] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition.
New York: Springer, 1996.
[11] V. Vapnik, Estimation of Dependencies Based on Empirical Data. New York:
Springer-Verlag, 1982.
[12] G. Lugosi and K. Zeger, "Concept learning using complexity regularization," IEEE
Trans. Inform. Theory, vol. 42, no. 1, pp. 48–54, 1996.
Matthias Oster and Shih-Chii Liu
Institute of Neuroinformatics
University of Zurich and ETH Zurich
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland
{mao,shih}@ini.phys.ethz.ch
Abstract
Recurrent networks that perform a winner-take-all computation have
been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results
of this winner-take-all computation on a network of integrate-and-fire
neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after
a pre-determined number of input spikes. We discuss spiking inputs with
both regular frequencies and Poisson-distributed rates. The robustness of
the computation was tested by implementing the winner-take-all network
on an analog VLSI array of 64 integrate-and-fire neurons which have an
innate variance in their operating parameters.
1
Introduction
Recurrent networks that perform a winner-take-all computation are of great interest because of the computational power they offer. They have been used in modelling attention
and recognition processes in cortex [Itti et al., 1998,Lee et al., 1999] and are thought to be a
basic building block of the cortical microcircuit [Douglas and Martin, 2004]. Descriptions
of theoretical spike-based models [Jin and Seung, 2002] and analog VLSI (aVLSI) implementations of both spike and non-spike models [Lazzaro et al., 1989, Indiveri, 2000, Hahnloser et al., 2000] can be found in the literature. Although the competition mechanism in
these models uses spike signals, they usually consider the external input to the network to
be either an analog input current or an analog value that represents the spike rate.
We describe the operation and connectivity of a winner-take-all network that receives input
spikes. We consider the case of the hard winner-take-all mode, where only the winning
neuron is active and all other neurons are suppressed. We discuss a scheme for setting the
excitatory and inhibitory weights of the network so that the winner which receives input
with the shortest inter-spike interval is selected after a pre-determined number of input
spikes. The winner can be selected with as few as two input spikes, making the selection
process fast [Jin and Seung, 2002].
We tested this computation on an aVLSI chip with 64 integrate-and-fire neurons and various
dynamic excitatory and inhibitory synapses. The distribution of mismatch (or variance) in
the operating parameters of the neurons and synapses has been reduced using a spike coding
Figure 1: Connectivity of the winner-take-all network: (a) in biological networks, inhibition is mediated by populations of global inhibitory interneurons (filled circle). To perform
a winner-take-all operation, they are driven by excitatory neurons (unfilled circles) and
in return, they inhibit all excitatory neurons (black arrows: excitatory connections; dark
arrows: inhibitory). (b) Network model in which the global inhibitory interneuron is replaced by full inhibitory connectivity of efficacy VI . Self excitation of synaptic efficacy
Vself stabilizes the selection of the winning neuron.
mismatch compensation procedure described in [Oster and Liu, 2004]. The results shown
in Section 3 of this paper were obtained with a network that has been calibrated so that
the neurons have about 10% variance in their firing rates in response to a common input
current.
1.1
Connectivity
We assume a network of integrate-and-fire neurons that receive external excitatory or inhibitory spiking input. In biological networks, inhibition between these array neurons is
mediated by populations of global inhibitory interneurons (Fig. 1a). They are driven by
the excitatory neurons and inhibit them in return. In our model, we assume the forward
connections between the excitatory and the inhibitory neurons to be strong, so that each
spike of an excitatory neuron triggers a spike in the global inhibitory neurons. The strength
of the total inhibition between the array neurons is adjusted by tuning the backward connections from the global inhibitory neurons to the array neurons. This configuration allows
the fastest spreading of inhibition through the network and is consistent with findings that
inhibitory interneurons tend to fire at high frequencies.
With this configuration, we can simplify the network by replacing the global inhibitory
interneurons with full inhibitory connectivity between the array neurons (Fig. 1b). In addition, each neuron has a self-excitatory connection that facilitates the selection of this
neuron as winner for repeated input.
2
Network Connectivity Constraints for a Winner-Take-All Mode
We first discuss the conditions for the connectivity under which the network operates in
a hard winner-take-all mode. For this analysis, we assume that the neurons receive spike
trains of regular frequency. We also assume the neurons to be non-leaky.
The membrane potentials Vi , i = 1 . . . N then satisfy the equation of a non-leaky integrate-
Figure 2: Membrane potential of the winning neuron k (a) and another neuron in the array
(b). Black bars show the times of input spikes. Traces show the changes in the membrane
membrane potential caused by the various synaptic inputs. Black dots show the times of
output spikes of neuron k.
and-fire neuron model with non-conductance-based synapses:

    \frac{dV_i}{dt} = V_E \sum_n \delta(t - t_i^{(n)}) - V_I \sum_{j=1,\, j \neq i}^{N} \sum_m \delta(t - s_j^{(m)}) \qquad (1)
The membrane resting potential is set to 0. Each neuron receives external excitatory input
and inhibitory connections from all other neurons. All inputs to a neuron are spikes and
its output is also transmitted as spikes to other neurons. We neglect the dynamics of the
synaptic currents and the delay in the transmission of the spikes. Each input spike causes a
fixed discontinuous jump in the membrane potential (VE for the excitatory synapse and VI
for the inhibitory). Each neuron i spikes when Vi ≥ Vth and is reset to Vi = 0. Immediately
afterwards, it receives a self-excitation of weight Vself. All potentials satisfy 0 ≤ Vi ≤ Vth,
that is, an inhibitory spike can not drive the membrane potential below ground. All neurons
i ∈ 1 . . . N, i ≠ k receive excitatory input spike trains of constant frequency ri. Neuron k
receives the highest input frequency (rk > ri for all i ≠ k).
As soon as neuron k spikes once, it has won the computation. Depending on the initial conditions, other neurons can at most have transient spikes before the first spike of
neuron k. For this hard winner-take-all mode, the network has to fulfill the following constraints (Fig. 2):
(a) Neuron k (the winning neuron) spikes after receiving nk = n input spikes that cause
its membrane potential to exceed threshold. After every spike, the neuron is reset to Vself :
    V_{self} + n_k\, V_E \ge V_{th} \qquad (2)
(b) As soon as neuron k spikes once, no other neuron i ≠ k can spike because it receives
an inhibitory spike from neuron k. Another neuron can receive up to n spikes even if its
input spike frequency is lower than that of neuron k because the neuron is reset to Vself
after a spike, as illustrated in Figure 2. The resulting membrane voltage has to be smaller
than before:
    n_i\, V_E \le n_k\, V_E \le V_I \qquad (3)
(c) If a neuron j other than neuron k spikes in the beginning, there will be some time
in the future when neuron k spikes and becomes the winning neuron. From then on, the
conditions (a) and (b) hold, so a neuron j ≠ k can at most have a few transient spikes.
Let us assume that neurons j and k spike with almost the same frequency (but rk > rj).
For the inter-spike intervals τi = 1/ri this means τj > τk. Since the spike trains are not
synchronized, an input spike to neuron k has a changing phase offset φ from an input spike
of neuron j. At every output spike of neuron j, this phase decreases by Δφ = nk(τj − τk)
until φ < nk(τj − τk). When this happens, neuron k receives (nk + 1) input spikes before
neuron j spikes again and crosses threshold:
    (n_k + 1)\, V_E \ge V_{th} \qquad (4)
We can choose Vself = VE and VI = Vth to fulfill the inequalities (2)-(4). VE is adjusted to
achieve the desired nk .
Case (c) happens only under certain initial conditions, for example when Vk ≪ Vj or
when neuron j initially received a spike train of higher frequency than neuron k. A leaky
integrate-and-fire model will ensure that all membrane potentials are discharged (Vi = 0) at
the onset of a stimulus. The network will then select the winning neuron after receiving a
pre-determined number of input spikes and this winner will have the first output spike.
2.1
Poisson-Distributed Inputs
In the case of Poisson-distributed spiking inputs, there is a probability associated with the
correct winner being selected. This probability depends on the Poisson rate λ and the
number of spikes needed for the neuron to reach threshold n. The probability that m input
spikes arrive at a neuron in the period T is given by the Poisson distribution
    P(m, \lambda T) = e^{-\lambda T}\, \frac{(\lambda T)^m}{m!} \qquad (5)
We assume that all neurons i receive an input rate λi, except the winning neuron which
receives a higher rate λk. All neurons are completely discharged at t = 0.
The network will make a correct decision at time T if the winner crosses threshold exactly
then with its nth input spike, while all other neurons received less than n spikes until then.
The winner receives the nth input spike at T if it received n−1 input spikes in [0, T) and
one at time T. This results in the probability density function
    p_k(T) = \lambda_k\, P(n-1, \lambda_k T) \qquad (6)
The probability that the other N−1 neurons receive less than or equal to n−1 spikes in [0, T)
is
    P_0(T) = \prod_{i=1,\, i \neq k}^{N} \Big( \sum_{j=0}^{n-1} P(j, \lambda_i T) \Big) \qquad (7)
For a correct decision, the output spike of the winner can happen at any time T > 0, so we
integrate over all times T :
    P = \int_0^{\infty} p_k(T)\, P_0(T)\, dT = \int_0^{\infty} \lambda_k P(n-1, \lambda_k T) \prod_{i=1,\, i \neq k}^{N} \Big( \sum_{j=0}^{n-1} P(j, \lambda_i T) \Big)\, dT \qquad (8)
We did not find a closed solution for this integral, but we can discuss its properties as n is
varied by changing the synaptic efficacies. For n = 1, every input spike elicits an output
spike. The probability of having an output spike from neuron k is then directly dependent
on the input rates, since no computation in the network takes place. For n → ∞, the
integration times to determine the rates of the Poisson-distributed input spike trains are
large, and the neurons perform a good estimation of the input rate. The network can then
discriminate small changes in the input frequencies. This gain in precision leads to a slow
response time of the network, since a large number of input spikes is integrated before an
output spike of the network.
The winner-take-all architecture can also be used with a latency spike code. In this case,
the delay of the input spikes after a global reset determines the strength of the signal. The
winner is selected after the first input spike to the network (nk = 1). If all neurons are
discharged at the onset of the stimulus, the network does not require the global reset. In
general, the computation is finished at a time nk · τk after the stimulus onset.
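The integral need not be solved analytically to check predictions: the time of a neuron's nth Poisson input spike is Gamma(n, 1/λ)-distributed, so P is the probability that neuron k's nth spike wins the race against all other neurons. A hedged Monte Carlo sketch (rates, trial count, and names are illustrative):

```python
import random

# Monte Carlo estimate of Eq. (8): the n-th Poisson arrival time of a
# rate-lambda train is Gamma(n, 1/lambda)-distributed, and the decision
# is correct when neuron k's n-th input spike precedes every other
# neuron's n-th input spike.
def p_correct(lam_winner, lam_other, n, n_neurons, trials=20000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t_k = rng.gammavariate(n, 1.0 / lam_winner)
        t_rest = min(rng.gammavariate(n, 1.0 / lam_other)
                     for _ in range(n_neurons - 1))
        wins += t_k < t_rest
    return wins / trials
```

For n = 1 the race reduces to competing exponentials, so P ≈ λk/(λk + λi); raising n improves accuracy at the cost of response time, as discussed above.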
3
Results
We implemented this architecture on a chip with 64 integrate-and-fire neurons implemented
in analog VLSI technology. These neurons follow the model equation 1, except that they
also show a small linear leakage. Spikes from the neurons are communicated off-chip
using an asynchronous event representation transmission protocol (AER). When a neuron
spikes, the chip outputs the address of this neuron (or spike) onto a common digital bus (see
Figure 3). An external spike interface module (consisting of a custom computer board that
can be programmed through the PCI bus) receives the incoming spikes from the chip, and
retransmits spikes back to the chip using information stored in a routing table. This module
can also monitor spike trains from the chip and send spikes from a stored list. Through
this module and the AER protocol, we implement the connectivity needed for the winnertake-all network in Figure 1. All components have been used and described in previous
work [Boahen, 2000, Liu et al., 2001].
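The rerouting step can be pictured as a table lookup. The snippet below is a minimal software analogue of that mechanism, not the interface board's firmware: each incoming address-event is looked up in a routing table and fanned out as (target, weight) synaptic events encoding the connectivity of Fig. 1b, i.e., self-excitation plus full inhibition. All names and weight values are illustrative.

```python
# Minimal software analogue of AER rerouting (illustrative, not the
# interface board's firmware): each incoming address-event is looked up
# in a routing table and fanned out as (target, weight) synaptic events.
# The table encodes the WTA connectivity of Fig. 1b: self-excitation
# Vself and full inhibition -VI to every other neuron.
def build_wta_table(n_neurons, v_self, v_inh):
    table = {}
    for src in range(n_neurons):
        fanout = [(src, v_self)]                       # self-excitation
        fanout += [(dst, -v_inh)
                   for dst in range(n_neurons) if dst != src]
        table[src] = fanout
    return table

def route(table, spike_address):
    # one output spike -> the synaptic events sent back to the array
    return table[spike_address]
```

A spike from neuron 2 in a 4-neuron array thus produces one excitatory event back to neuron 2 and three inhibitory events to the others.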
Figure 3: The connections are implemented by transmitting spikes over a common bus
(grey arrows). Spikes from aVLSI neurons in the network are recorded by the digital
interface and can be monitored and rerouted to any neuron in the array. Additionally,
externally generated spike trains can be transmitted to the array through the sequencer.
We configure this network according to the constraints which are described above. Figure 4
illustrates the network behaviour with a spike raster plot. At time t = 0, the neurons
receive inputs with the same regular firing frequency of 100Hz except for one neuron which
received a higher input frequency of 120Hz. The synaptic efficacies were tuned so that
threshold is reached with 6 input spikes, after which the network does select the neuron
with the strongest input as the winner.
We characterized the discrimination capability of the winner-take-all implementation by
Figure 4: Example raster plot of the spike trains to and from the neurons: (a) Input: starting
from 0 ms, the neurons are stimulated with spike trains of a regular frequency of 100Hz,
but randomized phase. Neuron number 42 receives an input spike train with an increased
frequency of 120Hz. (b) Output without WTA connectivity: after an adjustable number
of input spikes, the neurons start to fire with a regular output frequency. The output frequencies of the neurons are slightly different due to mismatch in the synaptic efficacies.
Neuron 42 has the highest output frequency since it receives the strongest input. (c) Output
with WTA connectivity: only neuron 42 with the strongest input fires, all other neurons are
suppressed.
measuring the minimal frequency, relative to the other inputs, to which the input rate of a
neuron has to be raised for it to be selected as the winner. The neuron being tested receives an input
of regular frequency f · 100 Hz, while all other neurons receive 100 Hz. The histogram
of the minimum factors f for all neurons is shown in Figure 5. On average, the network
can discriminate a difference in the input frequency of 10%. This value is identical with
the variation in the synaptic efficacies of the neurons, which had been compensated to a
mismatch of 10%. We can therefore conclude that the implemented winner-take-all network functions according to the above discussion of the constraints. Since only the timing
information of the spike trains is used, the results can be extended to a wide range of input
frequencies different from 100Hz.
To test the performance of the network with Poisson inputs, we stimulated all neurons with
Poisson-distributed spike trains of rate λ, except neuron k which received the rate λk = f · λ.
Eqn. 8 then simplifies to
    P = \int_0^{\infty} f\lambda\, P(n-1, f\lambda T) \Big( \sum_{i=0}^{n-1} P(i, \lambda T) \Big)^{N-1} dT \qquad (9)
We show measured data and theoretical predictions for a winner-take-all network of 2 and
8 neurons (Fig. 6). Obviously, the discrimination performance of the network is substantially limited by the Poisson nature of the spike trains compared to spike trains of regular
frequency.
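Eq. (9) can also be integrated numerically with a plain Riemann sum; the rate normalization (λ = 1, i.e., time rescaled), the cutoff, and the step size below are illustrative choices, not values from the experiments.

```python
import math

# Left-endpoint Riemann-sum evaluation of Eq. (9).  Time is rescaled so
# that lambda = 1; the cutoff t_max and step dt are illustrative.
def poisson_pmf(m, mu):
    return math.exp(-mu) * mu ** m / math.factorial(m)

def p_correct_eq9(lam, f, n, n_neurons, t_max=40.0, dt=0.01):
    total = 0.0
    steps = int(t_max / dt)
    for s in range(1, steps + 1):
        t = s * dt
        # density of the winner's n-th input spike at time t ...
        integrand = f * lam * poisson_pmf(n - 1, f * lam * t)
        # ... times the probability the other N-1 neurons saw < n spikes
        integrand *= sum(poisson_pmf(i, lam * t)
                         for i in range(n)) ** (n_neurons - 1)
        total += integrand * dt
    return total
```

For f = 1.5, n = 8 and two neurons this evaluates to roughly 0.79, which can be compared against the measured curves in Fig. 6.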
Figure 5: Discrimination capability of the winner-take-all network: X-axis: factor f to
which the input frequency of a neuron has to be increased, compared to the input rate of
the other neurons, in order for that neuron to be selected as the winner. Y-axis: histogram
of all 64 neurons.
[Figure 6 plots: left, Pcorrect versus fincrease with n = 8; right, Pcorrect versus n with fincrease = 1.5]
Figure 6: Probability of a correct decision of the winner-take-all network, versus difference
in frequencies (left), and number of input spikes n for a neuron to reach threshold (right).
The measured data (crosses/circles) is shown with the prediction of the model (continuous
lines), for a winner-take-all network of 2 neurons (red,circles) and 8 neurons (blue, crosses).
4
Conclusion
We analysed the performance and behavior of a winner-take-all spiking network that receives input spike trains. The neuron that receives spikes with the highest rate is selected as the winner after a pre-determined number of input spikes. Assuming a non-leaky
integrate-and-fire model neuron with constant synaptic weights, we derived constraints for
the strength of the inhibitory connections and the self-excitatory connection of the neuron. A large inhibitory synaptic weight is in agreement with previous analysis for analog
inputs [Jin and Seung, 2002]. The ability of a single spike from the inhibitory neuron to
inhibit all neurons removes constraints on the matching of the time constants and efficacy
of the connections from the excitatory neurons to the inhibitory neuron and vice versa. This
feature makes the computation tolerant to variance in the synaptic parameters as demonstrated by the results of our experiment.
We also studied whether the network is able to select the winner in the case of input spike
trains which have a Poisson distribution. Because of the Poisson distributed inputs, the
network does not always chose the right winner (that is, the neuron with the highest input
frequency) but there is a certain probability that the network does select the right winner.
Results from the network show that the measured probabilities match that of the theoretical results. We are currently extending our analysis to a leaky integrate-and-fire neuron
model and conductance-based synapses, which results in a more complex description of
the network.
Acknowledgments
This work was supported in part by the IST grant IST-2001-34124. We acknowledge Sebastian Seung for discussions on the winner-take-all mechanism.
References
[Boahen, 2000] Boahen, K. A. (2000). Point-to-point connectivity between neuromorphic
chips using address-events. IEEE Transactions on Circuits & Systems II, 47(5):416–434.
[Douglas and Martin, 2004] Douglas, R. and Martin, K. (2004). Cortical microcircuits.
Annual Review of Neuroscience, 27(1).
[Hahnloser et al., 2000] Hahnloser, R., Sarpeshkar, R., Mahowald, M. A., Douglas, R. J.,
and Seung, S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947–951.
[Indiveri, 2000] Indiveri, G. (2000). Modeling selective attention using a neuromorphic
analog VLSI device. Neural Computation, 12(12):2857–2880.
[Itti et al., 1998] Itti, C., Niebur, E., and Koch, C. (1998). A model of saliency-based fast
visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 20(11):1254–1259.
[Jin and Seung, 2002] Jin, D. Z. and Seung, H. S. (2002). Fast computation with spikes in
a recurrent neural network. Physical Review E, 65:051922.
[Lazzaro et al., 1989] Lazzaro, J., Ryckebusch, S., Mahowald, M. A., and Mead, C. A.
(1989). Winner-take-all networks of O(n) complexity. In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 1, pages 703–711. Morgan
Kaufmann, San Mateo, CA.
[Lee et al., 1999] Lee, D., Itti, C., Koch, C., and Braun, J. (1999). Attention activates
winner-take-all competition among visual filters. Nature Neuroscience, 2:375–381.
[Liu et al., 2001] Liu, S.-C., Kramer, J., Indiveri, G., Delbrück, T., Burg, T., and Douglas,
R. (2001). Orientation-selective aVLSI spiking neurons. Neural Networks: Special
Issue on Spiking Neurons in Neuroscience and Technology, 14(6/7):629–643.
[Oster and Liu, 2004] Oster, M. and Liu, S.-C. (2004). A winner-take-all spiking network
with spiking inputs. In 11th IEEE International Conference on Electronics, Circuits and
Systems. ICECS '04: Tel Aviv, Israel, 13–15 December.
Separation of Music Signals by Harmonic Structure Modeling
Yun-Gang Zhang
Department of Automation
Tsinghua University
Beijing 100084, China
[email protected]
Chang-Shui Zhang
Department of Automation
Tsinghua University
Beijing 100084, China
[email protected]
Abstract
Separation of music signals is an interesting but difficult problem. It is
helpful for many other music research tasks such as audio content analysis.
In this paper, a new music signal separation method is proposed, which is
based on harmonic structure modeling. The main idea of harmonic structure modeling is that the harmonic structure of a music signal is stable,
so a music signal can be represented by a harmonic structure model. Accordingly, a corresponding separation algorithm is proposed: learn a harmonic structure model for each music signal in the
mixture, and then separate the signals by using these models to distinguish the
harmonic structures of different signals. Experimental results show that
the algorithm can separate signals and obtain not only a very high Signal-to-Noise Ratio (SNR) but also rather good subjective audio quality.
1 Introduction
Audio content analysis is an important area in music research. There are many open problems in this area, such as content based music retrieval and classification, Computational
Auditory Scene Analysis (CASA), Multi-pitch Estimation, Automatic Transcription, Query
by Humming, etc. [1, 2, 3, 4]. In all these problems, content extraction and representation
is where the shoe pinches. In a song, the sounds of different instruments are mixed together,
and it is difficult to parse the information of each instrument. Separation of sound sources
in a mixture is a difficult problem and no reliable methods are available for the general
case. However, music signals are so different from general signals. So, we try to find a way
to separate music signals by utilizing the special character of music signals. After source
separation, many audio content analysis problems will become much easier. In this paper,
a music signal means a monophonic music signal performed by one instrument. A song is
a mixture of several music signals and one or more singing voice signals.
As we know, music signals are more "ordered" than voice. The entropy of music is much
more constant in time than that of speech [5]. More essentially, we found that an important
character of a music signal is that its harmonic structure is stable. And the harmonic structures of music signals performed by different instruments are different. So, a harmonic
structure model is built to represent a music signal. This model is the foundation of the
separation algorithm. In the separation algorithm, an extended multi-pitch estimation
algorithm is used to extract the harmonic structures of all sources, and a clustering algorithm is
used to calculate harmonic structure models. Then, signals are separated by using these
models to distinguish harmonic structures of different signals.
There are many other signal separation methods, such as ICA [6]. General signal separation
methods do not sufficiently utilize the special character of music signals. Gil-Jin and Te-Won proposed a probabilistic approach to single-channel blind signal separation [7], which
is based on exploiting the inherent time structure of sound sources by learning a priori
sets of basis filters. In our approach, training sets are not required, and all information
are directly learned from the mixture. Feng et al. applied FastICA to extract singing and
accompaniment from a mixture [8]. Vanroose used ICA to remove music background
from speech by subtracting ICA components with the lowest entropy [9]. Compared to
these approaches, our method can separate each individual instrument sound, preserve the
harmonic structure in the separated signals and obtain a good subjective audio quality. One
of the most important contributions of our method is that it can significantly improve the
accuracy of multi-pitch estimation. Compared to previous methods, our method learns
models from the primary multi-pitch estimation results, and uses these models to improve
the results. More importantly, pitches of different sources can be distinguished by these
models. This advantage is significant for automatic transcription.
The rest of this paper is organized as follows: harmonic structure modeling is detailed in
Section 2, the algorithm is described in Section 3, experimental results are shown
in Section 4, and conclusions and discussion are given in Section 5.
2 Harmonic structure modeling for music signals
A monophonic music signal s(t) can be represented by a sinusoidal model [10]:
    s(t) = Σ_{r=1}^{R} A_r(t) cos[φ_r(t)] + e(t)    (1)
where A_r(t) and φ_r(t) = ∫_0^t 2πr f_0(τ) dτ are the instantaneous amplitude and phase of the
r-th harmonic, respectively, R is the maximal harmonic number, f_0(τ) is the fundamental
frequency at time τ, and e(t) is the noise component.
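As a concrete illustration, the model in Eq. (1) can be rendered directly. The sketch below assumes constant per-harmonic amplitudes and a constant fundamental frequency (so φ_r(t) = 2πr f_0 t) and omits the noise term e(t); it is an illustrative simplification, not the authors' code.

```python
import numpy as np

def synthesize_harmonic(f0, amplitudes, duration, sr=22050):
    """Render Eq. (1) with constant amplitudes A_r and a constant fundamental
    f0, so that phi_r(t) = 2*pi*r*f0*t; the noise term e(t) is omitted."""
    t = np.arange(int(duration * sr)) / sr
    s = np.zeros_like(t)
    for r, a_r in enumerate(amplitudes, start=1):
        s += a_r * np.cos(2 * np.pi * r * f0 * t)
    return s

# A 440 Hz tone with three harmonics at relative amplitudes 1, 0.5, 0.25.
s = synthesize_harmonic(440.0, [1.0, 0.5, 0.25], duration=0.1)
```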
We divide s(t) into overlapped frames and calculate f_0^l and A_r^l by detecting peaks in the
magnitude spectrum, where A_r^l = 0 if the r-th harmonic does not exist and l = 1, ..., L is the
frame index. f_0^l and [A_1^l, ..., A_R^l] describe the position and amplitudes of the harmonics. We
normalize A_r^l by multiplying a factor κ^l = C/A_1^l (C is an arbitrary constant) to eliminate
the influence of the amplitude. We translate the amplitudes into a log scale, because the
human ear has a roughly logarithmic sensitivity to signal intensity. The Harmonic Structure
Coefficient is then defined as in equation (2). The timbre of a sound is mostly controlled
by the number of harmonics and the ratio of their amplitudes, so B^l = [B_1^l, ..., B_R^l],
which is free from the fundamental frequency and amplitude, exactly represents the timbre
of the sound. In this paper, these coefficients are used to represent the harmonic structure
of a sound. The Average Harmonic Structure and Harmonic Structure Stability are defined as
follows to model music signals and measure the stability of harmonic structures.
• Harmonic Structure B^l, where B_i^l is the Harmonic Structure Coefficient:

    B^l = [B_1^l, ..., B_R^l],   B_i^l = log(κ^l A_i^l) / log(κ^l A_1^l),   i = 1, ..., R    (2)

• Average Harmonic Structure (AHS):  B̄ = (1/L) Σ_{l=1}^{L} B^l

• Harmonic Structure Stability (HSS):

    HSS = (1/(RL)) Σ_{l=1}^{L} ‖B^l − B̄‖² = (1/(RL)) Σ_{r=1}^{R} Σ_{l=1}^{L} (B_r^l − B̄_r)²    (3)
AHS and HSS are the mean and variance of B^l. Since the timbres of most instruments are
stable, B^l varies little across frames in a music signal, and the AHS is a good model
to represent music signals. On the contrary, B^l varies much in a voice signal, and the
corresponding HSS is much bigger than that of a music signal. See Figure 1.
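The quantities in Eqs. (2)-(3) are easy to compute once the per-frame harmonic amplitudes A_r^l are available (extracting them from audio is the job of the pitch estimation step described later). A minimal sketch, with the frame amplitudes assumed given:

```python
import numpy as np

def harmonic_structure(frame_amps, C=100.0):
    """Eq. (2): B_i^l = log(k^l * A_i^l) / log(k^l * A_1^l), with the
    normalizer k^l = C / A_1^l (so the denominator is simply log(C)).
    frame_amps has shape (L, R); zeros mark missing harmonics. Assumes
    the first harmonic is present in every frame."""
    A = np.asarray(frame_amps, dtype=float)
    k = C / A[:, :1]                       # kappa^l, shape (L, 1)
    with np.errstate(divide='ignore'):     # log(0) for missing harmonics
        vals = np.log(k * A) / np.log(C)
    return np.where(A > 0, vals, 0.0)

def ahs_hss(B):
    """Eq. (3): AHS is the mean of B^l over frames; HSS is the average
    squared deviation from it, i.e. 1/(R*L) * sum_r sum_l (B_r^l - AHS_r)^2."""
    ahs = B.mean(axis=0)
    hss = ((B - ahs) ** 2).mean()
    return ahs, hss
```

On a stable instrument the rows of B are nearly identical, so the HSS is small; on a singing voice they differ from frame to frame and the HSS is large, which is exactly the contrast shown in Figure 1.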
[Figure 1: Spectra, AHSs and HSSs of voice and music signals (plots omitted). (a) Spectra in different frames of a voice signal: the number of harmonics (significant peaks in the spectrum) and their amplitude ratios are totally different. (b) Spectra in different frames of a piccolo signal: the number of harmonics and their amplitude ratios are almost the same. (c) The AHS and HSS of an oboe signal (HSS = 0.056576). (d) The AHS and HSS of a SopSax signal (HSS = 0.037645). (e) The AHS and HSS of a male singing voice (HSS = 0.17901). (f) The AHS and HSS of a female singing voice (HSS = 0.1). In (c)-(f), the x-axis is the harmonic number and the y-axis is the corresponding harmonic structure coefficient.]
3 Separation algorithm based on harmonic structure modeling
Without loss of generality, suppose we have a signal mixture consisting of one voice and
several music signals. The separation algorithm consists of four steps: preprocessing, extraction of harmonic structures, music AHSs analysis, separation of signals.
In the preprocessing step, the mean and energy of the input signal are normalized. In the second step, the pitch estimation algorithm of Terhardt [11] is extended and used to extract
harmonic structures. This algorithm is suitable for estimating both the fundamental frequency and all its harmonics. In Terhardt's algorithm, in each frame, all spectral peaks
exceeding a given threshold are detected. The frequencies of these peaks are [f_1, ..., f_K],
where K is the number of peaks. For a fundamental frequency candidate f, count the number of
f_i which satisfy the following condition:

    floor[(1 + d) f_i / f] ≥ (1 − d) f_i / f    (4)

floor(x) denotes the greatest integer less than or equal to x. The condition checks whether
r_i f (1 − d) ≤ f_i ≤ r_i f (1 + d). If the condition is fulfilled, f_i is the frequency of
the r_i-th harmonic component when the fundamental frequency is f. For each fundamental
frequency candidate f, the coincidence number is calculated, and the f̂ corresponding to the
largest coincidence number is selected as the estimated fundamental frequency.
The original algorithm is extended in the following ways. Firstly, not all peaks exceeding
the given threshold are detected; only the significant ones are selected by an edge detection
procedure. This is very important for eliminating noise and achieving high performance in
the next steps. Secondly, not only the fundamental frequency but also all its harmonics are
extracted, so that B can be calculated. Thirdly, the original optimality criterion selects
the f̂ corresponding to the largest coincidence number. This criterion is not stable when the
signal is polyphonic, because harmonic components of different sources may influence
each other. A new optimality criterion is defined as follows (n is the coincidence number):
    d = (1/n) Σ_{i=1, f_i coincident with f}^{K} |r_i − f_i / f| / r_i    (5)
The f̂ corresponding to the smallest d is the estimated fundamental frequency. The new criterion measures the precision of coincidence: for each fundamental frequency, harmonic
components of the same source are more likely to have a high coincidence precision
than those of a different source, so the new criterion helps separate the harmonic
structures of different sources. Note that the coincidence number is required to be larger
than a threshold, such as 4-6; this requirement eliminates many errors. Finally, in the original algorithm only one pitch was detected in each frame. Here the sound is polyphonic,
so all pitches for which the corresponding d is below a given threshold are extracted.
After harmonic structure extraction, a data set of harmonic structures is obtained. As the
analysis in section two, in different frames, music harmonic structures of the same instrument are similar to each other and different from those of other instruments. So, in the data
set all music harmonic structures form several high density clusters. Each cluster corresponds to an instrument. Voice harmonic structures scatter around like background noise,
because the harmonic structure of the voice signal is not stable.
In the third step, the NK algorithm [12] is used to learn the music AHSs. The NK algorithm is a clustering algorithm which can handle data sets consisting of clusters with different
shapes, densities and sizes, even in the presence of background noise, and it can deal with high-dimensional data. The harmonic structure data set is exactly such a data set: clusters
of harmonic structures of different instruments have different densities, voice harmonic
structures act as background noise, and each data point (a harmonic structure) has a high dimensionality (20 in our experiments). In the NK algorithm, we first find K neighbors for each point
and construct a neighborhood graph; each point and its neighbors form a neighborhood.
Then local PCA is used to calculate the eigenvalues of each neighborhood. In a cluster, data points
are close to each other and the neighborhood is small, so the corresponding eigenvalues are
small. On the contrary, for a noise point the corresponding eigenvalues are much bigger, so
noise points can be removed by eigenvalue analysis. After denoising, in the neighborhood
graph all points of a cluster are connected together by edges between neighbors. If two
clusters are connected together, there must exist long edges between them, and the eigenvalues of the corresponding neighborhoods are bigger than the others. So all edges between
clusters can be found and removed by eigenvalue analysis. Then the data points are clustered
correctly, and the AHSs can be obtained by calculating the mean of each cluster.
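The eigenvalue test at the heart of this step can be sketched with plain local PCA. This is a simplified stand-in for the NK algorithm of [12], not a faithful reimplementation: it only computes the per-point neighborhood spread used to flag noise points.

```python
import numpy as np

def neighborhood_spread(X, K=8):
    """For each point, form the neighborhood of the point and its K nearest
    neighbors, and measure its size by the largest eigenvalue of the local
    covariance (local PCA). Points inside dense clusters get small values;
    scattered points -- e.g. voice harmonic structures -- get large ones."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    spread = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(d2[i])[:K + 1]      # the point itself plus K NNs
        local = X[nbrs] - X[nbrs].mean(axis=0)
        cov = local.T @ local / len(nbrs)
        spread[i] = np.linalg.eigvalsh(cov)[-1]   # largest eigenvalue
    return spread

# Thresholding `spread` removes noise points; clustering the survivors and
# averaging each cluster then gives one AHS per instrument.
```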
In the separation step, all harmonic structures of an instrument in all frames are extracted
to reconstruct the corresponding music signals and then removed from the mixture. After
removing all music signals, the rest of the mixture is the separated voice signal.
The procedure of music harmonic structure detection is detailed as follows. Given a music
AHS [B̄_1, ..., B̄_R] and a fundamental frequency candidate f, a music harmonic structure
is predicted: [f, 2f, ..., Rf] and [B̄_1, ..., B̄_R] are its frequencies and harmonic structure
coefficients. The closest peak in the magnitude spectrum to each predicted harmonic
component is detected. Suppose [f_1, ..., f_R] and [B_1, ..., B_R] are the frequencies and
harmonic structure coefficients of these peaks (the measured peaks). Formula (6) is defined to
calculate the distance between the predicted harmonic structure and the measured peaks.
    D(f) = Σ_{r=1, B̄_r>0, B_r>0}^{R} { Δf_r · (rf)^{−p} + (B̄_r / B̄_max) · q · Δf_r · (rf)^{−p} }
           + a Σ_{r=1, B̄_r>0, B_r>0}^{R} (B̄_r / B̄_max) (B̄_r − B_r)²    (6)
The first part of D is a modified version of the Two-Way Mismatch measure defined by Maher
and Beauchamp, which measures the frequency difference between the predicted peaks and
the measured peaks [13]; p and q are parameters, and Δf_r = |f_r − r·f|. The second
part measures the shape difference between the two, and a is a normalization coefficient. Note
that only harmonic components with non-zero harmonic structure coefficients are considered. Let f̂ indicate the fundamental frequency candidate
corresponding to the smallest distance between the predicted peaks and the actual spectral peaks. If D(f̂) is smaller
than a threshold T_d, a music harmonic structure is detected; otherwise there is no music
harmonic structure in the frame. If a music harmonic structure is detected, the corresponding measured peaks in the spectrum are extracted, and the music signal is reconstructed
by IFFT. Smoothing between frames is needed to eliminate errors and click noise between
frames.
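A simplified sketch of the detection step follows. It mirrors the structure of Eq. (6) -- a frequency-mismatch term plus an amplitude-shape term -- but the exact weighting and the parameter values (p, q, a) here are illustrative assumptions, not the authors' published settings:

```python
import numpy as np

def harmonic_distance(f, ahs, peak_freqs, peak_bs, p=0.5, q=1.4, a=1.0):
    """For each harmonic r*f predicted by the AHS (non-zero coefficients
    only), find the closest measured peak and accumulate a frequency
    mismatch term weighted as in the Two-Way Mismatch measure, plus a
    shape-difference term between predicted and measured coefficients."""
    peak_freqs = np.asarray(peak_freqs, dtype=float)
    peak_bs = np.asarray(peak_bs, dtype=float)
    b_max = ahs.max()
    dist = 0.0
    for r, b_bar in enumerate(ahs, start=1):
        if b_bar <= 0:
            continue
        j = int(np.argmin(np.abs(peak_freqs - r * f)))
        dfr = abs(peak_freqs[j] - r * f)
        dist += dfr * (r * f) ** (-p) * (1.0 + q * b_bar / b_max)
        dist += a * (b_bar / b_max) * (b_bar - peak_bs[j]) ** 2
    return dist
```

A candidate f̂ whose distance falls below the threshold T_d triggers a detection, after which the matched peaks are extracted and inverse-transformed.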
4 Experimental results
We have tested the performance of the proposed method on mixtures of different voice
and music signals. The sample rate of the mixtures is 22.05 kHz. Audio files for all the
experiments are accessible at the website given in footnote 1.
Figure 2 shows experimental results. In experiments 1 and 2, the mixed signal consists of
one voice signal and one music signal. In experiment 3, the mixture consists of two music
signals. In experiment 4, the mixture consists of one voice and two music signals. Table 1
shows the SNR results. It can be seen that the mixtures are well separated into voice and music
signals, and very high SNRs are obtained in the separated signals. The experimental results
show that the music AHS is a good model for music signal representation and separation.
Another important fact should be emphasized: in the separation procedure,
music harmonic structures are detected by the music AHS model and separated from the
mixture, while most of the time the voice harmonic structures remain almost untouched. This
gives the separated signals a rather good subjective audio quality, due to the
well-preserved harmonic structure in the separated signals. Few existing methods can obtain such a
good result, because the harmonic structure is distorted in most of them.
It is difficult to compare our method with other methods, because they are so different.
However, we compared our method with a speech enhancement method, because separation
1 http://www.au.tsinghua.edu.cn/szll/bodao/zhangchangshui/bigeye/member/zyghtm/experiments.htm
Table 1: SNR results (dB): snr_v, snr_m1 and snr_m2 are the SNRs of the voice and music
signals in the mixed signal. snr_e* is the SNR of the speech enhancement result. snr_v*, snr_m1*
and snr_m2* are the SNRs of the separated voice and music signals.

                 snr_v  snr_m1  snr_m2  snr_e*  snr_v*  snr_m1*  snr_m2*  Total inc.
Experiment 1     -7.9    7.9      /     -6.0     6.7     10.8       /       17.5
Experiment 2     -5.2    5.2      /     -1.5     6.6     10.0       /       16.6
Experiment 3      /      1.6    -1.6     /        /       9.3      7.1      16.4
Experiment 4    -10.0    0.7    -2.2     /       2.8      8.6      6.3      29.2
of voice and music can be regarded as a speech enhancement problem, by regarding the music
as background noise. Figure 2 (b) and (d) give speech enhancement results obtained by a
speech enhancement software package which tries to estimate the spectrum of the noise in the pauses
of speech and enhance the speech by spectral subtraction [14]. Detecting pauses in speech
with a music background and enhancing speech in the presence of fast-changing music noise are both very difficult
problems, so traditional speech enhancement techniques cannot work here.
5 Conclusion and discussion
In this paper, a harmonic structure model is proposed to represent music signals and used
to separate music signals. Experimental results show a good performance of this method.
The proposed method has many applications, such as multi-pitch estimation, audio content
analysis, audio edit, speech enhancement with music background, etc.
Multi-pitch estimation is an important problem in music research. There are many existing methods, such as pitch perception model based methods, and probabilistic approaches
[4, 15, 16, 17]. However, multi-pitch estimation is a very difficult problem and remains
unsolved. Furthermore, it is difficult to distinguish pitches of different instruments in the
mixture. In our algorithm, not only harmonic structures but also corresponding fundamental frequencies are extracted. So, the algorithm is also a new multi-pitch estimation method.
It analyzes the primary multi-pitch estimation results and learns models to represent music
signals and improve multi-pitch estimation results. More importantly, pitches of different
sources can be distinguished by the AHS models. This advantage is significant for automatic transcription. Figure 2 (f) shows multi-pitch estimation results in experiment 3. It
can be seen that, the multi-pitch estimation results are fairly good.
The proposed method is useful for melody extraction. As we know, in a mixed signal,
multi-pitch estimation is a difficult problem. After separation, pitch estimation on the separated voice signal that contains melody becomes a monophonic pitch estimation problem,
which can be done easily. The estimated pitch sequence represents the melody of the song.
Then, many content base audio analysis tasks such as audio retrieval and classification
become much easier and many midi based algorithms can be used on audio files.
There are still some limitations. Firstly, the proposed algorithm does not work for non-harmonic instruments, such as some drums; rhythm tracking algorithms can be used
instead to separate drum sounds. Fortunately, most instrument sounds are harmonic. Secondly, for some instruments, the timbre in the onset is somewhat different from that in
the stable duration. Also, different performing methods (pizz. or arco) produce different
timbres. In these cases, the music harmonic structures of such an instrument will form several
clusters, not one; then a GMM instead of an average harmonic structure model
(actually a point model) should be used to represent the music.
[Figure 2: Experimental results (waveform and pitch plots omitted). (a) Experiment 1: the original voice and piccolo signals and the mixed signal. (b) Experiment 1: the separated signals and the speech enhancement result. (c) Experiment 2: the original voice and organ signals and the mixed signal. (d) Experiment 2: the separated signals and the speech enhancement result. (e) Experiment 3: the original piccolo and organ signals and the mixed signal. (f) Experiment 3: the separated signals and the multi-pitch estimation results. (g) Experiment 4: the original voice, piccolo and organ signals and the mixed signal. (h) Experiment 4: the separated signals. (i) Experiment 4: the learned music AHSs (HSS = 0.0066462 and HSS = 0.012713).]
Acknowledgments
This work is supported by the project (60475001) of the National Natural Science Foundation of China.
References
[1] J. S. Downie, "Music information retrieval," Annual Review of Information Science
and Technology, vol. 37, pp. 295-340, 2003.
[2] Roger Dannenberg, "Music understanding by computer," in IAKTA/LIST International Workshop on Knowledge Technology in the Arts Proc., 1993, pp. 41-56.
[3] G. J. Brown and M. Cooke, "Computational auditory scene analysis," Computer
Speech and Language, vol. 8, no. 4, pp. 297-336, 1994.
[4] M. Goto, "A robust predominant-F0 estimation method for real-time detection of
melody and bass lines in CD recordings," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2000), 2000, pp. 757-760.
[5] J. Pinquier, J. Rouas, and R. Andre-Obrecht, "Robust speech / music classification in
audio documents," in 7th International Conference on Spoken Language Processing
(ICSLP), 2002, pp. 2005-2008.
[6] P. Comon, "Independent component analysis, a new concept?," Signal Processing,
vol. 36, pp. 287-314, 1994.
[7] Gil-Jin Jang and Te-Won Lee, "A probabilistic approach to single channel blind signal
separation," in Neural Information Processing Systems 15 (NIPS 2002), 2003.
[8] Yazhong Feng, Yueting Zhuang, and Yunhe Pan, "Popular music retrieval by independent component analysis," in ISMIR, 2002, pp. 281-282.
[9] Peter Vanroose, "Blind source separation of speech and background music for improved speech recognition," in The 24th Symposium on Information Theory, May
2003, pp. 103-108.
[10] X. Serra, "Musical sound modeling with sinusoids plus noise," in Musical Signal
Processing, C. Roads, S. Pope, A. Piccialli, and G. De Poli, Eds. Swets & Zeitlinger
Publishers, 1997.
[11] E. Terhardt, "Calculating virtual pitch," Hearing Res., vol. 1, pp. 155-182, 1979.
[12] Yungang Zhang, Changshui Zhang, and Shijun Wang, "Clustering in knowledge embedded space," in ECML, 2003, pp. 480-491.
[13] R. C. Maher and J. W. Beauchamp, "Fundamental frequency estimation of musical
signals using a two-way mismatch procedure," Journal of the Acoustical Society of
America, vol. 95, no. 4, pp. 2254-2263, 1994.
[14] Serguei Koval, Mikhail Stolbov, and Mikhail Khitrov, "Broadband noise cancellation
systems: new approach to working performance optimization," in EUROSPEECH'99,
1999, pp. 2607-2610.
[15] Anssi Klapuri, "Automatic transcription of music," M.S. thesis, Tampere University
of Technology, Finland, 1998.
[16] Keerthi C. Nagaraj, "Toward automatic transcription - pitch tracking in polyphonic
environment," Literature survey, Mar. 2003.
[17] Hirokazu Kameoka, Takuya Nishimoto, and Shigeki Sagayama, "Separation of harmonic structures based on tied Gaussian mixture model and information criterion for
concurrent sounds," in IEEE International Conference on Acoustics, Speech, and
Signal Processing (ICASSP 2004), 2004.
Non-Gaussian Component Analysis: a Semi-parametric Framework for Linear Dimension Reduction
G. Blanchard^1, M. Sugiyama^1,2, M. Kawanabe^1, V. Spokoiny^3, K.-R. Müller^1,4
1 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
2 Dept. of CS, Tokyo Inst. of Tech., 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan
3 Weierstrass Institute and Humboldt University, Mohrenstr. 39, 10117 Berlin, Germany
4 Dept. of CS, University of Potsdam, August-Bebel-Strasse 89, 14482 Potsdam, Germany
[email protected]
{blanchar,sugi,nabe,klaus}@first.fhg.de
Abstract
We propose a new linear method for dimension reduction to identify nonGaussian components in high dimensional data. Our method, NGCA
(non-Gaussian component analysis), uses a very general semi-parametric
framework. In contrast to existing projection methods we define what is
uninteresting (Gaussian): by projecting out uninterestingness, we can estimate the relevant non-Gaussian subspace. We show that the estimation
error of finding the non-Gaussian components tends to zero at a parametric rate. Once NGCA components are identified and extracted, various
tasks can be applied in the data analysis process, like data visualization,
clustering, denoising or classification. A numerical study demonstrates
the usefulness of our method.
1 Introduction
Suppose {X_i}_{i=1}^{n} are i.i.d. samples in a high-dimensional space R^d drawn from an unknown distribution with density p(x). A general multivariate distribution is typically too
complex to analyze from the data, thus dimensionality reduction is necessary to decrease
the complexity of the model (see, e.g., [4, 11, 10, 12, 1]). We will follow the rationale
that in most real-world applications the "signal" or "information" contained in the high-dimensional data is essentially non-Gaussian while the "rest" can be interpreted as high-dimensional Gaussian noise. Thus we implicitly fix what is not interesting (the Gaussian part)
and learn its orthogonal complement, i.e. what is interesting. We call this approach non-Gaussian component analysis (NGCA).
We want to emphasize that we do not assume the Gaussian components to be of smaller
order of magnitude than the signal components. This setting therefore excludes the use
of common (nonlinear) dimensionality reduction methods such as Isomap [12], LLE [10],
that are based on the assumption that the data lies, say, on a lower dimensional manifold,
up to some small noise distortion. In the restricted setting where the number of Gaussian
components is at most one and all the non-Gaussian components are mutually independent,
Independent Component Analysis (ICA) techniques (e.g., [9]) are applicable to identify the
non-Gaussian subspace.
A framework closer in spirit to NGCA is that of projection pursuit (PP) algorithms [5, 7, 9],
where the goal is to extract non-Gaussian components in a general setting, i.e., the number
of Gaussian components can be more than one and the non-Gaussian components can be
dependent. Projection pursuit methods typically proceed by fixing a single index which
measures the non-Gaussianity (or "interestingness") of a projection direction. This index is
then optimized to find a good direction of projection, and the procedure is iterated to find
further directions. Note that some projection indices are suitable for finding super-Gaussian
components (heavy-tailed distribution) while others are suited for identifying sub-Gaussian
components (light-tailed distribution) [9]. Therefore, traditional PP algorithms may not
work effectively if the data contains, say, both super- and sub-Gaussian components.
Technically, the NGCA approach to identify the non-Gaussian subspace uses a very general semi-parametric framework based on a central property: there exists a linear mapping h -> β(h) ∈ R^d which, to any arbitrary (smooth) nonlinear function h : R^d -> R, associates a vector β lying in the non-Gaussian subspace. Using a whole family of different nonlinear functions h then yields a family of different vectors β̂(h) which all approximately lie in, and span, the non-Gaussian subspace. We finally perform PCA on this family of vectors to extract the principal directions and estimate the target space. Our main theoretical contribution in this paper is to prove consistency of the NGCA procedure, i.e. that the above estimation error vanishes at a rate sqrt(log(n)/n) with the sample size n. In practice, we consider functions of the particular form h_{ω,a}(x) = f_a(⟨ω, x⟩), where f is a function class parameterized, say, by a parameter a, and ‖ω‖ = 1.
Apart from the conceptual point of defining uninterestingness, instead of interestingness, as the point of departure, another way to look at our method is to say that it allows the combination of information coming from different indices h: here the above function f_a (for fixed a) plays a role similar to that of a non-Gaussianity index in PP, but we do combine a rich family of such functions (by varying a and even by considering several function classes at the same time). The important point here is that while traditional projection pursuit does not provide a well-founded justification for combining directions obtained from different indices, our framework allows us to do precisely this, thus implicitly selecting, in a given family of indices, the ones which are the most informative for the data at hand (while always maintaining consistency).
In the following section we will outline our main theoretical contribution, a novel semi-parametric theory for linear dimension reduction. Section 3 discusses the algorithmic procedure, and simulation results underline the usefulness of NGCA; finally, a brief conclusion is given.
2 Theoretical framework
The model. We assume the unknown probability density function p(x) of the observations in R^d is of the form

p(x) = g(Tx) φ_Γ(x),   (1)

where T is an unknown linear mapping from R^d to another space R^m with m ≤ d, g is an unknown function on R^m, and φ_Γ is a centered Gaussian density with unknown covariance matrix Γ. The above decomposition may be possible for any density p since g can be any function. Therefore, this decomposition is not restrictive in general.
Note that the model (1) includes as particular cases both the pure parametric (m = 0) and pure non-parametric (m = d) models. We effectively consider an intermediate case where d is large and m is rather small. In what follows we denote by I the m-dimensional linear subspace in R^d generated by the dual operator T^T:

I = Ker(T)^⊥ = Range(T^T).
We call I the non-Gaussian subspace. Note how this definition implements the general point of view outlined in the introduction: by this model we define rather what is considered uninteresting, i.e. the null space of T; the target space is defined indirectly as the orthogonal complement of the uninteresting component. More precisely, using the orthogonal decomposition X = X0 + X_I, where X0 ∈ Ker(T) and X_I ∈ I, equation (1) implies that conditionally on X_I, X0 has a Gaussian distribution. X0 is therefore "not interesting" and we wish to project it out.
Our goal is therefore to estimate I by some subspace Î computed from i.i.d. samples {Xi}_{i=1}^n which follow the distribution with density p(x). In this paper we assume the effective dimension m to be known or fixed a priori by the user. Note that we do not estimate Γ, g, and T when estimating I.
Population analysis. The main idea underlying our approach is summed up in the following Proposition (proof in Appendix). Whenever variable X has covariance matrix identity, this result allows, from an arbitrary smooth real function h on R^d, to find a vector β(h) ∈ I.

Proposition 1 Let X be a random variable whose density function p(x) satisfies (1) and suppose that h(x) is a smooth real function on R^d. Assume furthermore that Σ = E[XX^T] = I_d. Then under mild regularity conditions the following vector belongs to the target space I:

β(h) = E[∇h(X) − X h(X)].   (2)
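A quick Monte Carlo sanity check of Proposition 1 is straightforward. The sketch below is illustrative only: the uniform/Gaussian split of the coordinates and the tanh test function are my own choices, not from the paper. The components of β(h) along the Gaussian coordinates should vanish up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200_000, 5, 2   # first m coords non-Gaussian, rest Gaussian

# X has identity covariance: uniform on [-sqrt(3), sqrt(3)] has unit variance
X = np.hstack([rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, m)),
               rng.standard_normal((n, d - m))])

# h(x) = tanh(<w, x>);  grad h(x) = (1 - tanh^2(<w, x>)) w
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
t = np.tanh(X @ w)
grad_h = (1.0 - t ** 2)[:, None] * w
beta = (grad_h - X * t[:, None]).mean(axis=0)   # empirical version of Eq. (2)

# Components along the Gaussian coordinates are ~ 0 (Monte Carlo error only)
print(np.abs(beta[m:]).max())
```

Here I = span(e1, e2), so only the first two components of β may be nonzero in the population limit.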
Estimation using empirical data. Since the unknown density p(x) is used to define β by Eq.(2), one cannot directly use this formula in practice, and it must be approximated using the empirical data. We therefore have to estimate the population expectations using empirical ones. A bound on the corresponding approximation error is then given by the following theorem:

Theorem 1 Let h be a smooth function. Assume that sup_y max(‖∇h(y)‖, |h(y)|) < B and that X has covariance matrix E[XX^T] = I_d and is such that for some λ0 > 0: E[exp(λ0 ‖X‖)] ≤ a0 < ∞. Denote h̃(x) = ∇h(x) − x h(x). Suppose X1, . . . , Xn are i.i.d. copies of X and define

β̂(h) = (1/n) Σ_{i=1}^n h̃(Xi),   (3)

σ̂²(h) = (1/n) Σ_{i=1}^n ‖h̃(Xi) − β̂(h)‖²;   (4)
then with probability 1 − 4δ the following holds:

dist(β̂(h), I) ≤ 2 σ̂(h) sqrt((log δ^{−1} + log d)/n) + C(λ0, a0, B, d) · log(n δ^{−1}) log δ^{−1} / n^{3/4}.
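The empirical quantities β̂(h) and σ̂(h) of Eqs. (3)-(4) are cheap to compute. A minimal sketch follows; the tanh test function and the pure-Gaussian demo data (for which β(h) = 0) are illustrative assumptions of mine:

```python
import numpy as np

def beta_sigma_hat(X, h, grad_h):
    """Empirical beta_hat(h) and sigma_hat(h) of Eqs. (3)-(4).

    X: (n, d) whitened samples; h maps X to (n,); grad_h maps X to (n, d)."""
    H = grad_h(X) - X * h(X)[:, None]            # rows are h_tilde(X_i)
    beta_hat = H.mean(axis=0)                    # Eq. (3)
    sigma2_hat = ((H - beta_hat) ** 2).sum(axis=1).mean()   # Eq. (4)
    return beta_hat, np.sqrt(sigma2_hat)

rng = np.random.default_rng(1)
X = rng.standard_normal((50_000, 4))             # pure Gaussian: beta ~ 0
w = np.array([1.0, 0.0, 0.0, 0.0])
b, s = beta_sigma_hat(X,
                      lambda x: np.tanh(x @ w),
                      lambda x: (1 - np.tanh(x @ w) ** 2)[:, None] * w)
print(np.linalg.norm(b), s)    # ||beta_hat|| small; sigma_hat of order one
```

The factor σ̂(h) returned here is exactly the data-driven normalization used in Comment 2 below the theorem.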
Comments. 1. The proof of the theorem relies on standard tools using Chernoff's bounding method and is omitted for space. In this theorem, the covariance matrix of X is assumed to be known and equal to identity, which is not a realistic assumption; in practice, we use a standard "whitening" procedure (see next section) using the empirical covariance matrix. Of course there is an additional error coming from this step, since the covariance matrix is also estimated empirically. In the extended version of the paper [3], we prove (under somewhat stronger assumptions) a bound for the entirely empirical procedure including whitening, resulting in an approximation error of the same order in n (up to a logarithmic factor). This result was omitted here due to space constraints.
2. Fixing δ, Theorem 1 implies that the vector β̂(h) obtained from any h(x) converges to the unknown non-Gaussian subspace I at a "parametric" rate of order 1/sqrt(n). Furthermore, the theorem gives us an estimation of the relative size of the estimation error for different functions h through the (computable from the data) factor σ̂(h) in the main term of the bound. This suggests using this quantity as a renormalizing factor so that the typical approximation error is (roughly) independent of the function h used. This normalization principle will be used in the main procedure.
3. Note the theorem results in an exponential deviation inequality (the dependence on the confidence level δ is logarithmic). As a consequence, using the union bound over a finite net, we can obtain as a corollary of the above theorem a uniform deviation bound of the same form over a (discretized) set of functions (where the log-cardinality of the set appears as an additional factor). For instance, if we consider a 1/n-discretization net of functions with d parameters, hence of size O(n^d), then the above bound holds uniformly when replacing the log δ^{−1} term by d log n + log δ^{−1}. This does not change fundamentally the bound (up to an additional complexity factor sqrt(d log(n))), and justifies that we consider simultaneously such a family of functions in the main algorithm.
[Figure 1: sketch of a family of functions h1, . . . , h5 mapped to vectors β̂1, . . . , β̂5 scattered around the target subspace I; see caption below.]
Figure 1: The NGCA main idea: from a varied family of real functions h, compute a family of vectors β̂ belonging to the target space up to small estimation error.
3 The NGCA algorithm
In the last section, we have established that given an arbitrary smooth real function h on R^d, we are able to construct a vector β̂(h) which belongs to the target space I up to a small estimation error. The main idea is now to consider a large family of such functions (h_k), giving rise to a family of vectors β̂_k (see Fig. 1). Theorem 1 ensures that the estimation
error remains controlled uniformly, and we can also normalize the vectors such that the
estimation error is of the same order for all vectors (see Comments 2 and 3 above). Under
this condition, it can be shown that vectors with a longer norm are more informative about
the target subspace, and that vectors with too small a norm are uninformative. We therefore
throw out the smaller vectors, then estimate the target space I by applying a principal
components analysis to the remaining vector family.
In the proposed algorithm we will restrict our attention to functions of the form h_{f,ω}(x) = f(⟨ω, x⟩), where ω ∈ R^d, ‖ω‖ = 1, and f belongs to a finite family F of smooth real functions of a real variable. Our theoretical setting allows us to ensure that the approximation error remains small uniformly over F and ω (rigorously, ω should be restricted to a finite ε-net of the unit sphere in order to consider a finite family of functions: in practice we will overlook this weak restriction). However, it is not feasible in practice to sample the whole parameter space for ω as soon as it has more than a few dimensions. To overcome this difficulty, we advocate using a well-known PP algorithm, FastICA [8], as a proxy to find good candidates for ω_f for a fixed f. Note that this does not make NGCA equivalent to FastICA: the important point is that FastICA, as a stand-alone procedure, requires fixing the "index function" f beforehand. The crucial novelty of our method is that we provide a theoretical setting and a methodology which allows us to combine the results of this projection pursuit method when used over a possibly large spectrum of arbitrary index functions f.
NGCA ALGORITHM.
Input: Data points (X_i) ∈ R^d; dimension m of the target subspace.
Parameters: Number Tmax of FastICA iterations; threshold ε; family of real functions (f_k).
Whitening.
  The data X_i are recentered by subtracting the empirical mean.
  Let Σ̂ denote the empirical covariance matrix of the data sample (X_i);
  put Ŷ_i = Σ̂^{−1/2} X_i, the empirically whitened data.
Main procedure.
  Loop on k = 1, . . . , L:
    Draw ω_0 at random on the unit sphere of R^d.
    Loop on t = 1, . . . , Tmax: [FastICA loop]
      Put β̂_t ← (1/n) Σ_{i=1}^n [ Ŷ_i f_k(⟨ω_{t−1}, Ŷ_i⟩) − f′_k(⟨ω_{t−1}, Ŷ_i⟩) ω_{t−1} ].
      Put ω_t ← β̂_t / ‖β̂_t‖.
    End loop on t.
    Let N_k be the trace of the empirical covariance matrix of β̂_{Tmax}:
      N_k = (1/n) Σ_{i=1}^n ‖ Ŷ_i f_k(⟨ω_{Tmax−1}, Ŷ_i⟩) − f′_k(⟨ω_{Tmax−1}, Ŷ_i⟩) ω_{Tmax−1} ‖² − ‖β̂_{Tmax}‖².
    Store v^(k) ← β̂_{Tmax} · sqrt(n/N_k). [Normalization]
  End loop on k.
Thresholding.
  From the family v^(k), throw away vectors having norm smaller than threshold ε.
PCA step.
  Perform PCA on the set of remaining v^(k).
  Let V_m be the space spanned by the first m principal directions.
Pull back in original space.
  Output: W_m = Σ̂^{−1/2} V_m.
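To make the pseudocode concrete, here is a compact NumPy sketch restricted to the tanh index family. The number of candidates L, the numerical guards, and the demo data are my own illustrative choices, not the authors' Matlab implementation:

```python
import numpy as np

def ngca(X, m, L=100, T_max=10, eps=1.5, rng=None):
    """Sketch of the NGCA pseudocode above (tanh index family only)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    Xc = X - X.mean(axis=0)                        # recenter
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    S_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = Xc @ S_isqrt                               # whitened data
    V = []
    for b in rng.uniform(0.5, 5.0, size=L):        # tanh(b z) candidates
        f = lambda z: np.tanh(b * z)
        fp = lambda z: b * (1.0 - np.tanh(b * z) ** 2)
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        for _ in range(T_max):                     # FastICA loop
            z = Y @ w
            H = Y * f(z)[:, None] - np.outer(fp(z), w)
            beta = H.mean(axis=0)                  # beta_hat_t
            nb = np.linalg.norm(beta)
            if nb < 1e-12:
                break
            w = beta / nb
        # Trace of the empirical covariance of beta_hat, then normalize
        N = max((H ** 2).sum(axis=1).mean() - beta @ beta, 1e-12)
        V.append(beta * np.sqrt(n / N))
    V = np.array(V)
    norms = np.linalg.norm(V, axis=1)
    kept = V[norms > eps] if (norms > eps).any() else V   # thresholding
    _, _, Vt = np.linalg.svd(kept, full_matrices=False)   # PCA step
    return S_isqrt @ Vt[:m].T      # pull back: W_m = Sigma_hat^{-1/2} V_m

# Demo: one strongly bimodal coordinate hidden among 5 Gaussian ones
rng = np.random.default_rng(42)
n = 2000
s = rng.choice([-3.0, 3.0], size=n) + 0.5 * rng.standard_normal(n)
X = np.column_stack([s, rng.standard_normal((n, 5))])
W = ngca(X, m=1, rng=rng)
u = W[:, 0] / np.linalg.norm(W[:, 0])
print(abs(u[0]))   # close to 1: the non-Gaussian direction e1 is recovered
```

Note that, by design, uninformative candidates have v^(k) of norm roughly 1 after the sqrt(n/N) normalization, which is why a threshold like ε = 1.5 can separate them from informative ones.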
Summing up, the NGCA algorithm finally consists of the following steps (see above pseudocode): (1) Data whitening (see Comment 1 in the previous section); (2) Apply FastICA to each function f ∈ F to find a promising candidate value for ω_f; (3) Compute the corresponding family of vectors (β̂(h_{f,ω_f}))_{f∈F} (using Eq. (4)); (4) Normalize the vectors appropriately; threshold and throw out uninformative ones; (5) Apply PCA; (6) Pull back in original space (de-whitening). In the implementation tested, we have used the following forms of the functions f_k: f^{(1)}_σ(z) = z³ exp(−z²/2σ²) (Gauss-Pow3), f^{(2)}_b(z) = tanh(bz) (Hyperbolic Tangent), f^{(3)}_a(z) = {sin, cos}(az) (Fourier). More precisely, we consider discretized ranges for a ∈ [0, A], b ∈ [0, B], σ ∈ [σ_min, σ_max]; this gives rise to a finite family (f_k) (which includes simultaneously functions of the three different above families).
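The three index families, together with the derivatives f′ needed by the FastICA-style update, can be written down directly. A small sketch (the helper names are mine):

```python
import numpy as np

def gauss_pow3(sigma):
    """f(z) = z^3 exp(-z^2 / (2 sigma^2)) and its derivative."""
    f = lambda z: z ** 3 * np.exp(-z ** 2 / (2 * sigma ** 2))
    fp = lambda z: (3 * z ** 2 - z ** 4 / sigma ** 2) * np.exp(-z ** 2 / (2 * sigma ** 2))
    return f, fp

def hyperbolic_tangent(b):
    """f(z) = tanh(b z) and its derivative."""
    return (lambda z: np.tanh(b * z),
            lambda z: b * (1.0 - np.tanh(b * z) ** 2))

def fourier(a):
    """sin(a z) and cos(a z) count as two separate index functions."""
    return [(lambda z: np.sin(a * z), lambda z: a * np.cos(a * z)),
            (lambda z: np.cos(a * z), lambda z: -a * np.sin(a * z))]

# Finite-difference check of the derivatives at z = 0.7
for f, fp in [gauss_pow3(1.2), hyperbolic_tangent(3.0)] + fourier(2.5):
    num = (f(0.7 + 1e-6) - f(0.7 - 1e-6)) / 2e-6
    print(abs(num - fp(0.7)) < 1e-5)
```

Discretizing the parameters a, b, σ over their ranges then yields the finite family (f_k) described above.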
4 Numerical results
Parameters used. All the experiments presented were obtained with exactly the same set of parameters: a ∈ [0, 4] for the Fourier functions; b ∈ [0, 5] for the Hyperbolic Tangent functions; σ² ∈ [0.5, 5] for the Gauss-Pow3 functions. Each of these ranges was divided
[Figure 2: four box-plot panels (A)-(D), each comparing PP(pow3), PP(tanh) and NGCA; see caption below.]
Figure 2: Boxplots of the error criterion E(Î, I) over 100 training samples of size 1000.
[Figure 3: scatter panels (A)-(D) plotting, sample by sample, the NGCA error (x-axis) against the PP(pow3) error (top row) and the PP(tanh) error (bottom row); see caption below.]
Figure 3: Sample-wise performance comparison plots (for error criterion E(Î, I)) of NGCA versus FastICA; top: versus pow3 index; bottom: versus tanh index. Each point represents a different sample of size 1000. In (C)-top, about 25% of the points corresponding to a failure of FastICA fall outside of the range and were not represented.
into 1000 equispaced values, thus yielding a family (f_k) of size 4000 (Fourier functions count twice because of the sine and cosine parts). Some preliminary calibration suggested taking ε = 1.5 as the threshold under which vectors are not informative. Finally we fixed the number of FastICA iterations Tmax = 10. With this choice of parameters, with 1000 points of data the computation time is typically of the order of 10 seconds on a modern PC under a Matlab implementation.
Tests in a controlled setting. We performed numerical experiments using various synthetic data. We report exemplary results using 4 data sets. Each data set includes 1000 samples in 10 dimensions, and consists of 8-dimensional independent standard Gaussian and 2 non-Gaussian components as follows:
(A) Simple Gaussian Mixture: 2-dimensional independent bimodal Gaussian mixtures;
(B) Dependent super-Gaussian: 2-dimensional density is proportional to exp(−‖x‖);
(C) Dependent sub-Gaussian: 2-dimensional uniform on the unit circle;
(D) Dependent super- and sub-Gaussian: 1-dimensional Laplacian with density proportional to exp(−|x_Lap|) and 1-dimensional dependent uniform U(c, c + 1), where c = 0 for |x_Lap| ≤ log 2 and c = −1 otherwise.
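Generators for the four synthetic data sets can be sketched as follows. Where the text leaves details open (the mixture means/widths in (A), the radial sampling in (B)), the choices below are my assumptions:

```python
import numpy as np

def make_dataset(kind, n=1000, d=10, rng=None):
    """Synthetic data (A)-(D): 2 non-Gaussian + (d-2) standard Gaussian dims."""
    rng = rng or np.random.default_rng(0)
    g = rng.standard_normal((n, d - 2))
    if kind == "A":    # independent bimodal Gaussian mixtures (means assumed)
        s = rng.choice([-3.0, 3.0], size=(n, 2)) + rng.standard_normal((n, 2))
    elif kind == "B":  # 2D density proportional to exp(-||x||)
        r = rng.gamma(shape=2.0, scale=1.0, size=n)   # p(r) ~ r exp(-r) in 2D
        th = rng.uniform(0.0, 2 * np.pi, size=n)
        s = np.column_stack([r * np.cos(th), r * np.sin(th)])
    elif kind == "C":  # uniform on the unit circle
        th = rng.uniform(0.0, 2 * np.pi, size=n)
        s = np.column_stack([np.cos(th), np.sin(th)])
    elif kind == "D":  # Laplacian plus dependent uniform U(c, c + 1)
        lap = rng.laplace(size=n)
        c = np.where(np.abs(lap) <= np.log(2), 0.0, -1.0)
        s = np.column_stack([lap, c + rng.uniform(size=n)])
    return np.hstack([s, g])
```

In (B), the radial law Gamma(2, 1) follows from the 2D change to polar coordinates, since the area element contributes a factor r to the radial density.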
We compare the NGCA method against stand-alone FastICA with two different index functions. Figure 2 shows boxplots and Figure 3 sample-wise comparison plots, over 100 samples, of the error criterion E(Î, I) = m^{−1} Σ_{i=1}^m ‖(I_d − P_I) v̂_i‖², where {v̂_i}_{i=1}^m is an
Figure 4: 2D projection of the "oil flow" (12-dimensional) data obtained by different algorithms, from left to right: PCA, Isomap, FastICA (tanh index), NGCA. In each case, the data was first projected in 3D using the respective methods, from which a 2D projection was chosen visually so as to yield the clearest cluster structure. Available label information was not used to determine the projections.
orthonormal basis of Î, I_d is the identity matrix, and P_I denotes the orthogonal projection on I. In datasets (A),(B),(C), NGCA appears to be on par with the best FastICA method.
As expected, the best index for FastICA is data-dependent: the "tanh" index is more suited to the super-Gaussian data (B), while the "pow3" index works best with the sub-Gaussian data (C) (although, in this case, FastICA with this index has a tendency to get caught in local minima, leading to a disastrous result for about 25% of the samples; note that NGCA does not suffer from this problem). Finally, the advantage of the implicit index adaptation feature of NGCA can be clearly observed in the data set (D), which includes both sub- and super-Gaussian components. In this case, neither of the two FastICA index functions taken alone does well, and NGCA gives significantly lower error than either FastICA flavor.
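The error criterion E(Î, I) used in these comparisons is straightforward to implement; a minimal sketch (the function name is mine):

```python
import numpy as np

def subspace_error(V_hat, V_true):
    """E(I_hat, I) = m^{-1} sum_i ||(I_d - P_I) v_hat_i||^2.

    Columns of V_hat: an orthonormal basis of the estimated subspace;
    columns of V_true: any basis of the true subspace I."""
    d, m = V_hat.shape
    Q, _ = np.linalg.qr(V_true)      # orthonormal basis of I
    P = Q @ Q.T                      # orthogonal projection onto I
    R = (np.eye(d) - P) @ V_hat      # residuals of each basis vector
    return (R ** 2).sum() / m

e1 = np.eye(4)[:, :2]
print(subspace_error(e1, e1))        # identical subspaces give error 0
```

The criterion is 0 when the subspaces coincide and 1 when they are orthogonal, so it is directly comparable across the panels of Figures 2 and 3.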
Example of application for realistic data: visualization and clustering. We now give an example of application of NGCA to visualization and clustering of realistic data. We consider here "oil flow" data, which has been obtained by numerical simulation of a complex physical model. This data was already used before for testing techniques of dimension
reduction [2]. The data is 12-dimensional and our goal is to visualize the data, and possibly
exhibit a clustered structure. We compared results obtained with the NGCA methodology,
regular PCA, FastICA with tanh index and Isomap. The results are shown on Figure 4. A
3D projection of the data was first computed using these methods, which was in turn projected in 2D to draw the figure; this last projection was chosen manually so as to make the
cluster structure as visible as possible in each case. The NGCA result appears better, with a clearer clustered structure. This structure is only partly visible in the Isomap result; the NGCA method additionally has the advantage of a clear geometrical interpretation
(linear orthogonal projection). Finally, datapoints in this dataset are distributed in 3 classes.
This information was not used in the different procedures, but we can see a posteriori that
only NGCA clearly separates the classes into distinct clusters. Clustering applications on other benchmark datasets are presented in the extended paper [3].
5 Conclusion
We proposed a new semi-parametric framework for constructing a linear projection that separates an uninteresting multivariate Gaussian "noise" subspace, possibly of large amplitude, from the "signal-of-interest" subspace. We provide generic consistency results on how well the non-Gaussian directions can be identified (Theorem 1). Once the low-dimensional "signal" part is extracted, we can use it for a variety of applications such as data visualization, clustering, denoising or classification.
Numerically we found comparable or superior performance to, e.g., FastICA in deflation
mode as a generic representative of the family of PP algorithms. Note that in general,
PP methods need to pre-specify a projection index with which they search non-Gaussian
components. By contrast, an important advantage of our method is that we are able to
simultaneously use several families of nonlinear functions; moreover, also inside a same
function family we are able to use an entire range of parameters (such as frequency for
Fourier functions). Thus, NGCA provides higher flexibility and makes fewer restrictive a priori assumptions on the data. In a sense, the functional indices that are the most relevant for the data at hand are automatically selected.
Future research will adapt the theory to simultaneously estimate the dimension of the non-Gaussian subspace. Extending the proposed framework to non-linear projection scenarios [4, 11, 10, 12, 1, 6] and to finding the most discriminative directions using labels are examples for which the current theory could be taken as a basis.
Acknowledgements: This work was supported in part by the PASCAL Network of Excellence (EU # 506778).
Proof of Proposition 1. Put γ = E[X h(X)] and ψ(x) = h(x) − γ^T x. Note that ∇ψ = ∇h − γ, hence β(h) = E[∇ψ(X)]. Furthermore, it holds by change of variable that

∫ ψ(x + u) p(x) dx = ∫ ψ(x) p(x − u) dx.

Under mild regularity conditions on p(x) and h(x), differentiating this with respect to u gives

E[∇ψ(X)] = ∫ ∇ψ(x) p(x) dx = − ∫ ψ(x) ∇p(x) dx = −E[ψ(X) ∇log p(X)],

where we have used ∇p(x) = ∇log p(x) · p(x). Eq.(1) now implies ∇log p(x) = ∇log g(Tx) − Γ^{−1} x, hence

β(h) = −E[ψ(X) ∇log g(TX)] + E[ψ(X) Γ^{−1} X]
     = −T^T E[ψ(X) (∇g)(TX)/g(TX)] + Γ^{−1} ( E[X h(X)] − E[XX^T] E[X h(X)] ).

The last term above vanishes because we assumed E[XX^T] = I_d. The first term belongs to I by definition. This concludes the proof. □
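The key integration-by-parts identity in the proof, E[∇ψ(X)] = −E[ψ(X) ∇log p(X)], can be checked numerically. For a standard Gaussian p we have ∇log p(x) = −x, so the identity reduces to Stein's identity E[ψ′(X)] = E[X ψ(X)]; the 1D test function below is an illustrative choice of mine:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(1_000_000)
lhs = (1.0 - np.tanh(x) ** 2).mean()   # E[psi'(X)] with psi = tanh
rhs = (x * np.tanh(x)).mean()          # -E[psi(X) d/dx log p(X)] = E[X psi(X)]
print(abs(lhs - rhs))                  # of order 1/sqrt(n)
```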
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] C.M. Bishop, M. Svensén and C.K.I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998.
[3] G. Blanchard, M. Sugiyama, M. Kawanabe, V. Spokoiny and K.-R. Müller. In search of non-Gaussian components of a high-dimensional distribution. Technical report of the Weierstrass Institute for Applied Analysis and Stochastics, 2006.
[4] T.F. Cox and M.A.A. Cox. Multidimensional Scaling. Chapman & Hall, London, 2001.
[5] J.H. Friedman and J.W. Tukey. A projection pursuit algorithm for exploratory data analysis. IEEE Transactions on Computers, 23(9):881–890, 1975.
[6] S. Harmeling, A. Ziehe, M. Kawanabe and K.-R. Müller. Kernel-based nonlinear blind source separation. Neural Computation, 15(5):1089–1124, 2003.
[7] P.J. Huber. Projection pursuit. The Annals of Statistics, 13:435–475, 1985.
[8] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[9] A. Hyvärinen, J. Karhunen and E. Oja. Independent Component Analysis. Wiley, 2001.
[10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[11] B. Schölkopf, A.J. Smola and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
[12] J.B. Tenenbaum, V. de Silva and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
Modeling Neuronal Interactivity using Dynamic
Bayesian Networks
Lei Zhang†‡, Dimitris Samaras†, Nelly Alia-Klein‡, Nora Volkow‡, Rita Goldstein‡
† Computer Science Department, SUNY at Stony Brook, Stony Brook, NY
‡ Medical Department, Brookhaven National Laboratory, Upton, NY
Abstract
Functional Magnetic Resonance Imaging (fMRI) has enabled scientists
to look into the active brain. However, interactivity between functional
brain regions is still little studied. In this paper, we contribute a novel
framework for modeling the interactions between multiple active brain
regions, using Dynamic Bayesian Networks (DBNs) as generative models for brain activation patterns. This framework is applied to modeling
of neuronal circuits associated with reward. The novelty of our framework from a Machine Learning perspective lies in the use of DBNs to
reveal the brain connectivity and interactivity. Such interactivity models which are derived from fMRI data are then validated through a
group classification task. We employ and compare four different types
of DBNs: Parallel Hidden Markov Models, Coupled Hidden Markov
Models, Fully-linked Hidden Markov Models and Dynamically Multi-Linked HMMs (DML-HMM). Moreover, we propose and compare two
schemes of learning DML-HMMs. Experimental results show that by
using DBNs, group classification can be performed even if the DBNs are
constructed from as few as 5 brain regions. We also demonstrate that, by
using the proposed learning algorithms, different DBN structures characterize drug addicted subjects vs. control subjects. This finding provides
an independent test for the effect of psychopathology on brain function.
In general, we demonstrate that incorporation of computer science principles into functional neuroimaging clinical studies provides a novel approach for probing human brain function.
1. Introduction
Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the
active human brain [1] by providing sequences of 3D brain images with intensities representing blood oxygenation level dependent (BOLD) regional activations. This has revealed
exciting insights into the spatial and temporal changes underlying a broad range of brain
functions, such as how we see, feel, move, understand each other and lay down memories. This fMRI technology offers further promise by imaging the dynamic aspects of the
functioning human brain. Indeed, fMRI has encouraged a growing interest in revealing
brain connectivity and interactivity within the neuroscience community. It is for example understood that a dynamically managed goal directed behavior requires neural control mechanisms orchestrated to select the appropriate and task-relevant responses while
inhibiting irrelevant or inappropriate processes [12]. To date, the analyses and interpretation of fMRI data that are most commonly employed by neuroscientists depend on the
cognitive-behavioral probes that are developed to tap regional brain function. Thus, brain
responses are a-priori labeled based on the putative underlying task condition and are then
used to separate a priori defined groups of subjects. In recent computer science research
[18][13][3][19], machine learning methods have been applied for fMRI data analysis. However, in these approaches information on the connectivity and interactivity between brain
voxels is discarded and brain voxels are assumed to be independent, which is an inaccurate
assumption (see the use of statistical maps [3][19] or the mean of each fMRI time interval [13]).
In this paper, we exploit Dynamic Bayesian Networks for modeling dynamic (i.e., connecting and interacting) neuronal circuits from fMRI sequences. We suggest that through
incorporation of graphical models into functional neuroimaging studies we will be able
to identify neuronal patterns of connectivity and interactivity that will provide invaluable
insights into basic emotional and cognitive neuroscience constructs. We further propose
that this interscientific incorporation may provide a valid tool where objective brain imaging data are used for the clinical purpose of diagnosis of psychopathology. Specifically, in
our case study we will model neuronal circuits associated with reward processing in drug
addiction. We have previously shown loss of sensitivity to the relative value of money in
cocaine users [9]. It has also been previously highlighted that the complex mechanism of
drug addiction requires the connectivity and interactivity between regions comprising the
mesocorticolimbic circuit [12][8]. However, although advancements have been made in
studying this circuit?s role in inhibitory control and reward processing, inference about the
connectivity and interactivity of these regions is at best indirect. Dynamical causal models
have been compared in [16]. Compared with dynamic causal models, DBNs admit a class
of nonlinear continuous-time interactions among the hidden states and model both causal
relationships between brain regions and temporal correlations among multiple processes,
useful for both classification and prediction purposes.
Probabilistic graphical models [14][11] are graphs in which nodes represent random variables, and the (lack of) arcs represent conditional independence assumptions. In our case,
interconnected brain regions can be considered as nodes of a probabilistic graphical model
and interactivity relationships between regions are modeled by probability values on the
arcs (or the lack of) between these nodes. However, the major challenge in such a machine learning approach is the choice of a particular structure that models connectivity
and interactivity between brain regions in an accurate and efficient manner. In this work,
we contribute a framework of exploiting Dynamic Bayesian Networks to model such a
structure for the fMRI data. More specifically, instead of modeling each brain region in
isolation, we aim to model the interactive pattern of multiple brain regions. Furthermore,
the revealed functional information is validated through a group classification case study:
separating drug addicted subjects from healthy non-drug-using controls based on trained
Dynamic Bayesian Networks. Both conventional BBNs and HMMs are unsuitable for
modeling activities underpinned not only by causal but also by clear temporal correlations
among multiple processes [10], and Dynamic Bayesian Networks [5][7] are required. Since
the state of each brain region is not known (only observations of activation exist), it can be
thought of as a hidden variable[15]. An intuitive way to construct a DBN is to extend a
standard HMM to a set of interconnected multiple HMMs. For example, Vogler et al. [17]
proposed Parallel Hidden Markov Models (PaHMMs) that factorize state space into multiple independent temporal processes without causal connections in-between. Brand et al.
[2] exploited Coupled Hidden Markov Models (CHMMs) for complex action recognition.
Gong et al. [10] developed a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) for the recognition of group activities involving multiple different object events in
a noisy outdoor scene. The DML-HMM is the only one of these models that learns both the
structure and the parameters of the graphical model, instead of presuming a (possibly
inaccurate) structure given the lack of knowledge of human brain connectivity. In order to model
the dynamic neuronal circuits underlying reward processing in the human brains, we explore and compare the above DBNs. We propose and compare two learning schemes of
DML-HMMs, one is greedy structure search (Hill-Climbing) and the other is Structural
Expectation-Maximization (SEM).
To our knowledge, this is the first time that Dynamic Bayesian Networks are exploited in
modeling the connectivity and interactivity among brain regions activated during a fMRI
study. Our current experimental classification results show that by using DBNs, group
classification can be performed even if the DBNs are constructed from as few as 5 brain
regions. We also demonstrate that, by using the proposed learning algorithms, different
DBN structures characterize drug addicted subjects vs. control subjects which provides
an independent test for the effects of psychopathology on brain function. From the machine learning point of view, this paper provides an innovative application of Dynamic
Bayesian Networks in modeling dynamic neuronal circuits. Furthermore, since the structures to be explored are exclusively represented by hidden (cannot be observed directly)
states and their interconnecting arcs, the structure learning of DML-HMMs poses a greater
challenge than other DBNs [5]. From the neuroscientific point of view, drug addiction is a
complex disorder characterized by compromised inhibitory control and reward processing.
However, individuals with compromised mechanisms of control and reward are difficult to
identify unless they are directly subjected to challenging conditions. Modeling the interactive brain patterns is therefore essential since such patterns may be unique to a certain
psychopathology and could hence be used for improving diagnosis and prevention efforts
(e.g., diagnosis of drug addiction, prevention of relapse or craving). In addition, this framework can be extended to further our understanding of other human
disorders and states such as those impacting insight and awareness, that similarly to drug
addiction are currently identified based mostly on subjective criteria and self-report.
Figure 1: Four types of Dynamic Bayesian Networks: PaHMM, CHMM, FHMM and
DML-HMM.
2. Dynamic Bayesian Networks
In this section, we will briefly describe the general framework of Dynamic Bayesian Networks. DBNs are Bayesian Belief Networks that have been extended to model the stochastic evolution of a set of random variables over time [5][7]. As described in [10], a DBN
B can be represented by two sets of parameters (m, θ), where the first set m represents
the structure of the DBN including the number of hidden state variables S and observation
variables O per time instance, the number of states for each hidden state variable and the
topology of the network (set of directed arcs connecting the nodes). More specifically, the
ith hidden state variable and the jth observation variable at time instance t are denoted as
S_t^(i) and O_t^(j) with i ∈ {1, ..., Nh} and j ∈ {1, ..., No}, where Nh and No are the number
of hidden state variables and observation variables respectively. The second set of parameters θ
includes the state transition matrix A, the observation matrix B and a matrix π modeling the
initial state distribution P(S_1^(i)). More specifically, A and B quantify the transition models
P(S_t^(i) | Pa(S_t^(i))) and observation models P(O_t^(j) | Pa(O_t^(j))) respectively, where
Pa(S_t^(i)) are the parents of S_t^(i) (similarly Pa(O_t^(j)) for observations). In this paper, we will
examine four types of DBNs: Parallel Hidden Markov Models (PaHMM) [17], Coupled Hidden
Markov Models (CHMM) [2], Fully Connected Hidden Markov Models (FHMM) and
Dynamically Multi-Linked Hidden Markov Models (DML-HMM) [10], as shown in Fig. 1
where observation nodes are shown as shaded circles, hidden nodes as clear circles and
the causal relationships among hidden state variables are represented by the arcs between
hidden nodes. Notice that the first three DBNs are essentially three special cases of the
DML-HMM.
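To make the structural differences concrete, the inter-slice topology of the hidden chains can be encoded as an adjacency pattern, with entry (i, j) = 1 meaning S_{t-1}^(i) is a parent of S_t^(j). The sketch below is our illustration, not the authors' code (helper names are made up); it shows the two extreme patterns and the resulting count of candidate DML-HMM structures:

```python
def pahmm_adjacency(n):
    # PaHMM: n independent HMM chains, so the only inter-slice arcs are
    # self-transitions S_{t-1}^(i) -> S_t^(i) (identity pattern).
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def fhmm_adjacency(n):
    # Fully connected HMM: every chain's previous hidden state is a
    # parent of every chain's current hidden state.
    return [[1] * n for _ in range(n)]

def num_dml_structures(n):
    # A DML-HMM may keep or drop each of the n^2 - n cross-chain arcs,
    # which gives the 2^(N^2 - N) candidate structures cited in Sec. 2.1.
    return 2 ** (n * n - n)

print(num_dml_structures(5))  # 1048576 candidate structures for 5 regions
```

PaHMM and FHMM pin the pattern down completely, and CHMM couples neighboring chains; a DML-HMM may learn any intermediate pattern, which is what makes the structure search space so large.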
2.1. Learning of DBNs
Given the form of DBNs in the previous sections, there are two learning problems that must
be solved for real-world applications: 1) Parameter Learning: assuming fixed structure,
given the training sequences of observations O, how we adjust the model parameters B =
(m, ?) to maximize P (O|B); 2) Structure Learning: for DBNs with unknown structure
(i.e. DML-HMMs), how we learn the structure from the observation O. Parameter learning
has been well studied in [17][2]. Given fixed structure, parameters can be learned iteratively
using Expectation-Maximization (EM). The E step, which involves the inference of hidden
states given parameters, can be implemented using an exact inference algorithm such as
the junction tree algorithm. Then the parameters and the maximal likelihood L(θ) can be
computed iteratively from the M step.
In [10], the DML-HMM was selected from a set of candidate structures; however, the selection of candidate structures is non-trivial for most applications, including brain region
connectivity. For a DML-HMM with N hidden nodes, the total number of different structures is 2^(N^2 - N) (already more than 10^6 structures for N = 5), so it is impossible to conduct an exhaustive search in most cases. The
learning of DBNs involving both parameter learning and structure learning has been discussed in [5], where the scoring rules for standard probabilistic networks were extended
to the dynamic case and the Structural EM (SEM) algorithm was developed for structure
learning when some of the variables are hidden. The structure learning of DML-HMMs
is more challenging since the structures to be explored are exclusively represented by the
hidden states and none of them can be directly observed. In the following, we will explain
two learning schemes for the DML-HMMs. One standard way is to perform parametric
EM within an outer-loop structural search. Thus, our first scheme is to use an outer-loop
of the Hill-Climbing algorithm (DML-HMM-HC). For each step of the algorithm, from
the current DBN, we first compute a neighbor list by adding, deleting, or reversing one
arc. Then we perform parameter learning for each of the neighbors and move to the neighbor
with the minimum score, stopping when no neighbor has a lower score than the current
DBN (lower BIC is better). Our second learning scheme is similar to the Structural EM algorithm [5] in the
sense that the structural and parametric modification are performed within a single EM
process. As described in [5][4], a structural search can be performed efficiently given complete observation data. However, as we described above, the structure of a DML-HMM is
represented by hidden states, which cannot be observed directly. Hence, we develop
the DML-HMM-SEM algorithm as follows: given the current structure, we first perform
a parameter learning and then, for each training data, we compute the Most Probable Explanation (MPE), which computes the most likely value for each hidden node (similar to
Viterbi in standard HMM). The MPE thus provides a complete estimation of the hidden
states and a complete-data structural search [4] is then performed to find the best structure.
We perform learning iteratively until the structure converges. In this scheme, the structural
search is performed in the inner loop, thus making the learning more efficient. Pseudocode for both learning schemes is given in Table 1. In this paper, we use Schwarz's
Bayesian Information Criterion (BIC), BIC = -2 log L(θ_B) + K_B log N, as our score
function, where for a DBN B, L(θ_B) is the maximal likelihood under B, K_B is the dimension of the parameters of B and N is the size of the training data. Theoretically, the
DML-HMM-SEM algorithm is not guaranteed to converge since, for the same training data,
the most probable explanations (S_i, S_j) of two DML-HMMs B_i, B_j might be different. In
the worst case, oscillation between two structures is possible. To guarantee halting of the
algorithm, a loop detector can be added so that, once any structure is selected a second
time, we stop the learning and select the minimum-score structure visited during
the searching. However, in our experiments, the learning algorithm always converged in a
few steps.
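For concreteness, the BIC score used here can be computed as below. This is a generic sketch (in the actual algorithms the log-likelihood comes from the EM parameter-learning step), and all numbers are made up:

```python
import math

def bic_score(log_likelihood, num_params, num_samples):
    """Schwarz's BIC as used above: -2 log L(theta_B) + K_B log N.
    Lower scores are better, which is why the searches keep the
    minimum-score structure."""
    return -2.0 * log_likelihood + num_params * math.log(num_samples)

# A structure with slightly higher likelihood but many more parameters
# can still lose to a simpler one (made-up numbers, N = 152 sequences):
simple = bic_score(log_likelihood=-1200.0, num_params=20, num_samples=152)
complex_ = bic_score(log_likelihood=-1190.0, num_params=60, num_samples=152)
print(simple < complex_)  # the simpler structure wins here
```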
Procedure DML-HMM-HC:
  Initial_Model(B_0);
  Loop i = 0, 1, ... until convergence:
    [B'_i, score0_i] = Learn_Parameter(B_i);
    B^{1..J}_i = Generate_Neighbors(B'_i);
    for j = 1..J:
      [B^{j'}_i, score^j_i] = Learn_Parameter(B^j_i);
    j = Find_Minscore(score^{1..J}_i);
    if (score^j_i > score0_i):
      return B'_i;
    else:
      B_{i+1} = B^j_i;

Procedure DML-HMM-SEM:
  Initial_Model(B_0);
  Loop i = 0, 1, ... until convergence:
    [B'_i, score0_i] = Learn_Parameter(B_i);
    S = Most_Prob_Expl(B'_i, O);
    B^max_i = Find_Best_Struct(S);
    if B^max_i == B'_i:
      return B'_i;
    else:
      B_{i+1} = B^max_i;

Table 1: Two schemes of learning DML-HMMs: the DML-HMM-HC scheme (top) and the DML-HMM-SEM scheme (bottom).
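The Generate_Neighbors step of DML-HMM-HC can be sketched as follows. This is our illustrative rendering, not the authors' implementation; any validity constraints on candidate structures would be enforced by the caller.

```python
from itertools import permutations

def generate_neighbors(arcs, nodes):
    """Candidate structures one edit away from `arcs`, a set of
    directed (u, v) pairs over `nodes`: delete, reverse, or add a
    single arc (no self-loops). Mirrors the Generate_Neighbors step."""
    neighbors = []
    for arc in arcs:                      # delete one arc
        neighbors.append(arcs - {arc})
    for (u, v) in arcs:                   # reverse one arc
        if (v, u) not in arcs:
            neighbors.append((arcs - {(u, v)}) | {(v, u)})
    for u, v in permutations(nodes, 2):   # add one absent arc
        if (u, v) not in arcs:
            neighbors.append(arcs | {(u, v)})
    return neighbors

current = {("PFC", "Midbrain")}
nbrs = generate_neighbors(current, ["PFC", "Midbrain", "Thalamus"])
print(len(nbrs))  # 7 candidate neighbors
```

Each neighbor would then be scored by parameter learning plus BIC, with the search moving to the minimum-score neighbor.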
3. Modeling Reward Neuronal Circuits: A Case Study
In this section, we will describe our case study of modeling Reward Neuronal Circuits:
by using DBNs, we aim to model the interactive pattern of multiple brain regions for the
neuropsychological problem of sensitivity to the relative value of money. Furthermore, we
will examine the revealed functional information encapsulated in the trained DBNs through
a group classification study: separating drug addicted subjects from healthy non-drug-using
controls based on trained DBNs.
3.1. Data Collection and Preprocess
In our experiments, data were collected to study the neuropsychological problem of loss of
sensitivity to the relative value of money in cocaine users[9]. MRI studies were performed
on a 4T Varian scanner and all stimuli were presented using LCD-goggles connected to
a PC. Human participants pressed a button or refrained from pressing based on a picture
shown to them. They received a monetary reward if they performed correctly. Specifically, three runs were repeated twice (T1, T2, T3; and T1R, T2R, T3R) and in each run,
there were three monetary conditions (high money, low money, no money) and a baseline
condition; the order of monetary conditions was pseudo-randomized and identical for all
participants. Participants were informed about the monetary condition by a 3-sec instruction slide, presenting the stimuli: $0.45, $0.01 or $0.00. Feedback for correct responses in
each condition consisted of the respective numeral designating the amount of money the
subject has earned if correct or the symbol (X) otherwise. To simulate real-life motivational
salience, subjects could gain up to $50 depending on their performance on this task. Sixteen cocaine-dependent individuals, 18-55 years of age, in good health, were matched with 12
non-drug-using controls on sex, race, education and general intellectual functioning. Statistical Parametric Mapping (SPM)[6] was used for fMRI data preprocessing (realignment,
normalization/registration and smoothing) and statistical analyses.
3.2. Feature Selection and Neuronal Circuit Modeling
The fMRI data are extremely high dimensional (i.e. 53 ? 63 ? 46 voxels per scan). Prior
to training the DBN, we selected 5 brain regions: Left Inferior Frontal Gyrus (Left IFG),
Prefrontal Cortex (PFC, including lateral and medial dorsolateral PFC and the anterior cingulate), Midbrain (including substantia nigra), Thalamus and Cerebellum. These regions
were selected based on prior SPM random-effects analyses (ANOVA), where the
goal was to differentiate effect of money (high, low, no) from the effect of group (cocaine,
Figure 2: Learning processes and learned structures from two algorithms. The leftmost
column demonstrates two (superimposed) learned structures where light gray dashed arcs
(long dash) are learned from DML-HMM-HC, dark gray dashed arcs (short dash) from
DML-HMM-SEM and black solid arcs from both. The right columns shows the transient
structures of the learning processes of two algorithms where black represents existence of
arc and white represents no arc.
control) on all regions that were activated to monetary reward in all subjects. In all these
five regions, the monetary main effect was significant as evidenced by region of interest
follow-up analyses. Of note is the fact that these five regions are part of the mesocorticolimbic reward circuit, previously implicated in addiction. Each of the above brain regions
is represented by a k-D feature vector, where k is the number of brain voxels selected in
this brain region (i.e. k = 3 for Left IFG and k = 8 for PFC). After feature selection, a
DML-HMM with 5 hidden nodes can be learned as described in Sec. 2 from the training
data. The leftmost image in Fig. 2 shows two superimposed possible structures of such
DML-HMMs. The causal relationships discovered among different brain regions are embodied in the topology of the DML-HMM. Each of the five hidden variables has two states
(activated or not) and each continuous observation variable (given by a k-D feature vector) represents the observed activation of its brain region. The probability density
function (PDF) of each observation variable is a mixture of Gaussians conditioned on the
state of its discrete parent node.
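A minimal one-dimensional sketch of such a state-conditioned mixture-of-Gaussians observation density (in the paper each observation is a k-D feature vector, so this is a simplification, and all parameter values are invented):

```python
import math

def gaussian_pdf(x, mean, var):
    # Univariate normal density N(x; mean, var)
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def obs_likelihood(x, weights, means, variances):
    """p(O_t = x | S_t = s): a mixture of Gaussians whose parameters
    (weights, means, variances) are selected by the discrete parent
    state s -- here they are simply passed in directly."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

# Hypothetical densities of an observation under "activated" vs not:
p_active = obs_likelihood(1.2, [0.7, 0.3], [1.0, 2.0], [0.25, 1.0])
p_inactive = obs_likelihood(1.2, [1.0], [0.0], [0.25])
print(p_active > p_inactive)  # the "activated" model explains x = 1.2 better
```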
Figure 3: Left three images shows the structures learned from the 3 subsets of Group C
and the right three images shows those learned from subsets of Group S. Figure shows that
some arcs consistently appeared in Group C but not consistently in Group S (marked in
dark gray) and vice versa (marked in light gray), which implies such group differences in
the interactive brain patterns may correspond to the loss of sensitivity to the relative value
of money in cocaine users.
4. Experiments and Results
We collected fMRI data of 16 drug addicted subjects and 12 control subjects, 6 runs per
participant. Due to head motion, some data could not be used. In our experiments, we
used a total of 152 fMRI sequences (87 scans per sequence) with 86 sequences for the drug
addicted subjects (Group S) and 66 for control subjects (Group C).
First we compare the two learning schemes for DML-HMMs proposed in Sec. 2. Fig. 2
demonstrates the learning process (initialized with the FHMM) for drug addicted subjects.
The leftmost column shows two learned structures where red arcs are learned from DML-HMM-HC, green arcs from DML-HMM-SEM and black arcs from both. The right columns
show the learning processes of DML-HMM-SEM (top) and DML-HMM-HC (bottom) with
black representing existence of an arc and white representing no arc. Since in DML-HMM-SEM, structure learning is in the inner loop, the learning process is much faster than that of
DML-HMM-HC. We also compared the BIC scores of the learned structures and we found
DML-HMM-SEM selected better structures than DML-HMM-HC.
It is also very interesting to examine the structure learning processes by using different
training data. For each participant group, we randomly separated the data set into three
subsets and trained DBNs are reported in Fig. 3 where the left three images show the structures learned from the 3 subsets of Group C and the right three images show those learned
from subsets of Group S. In Fig. 3, we found the learned structures of each group are similar. We also found that some arcs consistently appeared in Group C but not consistently
in Group S (marked in red) and vice versa (marked in green), which implies such group
differences in the interactive brain patterns may correspond to the loss of sensitivity to the
relative value of money in cocaine users. More specifically, in Fig. 3, the average intragroup similarity scores were 80% and 78.3%, while cross-group similarity was 56.7%.
Figure 4: Classification results: All DBN methods significantly improved classification
rates compared to K-Nearest Neighbor with DML-HMM performing best.
The second set of experiments was to apply the trained DBNs for group classification. In
our data collection, there were 6 runs of fMRI collection: T1, T2, T3, T1R, T2R and T3R,
with the latter three repeating the former three, grouped into 4 data sets {T1, T2, T3, ALL}
with ALL containing all the data. We performed classification experiments on each of the
4 data sets where the data were randomly divided into a training set and a testing set of
equal size. During training, the four described DBN types were trained on the training set; during the learning of DML-HMMs, different initial structures (PaHMM,
CHMM, FHMM) were used and the structure with the minimum BIC score was selected
from the three learned DML-HMMs. For each model, two DBNs {B_c, B_s} were trained
on the training data of Group C and Group S respectively. During testing, for each
test fMRI sequence O_test, we computed two likelihoods P_c^test = P(O_test | B_c) and
P_s^test = P(O_test | B_s) using the two trained DBNs. Since the two DBNs may have different structures, instead of directly comparing the two likelihoods, we used the ratio of
these two likelihoods for classification. More specifically, during training, for each
training sequence TR_i, we computed the ratio of the two likelihoods R_i^TR = P_c^i / P_s^i, where
P_c^i = P(TR_i | B_c) and P_s^i = P(TR_i | B_s). As expected, the ratios for Group
C training data were generally significantly greater than those for Group S. During testing, the ratio
R_test = P_c^test / P_s^test for each test sequence was also computed and compared to the ratios
of the training data for classification. Fig. 4 reports the classification rates of the different
DBNs on each data set. For comparison, the k-th Nearest Neighbor (KNN) algorithm was
applied on the fMRI sequences directly and Fig. 4 shows that by using DBNs, classification
rates are significantly better with DML-HMM outperforming all other models.
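The test-time decision rule can be sketched as follows; working in log space turns the likelihood ratio into a difference. The function name, threshold, and numbers are ours for illustration — the paper compares each test ratio against the distribution of training ratios rather than a single fixed cutoff:

```python
def classify_by_likelihood_ratio(log_p_c, log_p_s, threshold):
    """Assign a test sequence to Group C or Group S from the two DBN
    log-likelihoods log P(O_test | B_c) and log P(O_test | B_s).
    In log space the ratio P_c / P_s becomes a difference; `threshold`
    stands in for whatever cutoff is derived from the training ratios."""
    log_ratio = log_p_c - log_p_s
    return "C" if log_ratio > threshold else "S"

# Hypothetical log-likelihoods for two test sequences:
print(classify_by_likelihood_ratio(-410.2, -415.7, 0.0))  # -> C
print(classify_by_likelihood_ratio(-422.9, -418.1, 0.0))  # -> S
```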
5. Conclusions and Future Work
In this work, we contributed a framework of exploiting Dynamic Bayesian Networks to
model the functional information of the fMRI data. We explored four types of DBNs: a
Parallel Hidden Markov Model (PaHMM), a Coupled Hidden Markov Model (CHMM),
a Fully-linked Hidden Markov Model (FHMM) and a Dynamically Multi-linked Hidden
Markov Model. Furthermore, we proposed and compared two structural learning schemes
of DML-HMMs and applied the DBNs to a group classification problem. To our knowledge, this is the first time that Dynamic Bayesian Networks are exploited in modeling
the connectivity and interactivity among brain voxels from fMRI data. This framework
of exploring functional information of fMRI data provides a novel approach of revealing
brain connectivity and interactivity and provides an independent test for the effect of psychopathology on brain function.
Currently, DBNs use independently pre-selected brain regions, thus some other important
interactivity information may have been discarded in the feature selection step. Our future
work will focus on developing a dynamic neuronal circuit modeling framework performing
feature selection and DBN learning simultaneously. Due to computational limits and for
clarity purposes, we explored only 5 brain regions and thus another direction of future work
is to develop a hierarchical DBN topology to comprehensively model all implicated brain
regions efficiently.
References
[1] S. Anders, M. Lotze, M. Erb, W. Grodd, and N. Birbaumer. Brain activity underlying emotional
valence and arousal: A response-related fmri study. In Human Brain Mapping.
[2] M. Brand, N. Oliver, and A. Pentland. Coupled hidden markov models for complex action
recognition. In CVPR, pages 994–999, 1996.
[3] J. Ford, H. Farid, F. Makedon, L.A. Flashman, T.W. McAllister, V. Megalooikonomou, and A.J.
Saykin. Patient classification of fmri activation maps. In MICCAI, 2003.
[4] N. Friedman. The Bayesian structural EM algorithm. In UAI, 1998.
[5] N. Friedman, K. Murphy, and S. Russell. Learning the structure of dynamic probabilistic networks. In Uncertainty in AI, pages 139?147, 1998.
[6] K. Friston, A. Holmes, K. Worsley, and et al. Statistical parametric maps in functional imaging:
A general linear approach. Human Brain Mapping, 2:189–210, 1995.
[7] Z. Ghahramani. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences
and Data Structures, Lecture Notes in AI, pages 168–197, 1998.
[8] R.Z. Goldstein and N.D. Volkow. Drug addiction and its underlying neurobiological basis: Neuroimaging evidence for the involvement of the frontal cortex. American Journal of Psychiatry,
(10):1642–1652.
[9] R.Z. Goldstein et al. A modified role for the orbitofrontal cortex in attribution of salience to
monetary reward in cocaine addiction: an fmri study at 4t. In Human Brain Mapping Conference, 2004.
[10] S. Gong and T. Xiang. Recognition of group activities using dynamic probabilistic networks.
In ICCV, 2003.
[11] M.I. Jordan and Y. Weiss. Graphical models: probabilistic inference, Arbib, M. (ed): Handbook
of Neural Networks and Brain Theory. MIT Press, 2002.
[12] A.W. MacDonald et al. Dissociating the role of the dorsolateral prefrontal and anterior cingulate
cortex in cognitive control. Science, 288(5472):1835–1838, 2000.
[13] T.M. Mitchell, R. Hutchinson, R. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman.
Learning to decode cognitive states from brain images. Machine Learning, 57:145–175, 2004.
[14] K.P. Murphy. An introduction to graphical models. 2001.
[15] P. Hojen-Sorensen, L.K. Hansen, and C.E. Rasmussen. Bayesian modelling of fMRI time series.
In NIPS, 1999.
[16] W.D. Penny, K.E. Stephan, A. Mechelli, and K.J. Friston. Comparing dynamic causal models.
NeuroImage, 22(3):1157–1172, 2004.
[17] C. Vogler and D. Metaxas. A framework for recognizing the simultaneous aspects of american
sign language. CVIU, 81:358–384, 2001.
[18] X. Wang, R. Hutchinson, and T.M. Mitchell. Training fmri classifiers to detect cognitive states
across multiple human subjects. In NIPS03, Dec 2003.
[19] L. Zhang, D. Samaras, D. Tomasi, N. Volkow, and R. Goldstein. Machine learning for clinical
diagnosis from functional magnetic resonance imaging. In CVPR, 2005.
Computing the Solution Path for the
Regularized Support Vector Regression
Ji Zhu?
Department of Statistics
University of Michigan
Ann Arbor, MI 48109
[email protected]
Lacey Gunter
Department of Statistics
University of Michigan
Ann Arbor, MI 48109
[email protected]
Abstract
In this paper we derive an algorithm that computes the entire solution path of the support vector regression, with essentially the same computational cost as fitting one SVR model. We also propose an unbiased estimate for the degrees of freedom of the SVR model, which allows convenient selection of the regularization parameter.
1 Introduction
The support vector regression (SVR) is a popular tool for function estimation problems, and it has been widely used in many real applications in the past decade, for example, time series prediction [1], signal processing [2] and neural decoding [3]. In this paper, we focus on the regularization parameter of the SVR, and propose an efficient algorithm that computes the entire regularized solution path; we also propose an unbiased estimate for the degrees of freedom of the SVR, which allows convenient selection of the regularization parameter.
Suppose we have a set of training data (x_1, y_1), ..., (x_n, y_n), where the input x_i ∈ R^p and the output y_i ∈ R. Many researchers have noted that the formulation for the linear ε-SVR can be written in a loss + penalty form [4]:

\[
\min_{\beta_0,\beta}\ \sum_{i=1}^n \left|y_i - \beta_0 - \beta^T x_i\right|_\varepsilon + \frac{\lambda}{2}\,\beta^T\beta \qquad (1)
\]

where |r|_ε is the so-called ε-insensitive loss function:

\[
|r|_\varepsilon = \begin{cases} 0 & \text{if } |r| \le \varepsilon \\ |r| - \varepsilon & \text{otherwise.} \end{cases}
\]

The idea is to disregard errors as long as they are less than ε. Figure 1 plots the loss function. Notice that it has two non-differentiable points at ±ε. The regularization parameter λ controls the trade-off between the ε-insensitive loss and the complexity of the fitted model.
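To make the loss concrete, here is a minimal NumPy sketch (the function name is ours, not the paper's):

```python
import numpy as np

def eps_insensitive_loss(r, eps):
    """Epsilon-insensitive loss: zero inside the tube |r| <= eps, linear outside."""
    return np.maximum(np.abs(r) - eps, 0.0)
```

For eps = 1, residuals inside [-1, 1] incur no loss at all, which is exactly what produces the two elbows in Figure 1.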
* To whom the correspondence should be addressed.
Figure 1: The ε-insensitive loss function. (Plot omitted; the regions Left, Elbow L, Center, Elbow R and Right are marked along the horizontal axis y − f.)
In practice, one often maps x into a high (often infinite) dimensional reproducing kernel Hilbert space (RKHS), and fits a nonlinear kernel SVR model [4]:

\[
\min_{\beta_0,\theta}\ \sum_{i=1}^n \left|y_i - f(x_i)\right|_\varepsilon + \frac{1}{2\lambda}\sum_{i=1}^n\sum_{i'=1}^n \theta_i\theta_{i'} K(x_i, x_{i'}) \qquad (2)
\]

where f(x) = \beta_0 + \frac{1}{\lambda}\sum_{i=1}^n \theta_i K(x, x_i), and K(·, ·) is a positive-definite reproducing kernel that generates an RKHS. Notice that we write f(x) in a way that involves λ explicitly, and we will see later that θ_i ∈ [−1, 1].
Both (1) and (2) can be transformed into a quadratic programming problem, hence most commercially available packages can be used to solve the SVR. In the past years, many specific algorithms for the SVR have also been developed, for example, interior point algorithms [4-5], subset selection algorithms [6-7], and sequential minimal optimization [4, 8-9]. All these algorithms solve the SVR for a pre-fixed regularization parameter λ, and it is well known that an appropriate value of λ is crucial for achieving small prediction error of the SVR.

In this paper, we show that the solution θ(λ) is piecewise linear as a function of λ, which allows us to derive an efficient algorithm that computes the exact entire solution path {θ(λ), 0 ≤ λ ≤ ∞}. We acknowledge that this work was inspired by one of the authors' earlier work in the SVM setting [10].

Before delving into the technical details, we illustrate the concept of piecewise linearity of the solution path with a simple example. We generate 10 training observations using the famous sinc(·) function:
using the famous sinc(?) function:
sin(?x)
+ e, where x ? U (?2?, 2?) and e ? N (0, 0.192 )
?x
We use the SVR with a 1-dimensional spline kernel
y=
K(x, x ) = 1 + k1 (x)k1 (x ) + k2 (x)k2 (x ) ? k4 (|x ? x |)
(3)
where k1 (?) = ? ? 1/2, k2 =
? 1/12)/2, k4 =
?
+ 7/240)/24. Figure 2
shows a subset of the piecewise linear solution path ?(?) as a function of ?.
(k12
(k14
k12 /2
In section 2, we describe the algorithm that computes the entire solution path of the SVR. In section 3, we propose an unbiased estimate for the degrees of freedom of the SVR, which can be used to select the regularization parameter λ. In section 4, we present numerical results on simulated data. We conclude the paper with a discussion section.
Figure 2: A subset of the solution path θ(λ) as a function of λ. (Plot omitted.)
2 Algorithm
For simplicity in notation, we describe the problem setup using the linear SVR, and
the algorithm using the kernel SVR.
2.1 Problem Setup
The linear ε-SVR (1) can be re-written in an equivalent way:

\[
\min_{\beta_0,\beta}\ \sum_{i=1}^n (\xi_i + \xi_i^*) + \frac{\lambda}{2}\,\beta^T\beta
\]

subject to

\[
-(\xi_i^* + \varepsilon) \le y_i - f(x_i) \le (\xi_i + \varepsilon), \quad \xi_i, \xi_i^* \ge 0; \quad f(x_i) = \beta_0 + \beta^T x_i, \quad i = 1, \ldots, n.
\]

This gives us the Lagrangian primal function

\[
L_P:\ \sum_{i=1}^n (\xi_i + \xi_i^*) + \frac{\lambda}{2}\,\beta^T\beta + \sum_{i=1}^n \alpha_i\big(y_i - f(x_i) - \xi_i - \varepsilon\big) - \sum_{i=1}^n \alpha_i^*\big(y_i - f(x_i) + \xi_i^* + \varepsilon\big) - \sum_{i=1}^n \gamma_i \xi_i - \sum_{i=1}^n \gamma_i^* \xi_i^*.
\]

Setting the derivatives to zero we arrive at:

\[
\frac{\partial}{\partial\beta}:\quad \beta = \frac{1}{\lambda}\sum_{i=1}^n (\alpha_i - \alpha_i^*)\, x_i \qquad (4)
\]
\[
\frac{\partial}{\partial\beta_0}:\quad \sum_{i=1}^n (\alpha_i - \alpha_i^*) = 0 \qquad (5)
\]
\[
\frac{\partial}{\partial\xi_i}:\quad \gamma_i = 1 - \alpha_i \qquad (6)
\]
\[
\frac{\partial}{\partial\xi_i^*}:\quad \gamma_i^* = 1 - \alpha_i^* \qquad (7)
\]

where the Karush-Kuhn-Tucker conditions are

\[
\alpha_i\big(y_i - f(x_i) - \xi_i - \varepsilon\big) = 0 \qquad (8)
\]
\[
\alpha_i^*\big(y_i - f(x_i) + \xi_i^* + \varepsilon\big) = 0 \qquad (9)
\]
\[
\gamma_i \xi_i = 0 \qquad (10)
\]
\[
\gamma_i^* \xi_i^* = 0 \qquad (11)
\]
Along with the constraint that our Lagrange multipliers must be non-negative, we can conclude from (6) and (7) that both 0 ≤ α_i ≤ 1 and 0 ≤ α_i^* ≤ 1. We also see from (8) and (9) that if α_i is positive, then α_i^* must be zero, and vice versa. These lead to the following relationships:

    y_i − f(x_i) > ε        ⇒  α_i = 1,       ξ_i > 0,  α_i^* = 0,       ξ_i^* = 0;
    y_i − f(x_i) < −ε       ⇒  α_i = 0,       ξ_i = 0,  α_i^* = 1,       ξ_i^* > 0;
    y_i − f(x_i) ∈ (−ε, ε)  ⇒  α_i = 0,       ξ_i = 0,  α_i^* = 0,       ξ_i^* = 0;
    y_i − f(x_i) = ε        ⇒  α_i ∈ [0, 1],  ξ_i = 0,  α_i^* = 0,       ξ_i^* = 0;
    y_i − f(x_i) = −ε       ⇒  α_i = 0,       ξ_i = 0,  α_i^* ∈ [0, 1],  ξ_i^* = 0.
Using these relationships, we define the following sets that will be used later on when we are calculating the regularization path of the SVR:

- R = {i : y_i − f(x_i) > ε, α_i = 1, α_i^* = 0} (Right of the elbows)
- E_R = {i : y_i − f(x_i) = ε, 0 ≤ α_i ≤ 1, α_i^* = 0} (Right elbow)
- C = {i : −ε < y_i − f(x_i) < ε, α_i = 0, α_i^* = 0} (Center)
- E_L = {i : y_i − f(x_i) = −ε, α_i = 0, 0 ≤ α_i^* ≤ 1} (Left elbow)
- L = {i : y_i − f(x_i) < −ε, α_i = 0, α_i^* = 1} (Left of the elbows)
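As an illustration, the five sets can be computed from the residuals, with a tolerance handling the equality cases (the function name and tolerance scheme are ours):

```python
import numpy as np

def partition_points(y, f, eps, tol=1e-8):
    """Partition points into R, E_R, C, E_L, L by residual r = y - f."""
    r = y - f
    R  = np.where(r > eps + tol)[0]              # right of the elbows
    ER = np.where(np.abs(r - eps) <= tol)[0]     # right elbow
    C  = np.where(np.abs(r) < eps - tol)[0]      # center (inside the tube)
    EL = np.where(np.abs(r + eps) <= tol)[0]     # left elbow
    L  = np.where(r < -eps - tol)[0]             # left of the elbows
    return R, ER, C, EL, L
```

In exact arithmetic the five sets partition {1, ..., n}; the tolerance is needed in floating point to recognize points sitting exactly on an elbow.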
Notice from (4) that for every λ, β is fully determined by the values of α_i and α_i^*. For points in R, L and C, the values of α_i and α_i^* are known; therefore, the algorithm will focus on points resting at the two elbows E_R and E_L.
2.2 Initialization
Initially, when λ = ∞ we can see from (4) that β = 0. We can determine the value of β_0 via a simple 1-dimensional optimization. For lack of space, we focus on the case that all the values of y_i are distinct, and furthermore, the initial sets E_R and E_L have at most one point combined (which is the usual situation). In this case β_0 will not be unique and each of the α_i and α_i^* will be either 0 or 1.

Since β_0 is not unique, we can focus on one particular solution path, for example, by always setting β_0 equal to one of its boundary values (thus keeping one point at an elbow). As λ decreases, the range of β_0 shrinks toward zero and reaches zero when we have two points at the elbows, and the algorithm proceeds from there.
2.3 The Path
The formalized setup above can be easily modified to accommodate non-linear kernels; in fact, θ_i in (2) is equal to α_i − α_i^*. For the remaining portion of the algorithm we will use the kernel notation.

The algorithm focuses on the sets of points E_R and E_L. These points have either f(x_i) = y_i − ε with α_i ∈ [0, 1], or f(x_i) = y_i + ε with α_i^* ∈ [0, 1]. As we follow the path we will examine these sets until one or both of them change, at which point we will say an event has occurred. Thus events can be categorized as:

1. The initial event, for which two points must enter the elbow(s)
2. A point from R has just entered E_R, with α_i initially 1
3. A point from L has just entered E_L, with α_i^* initially 1
4. A point from C has just entered E_R, with α_i initially 0
5. A point from C has just entered E_L, with α_i^* initially 0
6. One or more points in E_R and/or E_L have just left the elbow(s) to join either R, L, or C, with α_i and α_i^* initially 0 or 1

Until another event has occurred, all sets will remain the same. As a point passes through E_R or E_L, its respective α_i or α_i^* must change from 0 → 1 or 1 → 0. Relying on the fact that f(x_i) = y_i − ε or f(x_i) = y_i + ε for all points in E_R or E_L respectively, we can calculate α_i and α_i^* for these points.
We use the superscript ℓ to index the sets above immediately after the ℓth event has occurred, and let α_i^ℓ, α_i^{*ℓ}, β_0^ℓ and λ^ℓ be the parameter values immediately after the ℓth event. Also let f^ℓ be the function at this point. We define for convenience β_{0,λ} = λβ_0 and hence β_{0,λ}^ℓ = λ^ℓ β_0^ℓ. Then since

\[
f(x) = \frac{1}{\lambda}\left[\sum_{i=1}^n (\alpha_i - \alpha_i^*)\, K(x, x_i) + \beta_{0,\lambda}\right]
\]

for λ^{ℓ+1} < λ < λ^ℓ we can write

\[
f(x) = \frac{1}{\lambda}\Big[\lambda f(x) - \lambda^\ell f^\ell(x) + \lambda^\ell f^\ell(x)\Big]
= \frac{1}{\lambda}\left[\sum_{i\in E_R^\ell} \delta\alpha_i\, K(x, x_i) - \sum_{j\in E_L^\ell} \delta\alpha_j^*\, K(x, x_j) + \delta\beta_0 + \lambda^\ell f^\ell(x)\right],
\]

where δα_i = α_i − α_i^ℓ, δα_j^* = α_j^* − α_j^{*ℓ} and δβ_0 = β_{0,λ} − β_{0,λ}^ℓ, and we can do the reduction in the second line since the α_i and α_i^* are fixed for all points in R^ℓ, L^ℓ, and C^ℓ and all points remain in their respective sets. Suppose |E_R^ℓ| = n_R and |E_L^ℓ| = n_L; then for the n_R + n_L points staying at the elbows we have (after some algebra) that

\[
\sum_{i\in E_R^\ell} \delta\alpha_i\, K(x_k, x_i) - \sum_{j\in E_L^\ell} \delta\alpha_j^*\, K(x_k, x_j) + \delta\beta_0 = (\lambda - \lambda^\ell)(y_k - \varepsilon), \quad \forall k \in E_R^\ell
\]
\[
\sum_{i\in E_R^\ell} \delta\alpha_i\, K(x_m, x_i) - \sum_{j\in E_L^\ell} \delta\alpha_j^*\, K(x_m, x_j) + \delta\beta_0 = (\lambda - \lambda^\ell)(y_m + \varepsilon), \quad \forall m \in E_L^\ell.
\]

Also, by condition (5) we have that

\[
\sum_{i\in E_R^\ell} \delta\alpha_i - \sum_{j\in E_L^\ell} \delta\alpha_j^* = 0.
\]

This gives us n_R + n_L + 1 linear equations we can use to solve for each of the n_R + n_L + 1 unknown variables δα_i, δα_j^* and δβ_0. Notice this system is linear in λ − λ^ℓ, which implies that δα_i, δα_j^* and δβ_0 change linearly in λ − λ^ℓ. So we can write:

\[
\alpha_i = \alpha_i^\ell + (\lambda - \lambda^\ell)\, b_i, \quad \forall i \in E_R^\ell \qquad (12)
\]
\[
\alpha_j^* = \alpha_j^{*\ell} + (\lambda - \lambda^\ell)\, b_j, \quad \forall j \in E_L^\ell \qquad (13)
\]
\[
\beta_{0,\lambda} = \beta_{0,\lambda}^\ell + (\lambda - \lambda^\ell)\, b_0 \qquad (14)
\]
\[
f(x) = \frac{\lambda^\ell}{\lambda}\Big[f^\ell(x) - h^\ell(x)\Big] + h^\ell(x) \qquad (15)
\]

where (b_i, b_j, b_0) is the solution when λ − λ^ℓ is equal to 1, and

\[
h^\ell(x) = \sum_{i\in E_R^\ell} b_i\, K(x, x_i) - \sum_{j\in E_L^\ell} b_j\, K(x, x_j) + b_0.
\]

Given λ^ℓ, equations (12), (13) and (15) allow us to compute the λ at which the next event will occur, λ^{ℓ+1}. This will be the largest λ less than λ^ℓ such that either α_i for i ∈ E_R^ℓ reaches 0 or 1, or α_j^* for j ∈ E_L^ℓ reaches 0 or 1, or one of the points in R, L or C reaches an elbow.

We terminate the algorithm either when the sets R and L become empty, or when λ has become sufficiently close to zero. In the latter case we must have f − h^ℓ sufficiently small as well.
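For concreteness, here is a sketch of assembling and solving the elbow system for the per-unit rates (b_i, b_j, b_0), i.e. the solution at λ − λ^ℓ = 1. It assumes a precomputed kernel matrix `K` over all training points and index lists `ER`, `EL`; the helper name is ours:

```python
import numpy as np

def elbow_system(K, y, ER, EL, eps):
    """Solve the (n_R + n_L + 1)-dimensional linear system for the rates
    (b_i for i in ER, b_j for j in EL, b0) at lambda - lambda^l = 1:
      sum_i b_i K(x_k, x_i) - sum_j b_j K(x_k, x_j) + b0 = y_k - eps,  k in ER
      sum_i b_i K(x_m, x_i) - sum_j b_j K(x_m, x_j) + b0 = y_m + eps,  m in EL
      sum_i b_i - sum_j b_j = 0
    """
    E = list(ER) + list(EL)
    n_e = len(E)
    sign = np.array([1.0] * len(ER) + [-1.0] * len(EL))
    A = np.zeros((n_e + 1, n_e + 1))
    rhs = np.zeros(n_e + 1)
    for row, k in enumerate(E):
        A[row, :n_e] = sign * K[k, E]   # +K for ER columns, -K for EL columns
        A[row, n_e] = 1.0               # coefficient of b0
        rhs[row] = y[k] - eps if row < len(ER) else y[k] + eps
    A[n_e, :n_e] = sign                 # sum constraint from condition (5)
    sol = np.linalg.solve(A, rhs)
    return sol[:len(ER)], sol[len(ER):n_e], sol[n_e]
```

As the text notes, in the full algorithm one would update the factorization of this system between consecutive events rather than solving from scratch, since the elbow sets usually change by a single point.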
2.4 Computational cost
The major computational cost for updating the solutions at any event involves two things: solving the system of (n_R + n_L) linear equations, and computing h^ℓ(x). The former takes O((n_R + n_L)^2) calculations by using inverse updating and downdating, since the elbow sets usually differ by only one point between consecutive events, and the latter requires O(n(n_R + n_L)) computations.

According to our experience, the total number of steps taken by the algorithm is on average some small multiple of n. Letting m be the average size of E_R^ℓ ∪ E_L^ℓ, the approximate computational cost of the algorithm is O(cn^2 m + nm^2), which is comparable to a single SVR fitting algorithm that uses quadratic programming.
3 The Degrees of Freedom
The degrees of freedom is an informative measure of the complexity of a fitted model. In this section, we propose an unbiased estimate for the degrees of freedom of the SVR, which allows convenient selection of the regularization parameter λ.

Since the usual goal of regression analysis is to minimize the predicted squared-error loss, we study the degrees of freedom using Stein's unbiased risk estimation (SURE) theory [11]. Given x, assume y is generated according to a homoskedastic model:

\[
y \sim (\mu(x), \sigma^2)
\]

where μ is the true mean and σ² is the common variance. Then the degrees of freedom of a fitted model f(x) can be defined as

\[
\mathrm{df}(f) = \sum_{i=1}^n \mathrm{cov}(f(x_i), y_i)/\sigma^2.
\]

Stein showed that under mild conditions, \sum_{i=1}^n \partial f_i/\partial y_i is an unbiased estimate of df(f). It turns out that for the SVR model, for every fixed λ, \sum_{i=1}^n \partial f_i/\partial y_i has an extremely simple formula:

\[
\widehat{\mathrm{df}} = \sum_{i=1}^n \frac{\partial f_i}{\partial y_i} = |E_R| + |E_L| \qquad (16)
\]

Therefore, |E_R| + |E_L| is a convenient unbiased estimate for the degrees of freedom of f(x). Due to the space restriction, we omit the proof here, but make a note that the proof relies on our SVR algorithm.
In applying (16) to select the regularization parameter λ, we plug it into the GCV criterion [12] for model selection:

\[
\mathrm{GCV} = \frac{\sum_{i=1}^n \big(y_i - f(x_i)\big)^2}{\big(n - \widehat{\mathrm{df}}\big)^2}.
\]

The advantages of this criterion are that it does not assume a known σ², and it avoids cross-validation, which is computationally intensive. In practice, we can first use our efficient algorithm to compute the entire solution path, then identify the appropriate value of λ that minimizes the GCV criterion.
4 Numerical Results
To demonstrate our algorithm and the selection of λ using the GCV criterion, we show numerical results on simulated data. We consider both additive and multiplicative kernels using the 1-dimensional spline kernel (3), which are respectively

\[
K(x, x') = \sum_{j=1}^p K(x_j, x_j') \quad \text{and} \quad K(x, x') = \prod_{j=1}^p K(x_j, x_j').
\]

Simulations were based on the following four functions [13]:

1. f(x) = \frac{\sin(\pi x)}{\pi x} + e_1, \quad x \in (-2\pi, 2\pi)
2. f(x) = 0.1 e^{4x_1} + \frac{4}{1 + e^{-20(x_2 - 0.5)}} + 3x_3 + 2x_4 + x_5 + e_2, \quad x \in (0, 1)^{10}
3. f(R, \omega, L, C) = \left(R^2 + \left(\omega L - \frac{1}{\omega C}\right)^2\right)^{1/2} + e_3
4. f(R, \omega, L, C) = \tan^{-1}\!\left(\frac{\omega L - \frac{1}{\omega C}}{R}\right) + e_4, \quad \text{where } (R, \omega, L, C) \in (0, 100) \times \big(2\pi \cdot (20, 280)\big) \times (0, 1) \times (1, 11).

The e_i are distributed as N(0, σ_i²), where σ_1 = 0.19, σ_2 = 1, σ_3 = 218.5, σ_4 = 0.18.
We generated 300 training observations from each function, along with 10,000 validation observations and 10,000 test observations. For the first two simulations we used the additive 1-dimensional spline kernel, and for the second two simulations the multiplicative 1-dimensional spline kernel. We then found the λ that minimized the GCV criterion. The validation set was used to select the gold standard λ, which minimized the prediction MSE. Using these λ's we calculated the prediction MSE with the test data for each criterion. After repeating this 20 times, the average MSE and standard deviation of the MSE can be seen in Table 1, which indicates the GCV criterion performs close to optimal.
Table 1: Simulation results of λ selection for SVR

    f(x)   MSE-Gold Standard   MSE-GCV
    1      0.0385 (0.0011)     0.0389 (0.0011)
    2      1.0999 (0.0367)     1.1120 (0.0382)
    3      50095 (1358)        50982 (2205)
    4      0.0459 (0.0023)     0.0471 (0.0028)
5 Discussion
In this paper, we have proposed an efficient algorithm that computes the entire regularization path of the SVR. We have also proposed the GCV criterion for selecting the best λ given the entire path. The GCV criterion seems to work sufficiently well on the simulated data. However, we acknowledge that according to our experience on real data sets (not shown here due to lack of space), the GCV criterion sometimes tends to over-fit the model. We plan to explore this issue further.

Due to the difficulty of also selecting the best ε for the SVR, an alternate algorithm exists that automatically adjusts the value of ε, called the ν-SVR [4]. In this scenario, ε is treated as another free parameter. Using arguments similar to those for β_0 in our above algorithm, one can show that ε is piecewise linear in 1/λ and its path can be calculated similarly.
Acknowledgments
We would like to thank Saharon Rosset for helpful comments. Gunter and Zhu are
partially supported by grant DMS-0505432 from the National Science Foundation.
References
[1] Müller K, Smola A, Rätsch G, Schölkopf B, Kohlmorgen J & Vapnik V (1997) Predicting time series with support vector machines. Artificial Neural Networks, 999-1004.
[2] Vapnik V, Golowich S & Smola A (1997) Support vector method for function approximation, regression estimation, and signal processing. NIPS 9.
[3] Shpigelman L, Crammer K, Paz R, Vaadia E & Singer Y (2004) A temporal kernel-based model for tracking hand movements from neural activities. NIPS 17, 1273-1280.
[4] Smola A & Schölkopf B (2004) A tutorial on support vector regression. Statistics and Computing 14: 199-222.
[5] Vanderbei R (1994) LOQO: An interior point code for quadratic programming. Technical Report SOR-94-15, Princeton University.
[6] Osuna E, Freund R & Girosi F (1997) An improved training algorithm for support vector machines. Neural Networks for Signal Processing, 276-284.
[7] Joachims T (1999) Making large-scale SVM learning practical. Advances in Kernel Methods - Support Vector Learning, 169-184.
[8] Platt J (1999) Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods - Support Vector Learning, 185-208.
[9] Keerthi S, Shevade S, Bhattacharyya C & Murthy K (1999) Improvements to Platt's SMO algorithm for SVM classifier design. Technical Report CD-99-14, NUS.
[10] Hastie T, Rosset S, Tibshirani R & Zhu J (2004) The Entire Regularization Path for the Support Vector Machine. JMLR 5, 1391-1415.
[11] Stein C (1981) Estimation of the mean of a multivariate normal distribution. Annals of Statistics 9: 1135-1151.
[12] Craven P & Wahba G (1979) Smoothing noisy data with spline functions. Numerical Mathematics 31: 377-403.
[13] Friedman J (1991) Multivariate Adaptive Regression Splines. Annals of Statistics 19: 1-67.
Sparse Gaussian Processes using Pseudo-inputs
Edward Snelson
Zoubin Ghahramani
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, UK
{snelson,zoubin}@gatsby.ucl.ac.uk
Abstract
We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by a gradient based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M²N) training cost and O(M²) prediction cost per test case. We also find hyperparameters of the covariance function in the same joint optimization. The method can be viewed as a Bayesian regression model with particular input dependent noise. The method turns out to be closely related to several other sparse GP approaches, and we discuss the relation in detail. We finally demonstrate its performance on some large data sets, and make a direct comparison to other sparse GP methods. We show that our method can match full GP performance with small M, i.e. very sparse solutions, and it significantly outperforms other approaches in this regime.
1 Introduction
The Gaussian process (GP) is a popular and elegant method for Bayesian non-linear non-parametric regression and classification. Unfortunately its non-parametric nature causes computational problems for large data sets, due to an unfavourable N³ scaling for training, where N is the number of data points. In recent years there have been many attempts to make sparse approximations to the full GP in order to bring this scaling down to M²N, where M ≪ N [1, 2, 3, 4, 5, 6, 7, 8, 9]. Most of these methods involve selecting a subset of the training points of size M (active set) on which to base computation. A typical way of choosing such a subset is through some sort of information criterion. For example, Seeger et al. [7] employ a very fast approximate information gain criterion, which they use to greedily select points into the active set.
A major common problem to these methods is that they lack a reliable way of learning
kernel hyperparameters, because the active set selection interferes with this learning procedure. Seeger et al. [7] construct an approximation to the full GP marginal likelihood, which
they try to maximize to find the hyperparameters. However, as the authors state, they have
persistent difficulty in practically doing this through gradient ascent. The reason for this
is that reselecting the active set causes non-smooth fluctuations in the marginal likelihood
and its gradients, meaning that they cannot get smooth convergence. Therefore the speed
of active set selection is somewhat undermined by the difficulty of selecting hyperparameters. Inappropriately learned hyperparameters will adversely affect the quality of solution,
especially if one is trying to use them for automatic relevance determination (ARD) [10].
In this paper we circumvent this problem by constructing a GP regression model that enables us to find active set point locations and hyperparameters in one smooth joint optimization. The covariance function of our GP is parameterized by the locations of pseudo-inputs
? an active set not constrained to be a subset of the data, found by a continuous optimization. This is a further major advantage, since we can improve the quality of our fit by the
fine tuning of their precise locations.
Our model is closely related to several sparse GP approximations, in particular Seeger's
method of projected latent variables (PLV) [7, 8]. We discuss these relations in section 3.
In principle we could also apply our technique of moving active set points off data points to
approximations such as PLV. However we empirically demonstrate that a crucial difference
between PLV and our method (SPGP) prevents this idea from working for PLV.
1.1 Gaussian processes for regression
We provide here a concise summary of GPs for regression, but see [11, 12, 13, 10] for more detailed reviews. We have a data set D consisting of N input vectors X = {x_n}_{n=1}^N of dimension D and corresponding real valued targets y = {y_n}_{n=1}^N. We place a zero mean Gaussian process prior on the underlying latent function f(x) that we are trying to model. We therefore have a multivariate Gaussian distribution on any finite subset of latent variables; in particular, at X: p(f|X) = N(f|0, K_N), where N(f|m, V) is a Gaussian distribution with mean m and covariance V. In a Gaussian process the covariance matrix is constructed from a covariance function, or kernel, K, which expresses some prior notion of smoothness of the underlying function: [K_N]_{nn'} = K(x_n, x_{n'}). Usually the covariance function depends on a small number of hyperparameters θ, which control these smoothness properties. For our experiments later on we will use the standard Gaussian covariance with ARD hyperparameters:

\[
K(x_n, x_{n'}) = c \exp\left[-\frac{1}{2}\sum_{d=1}^D b_d\left(x_n^{(d)} - x_{n'}^{(d)}\right)^2\right], \qquad \theta = \{c, \mathbf{b}\}. \qquad (1)
\]
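This ARD covariance transcribes directly into vectorized NumPy (the function name is ours):

```python
import numpy as np

def ard_kernel(X1, X2, c, b):
    """Gaussian ARD covariance: K(x, x') = c * exp(-0.5 * sum_d b_d (x_d - x'_d)^2).

    X1: (N1, D) array, X2: (N2, D) array, b: (D,) per-dimension inverse lengthscales.
    Returns the (N1, N2) covariance matrix.
    """
    d2 = (((X1[:, None, :] - X2[None, :, :]) ** 2) * b).sum(-1)
    return c * np.exp(-0.5 * d2)
```

A large b_d makes the function vary quickly along dimension d, while b_d → 0 switches that dimension off, which is what makes the hyperparameters usable for automatic relevance determination.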
In standard GP regression we also assume a Gaussian noise model or likelihood p(y|f) = N(y|f, σ²I). Integrating out the latent function values we obtain the marginal likelihood:

\[
p(\mathbf{y}|X, \theta) = N(\mathbf{y}\,|\,\mathbf{0},\, K_N + \sigma^2 I), \qquad (2)
\]

which is typically used to train the GP by finding a (local) maximum with respect to the hyperparameters θ and σ².

Prediction is made by considering a new input point x and conditioning on the observed data and hyperparameters. The distribution of the target value at the new point is then:

\[
p(y|\mathbf{x}, \mathcal{D}, \theta) = N\!\left(y \,\middle|\, \mathbf{k}_x^\top (K_N + \sigma^2 I)^{-1} \mathbf{y},\; K_{xx} - \mathbf{k}_x^\top (K_N + \sigma^2 I)^{-1} \mathbf{k}_x + \sigma^2\right), \qquad (3)
\]

where [k_x]_n = K(x_n, x) and K_xx = K(x, x). The GP is a non-parametric model, because the training data are explicitly required at test time in order to construct the predictive distribution, as is clear from the above expression.

GPs are prohibitive for large data sets because training requires O(N³) time due to the inversion of the covariance matrix. Once the inversion is done, prediction is O(N) for the predictive mean and O(N²) for the predictive variance per new test case.
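As a reference point for the sparse method that follows, here is a minimal sketch of full-GP prediction (a unit-variance isotropic kernel stands in for (1); a production version would use a Cholesky factorization rather than repeated solves):

```python
import numpy as np

def rbf(X1, X2):
    """Unit-variance squared-exponential kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def gp_predict(X, y, x_star, sigma2=0.01):
    """Full GP predictive mean and variance, eq. (3): O(N^3) train, O(N^2) per test."""
    N = len(X)
    K = rbf(X, X) + sigma2 * np.eye(N)
    k = rbf(X, x_star[None, :])[:, 0]
    mean = k @ np.linalg.solve(K, y)
    var = rbf(x_star[None, :], x_star[None, :])[0, 0] - k @ np.linalg.solve(K, k) + sigma2
    return mean, var
```

With a single training point at the test location, the mean shrinks the observation by 1/(1 + σ²), which is a quick sanity check on the algebra.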
2 Sparse Pseudo-input Gaussian processes (SPGPs)
In order to derive a sparse model that is computationally tractable for large data sets, which still preserves the desirable properties of the full GP, we examine in detail the GP predictive distribution (3). Consider the mean and variance of this distribution as functions of x, the new input. Regarding the hyperparameters as known and fixed for now, these functions are effectively parameterized by the locations of the N training input and target pairs, X and y. In this paper we consider a model with likelihood given by the GP predictive distribution, and parameterized by a pseudo data set. The sparsity in the model will arise because we will generally consider a pseudo data set D̄ of size M < N: pseudo-inputs X̄ = {x̄_m}_{m=1}^M and pseudo targets f̄ = {f̄_m}_{m=1}^M. We have denoted the pseudo targets f̄ instead of ȳ because, as they are not real observations, it does not make much sense to include a noise variance for them. They are therefore equivalent to the latent function values f. The actual observed target value will of course be assumed noisy as before. These assumptions therefore lead to the following single data point likelihood:

\[
p(y|\mathbf{x}, \bar{X}, \bar{\mathbf{f}}) = N\!\left(y \,\middle|\, \mathbf{k}_x^\top K_M^{-1} \bar{\mathbf{f}},\; K_{xx} - \mathbf{k}_x^\top K_M^{-1} \mathbf{k}_x + \sigma^2\right), \qquad (4)
\]

where [K_M]_{mm'} = K(x̄_m, x̄_{m'}) and [k_x]_m = K(x̄_m, x), for m = 1, ..., M.

This can be viewed as a standard regression model with a particular form of parameterized mean function and input-dependent noise model. The target data are generated i.i.d. given the inputs, giving the complete data likelihood:

\[
p(\mathbf{y}|X, \bar{X}, \bar{\mathbf{f}}) = \prod_{n=1}^N p(y_n|\mathbf{x}_n, \bar{X}, \bar{\mathbf{f}}) = N(\mathbf{y}\,|\,K_{NM} K_M^{-1} \bar{\mathbf{f}},\; \Lambda + \sigma^2 I), \qquad (5)
\]

where Λ = diag(λ), λ_n = K_nn − k_n^⊤ K_M^{-1} k_n, and [K_NM]_{nm} = K(x_n, x̄_m).
Learning in the model involves finding a suitable setting of the parameters: an appropriate pseudo data set that explains the real data well. However, rather than simply maximize the likelihood with respect to X̄ and f̄, it turns out that we can integrate out the pseudo targets f̄. We place a Gaussian prior on the pseudo targets:

\[
p(\bar{\mathbf{f}}|\bar{X}) = N(\bar{\mathbf{f}}\,|\,\mathbf{0}, K_M). \qquad (6)
\]

This is a very reasonable prior because we expect the pseudo data to be distributed in a very similar manner to the real data, if they are to model them well. It is not easy to place a prior on the pseudo-inputs and still remain with a tractable model, so we will find these by maximum likelihood (ML). For the moment though, consider the pseudo-inputs as known. We find the posterior distribution over pseudo targets f̄ using Bayes rule on (5) and (6):

\[
p(\bar{\mathbf{f}}|\mathcal{D}, \bar{X}) = N\!\left(\bar{\mathbf{f}} \,\middle|\, K_M Q_M^{-1} K_{MN} (\Lambda + \sigma^2 I)^{-1} \mathbf{y},\; K_M Q_M^{-1} K_M\right), \qquad (7)
\]

where Q_M = K_M + K_MN (Λ + σ²I)^{-1} K_NM.
Given a new input x? , the predictive distribution is then obtained by integrating the likelihood (4) with the posterior (7):
Z
?
? ?f ) p(?f |D, X)
? = N (y? |?? , ? 2 ) ,
p(y? |x? , D, X) = d?f p(y? |x? , X,
(8)
?
where
?1
2 ?1
?? = k>
y
? QM KMN (? + ? I)
?1
?1
2
??2 = K?? ? k>
? (KM ? QM )k? + ? .
Note that inversion of the matrix Λ + σ²I is not a problem because it is diagonal. The computational cost is dominated by the matrix multiplication K_MN(Λ + σ²I)⁻¹K_NM in the calculation of Q_M, which is O(M²N). After various precomputations, prediction can then be made in O(M) for the mean and O(M²) for the variance per test case.
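As a concrete illustration of Eqs. (7)-(8), here is a minimal NumPy sketch of SPGP prediction. This is our own code, not the authors' implementation; the squared-exponential kernel, the jitter term, and all names are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential covariance between input sets A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def spgp_predict(X, y, Xbar, Xstar, sigma2, jitter=1e-8):
    """Predictive mean and variance of the SPGP, following Eqs. (7)-(8)."""
    KM = rbf(Xbar, Xbar) + jitter * np.eye(len(Xbar))      # K_M
    KNM = rbf(X, Xbar)                                     # K_NM (N x M)
    # lambda_n = K_nn - k_n^T K_M^{-1} k_n  (diagonal of Lambda)
    lam = rbf(X, X).diagonal() - np.einsum('nm,mn->n', KNM, np.linalg.solve(KM, KNM.T))
    G = lam + sigma2                                       # diagonal of (Lambda + sigma^2 I)
    QM = KM + KNM.T @ (KNM / G[:, None])                   # Q_M: the O(M^2 N) step
    kstar = rbf(Xbar, Xstar)                               # k_* for each test input (M x S)
    mu = kstar.T @ np.linalg.solve(QM, KNM.T @ (y / G))
    var = (rbf(Xstar, Xstar).diagonal()
           - np.einsum('ms,ms->s', kstar, np.linalg.solve(KM, kstar))
           + np.einsum('ms,ms->s', kstar, np.linalg.solve(QM, kstar))
           + sigma2)
    return mu, var
```

Only M-by-M systems are ever solved; with X̄ = X and small jitter this reproduces full GP prediction.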
Figure 1: Predictive distributions (mean and two standard deviation lines) for: (a) full GP,
(b) SPGP trained using gradient ascent on (9), (c) SPGP trained using gradient ascent on
(10). Initial pseudo point positions are shown at the top as red crosses; final pseudo point
positions are shown at the bottom as blue crosses (the y location on the plots of these
crosses is not meaningful).
We are left with the problem of finding the pseudo-input locations X̄ and hyperparameters Θ = {θ, σ²}. We can do this by computing the marginal likelihood from (5) and (6):

p(y|X, X̄, Θ) = ∫ df̄ p(y|X, X̄, f̄) p(f̄|X̄)
             = N(y | 0, K_NM K_M⁻¹ K_MN + Λ + σ²I).   (9)
The marginal likelihood can then be maximized with respect to all these parameters {X̄, Θ} by gradient ascent. The details of the gradient calculations are long and tedious and therefore omitted here for brevity. They closely follow the derivations of hyperparameter gradients of Seeger et al. [7] (see also section 3), and as there, can be most efficiently coded with Cholesky factorisations. Note that K_M, K_MN and Λ are all functions of the M pseudo-inputs X̄ and θ. The exact form of the gradients will of course depend on the functional form of the covariance function chosen, but our method will apply to any covariance that is differentiable with respect to the input points. It is worth saying that the SPGP can be viewed as a standard GP with a particular non-stationary covariance function parameterized by the pseudo-inputs.
Since we now have MD + |θ| parameters to fit, instead of just |θ| for the full GP, one may be worried about overfitting. However, consider the case where we let M = N and X̄ = X, so that the pseudo-inputs coincide with the real inputs. At this point the marginal likelihood is equal to that of a full GP (2). This is because at this point K_NM = K_M = K_N and Λ = 0. Moreover the predictive distribution (8) also collapses to the full GP predictive distribution
(3). These are clearly desirable properties of the model, and they give confidence that a
good solution will be found when M < N . However it is the case that hyperparameter
learning complicates matters, and we discuss this further in section 4.
3 Relation to other methods
It turns out that Seeger's method of PLV [7, 8] uses a very similar marginal likelihood approximation and predictive distribution. If you remove Λ from all the SPGP equations you get precisely their expressions. In particular the marginal likelihood they use is:

p(y|X, X̄, Θ) = N(y | 0, K_NM K_M⁻¹ K_MN + σ²I),   (10)
which has also been used elsewhere before [1, 4, 5]. They have derived this expression from
a somewhat different route, as a direct approximation to the full GP marginal likelihood.
Figure 2: Sample data drawn from the marginal likelihood of: (a) a full GP, (b) SPGP, (c)
PLV. For (b) and (c), the blue crosses show the location of the 10 pseudo-input points.
As discussed earlier, the major difference between our method and these other methods is that they do not use this marginal likelihood to learn locations of active set input points: only the hyperparameters are learnt from (10). This begged the question of what would happen if we tried to use their marginal likelihood approximation (10) instead of (9) to try to learn pseudo-input locations by gradient ascent. We show that the Λ that appears in the SPGP marginal likelihood (9) is crucial for finding pseudo-input points by gradients.
Figure 1 shows what happens when we try to optimize these two likelihoods using gradient
ascent with respect to the pseudo inputs, on a simple 1D data set. Plotted are the predictive
distributions, initial and final locations of the pseudo inputs. Hyperparameters were fixed
to their true values for this example. The initial pseudo-input locations were chosen adversarially: all towards the left of the input space (red crosses). Using the SPGP likelihood, the
pseudo-inputs spread themselves along the extent of the training data, and the predictive
distribution matches the full GP very closely (Figure 1(b)). Using the PLV likelihood, the
points begin to spread, but very quickly become stuck as the gradient pushing the points
towards the right becomes tiny (Figure 1(c)).
Figure 2 compares data sampled from the marginal likelihoods (9) and (10), given a particular setting of the hyperparameters and a small number of pseudo-input points. The major difference between the two is that the SPGP likelihood has a constant marginal variance of K_nn + σ², whereas the PLV decreases to σ² away from the pseudo-inputs. Alternatively, the noise component of the PLV likelihood is a constant σ², whereas the SPGP noise grows to K_nn + σ² away from the pseudo-inputs. If one is in the situation of Figure 1(c), under the SPGP likelihood, moving the rightmost pseudo-input slightly to the right will immediately start to reduce the noise in this region from K_nn + σ² towards σ². Hence there will be a strong gradient pulling it to the right. With the PLV likelihood, the noise is fixed at σ² everywhere, and moving the point to the right does not improve the quality of fit of
the mean function enough locally to provide a significant gradient. Therefore the points
become stuck, and we believe this effect accounts for the failure of the PLV likelihood in
Figure 1(c).
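This difference in noise structure can be made concrete with a small numerical check (our own construction; the kernel, lengthscale and point locations are arbitrary choices). It evaluates the marginal variance of both likelihoods at a point near the pseudo-inputs and at a point far from them:

```python
import numpy as np

def rbf(A, B, ell=0.5, sf2=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

Xbar = np.array([[0.0], [0.5]])          # two pseudo-inputs on the left
X = np.array([[0.25], [5.0]])            # one point near them, one far away
sigma2 = 0.01

KM = rbf(Xbar, Xbar) + 1e-10 * np.eye(2)
KNM = rbf(X, Xbar)
low_rank = KNM @ np.linalg.solve(KM, KNM.T)        # K_NM K_M^{-1} K_MN
lam = rbf(X, X).diagonal() - low_rank.diagonal()   # diagonal of Lambda

spgp_var = low_rank.diagonal() + lam + sigma2      # always K_nn + sigma^2
plv_var = low_rank.diagonal() + sigma2             # decays to sigma^2 far away
```

Near the pseudo-inputs both variances agree; at the far point the PLV variance has dropped to essentially σ² while the SPGP variance is still K_nn + σ².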
It should be emphasised that the global optimum of the PLV likelihood (10) may well be a
good solution, but it is going to be difficult to find with gradients. The SPGP likelihood (9)
also suffers from local optima of course, but not so catastrophically. It may be interesting
in the future to compare which performs better for hyperparameter optimization.
4 Experiments
In the previous section we showed our gradient method successfully learning the pseudoinputs on a 1D example. There the initial pseudo input points were chosen adversarially, but
on a real problem it is sensible to initialize by randomly placing them on real data points,
Figure 3: Our results have been added to plots reproduced with kind permission from [7]. The plots show mean square test error as a function of active/pseudo set size M. Top row: data set kin-40k; bottom row: pumadyn-32nm. We have added circles which show SPGP with both hyperparameter and pseudo-input learning from random initialisation. For kin-40k the squares show SPGP with hyperparameters obtained from a full GP and fixed. For pumadyn-32nm the squares show hyperparameters initialized from a full GP. random, info-gain and smo-bart are explained in the text. The horizontal lines are a full GP trained on a subset of the data.
and this is what we do for all of our experiments. To compare our results to other methods
we have run experiments on exactly the same data sets as in Seeger et al. [7], following
precisely their preprocessing and testing methods. In Figure 3, we have reproduced their
learning curves for two large data sets1 , superimposing our test error (mean squared).
Seeger et al. compare three methods: random, info-gain and smo-bart. random involves
picking an active set of size M randomly from among training data. info-gain is their own
greedy subset selection method, which is extremely cheap to train, barely more expensive than random. smo-bart is Smola and Bartlett's [1] more expensive greedy subset selection
method. Also shown with horizontal lines is the test error for a full GP trained on a subset
of the data of size 2000 for data set kin-40k and 1024 for pumadyn-32nm. For these learning
curves, they do not actually learn hyperparameters by maximizing their approximation to
the marginal likelihood (10). Instead they fix them to those obtained from the full GP2 .
For kin-40k we follow Seeger et al.'s procedure of setting the hyperparameters from the full
GP on a subset. We then optimize the pseudo-input positions, and plot the results as red
squares. We see the SPGP learning curve lying significantly below all three other methods
in Figure 3. We rapidly approach the error of a full GP trained on 2000 points, using a
pseudo set of only a few hundred points. We then try the harder task of also finding the
hyperparameters at the same time as the pseudo-inputs. The results are plotted as blue
circles. The method performs extremely well for small M , but we see some overfitting
1
kin-40k: 10000 training, 30000 test, 9 attributes, see www.igi.tugraz.at/aschwaig/data.html.
pumadyn-32nm: 7168 training, 1024 test, 33 attributes, see www.cs.toronto.edu/~delve.
2
Seeger et al. have a separate section testing their likelihood approximation (10) to learn hyperparameters, in conjunction with the active set selection methods. They show that it can be used to
reliably learn hyperparameters with info-gain for active set sizes of 100 and above. They have more
trouble reliably learning hyperparameters for very small active sets.
Figure 4: Regression on a data set with input dependent noise. Left: standard GP. Right: SPGP. Predictive mean and two standard deviation lines are shown. Crosses show final locations of pseudo-inputs for SPGP. Hyperparameters are also learnt.
behaviour for large M which seems to be caused by the noise hyperparameter being driven
too small (the blue circles have higher likelihood than the red squares below them).
For data set pumadyn-32nm, we again try to jointly find hyperparameters and pseudo-inputs. Again Figure 3 shows SPGP with extremely low error for small pseudo set size: with just 10 pseudo-inputs we are already close to the error of a full GP trained on 1024
points. However, in this case increasing the pseudo set size does not decrease our error. In
this problem there is a large number of irrelevant attributes, and the relevant ones need to
be singled out by ARD. Although the hyperparameters learnt by our method are reasonable
(2 out of the 4 relevant dimensions are found), they are not good enough to get down to the
error of the full GP. However if we initialize our gradient algorithm with the hyperparameters of the full GP, we get the points plotted as squares (this time red likelihoods > blue
likelihoods, so it is a problem of local optima not overfitting). Now with only a pseudo set
of size 25 we reach the performance of the full GP, and significantly outperform the other
methods (which also had their hyperparameters set from the full GP).
Another main difference between the methods lies in training time. Our method performs
optimization over a potentially large parameter space, and hence is relatively expensive to
train. On the face of it methods such as info-gain and random are extremely cheap. However all these methods must be combined with obtaining hyperparameters in some way: either by a full GP on a subset (generally expensive), or by gradient ascent on an approximation to the likelihood. When you consider this combined task, and that all methods
involve some kind of gradient based procedure, then none of the methods are particularly
cheap. We believe that the gain in accuracy achieved by our method can often be worth the
extra training time associated with optimizing in a larger parameter space.
5 Conclusions, extensions and future work
Although GPs are very flexible regression models, they are still limited by the form of the
covariance function. For example it is difficult to model non-stationary processes with a GP
because it is hard to construct sensible non-stationary covariance functions. Although the
SPGP is not specifically designed to model non-stationarity, the extra flexibility associated
with moving pseudo inputs around can actually achieve this to a certain extent. Figure
4 shows the SPGP fit to some data with an input dependent noise variance. The SPGP
achieves a much better fit to the data than the standard GP by moving almost all the pseudo-input points outside the region of data3. It will be interesting to test these capabilities further
in the future. The extension to classification is also a natural avenue to explore.
We have demonstrated a significant decrease in test error over the other methods for a given
small pseudo/active set size. Our method runs into problems when we consider much larger
3
It should be said that there are local optima in this problem, and other solutions looked closer
to the standard GP. We ran the method 5 times with random initialisations. All runs had higher
likelihood than the GP; the one with the highest likelihood is plotted.
pseudo set size and/or high dimensional input spaces, because the space in which we are
optimizing becomes impractically big. However we have currently only tried using an 'off-the-shelf' conjugate gradient minimizer, or L-BFGS, and there are certainly improvements
that can be made in this area. For example we can try optimizing subsets of variables
iteratively (chunking), or stochastic gradient ascent, or we could make a hybrid by picking
some points randomly and optimizing others. In general though we consider our method
most useful when one wants a very sparse (hence fast prediction) and accurate solution.
One further way in which to deal with large D is to learn a low dimensional projection of
the input space. This has been considered for GPs before [14], and could easily be applied
to our model.
In conclusion, we have presented a new method for sparse GP regression, which shows
a significant performance gain over other methods especially when searching for an extremely sparse solution. We have shown that the added flexibility of moving pseudo-input
points which are not constrained to lie on the true data points leads to better solutions, and
even some non-stationary effects can be modelled. Finally we have shown that hyperparameters can be jointly learned with pseudo-input points with reasonable success.
Acknowledgements
Thanks to the authors of [7] for agreeing to make their results and plots available for reproduction. Thanks to all at the Sheffield GP workshop for helping to clarify this work.
References
[1] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural
Information Processing Systems 13. MIT Press, 2000.
[2] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
[3] V. Tresp. A Bayesian committee machine. Neural Computation, 12:2719-2741, 2000.
[4] L. Csató. Sparse online Gaussian processes. Neural Computation, 14:641-668, 2002.
[5] L. Csató. Gaussian Processes - Iterative Sparse Approximations. PhD thesis, Aston University, UK, 2002.
[6] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the
informative vector machine. In Advances in Neural Information Processing Systems 15. MIT
Press, 2002.
[7] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse
Gaussian process regression. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth
International Workshop on Artificial Intelligence and Statistics, 2003.
[8] M. Seeger. Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds
and Sparse Approximations. PhD thesis, University of Edinburgh, 2003.
[9] J. Quiñonero Candela. Learning with Uncertainty - Gaussian Processes and Relevance Vector Machines. PhD thesis, Technical University of Denmark, 2004.
[10] D. J. C. MacKay. Introduction to Gaussian processes. In C. M. Bishop, editor, Neural Networks and Machine Learning, NATO ASI Series, pages 133-166. Kluwer Academic Press, 1998.
[11] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In Advances in
Neural Information Processing Systems 8. MIT Press, 1996.
[12] C. E. Rasmussen. Evaluation of Gaussian Processes and Other Methods for Non-Linear Regression. PhD thesis, University of Toronto, 1996.
[13] M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis,
Cambridge University, 1997.
[14] F. Vivarelli and C. K. I. Williams. Discovering hidden features with Gaussian processes regression. In Advances in Neural Information Processing Systems 11. MIT Press, 1998.
Integrate-and-Fire models with adaptation are
good enough: predicting spike times under
random current injection
Renaud Jolivet*
Brain Mind Institute, EPFL
CH-1015 Lausanne, Switzerland
[email protected]

Alexander Rauch
MPI for Biological Cybernetics
D-72012 Tübingen, Germany
[email protected]

Hans-Rudolf Lüscher
Institute of Physiology
CH-3012 Bern, Switzerland
[email protected]

Wulfram Gerstner
Brain Mind Institute, EPFL
CH-1015 Lausanne, Switzerland
[email protected]
Abstract
Integrate-and-Fire-type models are usually criticized because of their
simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here,
we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of
cortical pyramidal neurons. We find that the resulting effective model
is sufficient to predict the spike train of the real pyramidal neuron with
high accuracy. In in vivo-like regimes, predicted and recorded traces are
almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary
for the model to connect between different driving regimes.
1 Introduction
In a recent paper, Feng [1] was questioning the 'goodness' of the Integrate-and-Fire model
(I&F). This is a question of importance since the I&F model is one of the most commonly
used spiking neuron model in theoretical studies as well as in the machine learning community (see [2-3] for a review). The I&F model is usually criticized in the biological
community because of its simplicity. It is believed to be much too simple to capture the
firing dynamics of real neurons beyond a very rough and conceptual description of input
integration and spikes initiation.
Nevertheless, recent years have seen several groups reporting that this type of model yields
quantitative predictions of the activity of real neurons. Rauch and colleagues have shown
that I&F-type models (with adaptation) reliably predict the mean firing rate of cortical
* homepage: http://icwww.epfl.ch/~rjolivet
pyramidal cells [4]. Keat and colleagues have shown that a similar model is able to predict
almost exactly the timing of spikes of neurons in the visual pathway [5]. However, the
question is still open of how the predictions of I&F-type models compare to the precise
structure of spike trains in the cortex. Indeed, cortical pyramidal neurons are known to
produce spike trains whose reliability highly depends on the input scenario [6].
The aim of this paper is twofold. Firstly, we will show that there exists a systematic way
to extract relevant parameters of an I&F-type model from intracellular recordings. To do
so, we will follow the method exposed in [7], which is based on optimal filtering techniques. Alternative approaches like maximum-likelihood methods exist and have been explored recently by Paninski and colleagues [8]. Note that both approaches had already been
mentioned by Brillinger and Segundo [9]. Secondly, we will show by a quantitative evaluation of the model performances that the quality of simple threshold models is surprisingly
good and is close to the intrinsic reliability of real neurons. We will try to convince the
reader that, given the addition of a slow process, the I&F model is in fact a model that can
be considered good enough for pyramidal neurons of the neocortex under random current
injection.
2 Model and Methods
We started by collecting recordings. Layer 5 pyramidal neurons of the rat neocortex were
recorded intracellularly in vitro while stimulated at the soma by a randomly fluctuating current generated by an Ornstein-Uhlenbeck (OU) process with a 1 ms autocorrelation time.
Both the mean μ_I and the variance σ_I² of the OU process were varied in order to sample the
response of the neurons to various levels of tonic and noisy inputs. Details of the experimental procedure can be found in [4]. A subset of these recordings was used to construct,
separately for each recorded neuron, a generalized I&F-type model that we formulated in
the framework of the Spike Response Model [3].
2.1 Definition of the model
The Spike Response Model (SRM) is written
u(t) = η(t − t̂) + ∫_0^∞ κ(s) I(t − s) ds   (1)
with u the membrane voltage of the neuron and I the external driving current. The kernel κ models the integrative properties of the membrane. The kernel η acts as a template for the shape of spikes (usually highly stereotyped). Like in the I&F model, the model neuron fires each time that the membrane voltage u crosses the threshold θ from below
if u(t) ≥ θ(t) and (d/dt) u(t) ≥ (d/dt) θ(t), then t̂ = t   (2)
Here, the threshold includes a mechanism of spike-frequency adaptation. θ is given by the
following equation
dθ/dt = −(θ − θ₀)/τ_θ + A_θ Σ_k δ(t − t_k)   (3)
Each time that a spike is fired, the threshold θ is increased by a fixed amount A_θ. It then decays back to its resting value θ₀ with time constant τ_θ. t_k denote the past firing times of
the model neuron. During discharge at rate f , the threshold fluctuates around the average
value
θ̄ ≈ θ₀ + α f   (4)
where α = A_θ τ_θ. This type of adaptation mechanism has been shown to constitute a universal model for spike-frequency adaptation [10] and has already been applied in a similar context [11]. During the model estimation, we use as a first step a traditional constant threshold denoted by θ(t) = θ_cst which is then transformed in the adaptive threshold of Equation (3) by a procedure to be detailed below.
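Because the reset term in Equation (3) is a sum of δ impulses, the threshold between spikes is simply θ₀ plus one decaying exponential A_θ exp(−(t − t_k)/τ_θ) per past spike. The closed-form sketch below is our own illustration; all parameter values are made up.

```python
import numpy as np

def adaptive_threshold(spike_times, theta0, A_theta, tau_theta, t_grid):
    """Solution of Eq. (3): theta(t) = theta0 + A_theta * sum_k exp(-(t - t_k)/tau_theta)."""
    theta = np.full_like(t_grid, theta0, dtype=float)
    for tk in spike_times:
        # each past spike contributes a jump A_theta that decays with time constant tau_theta
        theta += np.where(t_grid >= tk,
                          A_theta * np.exp(-(t_grid - tk) / tau_theta), 0.0)
    return theta

t = np.arange(0.0, 501.0)  # time grid in ms
theta = adaptive_threshold([100.0], theta0=-50.0, A_theta=2.0, tau_theta=50.0, t_grid=t)
```

During sustained firing at rate f, the time average of these exponentials is A_θ τ_θ f, which recovers Equation (4).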
2.2 Mapping technique
The mapping technique itself is extensively described in [7,12-13] and we refer interested
readers to these publications. In short, it is a systematic step-by-step evaluation and optimization procedure based on intracellular recordings. It consists in sequentially evaluating
kernels (η and κ) and parameters [A_θ, θ₀ and τ_θ in Equation (3)] that characterize a specific instance of the model. The consecutive steps of the procedure are as follows
1. Extract the kernel η from a sample voltage recording by spike triggered averaging. For the sake of simplicity, we assume that the mean drive μ_I = 0.
2. Subtract η from the voltage recording to isolate the subthreshold fluctuations.
3. Extract the kernel κ by the Wiener-Hopf optimal filtering technique [7,14]. This step involves a comparison between the subthreshold fluctuations and the corresponding input current.
4. Find the optimal constant threshold θ_cst. The optimal value of θ_cst is the one that maximizes the coefficient Γ (see subsection 2.3 below for the definition of Γ). The parameter θ_cst depends on the specific set of input parameters (mean μ_I and variance σ_I²) used during stimulation.
5. Plot the threshold θ_cst as a function of the firing frequency f of the neuron and run a linear regression. θ₀ is identified with the value of the fit at f = 0 and α with the slope [see Equation (4) and Figure 1C].
6. Optimize A_θ for the best performances (again measured with Γ); τ_θ is defined as τ_θ = α/A_θ.
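Steps 1 and 3 above can be sketched in a few lines of code. The snippet below is our own schematic, not the authors' implementation: step 1 as a plain spike-triggered average, and step 3 recast as a discrete least-squares FIR filter estimate, a standard finite-sample stand-in for the Wiener-Hopf solution. Window lengths and sampling are arbitrary choices.

```python
import numpy as np

def spike_triggered_average(u, spike_idx, pre=50, post=150):
    """Step 1: estimate eta by averaging the voltage trace around each spike."""
    wins = [u[i - pre:i + post] for i in spike_idx if pre <= i <= len(u) - post]
    return np.mean(wins, axis=0)

def estimate_kappa(u_sub, I, L=100):
    """Step 3: least-squares filter kappa so that (kappa * I)(t) fits u_sub(t)."""
    T = len(I)
    # design matrix of lagged inputs: column k holds the current shifted by k samples
    A = np.column_stack([np.concatenate([np.zeros(k), I[:T - k]]) for k in range(L)])
    kappa, *_ = np.linalg.lstsq(A, u_sub, rcond=None)
    return kappa
```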
Figure 1A and B show kernels η (step 1) and κ (step 3) for a typical neuron. The double exponential shape of κ is due to the coupling between somatic and dendritic compartments [15]. Figure 1C shows the optimal constant θ_cst plotted versus f. It is very well fitted by a simple linear function and allows to determine the parameters θ₀ and α (steps 4 and 5).
2.3 Evaluation of performances
The performances of the model are evaluated with the coincidence factor Γ [16]. It is
defined by
Γ = (1/N) · (N_coinc − ⟨N_coinc⟩) / (½ (N_data + N_SRM))   (5)
where N_data is the number of spikes in the reference spike train, N_SRM is the number of spikes in the predicted spike train S_SRM, N_coinc is the number of coincidences with precision Δ between the two spike trains, and ⟨N_coinc⟩ = 2νΔN_data is the expected number of coincidences generated by a homogeneous Poisson process with the same rate ν as the spike train S_SRM. The factor N = 1 − 2νΔ normalizes Γ to a maximum value Γ = 1 which is reached if and only if the spike train of the SRM reproduces exactly that of the cell. A homogeneous Poisson process with the same number of spikes as the SRM would yield Γ = 0. We compute the coincidence factor Γ by comparing the two complete spike trains as in [7]. Throughout the paper, we use Δ = 2 ms. Results do depend on Δ but the exact value of Δ is not critical as long as it is chosen in a reasonable range 1 ≤ Δ ≤ 4 ms [17]. The coincidence factor Γ is similar to the 'reliability' as defined in [6]. All measures of Γ
[Figure 1 plot area: A, η (mV) versus time (msec); B, κ (GΩ s⁻¹) versus time (msec); C, θ_cst (mV) versus f (Hz), linear fit with R² = 0.93, p < 0.0001.]
Figure 1: Kernels η (A) and κ (B) as extracted by the method exposed in this paper. Raw data (symbols) and fit by double exponential functions (solid line). C. The optimal constant threshold θ_cst is plotted versus the output frequency f (symbols). It is very neatly fitted by a linear function (line).
reported in this paper are given for new stimuli, independent of those used for parameter
optimization during the model estimation procedure.
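For reference, the coincidence factor of Equation (5) can be computed as follows. This is our own sketch: the greedy one-to-one matching of spikes within ±Δ is our reading of the definition, and the original treatments [16,17] should be consulted for edge cases.

```python
def coincidence_factor(t_data, t_model, delta=2.0, T=1000.0):
    """Gamma of Eq. (5); t_data, t_model are sorted spike times (ms), T the duration (ms)."""
    i = j = n_coinc = 0
    while i < len(t_data) and j < len(t_model):
        if abs(t_data[i] - t_model[j]) <= delta:   # coincidence within precision delta
            n_coinc += 1
            i += 1
            j += 1
        elif t_data[i] < t_model[j]:
            i += 1
        else:
            j += 1
    nu = len(t_model) / T                          # rate of the model spike train
    expected = 2.0 * nu * delta * len(t_data)      # <N_coinc> for a Poisson train
    norm = 1.0 - 2.0 * nu * delta                  # the factor N
    return (n_coinc - expected) / (0.5 * (len(t_data) + len(t_model)) * norm)
```

With this definition, an identical pair of spike trains gives Γ = 1, and an unrelated train gives a value near (or below) zero.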
3 Results
Figure 2 shows a direct comparison between predicted and recorded spike train for a typical
neuron. Both spike trains are almost indistinguishable (A). Even when zooming on the
subthreshold regime, differences are in the range of a few millivolts only (B). The spike
dynamics is correctly predicted apart from a short period of time just after a spike is emitted
(C). This is due to the fact that the kernel κ was extracted for a mean drive μ_I = 0. Here, the mean is much larger than 0 and the neuron has already adapted to this new regime. It produces slightly different after-spike effects. This can be corrected easily in our framework by taking a time-dependent time constant in the kernel κ, i.e. κ(s) → κ(t − t̂, s). This dependence is of importance to account for spike-to-spike interactions [18]. The mapping procedure discussed above allows, in principle, to compute κ(t − t̂, s) for any t − t̂ (see [7] for further details). However, it requires longer recordings than the
ones provided by our experiments and was dropped here.
Before moving to a quantitative estimate of the quality of the predictions of our model, we
need to understand what kind of limits are imposed on predictions by the modelled neurons themselves. It is well known that pyramidal neurons of the cortex respond with very
different reliability depending on the type of stimulation they receive [6]. Neurons tend to
fire regularly but without conserving the exact timing of spikes in response to constant or
quasi constant input current. On the other hand, they fire irregularly but reliably in terms of
spike timing in response to fluctuating current. We do not expect our model to yield better
predictions than the intrinsic reliability of the modelled neuron. To evaluate the intrinsic
reliability of the pyramidal neurons, we repeated injection of the same OU process, i.e. injection of processes with the same seed, and computed ? between the repeated spike trains
obtained in response to this procedure. Figure 3A shows a surface plot of the intrinsic reliability ?n?n of a typical neuron (the subscript n ? n is written for neuron to itself). It is
plotted versus the parameters of the stimulation, the current mean drive ?I and its standard
deviation ?I . We find that the mean drive ?I has almost no impact on ?n?n (measured
cross-correlation coefficient r = 0.04 with a p-value p = 0.81). On the other hand, ? I
has a strong impact on the reliability of the neuron (r = 0.93 with p < 10?4 ). When
?I is large (?I & 300 pA), ?n?n reaches a plateau at about 0.84 ? 0.05 (mean ? s.d.).
[Figure 2 graphic: membrane voltage (mV) versus time (msec), panels A, B, and C.]
Figure 2: Performances of the SRM constructed by the method presented in this paper. A.
The prediction of the model (black line) is compared to the spike train of the corresponding
neuron (thick grey line). B. Zoom on the subthreshold regime. This panel corresponds
to the first dotted zone in A (horizontal bar is 5 ms; vertical bar is 5 mV) C. Zoom on a
correctly predicted spike. This panel corresponds to the second dotted zone in A (horizontal
bar is 1 ms; vertical bar is 20 mV). The model slightly undershoots during about 4 ms after
the spike (see text for further details).
When σI decreases to 100 pA ≤ σI ≤ 300 pA, Γn−n quickly drops to an intermediate value of 0.65 ± 0.1, and finally for σI ≤ 100 pA it drops down to 0.09 ± 0.05. These findings are stable across the different neurons that we recorded and repeat the findings of Mainen and Sejnowski [6].
In order to connect model predictions to these findings, we evaluate the Γ coincidence factor between the predicted spike train and the recorded spike trains (this Γ is labelled m−n for model to neuron). Figure 3B shows a plot of Γm−n versus Γn−n. We find that the predictions of our minimal model are close to the natural upper bound set by the intrinsic reliability of the pyramidal neuron. On average, the minimal model achieves a quality Γm−n which is 65% (±3% s.e.m.) of the upper bound, i.e. Γm−n = 0.65 Γn−n. Furthermore, let us recall that due to the definition of the coincidence factor Γ, the threshold for statistical significance here is Γm−n = 0. All the points are well above this value, hence highly significant. Finally, we compare the predictions of our minimal model in terms of two other indicators, the mean rate and the coefficient of variation of the interspike interval distribution (Cv). The mean rate is usually correctly predicted by our minimal model (see Figure 3C), in agreement with the findings of Rauch and colleagues [4]. The Cv is predicted in the correct range as well but may vary due to missed or extra spikes added in the prediction (data not shown). It is also noteworthy that the available spike trains are not very long (a few seconds) and the number of spikes is sometimes too low to yield a reliable estimate of the Cv.
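The frozen-noise protocol used above (repeated injection of the same OU process, i.e. processes generated from the same seed) can be sketched as follows. The correlation time tau and the discretization step are illustrative choices, not values taken from the recordings.

```python
import numpy as np

def ou_current(mu_i, sigma_i, tau=1e-3, dt=1e-4, duration=1.0, seed=0):
    """Frozen-noise Ornstein-Uhlenbeck current: the same seed yields an
    identical stimulus across repeated injections. mu_i and sigma_i set the
    mean drive and its standard deviation (pA); tau is the OU correlation
    time (illustrative value)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    i = np.empty(n)
    i[0] = mu_i
    a = np.exp(-dt / tau)
    # Exact discrete-time update of the stationary OU process.
    noise_sd = sigma_i * np.sqrt(1.0 - a * a)
    for k in range(1, n):
        i[k] = mu_i + a * (i[k - 1] - mu_i) + noise_sd * rng.standard_normal()
    return i
```

Injecting `ou_current(mu_i, sigma_i, seed=s)` twice with the same `s` reproduces the stimulus exactly, which is what makes the repeated-trial comparison of spike trains (and hence Γn−n) well defined.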
[Figure 3 graphic: panel A, surface plot of Γn−n versus μI and σI; panel B, Γm−n versus Γn−n, R² = 0.68, p < 0.0002; panel C, predicted rate (Hz) versus actual rate (Hz), R² = 0.96, p < 0.0001; panel D, R² = 0.81, p < 0.0001.]
Figure 3: Quantitative performances of the model. A. Intrinsic reliability Γn−n of a typical pyramidal neuron as a function of the mean drive μI and its standard deviation σI. B. Performances of the SRM in correct spike timing prediction, Γm−n, plotted versus the cells' intrinsic reliability Γn−n (symbols) for the very same stimulation parameters. The diagonal line (solid) denotes the "natural" upper bound limit imposed by the neurons' intrinsic reliability. C. Predicted frequency versus actual frequency (symbols). D. Same as in A but in a model without adaptation, where the threshold has been optimized separately for each set of stimulation parameters (see text for further details).
Previous model studies had shown that a model with a threshold simpler than the one used
here is able to reliably predict the spike train of more detailed neuron models [7,12]. Here,
we used a threshold including an adaptation mechanism. Without adaptation, i.e. when
the sum over all preceding spikes in Equation (3) is replaced by the contribution of the
last emitted spike only, it is still possible to reach the same quality of predictions for each
driving regime (Figure 3D) under the condition that the three threshold parameters (Aϑ, ϑ0 and τϑ) are chosen differently for each set of input parameters μI and σI. In contrast to this,
our I&F model with adaptation achieves the same level of predictive quality (Figure 3B)
with one single set of threshold parameters. This illustrates the importance of adaptation to
I&F models or SRM.
4 Discussion
Mapping real neurons to simplified neuronal models has benefited from many developments in recent years [4-5,7-8,11-13,19-22] and was applied to both in vitro [4,9,13,22]
and in vivo recordings [5]. We have shown here that a simple estimation procedure allows us to build an equivalent I&F-type model for a collection of cortical neurons. The model neuron is built sequentially from intracellular recordings. The resulting model is very efficient
in the sense that it allows a quantitative and accurate prediction of the spike train of the real
neuron. Most of the time, the predicted subthreshold membrane voltage differs from the
recorded one by a few millivolts only. The mean firing rate of the minimal model corresponds to that of the real neuron. The statistical structure of the spike train is approximately
conserved since we observe that the coefficient of variation (Cv ) of the interspike interval
distribution is predicted in the correct range by our minimal model. But most important,
our minimal model has the ability to predict spikes with the correct timing (±2 ms) and the
level of prediction that is reached is close to the intrinsic reliability of the real neuron in
terms of spike timing [6]. The adapting threshold has been found to play an important role.
It allows the model to tune to variable input characteristics and to extend its predictions
beyond the input regimes used for model evaluation.
This work suggests that L5 neocortical pyramidal neurons under random current injection
behave very much like I&F neurons including a spike-frequency adaptation process. This
is a result of importance. Indeed, the I&F-type models are extremely popular in large scale
network studies. Our results can be viewed as a strong a posteriori justification to the use
of this class of model neurons. They also indicate that the picture of a neuron combining a
linear summation in the subthreshold regime with a threshold criterion for spike initiation
is good enough to account for much of the behavior in an in vivo-like lab setting. This
should however be moderated since several important aspects were neglected in this study.
First, we used random current injection rather than a more realistic random conductance
protocol [23]. In a previous report [12], we had checked the consequences of random
conductance injection with simulated data. We found that random conductance injection
mainly changes the effective membrane time constant of the neuron and can be accounted
for by making the time course of the optimal linear filter (κ here) depend on the mean input to the neuron. The minimal model reached the same quality level of predictions when
driven by random conductance injection [12] as the level it reaches when driven by random current injection [7]. Second, a largely fluctuating current generated by a random
process can only be seen as a poor approximation to the input a neuron would receive
in vivo. Our input has stationary statistics with a spectrum that is close to white (cut-off
at 1 kHz), but a lower cut-off frequency could be used as well. Whether random input
is a reasonable model of the input a neuron would receive in vivo is highly controversial
[24-26], but from a purely practical point of view random stimulation provides at least a
well-defined experimental paradigm for in vitro experiments that mimics some aspects of
synaptic bombardment [27]. Third, all transient effects have been excluded since neuronal
data is analyzed in the adapted state. Finally, our experimental paradigm used somatic
current injection. Thus, all dendritic non-linearities, including backpropagating action potentials and dendritic spikes are excluded.
In summary, simple threshold models will never be able to account for all the variety of neuronal responses that can be probed in an artificial laboratory setting. For example, effects
of delayed spike initiation cannot be reproduced by simple threshold models that combine
linear subthreshold behavior with a strict threshold criterion (but could be reproduced by
quadratic or exponential I&F models). For this reason, we are currently studying exponential I&F models with adaptation that allow us to relate our approach with other known
models [21,28]. However, for random current injection that mimics synaptic bombardment,
the picture of a neuron that combines linear summation with a threshold criterion is not too
wrong. Moreover, in contrast to more complicated neuron models, the simple threshold
model allows rapid parameter extraction from experimental traces; efficient numerical simulation; and rigorous mathematical analysis. Our results also suggest that, if any elaborated
computation is taking place in single neurons, it is likely to happen at dendritic level rather
than at somatic level. In absence of a clear understanding of dendritic computation, the
I&F neuron with adaptation thus appears as a model that we consider ?good enough?.
Acknowledgments
This work was supported by Swiss National Science Foundation grants number FN 200020103530/1 to WG and number 3100-061335.00 to HRL.
References
[1] Feng J. Neural Net. 14: 955–975, 2001.
[2] Maass W & Bishop C. Pulsed Neural Networks. MIT Press, Cambridge, 1998.
[3] Gerstner W & Kistler W. Spiking neuron models: single neurons, populations, plasticity. Cambridge Univ. Press, Cambridge, 2002.
[4] Rauch A, La Camera G, Lüscher H, Senn W & Fusi S. J. Neurophysiol. 90: 1598–1612, 2003.
[5] Keat J, Reinagel P, Reid R & Meister M. Neuron 30: 803–817, 2001.
[6] Mainen Z & Sejnowski T. Science 268: 1503–1506, 1995.
[7] Jolivet R, Lewis TJ & Gerstner W. J. Neurophysiol. 92: 959–976, 2004.
[8] Paninski L, Pillow J & Simoncelli E. Neural Comp. 16: 2533–2561, 2004.
[9] Brillinger D & Segundo J. Biol. Cyber. 35: 213–220, 1979.
[10] Benda J & Herz A. Neural Comp. 15: 2523–2564, 2003.
[11] La Camera G, Rauch A, Lüscher H, Senn W & Fusi S. Neural Comp. 16: 2101–2124, 2004.
[12] Jolivet R & Gerstner W. J. Physiol.-Paris 98: 442–451, 2004.
[13] Jolivet R, Rauch A, Lüscher H & Gerstner W. Accepted in J. Comp. Neuro.
[14] Wiener N. Nonlinear problems in random theory. MIT Press, Cambridge, 1958.
[15] Roth A & Häusser M. J. Physiol. 535: 445–472, 2001.
[16] Kistler W, Gerstner W & van Hemmen J. Neural Comp. 9: 1015–1045, 1997.
[17] Jolivet R (2005). Effective minimal threshold models of neuronal activity. PhD thesis, EPFL, Lausanne.
[18] Arcas B & Fairhall A. Neural Comp. 15: 1789–1807, 2003.
[19] Brillinger D. Ann. Biomed. Engineer. 16: 3–16, 1988.
[20] Arcas B, Fairhall A & Bialek W. Neural Comp. 15: 1715–1749, 2003.
[21] Izhikevich E. IEEE Trans. Neural Net. 14: 1569–1572, 2003.
[22] Paninski L, Pillow J & Simoncelli E. Neurocomp. 65-66: 379–385, 2005.
[23] Robinson H & Kawai N. J. Neurosci. Meth. 49: 157–165, 1993.
[24] Arieli A, Sterkin A, Grinvald A & Aertsen A. Science 273: 1868–1871, 1996.
[25] De Weese M & Zador A. J. Neurosci. 23: 7940–7949, 2003.
[26] Stevens C & Zador A. In Proc. of the 5th Joint Symp. on Neural Comp., Inst. for Neural Comp., La Jolla, 1998.
[27] Destexhe A, Rudolph M & Paré D. Nat. Rev. Neurosci. 4: 739–751, 2003.
[28] Fourcaud-Trocmé N, Hansel D, van Vreeswijk C & Brunel N. J. Neurosci. 23: 11628–11640, 2003.
Learning in Silicon: Timing is Everything
John V. Arthur and Kwabena Boahen
Department of Bioengineering
University of Pennsylvania
Philadelphia, PA 19104
{jarthur, boahen}@seas.upenn.edu
Abstract
We describe a neuromorphic chip that uses binary synapses with spike
timing-dependent plasticity (STDP) to learn stimulated patterns of activity and to compensate for variability in excitability. Specifically, STDP
preferentially potentiates (turns on) synapses that project from excitable
neurons, which spike early, to lethargic neurons, which spike late. The
additional excitatory synaptic current makes lethargic neurons spike earlier, thereby causing neurons that belong to the same pattern to spike in
synchrony. Once learned, an entire pattern can be recalled by stimulating
a subset.
1 Variability in Neural Systems
Evidence suggests precise spike timing is important in neural coding, specifically, in the
hippocampus. The hippocampus uses timing in the spike activity of place cells (in addition
to rate) to encode location in space [1]. Place cells employ a phase code: the timing at
which a neuron spikes relative to the phase of the inhibitory theta rhythm (5-12Hz) conveys
information. As an animal approaches a place cell's preferred location, the place cell not
only increases its spike rate, but also spikes at earlier phases in the theta cycle.
To implement a phase code, the theta rhythm is thought to prevent spiking until the input
synaptic current exceeds the sum of the neuron threshold and the decreasing inhibition on
the downward phase of the cycle [2]. However, even with identical inputs and common
theta inhibition, neurons do not spike in synchrony. Variability in excitability spreads the
activity in phase. Lethargic neurons (such as those with high thresholds) spike late in the
theta cycle, since their input exceeds the sum of the neuron threshold and theta inhibition
only after the theta inhibition has had time to decrease. Conversely, excitable neurons
(such as those with low thresholds) spike early in the theta cycle. Consequently, variability
in excitability translates into variability in timing.
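The mechanism above can be illustrated with a toy calculation: with a linearly decaying theta inhibition, a higher threshold delays the spike phase, and extra synaptic current advances it. All numbers here are illustrative, not measurements from the chip or the hippocampus.

```python
def spike_phase(threshold, i_syn, theta_peak=1.0, steps=1000):
    """Phase (0..1) in the theta cycle at which a neuron first spikes.
    The neuron fires once i_syn exceeds threshold plus a linearly decaying
    theta inhibition (toy model of the downward phase of the cycle)."""
    for k in range(steps):
        phase = k / steps
        inhibition = theta_peak * (1.0 - phase)
        if i_syn > threshold + inhibition:
            return phase
    return None  # never spikes this cycle

# A lethargic (high-threshold) neuron spikes later than an excitable one...
late = spike_phase(threshold=0.5, i_syn=0.8)
early = spike_phase(threshold=0.2, i_syn=0.8)
# ...but additional potentiated synaptic input advances its phase.
compensated = spike_phase(threshold=0.5, i_syn=1.1)
```

Here the high-threshold neuron driven harder (`compensated`) fires at the same phase as the low-threshold neuron, which is exactly the compensation PEP is meant to provide.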
We hypothesize that the hippocampus achieves its precise spike timing (about 10ms)
through plasticity enhanced phase-coding (PEP). The source of hippocampal timing precision in the presence of variability (and noise) remains unexplained. Synaptic plasticity can
compensate for variability in excitability if it increases excitatory synaptic input to neurons
in inverse proportion to their excitabilities. Recasting this in a phase-coding framework, we
desire a learning rule that increases excitatory synaptic input to neurons directly related to
their phases. Neurons that lag require additional synaptic input, whereas neurons that lead
[Figure 1 graphic: chip photomicrographs, scale bars 120 μm and 190 μm; panels A and B.]
Figure 1: STDP Chip. A The chip has a 16-by-16 array of microcircuits; one microcircuit
includes four principal neurons, each with 21 STDP circuits. B The STDP Chip is embedded in a circuit board including DACs, a CPLD, a RAM chip, and a USB chip, which
communicates with a PC.
require none. The spike timing-dependent plasticity (STDP) observed in the hippocampus
satisfies this requirement [3]. It requires repeated pre-before-post spike pairings (within a
time window) to potentiate and repeated post-before-pre pairings to depress a synapse.
Here we validate our hypothesis with a model implemented in silicon, where variability is
as ubiquitous as it is in biology [4]. Section 2 presents our silicon system, including the
STDP Chip. Section 3 describes and characterizes the STDP circuit. Section 4 demonstrates that PEP compensates for variability and provides evidence that STDP is the compensation mechanism. Section 5 explores a desirable consequence of PEP: unconventional
associative pattern recall. Section 6 discusses the implications of the PEP model, including
its benefits and applications in the engineering of neuromorphic systems and in the study
of neurobiology.
2 Silicon System
We have designed, submitted, and tested a silicon implementation of PEP. The STDP Chip
was fabricated through MOSIS in a 1P5M 0.25 μm CMOS process, with just under 750,000 transistors in just over 10 mm² of area. It has a 32 by 32 array of excitatory principal neurons commingled with a 16 by 16 array of inhibitory interneurons that are not used here
(Figure 1A). Each principal neuron has 21 STDP synapses. The address-event representation (AER) [5] is used to transmit spikes off chip and to receive afferent and recurrent spike
input.
To configure the STDP Chip as a recurrent network, we embedded it in a circuit board (Figure 1B). The board has five primary components: a CPLD (complex programmable logic
device), the STDP Chip, a RAM chip, a USB interface chip, and DACs (digital-to-analog
converters). The central component in the system is the CPLD. The CPLD handles AER
traffic, mediates communication between devices, and implements recurrent connections
by accessing a lookup table, stored in the RAM chip. The USB interface chip provides
a bidirectional link with a PC. The DACs control the analog biases in the system, including the leak current, which the PC varies in real-time to create the global inhibitory theta
rhythm.
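The recurrent-connection mechanism (the CPLD expanding each incoming spike address into its postsynaptic targets via the RAM lookup table) can be sketched in software. The table contents and the (target neuron, synapse index) encoding below are hypothetical, standing in for the RAM chip's actual format.

```python
# Hypothetical AER fan-out table: each presynaptic address maps to a list
# of (target_neuron, synapse_index) entries, as a RAM lookup would.
ram_table = {
    5:  [(12, 0), (13, 3)],   # neuron 5 projects to neurons 12 and 13
    12: [(5, 1)],             # neuron 12 projects back to neuron 5
}

def route_spike(address, table):
    """Return the outgoing AER events generated by one incoming spike."""
    return table.get(address, [])

events = route_spike(5, ram_table)  # events destined for neurons 12 and 13
```

In the real system this lookup runs in the CPLD on every address-event, so changing the stored table rewires the recurrent network without touching the chip.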
The principal neuron consists of a refractory period and calcium-dependent potassium circuit (RCK), a synapse circuit, and a soma circuit (Figure 2A). RCK and the synapse are
[Figure 2 graphic: panel A, principal neuron schematic (soma, synapse, STDP, RCK, and AH blocks, with PE and LPF elements, presynaptic and postsynaptic spike paths, ISOMA); panel B, spike raster and spike-probability histogram versus time (s).]
Figure 2: Principal neuron. A A simplified schematic is shown, including: the synapse,
refractory and calcium-dependent potassium channel (RCK), soma, and axon-hillock (AH)
circuits, plus their constituent elements, the pulse extender (PE) and the low-pass filter
(LPF). B Spikes (dots) from 81 principal neurons are temporally dispersed, when excited
by Poisson-like inputs (58Hz) and inhibited by the common 8.3Hz theta rhythm (solid line).
The histogram includes spikes from five theta cycles.
composed of two reusable blocks: the low-pass filter (LPF) and the pulse extender (PE).
The soma is a modified version of the LPF, which receives additional input from an axon-hillock circuit (AH).
RCK is inhibitory to the neuron. It consists of a PE, which models calcium influx during
a spike, and a LPF, which models calcium buffering. When AH fires a spike, a packet of
charge is dumped onto a capacitor in the PE. The PE's output activates until the charge decays away, which takes a few milliseconds. Also, while the PE is active, charge accumulates on the LPF's capacitor, lowering the LPF's output voltage. Once the PE deactivates, this charge leaks away as well, but this takes tens of milliseconds because the leak is smaller. The PE's and the LPF's inhibitory effects on the soma are both described below in terms of the sum (ISHUNT) of the currents their output voltages produce in pMOS transistors whose sources are at Vdd (see Figure 2A). Note that, in the absence of spikes, these
currents decay exponentially, with a time-constant determined by their respective leaks.
The synapse circuit is excitatory to the neuron. It is composed of a PE, which represents
the neurotransmitter released into the synaptic cleft, and a LPF, which represents the bound
neurotransmitter. The synapse circuit is similar to RCK in structure but differs in function:
It is activated not by the principal neuron itself but by the STDP circuits (or directly by
afferent spikes that bypass these circuits, i.e., fixed synapses). The synapse's effect on the soma is also described below in terms of the current (ISYN) its output voltage produces in a
pMOS transistor whose source is at Vdd.
The soma circuit is a leaky integrator. It receives excitation from the synapse circuit and
shunting inhibition from RCK and has a leak current as well. Its temporal behavior is
described by:
τ dISOMA/dt + ISOMA = ISYN I0 / ISHUNT
where ISOMA is the current the capacitor's voltage produces in a pMOS transistor whose source is at Vdd (see Figure 2A). ISHUNT is the sum of the leak, refractory, and calcium-dependent potassium currents. These currents also determine the time constant: τ = C Ut / (κ ISHUNT), where I0 and κ are transistor parameters and Ut is the thermal voltage.
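A forward-Euler sketch of this first-order equation is below. The lumped constant standing in for C·Ut/κ and the other parameter values are illustrative, not taken from the chip.

```python
import numpy as np

def simulate_soma(i_syn, i_shunt, i0=1.0, c_ut_over_kappa=1e-3, dt=1e-5):
    """Integrate tau * dI_soma/dt + I_soma = I_syn * I_0 / I_shunt,
    with tau = (C*Ut/kappa) / I_shunt, by forward Euler.
    i_syn and i_shunt are equal-length arrays of input currents over time."""
    i_soma = np.zeros(len(i_syn))
    for t in range(1, len(i_syn)):
        tau = c_ut_over_kappa / i_shunt[t]
        target = i_syn[t] * i0 / i_shunt[t]  # steady-state value
        i_soma[t] = i_soma[t - 1] + (dt / tau) * (target - i_soma[t - 1])
    return i_soma
```

Note how shunting enters twice, as in the equation: a larger ISHUNT both lowers the steady-state output and shortens the time constant.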
[Figure 3 graphic: panel A, STDP circuit schematic (decay, integrator, and SRAM subcircuits; presynaptic and postsynaptic spike inputs; ~LTP and ~LTD outputs); panel B, pairing efficacy (inverse number of pairings, potentiation and depression up to about 0.1) versus spike timing tpre − tpost (ms, −80 to 80).]
Figure 3: STDP circuit design and characterization. A The circuit is composed of three
subcircuits: decay, integrator, and SRAM. B The circuit potentiates when the presynaptic
spike precedes the postsynaptic spike and depresses when the postsynaptic spike precedes
the presynaptic spike.
The soma circuit is connected to an AH, the locus of spike generation. The AH consists
of model voltage-dependent sodium and potassium channel populations (modified from [6]
by Kai Hynna). It initiates the AER signaling process required to send a spike off chip.
To characterize principal neuron variability, we excited 81 neurons with Poisson-like 58Hz spike trains (Figure 2B). We made these spike trains Poisson-like by starting with a regular
200Hz spike train and dropping spikes randomly, with probability of 0.71. Thus spikes
were delivered to neurons that won the coin toss in synchrony every 5ms. However, neurons
did not lock onto the input synchrony due to filtering by the synaptic time constant (see
Figure 2B). They also received a common inhibitory input at the theta frequency (8.3Hz),
via their leak current. Each neuron was prevented from firing more than one spike in a theta
cycle by its model calcium-dependent potassium channel population.
The principal neurons' spike times were variable. To quantify the spike variability, we used
timing precision, which we define as twice the standard deviation of spike times accumulated from five theta cycles. With an input rate of 58Hz the timing precision was 34ms.
3 STDP Circuit
The STDP circuit (related to [7]-[8]), for which the STDP Chip is named, is the most
abundant, with 21,504 copies on the chip. This circuit is built from three subcircuits:
decay, integrator, and SRAM (Figure 3A). The decay and integrator are used to implement
potentiation, and depression, in a symmetric fashion. The SRAM holds the current binary
state of the synapse, either potentiated or depressed.
For potentiation, the decay remembers the last presynaptic spike. Its capacitor is charged
when that spike occurs and discharges linearly thereafter. A postsynaptic spike samples the
charge remaining on the capacitor, passes it through an exponential function, and dumps
the resultant charge into the integrator. This charge decays linearly thereafter. At the time
of the postsynaptic spike, the SRAM, a cross-coupled inverter pair, reads the voltage on the
integrator's capacitor. If it exceeds a threshold, the SRAM switches state from depressed to potentiated (~LTD goes high and ~LTP goes low). The depression side of the STDP circuit is exactly symmetric, except that it responds to postsynaptic activation followed by presynaptic activation and switches the SRAM's state from potentiated to depressed (~LTP goes high and ~LTD goes low). When the SRAM is in the potentiated state, the presynaptic
[Figure 4 graphic: panel A, spike rasters versus time (s) before and after STDP at input rates 50, 58, 67, 75, 83, 92, and 100 Hz; panel B, timing precision (ms) versus input rate (Hz) before and after STDP; panel C, connectivity diagram.]
Figure 4: Plasticity enhanced phase-coding. A Spike rasters of 81 neurons (9 by 9 cluster)
display synchrony over a two-fold range of input rates after STDP. B The degree of enhancement is quantified by timing precision. C Each neuron (center box) sends synapses to
(dark gray) and receives synapses from (light gray) twenty-one randomly chosen neighbors
up to five nodes away (black indicates both connections).
spike activates the principal neuron's synapse; otherwise the spike has no effect.
We characterized the STDP circuit by activating a plastic synapse and a fixed synapse (which elicits a spike) at different relative times. We repeated this pairing at 16Hz. We
counted the number of pairings required to potentiate (or depress) the synapse. Based
on this count, we calculated the efficacy of each pairing as the inverse number of pairings required (Figure 3B). For example, if twenty pairings were required to potentiate the
synapse, the efficacy of that pre-before-post time-interval was one twentieth. The efficacy
of both potentiation and depression are fit by exponentials with time constants of 11.4ms
and 94.9ms, respectively. This behavior is similar to that observed in the hippocampus:
potentiation has a shorter time constant and higher maximum efficacy than depression [3].
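These exponential fits can be written down directly. The time constants are from the text; the peak amplitudes A_POT and A_DEP below are placeholders, since only their ordering (potentiation's maximum efficacy exceeds depression's) is stated.

```python
import math

# Exponential fits to the pairing-efficacy data of Figure 3B. Time
# constants are from the text; the peak amplitudes are illustrative
# placeholders.
TAU_POT_MS = 11.4
TAU_DEP_MS = 94.9
A_POT = 0.25  # assumed peak potentiation efficacy (1 / pairings)
A_DEP = 0.10  # assumed peak depression efficacy

def efficacy(dt_ms):
    """Efficacy of a single pairing; dt_ms > 0 means pre precedes post."""
    if dt_ms >= 0:
        return A_POT * math.exp(-dt_ms / TAU_POT_MS)
    return A_DEP * math.exp(dt_ms / TAU_DEP_MS)

def pairings_required(dt_ms):
    """Expected number of pairings needed to switch the synaptic state."""
    return 1.0 / efficacy(dt_ms)
```

With these assumed amplitudes, a coincident pairing switches the state in four pairings, and the depression side decays far more slowly with the pairing interval than the potentiation side.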
4 Recurrent Network
We carried out an experiment designed to test the STDP circuit's ability to compensate for
variability in spike timing through PEP. Each neuron received recurrent connections from
21 randomly selected neurons within an 11 by 11 neighborhood centered on itself (see
Figure 4C). Conversely, it made recurrent connections to randomly chosen neurons within
the same neighborhood. These connections were mediated by STDP circuits, initialized to
the depressed state. We chose a 9 by 9 cluster of neurons and delivered spikes at a mean
rate of 50 to 100Hz to each one (dropping spikes with a probability of 0.75 to 0.5 from a
regular 200Hz train) and provided common theta inhibition as before.
We compared the variability in spike timing after five seconds of learning with the initial
distribution. Phase coding was enhanced after STDP (Figure 4A). Before STDP, spike
timing among neurons was highly variable (except for the very highest input rate). After
STDP, variability was virtually eliminated (except for the very lowest input rate). Initially,
the variability, characterized by timing precision, was inversely related to the input rate,
decreasing from 34 to 13ms. After five seconds of STDP, variability decreased and was
largely independent of input rate, remaining below 11ms.
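One plausible reading of the timing-precision measure (the text does not give its formula) is the circular standard deviation of spike phases within the theta cycle, converted back to milliseconds. Both the formula and the 125 ms theta period (8 Hz) below are assumptions.

```python
import math

def timing_precision_ms(spike_times_ms, theta_period_ms=125.0):
    """Circular standard deviation of spike phases within the theta
    cycle, converted back to milliseconds. A hedged reconstruction of
    the 'timing precision' measure; the theta period is assumed."""
    phases = [2 * math.pi * (t % theta_period_ms) / theta_period_ms
              for t in spike_times_ms]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    r = math.hypot(c, s)                     # mean resultant length
    circ_sd = math.sqrt(-2.0 * math.log(r))  # circular std, radians
    return circ_sd * theta_period_ms / (2 * math.pi)
```

Tightly clustered spike times give values well under the post-STDP 11 ms bound, while spikes spread across the cycle give values comparable to the 13-34 ms range seen before STDP.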
[Figure 5 here. Axes: potentiated synapses (0-25) versus spiking order (50-250).]
Figure 5: Compensating for variability. A Some synapses (dots) become potentiated (light)
while others remain depressed (dark) after STDP. B The number of potentiated synapses
neurons make (pluses) and receive (circles) is negatively (r = -0.71) and positively (r =
0.76) correlated to their rank in the spiking order, respectively.
Comparing the number of potentiated synapses each neuron made or received with its excitability confirmed the PEP hypothesis (i.e., leading neurons provide additional synaptic
current to lagging neurons via potentiated recurrent synapses). In this experiment, to eliminate variability due to noise (as opposed to excitability), we provided a 17 by 17 cluster
of neurons with a regular 200Hz excitatory input. Theta inhibition was present as before
and all synapses were initialized to the depressed state. After 10 seconds of STDP, a large
fraction of the synapses were potentiated (Figure 5A). When the number of potentiated
synapses each neuron made or received was plotted versus its rank in spiking order (Figure
5B), a clear correlation emerged (r = -0.71 or 0.76, respectively). As expected, neurons that
spiked early made more and received fewer potentiated synapses. In contrast, neurons that
spiked late made fewer and received more potentiated synapses.
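The reported values r = -0.71 and r = 0.76 are ordinary Pearson coefficients between each neuron's rank in the spiking order and the number of potentiated synapses it makes or receives; a minimal implementation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists,
    as used here to relate spiking-order rank to synapse counts."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

Applied to (rank, synapses-made) pairs it returns a negative value when early spikers make more connections, matching the trend in Figure 5B.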
5 Pattern Completion
After STDP, we found that the network could recall an entire pattern given a subset; thus
the same mechanisms that compensated for variability and noise could also compensate
for lack of information. We chose a 9 by 9 cluster of neurons as our pattern and delivered
a poisson-like spike train with mean rate of 67Hz to each one as in the first experiment.
Theta inhibition was present as before and all synapses were initialized to the depressed
state. Before STDP, we stimulated a subset of the pattern and only neurons in that subset
spiked (Figure 6A). After five seconds of STDP, we stimulated the same subset again. This
time they recruited spikes from other neurons in the pattern, completing it (Figure 6B).
Upon varying the fraction of the pattern presented, we found that the fraction recalled
increased faster than the fraction presented. We selected subsets of the original pattern
randomly, varying the fraction of neurons chosen from 0.1 to 1.0 (ten trials for each). We
classified neurons as active if they spiked in the two second period over which we recorded.
Thus, we characterized PEP's pattern-recall performance as a function of the probability
that the neurons of the pattern in question are activated (Figure 6C). At a fraction of 0.50 presented, nearly all of the neurons in the pattern are consistently activated (0.91±0.06), showing robust pattern completion. We fitted the recall performance with a sigmoid that reached
a 0.50 recall fraction at an input fraction of 0.30. No spurious neurons were activated during any trials.
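The fitted recall curve can be modeled as a logistic function with its midpoint at the reported input fraction of 0.30. The slope and the logistic form itself are assumptions; the text says only "a sigmoid".

```python
import math

def recall_fraction(stimulated, midpoint=0.30, slope=12.0):
    """Logistic model of pattern completion: fraction of the pattern
    recalled as a function of the fraction stimulated. The midpoint is
    the reported fit; the slope is an assumed value."""
    return 1.0 / (1.0 + math.exp(-slope * (stimulated - midpoint)))
```

With this assumed slope, stimulating half the pattern yields a recall fraction near 0.9, in the range of the measurement quoted above.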
[Figure 6 here. Panels A, B: network activity (rate, Hz) before and after STDP; panel C: fraction of pattern activated versus fraction of pattern stimulated.]
Figure 6: Associative recall. A Before STDP, half of the neurons in a pattern are stimulated;
only they are activated. B After STDP, half of the neurons in a pattern are stimulated, and
all are activated. C The fraction of the pattern activated grows faster than the fraction
stimulated.
6 Discussion
Our results demonstrate that PEP successfully compensates for graded variations in our silicon recurrent network using binary (on-off) synapses (in contrast with [8], where weights
are graded). While our chip results are encouraging, variability was not eliminated in every
case. In the case of the lowest input (50Hz), we see virtually no change (Figure 4A). We
suspect the timing remains imprecise because, with such low input, neurons do not spike
every theta cycle and, consequently, provide fewer opportunities for the STDP synapses to
potentiate. This shortfall illustrates the system's limits; it can only compensate for variability within certain bounds, and only for activity appropriate to the PEP model.
As expected, STDP is the mechanism responsible for PEP. STDP potentiated recurrent
synapses from leading neurons to lagging neurons, reducing the disparity among the diverse population of neurons. Even though the STDP circuits are themselves variable, with
different efficacies and time constants, when using timing the sign of the weight-change
is always correct (data not shown). For this reason, we chose STDP over other more
physiological implementations of plasticity, such as membrane-voltage-dependent plasticity (MVDP), which has the capability to learn with graded voltage signals [9], such as those
found in active dendrites, providing more computational power [10].
Previously, we investigated a MVDP circuit, which modeled a voltage-dependent NMDAreceptor-gated synapse [11]. It potentiated when the calcium current analog exceeded a
threshold, which was designed to occur only during a dendritic action potential. This circuit
produced behavior similar to STDP, implying it could be used in PEP. However, it was
sensitive to variability in the NMDA and potentiation thresholds, causing a fraction of the
population to potentiate anytime the synapse received an input and another fraction to never
potentiate, rendering both subpopulations useless. Therefore, the simpler, less biophysical
STDP circuit won out over the MVDP circuit: In our system timing is everything.
Associative storage and recall naturally emerge in the PEP network when synapses between
neurons coactivated by a pattern are potentiated. These synapses allow neurons to recruit
their peers when a subset of the pattern is presented, thereby completing the pattern. However, this form of pattern storage and completion differs from Hopfield's attractor model
[12]. Rather than forming symmetric, recurrent neuronal circuits, our recurrent network
forms asymmetric circuits in which neurons make connections exclusively to less excitable
neurons in the pattern. In both the poisson-like and regular cases (Figures 4 & 5), only
about six percent of potentiated connections were reciprocated, as expected by chance. We
plan to investigate the storage capacity of this asymmetric form of associative memory.
Our system lends itself to modeling brain regions that use precise spike timing, such as
the hippocampus. We plan to extend the work presented to store and recall sequences of
patterns, as the hippocampus is hypothesized to do. Place cells that represent different
locations spike at different phases of the theta cycle, in relation to the distance to their preferred locations. This sequential spiking will allow us to link patterns representing different
locations in the order those locations are visited, thereby realizing episodic memory.
We propose PEP as a candidate neural mechanism for information coding and storage in the
hippocampal system. Observations from the CA1 region of the hippocampus suggest that
basal dendrites (which primarily receive excitation from recurrent connections) support
submillisecond timing precision, consistent with PEP [13]. We have shown, in a silicon
model, PEP?s ability to exploit such fast recurrent connections to sharpen timing precision
as well as to associatively store and recall patterns.
Acknowledgments
We thank Joe Lin for assistance with chip generation. The Office of Naval Research funded
this work (Award No. N000140210468).
References
[1] O'Keefe J. & Recce M.L. (1993). Phase relationship between hippocampal place units and the
EEG theta rhythm. Hippocampus 3(3):317-330.
[2] Mehta M.R., Lee A.K. & Wilson M.A. (2002) Role of experience and oscillations in transforming
a rate code into a temporal code. Nature 417(6890):741-746.
[3] Bi G.Q. & Wang H.X. (2002) Temporal asymmetry in spike timing-dependent synaptic plasticity.
Physiology & Behavior 77:551-555.
[4] Rodriguez-Vazquez, A., Linan, G., Espejo S. & Dominguez-Castro R. (2003) Mismatch-induced
trade-offs and scalability of analog preprocessing visual microprocessor chips. Analog Integrated
Circuits and Signal Processing 37:73-83.
[5] Boahen K.A. (2000) Point-to-point connectivity between neuromorphic chips using address
events. IEEE Transactions on Circuits and Systems II 47:416-434.
[6] Culurciello E.R., Etienne-Cummings R. & Boahen K.A. (2003) A biomorphic digital image sensor. IEEE Journal of Solid State Circuits 38:281-294.
[7] Bofill A., Murray A.F. & Thompson D.P. (2002) Circuits for VLSI Implementation of Temporally
Asymmetric Hebbian Learning. In: Advances in Neural Information Processing Systems 14, MIT
Press, 2002.
[8] Cameron K., Boonsobhak V., Murray A. & Renshaw D. (2005) Spike timing dependent plasticity (STDP) can ameliorate process variations in neuromorphic VLSI. IEEE Transactions on Neural
Networks 16(6):1626-1627.
[9] Chicca E., Badoni D., Dante V., D'Andreagiovanni M., Salina G., Carota L., Fusi S. & Del Giudice P. (2003) A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses
with long-term memory. IEEE Transactions on Neural Networks 14(5):1297-1307.
[10] Poirazi P. & Mel B.W. (2001) Impact of active dendrites and structural plasticity on the memory
capacity of neural tissue. Neuron 29(3):779-796.
[11] Arthur J.V. & Boahen K. (2004) Recurrently connected silicon neurons with active dendrites for
one-shot learning. In: IEEE International Joint Conference on Neural Networks 3, pp.1699-1704.
[12] Hopfield J.J. (1984) Neurons with graded response have collective computational properties like
those of two-state neurons. Proceedings of the National Academy of Science 81(10):3088-3092.
[13] Ariav G., Polsky A. & Schiller J. (2003) Submillisecond precision of the input-output transformation function mediated by fast sodium dendritic spikes in basal dendrites of CA1 pyramidal
neurons. Journal of Neuroscience 23(21):7750-7758.
| 2859 |@word trial:2 version:1 hippocampus:9 proportion:1 mehta:1 pulse:2 excited:2 thereby:3 solid:2 shot:1 initial:1 efficacy:5 disparity:1 exclusively:1 current:14 comparing:1 activation:2 john:1 recasting:1 plasticity:10 hypothesize:1 designed:3 implying:1 half:2 selected:2 device:2 fewer:3 sram:8 realizing:1 renshaw:1 provides:2 characterization:1 node:1 location:6 simpler:1 five:7 become:1 pairing:8 consists:3 lagging:2 upenn:1 expected:3 behavior:4 themselves:1 integrator:7 brain:1 compensating:1 decreasing:2 encouraging:1 window:1 project:1 provided:2 circuit:37 lowest:2 recruit:1 ca1:2 transformation:1 fabricated:1 rck:7 temporal:3 every:3 charge:7 exactly:1 demonstrates:1 control:1 unit:1 before:12 engineering:1 timing:26 limit:1 consequence:1 accumulates:1 firing:1 black:1 plus:2 twice:1 chose:3 quantified:1 cpld:4 suggests:1 conversely:2 range:1 bi:1 hynna:1 acknowledgment:1 responsible:1 block:1 implement:3 differs:2 signaling:1 episodic:1 area:1 thought:1 physiology:1 postsyn:1 pre:4 imprecise:1 regular:4 subpopulation:1 suggest:1 onto:2 storage:4 charged:1 center:1 compensated:1 send:1 go:4 starting:1 thompson:1 chicca:1 rule:1 array:3 population:4 handle:1 variation:2 transmit:1 discharge:1 enhanced:3 us:2 hypothesis:2 pa:1 element:1 asymmetric:3 observed:2 role:1 wang:1 region:2 cycle:9 connected:3 decrease:1 highest:1 trade:1 boahen:5 accessing:1 leak:7 transforming:1 vdd:3 negatively:1 upon:1 joint:1 hopfield:2 chip:22 neurotransmitter:2 train:5 fast:2 describe:1 precedes:2 neighborhood:2 salina:1 pmos:3 whose:3 lag:1 kai:1 emerged:1 peer:1 otherwise:1 compensates:2 ability:2 itself:3 delivered:3 associative:4 sequence:1 transistor:5 biophysical:1 propose:1 causing:2 academy:1 deactivates:1 validate:1 scalability:1 constituent:1 potassium:5 cluster:4 requirement:1 enhancement:1 sea:1 produce:3 asymmetry:1 cmos:1 recurrent:14 completion:3 received:7 dumped:1 implemented:1 quantify:1 correct:1 filter:2 centered:1 packet:1 everything:2 extender:2 
NEURAL NETWORK VISUALIZATION
Jakub Wejchert
Gerald Tesauro
IBM Research
T.J. Watson Research
Center
Yorktown Heights
NY 10598
ABSTRACT
We have developed graphics to visualize static and dynamic information in layered neural network learning systems. Emphasis was
placed on creating new visuals that make use of spatial arrangements, size information, animation and color. We applied these
tools to the study of back-propagation learning of simple Boolean
predicates, and have obtained new insights into the dynamics of
the learning process.
1 INTRODUCTION
Although neural network learning systems are being widely investigated by many
researchers via computer simulations, the graphical display of information in these
simulations has received relatively little attention. In other fields such as fluid
dynamics and chaos theory, the development of "scientific visualization" techniques
(1,3) have proven to be a tremendously useful aid to research, development, and
education. Similar benefits should result from the application of these techniques
to neural networks research.
In this article, several visualization methods are introduced to investigate learning
in neural networks which use the back-propagation algorithm. A multi-window
environment is used that allows different aspects of the simulation to be displayed
simultaneously in each window.
As an application, the toolkit is used to study small networks learning Boolean
functions. The animations are used to observe the emerging structure of connection
strengths, to study the temporal behaviour, and to understand the relationships and
effects of parameters. The simulations and graphics can run at real-time speeds.
2 VISUAL REPRESENTATIONS
First, we introduce our techniques for representing both the instantaneous dynamics
of the learning process, and the full temporal trajectory of the network during the
course of one or more learning runs.
2.1 The Bond Diagram
In the first of these diagrams, the geometrical structure of a connected network is
used as a basis for the representation. As it is of interest to try to see how the
internal configuration of weights relates to the problem the network is learning, it is
clearly worthwhile to have a graphical representation that explicitly includes weight
information integrated with network topology. This differs from "Hinton diagrams"
(2), in which data may only be indirectly related to the network structure. In our
representation nodes are represented by circles, the area of which are proportional
to the threshold values. Triangles or lines are used to represent the weights or their
rate of change. The triangles or line segments emanate from the nodes and point
toward the connecting nodes. Their lengths indicate the magnitude of the weight
or weight derivative. We call this the "bond diagram".
In this diagram, one can look at any node and clearly see the magnitude of the
weights feeding into and out of it. Also, a sense of direction is built into the picture
since the bonds point to the node that they are connected to. Further, the collection
of weights form distinct patterns that can be easily perceived, so that one can also
infer global information from the overall patterns formed.
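The geometry of the bond diagram follows directly from the description above: disc area proportional to the threshold, bond length proportional to the weight magnitude, direction toward the connected node. A sketch of the two mappings; the scale factors are arbitrary display choices, not values from the paper.

```python
import math

def node_radius(threshold, area_scale=1.0):
    """Disc radius chosen so that disc area is proportional to the
    unit's threshold (area_scale is an arbitrary display constant)."""
    return math.sqrt(area_scale * abs(threshold) / math.pi)

def bond_endpoint(src_xy, dst_xy, weight, length_per_unit=0.1):
    """Far end of the bond drawn from src toward the connected node at
    dst, with length proportional to |weight| (the sign would select
    the color in the actual display)."""
    (x0, y0), (x1, y1) = src_xy, dst_xy
    d = math.hypot(x1 - x0, y1 - y0)
    frac = length_per_unit * abs(weight) / d
    return (x0 + frac * (x1 - x0), y0 + frac * (y1 - y0))
```

Any plotting library can then draw the disc at each node and a segment (or triangle) from the node to the computed endpoint for each weight.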
2.2 The Trajectory Diagram
A further limitation of Hinton diagrams is that they provide a relatively poor representation of dynamic information. Therefore, to understand more about the dynamics of learning we introduce another visual tool that gives a two-dimensional
projection of the weight space of the network. This represents the learning process as a trajectory in a reduced dimensional space. By representing the value of
the error function as the color of the point in weight space, one obtains a sense of
the contours of the error hypersurface, and the dynamics of the gradient-descent
evolution on this hypersurface. We call this the "trajectory diagram".
The scheme is based on the premise that the human user has a good visual notion
of vector addition. To represent an n-dimensional point, its axial components are
defined as vectors and then are plotted radially in the plane; the vector sum of
these is then calculated to yield the point representing the n-dimensional position.
It is obvious that for n > 2 the resultant point is not unique; however, the method
does allow one to infer information about families of similar trajectories, make
comparisons between trajectories and notice important deviations in behaviour.
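The projection underlying the trajectory diagram is a simple radial vector sum. The text does not specify the axis directions; equal angular spacing around the circle is assumed here.

```python
import math

def trajectory_point(weights):
    """Project an n-dimensional weight vector to the plane as in the
    trajectory diagram: component i is drawn radially at angle
    2*pi*i/n, and the projected point is the vector sum of the
    components along those directions (equal spacing assumed)."""
    n = len(weights)
    x = sum(w * math.cos(2 * math.pi * i / n) for i, w in enumerate(weights))
    y = sum(w * math.sin(2 * math.pi * i / n) for i, w in enumerate(weights))
    return (x, y)
```

The non-uniqueness noted above is easy to see: for n = 4, the vectors [1, 1, 1, 1] and [1, 0, 1, 0] both project to the origin.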
2.3 Implementation
The graphics software was written in C using X-Windows v. 11. The C code was
interfaced to a FORTRAN neural network simulator. The whole package ran under
UNIX, on an RT workstation. Using the portability of X Windows, the graphics
could be run remotely on different machines over a local area network. Execution
time was slow for real-time interaction except for very small networks (typically
up to 30 weights). For larger networks, the Stellar graphics workstation was used,
whereby the simulator code could be vectorized and parallelized.
3 APPLICATION EXAMPLES
With the graphics we investigated networks learning Boolean functions: binary
input vectors were presented to the network through the input nodes, and the
teacher signal was set to either 1 or 0. Here, we show networks learning majority and
symmetry functions. The output of the majority function is 1 only if more than half
of the input nodes are on; simple symmetry distinguishes between input vectors that
are symmetric or anti-symmetric about a central axis; general symmetry identifies
perfectly symmetric patterns out of all other permutations. Using the graphics,
one can watch how solutions to a particular problem are obtained, how different
parameters affect these solutions, and observe stages at which learning decisions are
made.
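The target functions are easy to state precisely. This is a sketch of the definitions; the network, of course, learns them from example patterns rather than from these formulas.

```python
def majority(bits):
    """1 iff more than half of the input bits are on."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def is_symmetric(bits):
    """1 iff the input reads the same forwards and backwards (general
    symmetry). Simple symmetry uses the same predicate but is trained
    only on perfectly symmetric or anti-symmetric inputs."""
    bits = list(bits)
    return 1 if bits == bits[::-1] else 0
```

These definitions generate the teacher signal for each binary input vector presented to the network.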
At the start of the simulations the weights are set to small random values. During
learning, many example patterns of vectors are presented to the input of the network
and weights are adjusted accordingly. Initially the rate of change of weights is
small, later as the simulation gets under way the weights change rapidly, until small
changes are made as the system moves toward the final solution. Distinct patterns
of triangles show the configuration of weights in their final form.
3.1 The Majority Function
Figure 1 shows a bond diagram for a network that has learnt the majority function.
During the run, many input patterns were presented to the network during which
time the weights were changed. The weights evolve from small random values
through to an almost uniform set corresponding to the solution of the problem.
Towards the end, a large output node is displayed and the magnitudes of all the
weights are roughly uniform, indicating that a large bias (or threshold) is required
to offset the sum of the weights. Majority is quite a simple problem for the network
to learn; more complicated functions require hidden units.
3.2 The Simple Symmetry Function
In this case only symmetric or perfectly anti-symmetric patterns are presented and
the network is taught to distinguish between these. In solving this problem, the
Figure 1: A near-final configuration of weights for the majority function. All the
weights are positive. The disc corresponds to the threshold of the output unit.
network chose (correctly) that it needs only two units to make the decision whether
the input is totally symmetric or totally anti-symmetric. (In fact, any symmetrically
separated input pair will work.) It was found that the simple pattern created by the
bond representation carries over into the more general symmetry function, where the
network must identify perfectly symmetric inputs from all the other permutations.
3.3 The General Symmetry Function
Here, the network is required to detect symmetry out of all the possible input
patterns. As can be seen from the bond diagram (Figure 2), the network has chosen
a hierarchical structure of weights to solve the problem, using the basic pattern of
weights of simple symmetry. The major decision is made on the outer pair and
additional decisions are made on the remaining pairs with decreasing strength. As
before, the choice of pairs in the hierarchy depends on the initial random weights.
By watching the animations, we could make some observations about the stages of
learning. We found that the early behavior was the most critical as it was at this
stage that the signs of the weights feeding to the hidden units were determined. At
the later stages the relative magnitudes of the weights were adapted.
3.4 The Visualization Environment
Figure 3 shows the visualization environment with most of the windows active. The
upper window shows the total error, and the lower window the state of the output
unit. Typically, the error initially stays high then decreases rapidly and then levels
off to zero as final adjustments are made to the weights. Spikes in this curve are
due to the method of presenting patterns at random. The state of the output unit
initially oscillates and then bifurcates into the two required output states.
The two extra windows on the right show the trajectory diagrams for the two
hidden units. These diagrams are generalizations of phase diagrams: components
of a point in a high dimensional space are plotted radially in the plane and treated
as vectors whose sum yields a point in the two-dimensional representation. We have
found these diagrams useful in observing the trajectories of the two hidden units,
in which case they are representations of paths in a six-dimensional weight space.
In cases where the network does converge to a correct solution, the paths of the two
hidden units either try to match each other (in which case the configurations of the
units were identical) or move in opposite directions (in which case the units were
opposites ).
By contrast, for learning runs which do not converge to global optima we found
that usually one of the hidden units followed a normal trajectory whereas the other
unit was not able to achieve the appropriate match or anti-match. This is because
the signs of the weights to the second hidden unit were not correct and the learning
algorithm could not make the necessary adjustments. At a certain point early in
learning the unit would travel off on a completely different trajectory. These observations suggest a heuristic that could improve learning by setting initial trajectories
in the "correct" directions.
Figure 2: The bond diagram for a network that has learnt the symmetry function.
There are six input units, two hidden and one output. Weights are shown by bonds
emantating from nodes. In the graphics positive and negative weights are colored
red and blue respectively. In this grey-scale photo the negative weights are marked
with diagonal lines to distinguish them from positive weights.
Figure 3: An example of the graphics with most of the windows active; the command line appears on the bottom. The central window shows the bond diagram
of the General Symmetry function. The upper left window shows the total error,
and the lower left window the state of the output unit. The two windows on the
right show the trajectory diagrams for the two hidden units. The "spokes" in this
diagram correspond to the magnitude of the weights. The trace of dots are the
paths of the two units in weight space.
In general, the trajectory diagram has similar uses to a conventional phase plot: it
can distinguish between different regions of configuration space; it can be used to
detect critical stages of the dynamics of a system; and it gives a "trace" of its time
evolution.
4 CONCLUSION
A set of computer graphics visualization programs have been designed and interfaced
to a back-propagation simulator. Some new visualization tools were introduced such
as the bond and trajectory diagrams. These and other visual tools were integrated
into an interactive multi-window environment.
During the course of the work it was found that the graphics was useful in a number
of ways: in giving a clearer picture of the internal representation of weights, the
effects of parameters, the detection of errors in the code, and pointing out aspects
of the simulation that had not been expected beforehand. Also, insight was gained
into principles of designing graphics for scientific processes.
It would be of interest to extend our visualization techniques to include large networks with thousands of nodes and tens of thousands of weights. We are currently
examining a number of alternative techniques which are more appropriate for large
data-set regimes.
Acknowledgements
We wish to thank Scott Kirkpatrick for help and encouragement during the project.
We also thank members of the visualization lab and the animation lab for use of
their resources.
References
(1) McCormick B H, DeFanti T A, Brown M D (Eds), "Visualization in Scientific
Computing" Computer Graphics 21, 6, November (1987). See also "Visualization
in Scientific Computing-A Synopsis", IEEE Computer Graphics and Applications,
July (1987).
(2) Rumelhart D E, McClelland J L, "Parallel Distributed Processing: Explorations
in the Microstructure of Cognition. Volume 1" MIT Press, Cambridge, MA (1986).
(3) Tufte E R, "The Visual Display of Quantitative Information", Graphic Press,
Chesire, CT (1983).
PART VI:
NEW LEARNING ALGORITHMS
Generalization error bounds for classifiers
trained with interdependent data
Nicolas Usunier, Massih-Reza Amini, Patrick Gallinari
Department of Computer Science, University of Paris VI
8, rue du Capitaine Scott, 75015 Paris France
{usunier, amini, gallinari}@poleia.lip6.fr
Abstract
In this paper we propose a general framework to study the generalization
properties of binary classifiers trained with data which may be dependent, but are deterministically generated upon a sample of independent
examples. It provides generalization bounds for binary classification and
some cases of ranking problems, and clarifies the relationship between
these learning tasks.
1
Introduction
Many machine learning (ML) applications deal with the problem of bipartite ranking where
the goal is to find a function which orders relevant elements over irrelevant ones. Such
problems appear for example in Information Retrieval, where the system returns a list of
documents, ordered by relevancy to the user's demand. The criterion widely used to measure the ranking quality is the Area Under the ROC Curve (AUC) [6]. Given a training set $S = ((x_p, y_p))_{p=1}^n$ with $y_p \in \{\pm 1\}$, its optimization over a class of real-valued functions $G$ can be carried out by finding a classifier of the form $c_g(x, x') = \mathrm{sign}(g(x) - g(x'))$, $g \in G$, which minimizes the error rate over pairs of examples $(x, 1)$ and $(x', -1)$ in $S$ [6]. More
generally, it is well-known that the learning of scoring functions can be expressed as a
classification task over pairs of examples [7, 5].
The study of the generalization properties of ranking problems is a challenging task, since
the pairs of examples violate the central i.i.d. assumption of binary classification. Using
task-specific studies, this issue has recently been the focus of a large amount of work. [2]
showed that SVM-like algorithms optimizing the AUC have good generalization guarantees, and [11] showed that maximizing the margin of the pairs, defined by the quantity $g(x) - g(x')$, leads to the minimization of the generalization error. While these results
suggest some similarity between the classification of the pairs of examples and the classification of independent data, no common framework has been established. As a major
drawback, it is not possible to directly deduce results for ranking from those obtained in
classification.
In this paper, we present a new framework to study the generalization properties of classifiers over data which can exhibit a suitable dependency structure. Among others, the
problems of binary classification, bipartite ranking, and the ranking risk defined in [5] are
special cases of our study. It shows that it is possible to infer generalization bounds for clas-
sifiers trained over interdependent examples using generalization results known for binary
classification. We illustrate this property by proving a new margin-based, data-dependent
bound for SVM-like algorithms optimizing the AUC. This bound derives straightforwardly
from the same kind of bounds for SVMs for classification given in [12]. Since learning algorithms aim at minimizing the generalization error of their chosen hypothesis, our results
suggest that the design of bipartite ranking algorithms can follow the design of standard
classification learning systems.
The remainder of this paper is as follows. In section 2, we give the formal definition of
our framework and detail the progression of our analysis over the paper. In section 3, we
present a new concentration inequality which allows to extend the notion of Rademacher
complexity (section 4), and, in section 5, we prove generalization bounds for binary classification and bipartite ranking tasks under our framework. Finally, the missing proofs are
given in a longer version of the paper [13].
2
Formal framework
We distinguish between the input and the training data. The input data $S = (s_p)_{p=1}^n$ is a set of $n$ independent examples, while the training data $Z = (z_i)_{i=1}^N$ is composed of $N$ binary classified elements where each $z_i$ is in $X_{tr} \times \{-1, +1\}$, with $X_{tr}$ the space of characteristics. For example, in the general case of bipartite ranking, the input data is the set of elements to be ordered, while the training data is constituted by the pairs of examples to be classified. The purpose of this work is the study of the generalization properties of classifiers trained using possibly dependent training data, but in the special case where the latter is deterministically generated from the input data. The aim here is to select a hypothesis $h \in H = \{h_\theta : X_{tr} \to \{-1, 1\} \mid \theta \in \Theta\}$ which optimizes the empirical risk $L(h, Z) = \frac{1}{N}\sum_{i=1}^{N} \ell(h, z_i)$, $\ell$ being the instantaneous loss of $h$, over the training set $Z$.
Definition 1 (Classifiers trained with interdependent data). A classification algorithm over interdependent training data takes as input data a set $S = (s_p)_{p=1}^n$ supposed to be drawn according to an unknown product distribution $\otimes_{p=1}^n D_p$ over a product sample space $\mathcal{S}^n$,¹ outputs a binary classifier chosen in a hypothesis space $H : \{h : X_{tr} \to \{+1, -1\}\}$, and has a two-step learning process. In a first step, the learner applies to its input data $S$ a fixed function $\phi : \mathcal{S}^n \to (X_{tr} \times \{-1, 1\})^N$ to generate a vector $Z = (z_i)_{i=1}^N = \phi(S)$ of $N$ training examples $z_i \in X_{tr} \times \{-1, 1\}$, $i = 1, \dots, N$. In the second step, the learner runs a classification algorithm in order to obtain $h$ which minimizes the empirical classification loss $L(h, Z)$ over its training data $Z = \phi(S)$.
Examples Using the notations above, when $\mathcal{S} = X_{tr} \times \{\pm 1\}$, $n = N$, $\phi$ is the identity function and $S$ is drawn i.i.d. according to an unknown distribution $D$, we recover the classical definition of a binary classification algorithm. Another example is the ranking task described in [5] where $\mathcal{S} = X \times \mathbb{R}$, $X_{tr} = X^2$, $N = n(n-1)$ and, given $S = ((x_p, y_p))_{p=1}^n$ drawn i.i.d. according to a fixed $D$, $\phi$ generates all the pairs $((x_k, x_l), \mathrm{sign}(\frac{y_k - y_l}{2}))$, $k \neq l$.
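To make the second example concrete, the pair-generating map can be sketched in a few lines. This is our illustration only; the names `phi_ranking` and `sign` are ours, not the paper's, and ties in the labels are mapped to 0 here.

```python
# Sketch (not from the paper): the map phi for the ranking risk of [5].
# Input data: n i.i.d. examples (x_p, y_p); training data: all N = n(n-1)
# ordered pairs ((x_k, x_l), sign((y_k - y_l)/2)), k != l.

def sign(t):
    # ties (y_k == y_l) yield 0 in this sketch
    return 1 if t > 0 else (-1 if t < 0 else 0)

def phi_ranking(sample):
    """Build the N = n(n-1) interdependent training examples."""
    pairs = []
    for k, (xk, yk) in enumerate(sample):
        for l, (xl, yl) in enumerate(sample):
            if k != l:
                pairs.append(((xk, xl), sign((yk - yl) / 2)))
    return pairs

S = [(0.1, 1.0), (0.7, 3.0), (0.3, 2.0)]   # (x_p, y_p), n = 3
Z = phi_ranking(S)
print(len(Z))  # n(n-1) = 6
```

Each training example $\phi(S)_i$ depends on exactly two input examples, which is the source of the dependencies studied in the rest of the paper.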
In the remaining of the paper, we will prove generalization error bounds of the selected
hypothesis by upper bounding
$$\sup_{h \in H} L(h) - L(h, \phi(S)) \quad (1)$$
with high confidence over $S$, where $L(h) = E_S L(h, \phi(S))$. To this end we decompose
$Z = \phi(S)$ using the dependency graph of the random variables composing $Z$ with a technique similar to the one proposed by [8]. We go towards this result by first bounding
¹ It is equivalent to say that the input data is a vector of independent, but not necessarily identically distributed random variables.
$$\sup_{q \in Q} E_{\tilde{S}} \frac{1}{N}\sum_{i=1}^N q(\phi(\tilde{S})_i) - \frac{1}{N}\sum_{i=1}^N q(\phi(S)_i)$$
with high confidence over samples $S$, where $\tilde{S}$ is also drawn according to $\otimes_{p=1}^n D_p$, $Q$ is a class of functions taking values
in $[0, 1]$, and $\phi(S)_i$ denotes the $i$-th training example (Theorem 4). This bound uses an
extension of the Rademacher complexity [3], the fractional Rademacher complexity (FRC)
(definition 3), which is a weighted sum of Rademacher complexities over independent subsets of the training data. We show that the FRC of an arbitrary class of real-valued functions
can be trivially computed given the Rademacher complexity of this class of functions and
? (theorem 6). This theorem shows that generalization error bounds for classes of classifiers over interdependent data (in the sense of definition 1) trivially follows from the same
kind of bounds for the same class of classifiers trained over i.i.d. data. Finally, we show
an example of the derivation of a margin-based, data-dependent generalization error bound
(i.e. a bound on equation (1) which can be computed on the training data) for the bipartite
ranking case when $H = \{(x, x') \mapsto \mathrm{sign}(K(\theta, x) - K(\theta, x')) \mid K(\theta, \theta) \le B^2\}$, assuming that the input examples are drawn i.i.d. according to a distribution $D$ over $X \times \{\pm 1\}$, $X \subset \mathbb{R}^d$ and $K$ is a kernel over $X^2$.
Notations Throughout the paper, we will use the notations of the preceding subsection, except for $Z = (z_i)_{i=1}^N$, which will denote an arbitrary element of $(X_{tr} \times \{-1, 1\})^N$. In order to obtain the dependency graph of the random variables $\phi(S)_i$, we will consider, for each $1 \le i \le N$, a set $[i] \subset \{1, \dots, n\}$ such that $\phi(S)_i$ depends only on the variables $s_p \in S$ for which $p \in [i]$. Using these notations, if we consider two indices $k, l$ in $\{1, \dots, N\}$, we can notice that the two random variables $\phi(S)_k$ and $\phi(S)_l$ are independent if and only if $[k] \cap [l] = \emptyset$. The dependency graph of the $\phi(S)_i$'s follows, by constructing the graph $\Gamma(\phi)$, with the set of vertices $V = \{1, \dots, N\}$, and with an edge between $k$ and $l$ if and only if $[k] \cap [l] \neq \emptyset$. The following definitions, taken from [8], will enable us to separate the set of partly dependent variables into sets of independent variables:
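For the pairwise ranking example, the dependency graph can be built explicitly: training example $i$ is an ordered pair $(k, l)$ of input indices, so $[i] = \{k, l\}$, and two pairs are adjacent whenever their index sets intersect. The sketch below is our illustration; the helper name `dependency_graph` is not from the paper.

```python
# Sketch (assumed helper, not from the paper): dependency graph Gamma(phi)
# for the pairwise map phi of the ranking example.

from itertools import permutations

def dependency_graph(n):
    vertices = list(permutations(range(n), 2))      # all ordered pairs, k != l
    deps = {v: set(v) for v in vertices}            # [i] for each vertex i
    edges = {(u, v) for u in vertices for v in vertices
             if u != v and deps[u] & deps[v]}       # adjacent iff [u] meets [v]
    return vertices, edges

V, E = dependency_graph(4)
print(len(V))                                       # 4*3 = 12 vertices
# (0,1) and (2,3) share no input example, hence are non-adjacent (independent):
print(((0, 1), (2, 3)) in E)                        # False
```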
? A subset A of V is independent if all the elements in A are independent.
m
? A sequence C = (C
j )j=1 of subsets of V is a proper cover of V if, for all j, Cj is
independent, and j Cj = V
a proper, exact fractional cover of ? if wj > 0
? A sequence C = (Cj , wj )m
j=1 is
m
for all j, and, for each i ? V , j=1 wj ICj (i) = 1, where ICj is the indicator
function of Cj .
? The
fractional chromatic number of ?, noted ?(?), is equal to the minimum of
j wj over all proper, exact fractional cover.
It is to be noted that from lemma 3.2 of [8], the existence of proper, exact fractional covers
is ensured. Since ? is fully determined by the function ?, we will note ?(?) = ?(?).
Moreover,
we will denote by C(?) = (Cj , wj )?j=1 a proper, exact fractional cover of ?
such that j wj = ?(?). Finally, for a given C(?), we denote by ?j the number of
elements in Cj , and we fix the notations: Cj = {Cj1 , ..., Cj?j }. It is to be noted that if
N
?
(ti )N
i=1 ? R , and C(?) = (Cj , wj )j=1 , lemma 3.1 of [8] states that:
N
i=1
3
ti =
?
wj Tj , where Tj =
j=1
?j
tCj k
(2)
k=1
A new concentration inequality
Concentration inequalities bound the probability that a random variable deviates too much
from its expectation (see [4] for a survey). They play a major role in learning theory as
they can be used for example to bound the probability of deviation of the expected loss of a
function from its empirical value estimated over a sample set. A well-known inequality is
McDiarmid's theorem [9] for independent random variables, which bounds the probability
of deviation from its expectation of an arbitrary function with bounded variations over
each one of its parameters. While this theorem is very general, [8] proved a large deviation
bound for sums of partly random variables where the dependency structure of the variables
is known, which can be tighter in some cases. Since we also consider variables with known
dependency structure, using such results may lead to tighter bounds. However, we will
bound functions like in equation (1), which do not write as a sum of partly dependent
variables. Thus, we need a result on more general functions than sums of random variables,
but which also takes into account the known dependency structure of the variables.
Theorem 2. Let $\phi : \mathcal{X}^n \to \bar{\mathcal{X}}^N$. Using the notations defined above, let $C(\phi) = (C_j, w_j)_{j=1}^\kappa$. Let $f : \bar{\mathcal{X}}^N \to \mathbb{R}$ such that:
1. There exist functions $f_j : \bar{\mathcal{X}}^{\alpha_j} \to \mathbb{R}$ which satisfy $\forall Z = (z_1, \dots, z_N) \in \bar{\mathcal{X}}^N$, $f(Z) = \sum_j w_j f_j(z_{C_j^1}, \dots, z_{C_j^{\alpha_j}})$.
2. There exist $c_1, \dots, c_N \in \mathbb{R}^+$ such that $\forall j$, $\forall Z_j, Z_j^k \in \bar{\mathcal{X}}^{\alpha_j}$ such that $Z_j$ and $Z_j^k$ differ only in the $k$-th dimension, $|f_j(Z_j) - f_j(Z_j^k)| \le c_{C_j^k}$.
Let finally $D_1, \dots, D_n$ be $n$ probability distributions over $\mathcal{X}$. Then, we have:
$$P_{X \sim \otimes_{i=1}^n D_i}\left(f \circ \phi(X) - E f \circ \phi > \epsilon\right) \le \exp\left(-\frac{2\epsilon^2}{\chi(\phi)\sum_{i=1}^N c_i^2}\right) \quad (3)$$
and the same holds for $P(E f \circ \phi - f \circ \phi > \epsilon)$.
The proof of this theorem (given in [13]) is a variation of the demonstrations in [8] and
McDiarmid's theorem. The main idea of this theorem is to allow the decomposition of $f$, which will take as input partly dependent random variables when applied to $\phi(S)$, into a sum of functions which, when considering $f \circ \phi(S)$, will be functions of independent
variables. As we will see, this theorem will be the major tool in our analysis. It is to be
noted that when $\bar{\mathcal{X}} = \mathcal{X}$, $N = n$ and $\phi$ is the identity function of $\mathcal{X}^n$, theorem 2 is exactly McDiarmid's theorem. On the other hand, when $f$ takes the form $\sum_{i=1}^N q_i(z_i)$ with, for all $z \in \bar{\mathcal{X}}$, $a \le q_i(z) \le a + c_i$ with $a \in \mathbb{R}$, then theorem 2 reduces to a particular case
of the large deviation bound of [8].
4
The fractional Rademacher complexity
Let $Z = (z_i)_{i=1}^N \in \mathcal{Z}^N$. If $Z$ is supposed to be drawn i.i.d. according to a distribution $D_Z$ over $\mathcal{Z}$, for a class $F$ of functions from $\mathcal{Z}$ to $\mathbb{R}$, the Rademacher complexity of $F$ is defined by [10] $R_N(F) = E_{Z \sim D_Z} R_N(F, Z)$, where $R_N(F, Z) = \frac{2}{N} E_\sigma \sup_{f \in F} \sum_{i=1}^N \sigma_i f(z_i)$ is the empirical Rademacher complexity of $F$ on $Z$, and $\sigma = (\sigma_i)_{i=1}^N$ is a sequence of independent Rademacher variables, i.e. $\forall i$, $P(\sigma_i = 1) = P(\sigma_i = -1) = \frac{1}{2}$. This quantity has been extensively used to measure the complexity of function classes in previous
bounds for binary classification [3, 10]. In particular, if we consider a class of functions
$Q = \{q : \mathcal{Z} \to [0, 1]\}$, it can be shown (theorem 4.9 in [12]) that with probability at least $1 - \delta$ over $Z$, all $q \in Q$ verify the following inequality, which serves as a preliminary result to show data-dependent bounds for SVMs in [12]:
$$E_{z \sim D_Z} q(z) \le \frac{1}{N}\sum_{i=1}^N q(z_i) + R_N(Q) + \sqrt{\frac{\ln(1/\delta)}{2N}} \quad (4)$$
In this section, we generalize equation (4) to our case with theorem 4, using the following definition² (we denote $\Lambda(q, \phi(S)) = \frac{1}{N}\sum_{i=1}^N q(\phi(S)_i)$ and $\Lambda(q) = E_S \Lambda(q, \phi(S))$):
Definition 3. Let $Q$ be a class of functions from a set $\mathcal{Z}$ to $\mathbb{R}$, let $\phi : \mathcal{X}^n \to \mathcal{Z}^N$ and $S$ a sample of size $n$ drawn according to a product distribution $\otimes_{p=1}^n D_p$ over $\mathcal{X}^n$. Then, we define the empirical fractional Rademacher complexity³ of $Q$ given $\phi$ as:
$$R_n^*(Q, S, \phi) = \frac{2}{N} E_\sigma \sum_j w_j \sup_{q \in Q} \sum_{i \in C_j} \sigma_i q(\phi(S)_i)$$
as well as the fractional Rademacher complexity of $Q$ as $R_n^*(Q, \phi) = E_S R_n^*(Q, S, \phi)$.
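For a finite function class and a small training set, the empirical FRC can be computed exactly by enumerating the sign vectors $\sigma$. The sketch below is our illustration; the values, cover, and helper name are ours, and `values[q][i]` plays the role of $q(\phi(S)_i)$.

```python
# Exact computation (illustration, not from the paper) of the empirical
# fractional Rademacher complexity for a finite class Q:
#   R_n^*(Q, S, phi) = (2/N) E_sigma sum_j w_j sup_q sum_{i in C_j} sigma_i q(phi(S)_i)

from itertools import product

def empirical_frc(values, cover):
    """Enumerate E_sigma exactly (fine for small N).
    values[q][i] = q(phi(S)_i) for each q in the finite class Q."""
    N = len(values[0])
    total = 0.0
    for sigma in product((-1, 1), repeat=N):
        for C, w in cover:
            total += w * max(sum(sigma[i] * v[i] for i in C) for v in values)
    return 2.0 * total / (2 ** N * N)

# Two functions observed on N = 4 training points, and a cover with chi = 2:
vals = [[0.2, 0.9, 0.4, 0.7], [0.8, 0.1, 0.6, 0.3]]
cover = [({0, 1}, 1.0), ({2, 3}, 1.0)]
print(round(empirical_frc(vals, cover), 3))   # 0.3
```

With the identity map and the trivial cover $(\{1,\dots,N\}, 1)$, the same routine computes the standard empirical Rademacher complexity, matching the remark following theorem 4.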
Theorem 4. Let $Q$ be a class of functions from $\mathcal{Z}$ to $[0, 1]$. Then, with probability at least $1 - \delta$ over the samples $S$ drawn according to $\otimes_{p=1}^n D_p$, for all $q \in Q$:
$$\Lambda(q) - \frac{1}{N}\sum_{i=1}^N q(\phi(S)_i) \le R_n^*(Q, \phi) + \sqrt{\frac{\chi(\phi)\ln(1/\delta)}{2N}}$$
And:
$$\Lambda(q) - \frac{1}{N}\sum_{i=1}^N q(\phi(S)_i) \le R_n^*(Q, S, \phi) + 3\sqrt{\frac{\chi(\phi)\ln(2/\delta)}{2N}}$$
In the definition of the fractional Rademacher complexity (FRC), if $\phi$ is the identity function, we recover the standard Rademacher complexity, and theorem 4 reduces to equation
(4). These results are therefore extensions of equation (4), and show that the generalization
error bounds for the tasks falling in our framework will follow from a unique approach.
Proof. In order to find a bound, for all $q$ in $Q$, of $\Lambda(q) - \Lambda(q, \phi(S))$, we write:
$$\Lambda(q) - \Lambda(q, \phi(S)) \le \sup_{q \in Q}\left[E_{\tilde{S}} \frac{1}{N}\sum_{i=1}^N q(\phi(\tilde{S})_i) - \frac{1}{N}\sum_{i=1}^N q(\phi(S)_i)\right]$$
$$\le \frac{1}{N}\sum_j w_j \sup_{q \in Q}\left[E_{\tilde{S}} \sum_{i \in C_j} q(\phi(\tilde{S})_i) - \sum_{i \in C_j} q(\phi(S)_i)\right] \quad (5)$$
Where we have used equation (2). Now, consider, for each $j$, $f_j : \mathcal{Z}^{\alpha_j} \to \mathbb{R}$ such that, for all $z^{(j)} \in \mathcal{Z}^{\alpha_j}$, $f_j(z^{(j)}) = \frac{1}{N}\sup_{q \in Q}\left[E_{\tilde{S}} \sum_{k=1}^{\alpha_j} q(\phi(\tilde{S})_{C_j^k}) - \sum_{k=1}^{\alpha_j} q(z_k^{(j)})\right]$. It is clear that if $f : \mathcal{Z}^N \to \mathbb{R}$ is defined by: for all $Z \in \mathcal{Z}^N$, $f(Z) = \sum_{j=1}^\kappa w_j f_j(z_{C_j^1}, \dots, z_{C_j^{\alpha_j}})$, then the right side of equation (5) is equal to $f \circ \phi(S)$, and that $f$ satisfies all the conditions of theorem 2 with, for all $i \in \{1, \dots, N\}$, $c_i = \frac{1}{N}$. Therefore, with a direct application of theorem 2, we can claim that, with probability at least $1 - \delta$ over samples $S$ drawn according to $\otimes_{p=1}^n D_p$ (we denote $\Lambda_j(q, \phi(S)) = \frac{1}{N}\sum_{i \in C_j} q(\phi(S)_i)$):
$$\Lambda(q) - \Lambda(q, \phi(S)) \le E_S \sum_j w_j \sup_{q \in Q}\left[E_{\tilde{S}} \Lambda_j(q, \phi(\tilde{S})) - \Lambda_j(q, \phi(S))\right] + \sqrt{\frac{\chi(\phi)\ln(1/\delta)}{2N}}$$
$$\le E_{S,\tilde{S}} \sum_j \frac{w_j}{N} \sup_{q \in Q} \sum_{i \in C_j} \left[q(\phi(\tilde{S})_i) - q(\phi(S)_i)\right] + \sqrt{\frac{\chi(\phi)\ln(1/\delta)}{2N}} \quad (6)$$
² The fractional Rademacher complexity depends on the cover $C(\phi)$ chosen, since it is not unique. However, in practice, our bounds only depend on $\chi(\phi)$ (see section 4.1).
³ This denomination stands as it is a sum of Rademacher averages over independent parts of $\phi(S)$.
Now fix $j$, and consider $\sigma = (\sigma_i)_{i=1}^N$, a sequence of $N$ independent Rademacher variables. For a given realization of $\sigma$, we have that
$$E_{S,\tilde{S}} \sup_{q \in Q} \sum_{i \in C_j} \left[q(\phi(\tilde{S})_i) - q(\phi(S)_i)\right] = E_{S,\tilde{S}} \sup_{q \in Q} \sum_{i \in C_j} \sigma_i \left[q(\phi(\tilde{S})_i) - q(\phi(S)_i)\right] \quad (7)$$
because, for each $\sigma_i$ considered, $\sigma_i = -1$ simply corresponds to permuting, in $S, \tilde{S}$, the two sequences $S_{[i]}$ and $\tilde{S}_{[i]}$ (where $S_{[i]}$ denotes the subset of $S$ that $\phi(S)_i$ really depends on), which have the same distribution (even though the $s_p$'s are not identically distributed), and are independent from the other $S_{[k]}$ and $\tilde{S}_{[k]}$ since we are considering $i, k \in C_j$. Therefore, taking the expectation over $S, \tilde{S}$ is the same with the elements permuted this way as if they were not permuted. Then, from equation (6), the first inequality of the theorem follows. The second inequality is due to an application of theorem 2 to $R_n^*(Q, S, \phi)$.
Remark 5. The symmetrization performed in equation (7) requires the variables $\phi(S)_i$ appearing in the same sum to be independent. Thus, the generalization of Rademacher
complexities could only be performed using a decomposition in independent sets, and the
cover C assures some optimality of the decomposition. Moreover, even though McDiarmid?s theorem could be applied each time we used theorem 2, the derivation of the real
numbers bounding the differences is not straightforward, and may not lead to the same
result. The creation of the dependency graph of ? and theorem 2 are therefore necessary
tools for obtaining theorem 4.
Properties of the fractional Rademacher complexity
Theorem 6. Let $Q$ be a class of functions from a set $\mathcal{Z}$ to $\mathbb{R}$, and $\phi : \mathcal{X}^n \to \mathcal{Z}^N$. For $S \in \mathcal{X}^n$, the following results are true.
1. Let $\psi : \mathbb{R} \to \mathbb{R}$ be an $L$-Lipschitz function. Then $R_n^*(\psi \circ Q, S, \phi) \le L\, R_n^*(Q, S, \phi)$.
2. If there exists $M > 0$ such that for every $k$ and samples $S_k$ of size $k$, $R_k(Q, S_k) \le \frac{M}{\sqrt{k}}$, then $R_n^*(Q, S, \phi) \le M\sqrt{\frac{\chi(\phi)}{N}}$.
3. Let $K$ be a kernel over $\mathcal{Z}$, $B > 0$, denote $\|x\|_K = \sqrt{K(x, x)}$ and define $H_{K,B} = \{h_\theta : \mathcal{Z} \to \mathbb{R},\ h_\theta(x) = K(\theta, x) \mid \|\theta\|_K \le B\}$. Then:
$$R_n^*(H_{K,B}, S, \phi) \le \frac{2B}{N}\sqrt{\chi(\phi)\sum_{i=1}^N \|\phi(S)_i\|_K^2}$$
The first point of this theorem is a direct consequence of a Rademacher process comparison
theorem, namely theorem 7 of [10], and will enable the obtention of margin-based bounds.
The second and third points show that the results regarding the Rademacher complexity
can be used to immediately deduce bounds on the FRC. This result, as well as theorem 4
show that binary classifiers of i.i.d. data and classifiers of interdependent data will have
generalization bounds of the same form, but with different convergence rate depending on
the dependency structure imposed by ?.
Elements of proof. The second point results from Jensen's inequality, using the facts that $\sum_j w_j = \chi(\phi)$ and, from equation (2), $\sum_j w_j |C_j| = N$. The third point is based on the same calculations by noting that (see e.g. [3]), if $S_k = ((x_p, y_p))_{p=1}^k$, then $R_k(H_{K,B}, S_k) \le \frac{2B}{k}\sqrt{\sum_{p=1}^k \|x_p\|_K^2}$.
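For a linear kernel the supremum inside the FRC has a closed form, which makes the third point of theorem 6 easy to check numerically on toy data. This is our illustration only: the sample and cover are arbitrary, and we assume $K(x, x') = \langle x, x' \rangle$, so that $\sup_{\|\theta\| \le B} \sum_i \sigma_i \langle \theta, x_i \rangle = B \|\sum_i \sigma_i x_i\|$.

```python
# Numerical check (illustration, assumed setup) of theorem 6, point 3,
# for a linear kernel: the exact empirical FRC of H_{K,B} never exceeds
# (2B/N) * sqrt(chi * sum_i ||x_i||^2).

from itertools import product
from math import sqrt

def frc_linear(X, cover, B):
    N, d = len(X), len(X[0])
    total = 0.0
    for sigma in product((-1, 1), repeat=N):
        for C, w in cover:
            sx = [sum(sigma[i] * X[i][k] for i in C) for k in range(d)]
            total += w * B * sqrt(sum(v * v for v in sx))  # closed-form sup
    return 2.0 * total / (2 ** N * N)

X = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0), (0.3, 0.2)]
cover = [({0, 1}, 1.0), ({2, 3}, 1.0)]
chi = sum(w for _, w in cover)                 # 2.0 for this cover
B = 1.0
frc = frc_linear(X, cover, B)
bound = (2 * B / len(X)) * sqrt(chi * sum(x * x + y * y for x, y in X))
print(frc <= bound)                            # True
```

The inequality holds deterministically here (Jensen plus Cauchy-Schwarz, exactly as in the proof sketch above).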
5
Data-dependent bounds
The fact that classifiers trained on interdependent data will "inherit" the generalization bound of the same classifier trained on i.i.d. data suggests simple ways of obtaining bipartite ranking algorithms. Indeed, suppose we want to learn a linear ranking function, for example a function $h \in H_{K,B}$ as defined in theorem 6, where $K$ is a linear kernel, and consider a sample $S \in (X \times \{-1, 1\})^n$ with $X \subset \mathbb{R}^d$, drawn i.i.d. according to some $D$. Then we have, for input examples $(x, 1)$ and $(x', -1)$ in $S$, $h(x) - h(x') = h(x - x')$. Therefore, we can learn a bipartite ranking function by applying an SVM algorithm to the pairs $((x, 1), (x', -1))$ in $S$, each pair being represented by $x - x'$, and our framework allows us to immediately obtain generalization bounds for this learning process based on the generalization bounds for SVMs. We show these bounds in theorem 7.
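The reduction just described can be sketched end-to-end. For a dependency-free illustration we use a simple perceptron on the difference vectors in place of an SVM; the data, the helper names, and the training rule are ours, not the paper's.

```python
# Sketch of the reduction above (a perceptron stands in for the SVM):
# train a linear scorer h(x) = <w, x> on the differences x - x' for
# positive/negative pairs, then use h to rank.

def train_on_pairs(pos, neg, epochs=50):
    w = [0.0] * len(pos[0])
    for _ in range(epochs):
        for x in pos:
            for xn in neg:
                d = [a - b for a, b in zip(x, xn)]          # pair -> x - x'
                if sum(wi * di for wi, di in zip(w, d)) <= 0:
                    w = [wi + di for wi, di in zip(w, d)]   # perceptron update
    return w

def auc(w, pos, neg):
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    good = sum(1 for x in pos for xn in neg if score(x) > score(xn))
    return good / (len(pos) * len(neg))

pos = [(2.0, 1.0), (1.5, 2.0)]        # relevant examples
neg = [(0.5, 0.2), (1.0, 0.1)]        # irrelevant examples
w = train_on_pairs(pos, neg)
print(auc(w, pos, neg))               # 1.0 on this separable toy set
```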
To derive the bounds, we consider $\psi$, the 1-Lipschitz function defined by $\psi(x) = \min(1, \max(1 - x, 0)) \ge [[x \le 0]]$.⁴ Given a training example $z$, we denote by $z^l$ its label and $z^f$ its feature representation. With an abuse of notation, we denote $\psi(h, Z) = \frac{1}{N}\sum_{i=1}^N \psi(z_i^l h(z_i^f))$. For a sample $S$ drawn according to $\otimes_{p=1}^n D_p$, we have, for all $h$ in some function class $H$:
$$E_S \frac{1}{N}\sum_i \ell(h, z_i) \le E_S \frac{1}{N}\sum_i \psi(z_i^l h(z_i^f)) \le \psi(h, Z) + E_\sigma \sum_j \frac{2 w_j}{N} \sup_{h \in H} \sum_{i \in C_j} \sigma_i \psi(z_i^l h(z_i^f)) + 3\sqrt{\frac{\chi(\phi)\ln(2/\delta)}{2N}}$$
where $\ell(h, z_i) = [[z_i^l h(z_i^f) \le 0]]$, and the last inequality holds with probability at least $1 - \delta$ over samples $S$ from theorem 4. Notice that when $\sigma_{C_j^k}$ is a Rademacher variable, it has the same distribution as $z_{C_j^k}^l \sigma_{C_j^k}$ since $z_{C_j^k}^l \in \{-1, 1\}$. Thus, using the first result of theorem 6 we have that with probability $1 - \delta$ over the samples $S$, all $h$ in $H$ satisfy:
$$E_S \frac{1}{N}\sum_i \ell(h, z_i) \le \frac{1}{N}\sum_i \psi(z_i^l h(z_i^f)) + R_n^*(H, S, \phi) + 3\sqrt{\frac{\chi(\phi)\ln(2/\delta)}{2N}} \quad (8)$$
Now putting in equation (8) the third point of theorem 6, with $H = H_{K,B}$ as defined in theorem 6 with $\mathcal{Z} = X$, we obtain the following theorem:
Theorem 7. Let $S \in (X \times \{-1, 1\})^n$ be a sample of size $n$ drawn i.i.d. according to an unknown distribution $D$. Then, with probability at least $1 - \delta$, all $h \in H_{K,B}$ verify:
$$E[[y h(x) \le 0]] \le \frac{1}{n}\sum_{i=1}^n \psi(y_i h(x_i)) + \frac{2B}{n}\sqrt{\sum_{i=1}^n \|x_i\|_K^2} + 3\sqrt{\frac{\ln(2/\delta)}{2n}}$$
And:
$$E\{[[h(x) \le h(x')]] \mid y = 1, y' = -1\} \le \frac{1}{n_+^S n_-^S}\sum_{i=1}^{n_+^S}\sum_{j=1}^{n_-^S} \psi(h(x_{\pi(i)}) - h(x_{\nu(j)}))$$
$$+ \frac{2B \max(n_+^S, n_-^S)}{n_+^S n_-^S}\sqrt{\sum_{i=1}^{n_+^S}\sum_{j=1}^{n_-^S} \|x_{\pi(i)} - x_{\nu(j)}\|_K^2} + 3\sqrt{\frac{\ln(2/\delta)}{2\min(n_+^S, n_-^S)}}$$
Where $n_+^S$, $n_-^S$ are the numbers of positive and negative instances in $S$, and $\pi$ and $\nu$ also depend on $S$, and are such that $x_{\pi(i)}$ is the $i$-th positive instance in $S$ and $x_{\nu(j)}$ the $j$-th negative instance.
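On the empirical side of the second bound, the surrogate $\psi$ dominates the misranking indicator pointwise, so the pairwise misranking rate never exceeds the surrogate average. This can be checked on toy data; the scorer $h$ and the scores below are our arbitrary illustration.

```python
# Illustration (assumed scorer and data): the empirical misranking rate is
# dominated by the average pairwise surrogate, since psi(t) >= [[t <= 0]]
# with psi(t) = min(1, max(1 - t, 0)).

def psi(t):
    return min(1.0, max(1.0 - t, 0.0))

h = lambda x: 0.8 * x                 # some fixed scoring function
pos = [1.0, 2.0, 0.4]                 # positive instances
neg = [0.5, 1.5]                      # negative instances

pairs = [h(x) - h(xn) for x in pos for xn in neg]
misrank = sum(1.0 for t in pairs if t <= 0) / len(pairs)
surrogate = sum(psi(t) for t in pairs) / len(pairs)
print(misrank <= surrogate)           # True: psi dominates the 0/1 loss
```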
⁴ Remark that $\psi$ is upper bounded by the slack variables of the SVM optimization problem (see e.g. [12]).
It is to be noted that when $h \in H_{K,B}$ with a non-linear kernel, the same bounds apply, with, for the case of bipartite ranking, $\|x_{\pi(i)} - x_{\nu(j)}\|_K^2$ replaced by $\|x_{\pi(i)}\|_K^2 + \|x_{\nu(j)}\|_K^2 - 2K(x_{\pi(i)}, x_{\nu(j)})$.
For binary classification, we recover the bounds of [12], since our framework is a generalization of their approach. As expected, the bounds suggest that kernel machines will
generalize well for bipartite ranking. Thus, we recover the results of [2] obtained in a specific framework of algorithmic stability. However, our bound suggests that the convergence
rate is controlled by $1/\min(n_+^S, n_-^S)$, while their results suggested $1/n_+^S + 1/n_-^S$. The full
proof, in which we follow the approach of [1], is given in [13].
6
Conclusion
We have shown a general framework for classifiers trained with interdependent data, and
provided the necessary tools to study their generalization properties. It gives a new insight
on the close relationship between the binary classification task and the bipartite ranking,
and allows to prove the first data-dependent bounds for this latter case. Moreover, the
framework could also yield comparable bounds on other learning tasks.
Acknowledgments
This work was supported in part by the IST Programme of the European Community, under
the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the
authors views.
References
[1] Agarwal S., Graepel T., Herbrich R., Har-Peled S., Roth D. (2005) Generalization Error Bounds
for the Area Under the ROC curve, Journal of Machine Learning Research.
[2] Agarwal S., Niyogi P. (2005) Stability and generalization of bipartite ranking algorithms, Conference on Learning Theory 18.
[3] Bartlett P., Mendelson S. (2002) Rademacher and Gaussian Complexities: Risk Bounds and Structural Results, Journal of Machine Learning Research 3, pp. 463-482.
[4] Boucheron S., Bousquet O., Lugosi G. (2004) Concentration inequalities, in O. Bousquet, U.v.
Luxburg, and G. Rätsch (editors), Advanced Lectures in Machine Learning, Springer, pp. 208-240.
[5] Clémençon S., Lugosi G., Vayatis N. (2005) Ranking and scoring using empirical risk minimization, Conference on Learning Theory 18.
[6] Cortes C., Mohri M. (2004) AUC optimization vs error rate minimization, NIPS 2003.
[7] Freund Y., Iyer R.D., Schapire R.E., Singer Y. (2003) An Efficient Boosting Algorithm for Combining Preferences, Journal of Machine Learning Research 4, pp. 933-969.
[8] Janson S. (2004) Large deviations for sums of partly dependent random variables, Random Structures and Algorithms 24, pp. 234-248.
[9] McDiarmid C. (1989) On the method of bounded differences, Surveys in Combinatorics.
[10] Meir R., Zhang T. (2003) Generalization Error Bounds for Bayesian Mixture Algorithms, Journal of Machine Learning Research 4, pp. 839-860.
[11] Rudin C., Cortes C., Mohri M., Schapire R.E. (2005) Margin-Based Ranking meets Boosting in
the middle, Conference on Learning Theory 18.
[12] Shawe-Taylor J., Cristianini N. (2004) Kernel Methods for Pattern Analysis, Cambridge U. Prs.
[13] Long version of this paper, Available at http://www-connex.lip6.fr/?usunier/nips05-lv.pdf
| 2860 |@word middle:1 version:2 relevancy:1 decomposition:3 document:1 janson:1 v:1 selected:1 rudin:1 xk:1 provides:1 boosting:2 herbrich:1 preference:1 mcdiarmid:5 zhang:1 dn:1 direct:2 prove:3 excellence:1 indeed:1 expected:2 considering:2 provided:1 notation:7 moreover:3 bounded:3 kind:2 minimizes:2 finding:1 guarantee:1 every:1 ti:2 exactly:1 ensured:1 classifier:14 gallinari:2 appear:1 positive:2 consequence:1 meet:1 abuse:1 lugosi:2 suggests:2 challenging:1 supq:2 unique:2 acknowledgment:1 practice:1 area:2 empirical:6 confidence:2 suggest:3 close:1 risk:4 applying:1 www:1 equivalent:1 imposed:1 missing:1 maximizing:1 dz:3 go:1 straightforward:1 roth:1 survey:2 immediately:2 insight:1 proving:1 stability:2 notion:1 variation:2 denomination:1 play:1 suppose:1 user:1 exact:4 us:1 hypothesis:4 element:8 jk:2 role:1 wj:17 yk:1 complexity:18 peled:1 cristianini:1 trained:9 depend:2 creation:1 upon:1 bipartite:12 learner:2 tcj:1 represented:1 derivation:2 kp:1 widely:1 valued:2 say:1 niyogi:1 sequence:5 propose:1 product:3 fr:2 massih:1 remainder:1 relevant:1 combining:1 realization:1 supposed:2 convergence:2 rademacher:22 illustrate:1 depending:1 derive:1 connex:1 differ:1 drawback:1 enable:2 fix:2 generalization:26 really:1 decompose:1 preliminary:1 tighter:2 extension:2 hold:2 considered:1 capitaine:1 exp:1 algorithmic:1 claim:1 major:3 purpose:1 label:1 symmetrization:1 lip6:2 tool:3 weighted:1 reflects:1 minimization:2 xtr:9 gaussian:1 aim:2 chromatic:1 publication:1 focus:1 hk:7 zcj:2 cg:1 sense:1 dependent:11 france:1 issue:1 classification:18 among:1 pascal:1 special:2 equal:2 np:7 others:1 composed:1 replaced:1 n1:7 mixture:1 lrn:1 tj:2 har:1 edge:1 necessary:2 taylor:1 instance:3 cover:7 zn:1 vertex:1 subset:4 deviation:5 icj:2 too:1 straightforwardly:1 dependency:9 central:1 possibly:1 return:1 yp:4 account:1 satisfy:2 combinatorics:1 ranking:21 vi:1 depends:3 performed:2 view:1 sup:9 recover:4 ni:2 characteristic:1 clarifies:1 yield:1 generalize:2 
bayesian:1 classified:2 definition:8 pp:5 proof:5 di:1 proved:1 subsection:1 fractional:12 cj:20 graepel:1 follow:4 though:2 hand:1 zif:3 quality:1 zjk:3 verify:2 true:1 boucheron:1 i2:1 deal:1 auc:4 noted:5 criterion:1 pdf:1 fj:7 instantaneous:1 ef:2 recently:1 common:1 permuted:2 reza:1 extend:1 zil:3 cambridge:1 rd:2 trivially:2 shawe:1 similarity:1 longer:1 deduce:2 patrick:1 showed:2 optimizing:2 irrelevant:1 optimizes:1 inequality:10 binary:13 yi:2 scoring:2 minimum:1 preceding:1 violate:1 full:1 infer:1 reduces:2 calculation:1 long:1 retrieval:1 controlled:1 qi:2 expectation:2 kernel:6 agarwal:2 vayatis:1 want:1 structural:1 noting:1 identically:2 zi:15 idea:1 regarding:1 bartlett:1 remark:2 generally:1 clear:1 amount:1 extensively:1 svms:3 generate:1 schapire:2 meir:1 exist:3 http:1 zj:3 notice:2 sign:3 estimated:1 write:2 ist:2 putting:1 falling:1 drawn:12 graph:5 sum:8 run:1 luxburg:1 throughout:1 comparable:1 bound:43 frc:4 distinguish:1 cj1:1 generates:1 bousquet:2 optimality:1 min:3 px:1 department:1 according:12 pr:1 taken:1 ln:9 equation:11 assures:1 slack:1 singer:1 end:1 serf:1 usunier:3 available:1 apply:1 progression:1 amini:2 appearing:1 existence:1 denotes:2 remaining:1 classical:1 quantity:2 concentration:4 exhibit:1 dp:6 separate:1 assuming:1 index:1 relationship:2 minimizing:1 demonstration:1 negative:2 design:2 proper:5 unknown:3 upper:2 rn:14 arbitrary:3 community:1 pair:9 paris:2 namely:1 z1:1 established:1 nip:1 suggested:1 pattern:1 scott:1 max:2 suitable:1 indicator:1 advanced:1 carried:1 deviate:1 interdependent:8 freund:1 loss:3 fully:1 lecture:1 lv:1 xp:4 editor:1 mohri:2 supported:1 last:1 zc:2 formal:2 allow:1 side:1 taking:2 distributed:2 curve:2 dimension:1 stand:1 author:1 programme:1 ml:1 xi:3 sk:4 learn:2 nicolas:1 composing:1 obtaining:2 du:1 european:1 constructing:1 rue:1 sp:4 inherit:1 main:1 constituted:1 bounding:3 roc:2 n:10 deterministically:2 xl:1 third:3 theorem:38 rk:2 specific:2 jensen:1 list:1 svm:4 cortes:2 
derives:1 cjk:3 mendelson:1 iyer:1 demand:1 margin:5 supf:1 simply:1 clas:1 ez:2 expressed:1 ordered:2 applies:1 springer:1 corresponds:1 satisfies:1 goal:1 identity:3 towards:1 lipschitz:2 determined:1 except:1 lemma:2 partly:5 e:15 select:1 latter:2 d1:1 |
2,049 | 2,861 | Kernels for gene regulatory regions
Jean-Philippe Vert
Geostatistics Center
Ecole des Mines de Paris - ParisTech
[email protected]
Robert Thurman
Division of Medical Genetics
University of Washington
[email protected]
William Stafford Noble
Department of Genome Sciences
University of Washington
[email protected]
Abstract
We describe a hierarchy of motif-based kernels for multiple alignments
of biological sequences, particularly suitable to process regulatory regions of genes. The kernels incorporate progressively more information,
with the most complex kernel accounting for a multiple alignment of
orthologous regions, the phylogenetic tree relating the species, and the
prior knowledge that relevant sequence patterns occur in conserved motif blocks. These kernels can be used in the presence of a library of
known transcription factor binding sites, or de novo by iterating over all
k-mers of a given length. In the latter mode, a discriminative classifier built from such a kernel not only recognizes a given class of promoter regions, but as a side effect simultaneously identifies a collection of relevant, discriminative sequence motifs. We demonstrate the
utility of the motif-based multiple alignment kernels by using a collection of aligned promoter regions from five yeast species to recognize
classes of cell-cycle regulated genes. Supplementary data is available at
http://noble.gs.washington.edu/proj/pkernel.
1 Introduction
In a eukaryotic cell, a variety of DNA switches (promoters, enhancers, silencers, etc.)
regulate the production of proteins from DNA. These switches typically contain multiple
binding site motifs, each of length 5-15 nucleotides, for a class of DNA-binding proteins
known as transcription factors. As a result, the detection of such regulatory motifs proximal
to a gene provides important clues about its regulation and, therefore, its function. These
motifs, if known, are consequently interesting features to extract from genomic sequences
in order to compare genes, or cluster them into functional families.
These regulatory motifs, however, usually represent a tiny fraction of the intergenic sequence, and their automatic detection remains extremely challenging. For well-studied
transcription factors, libraries of known binding site motifs can be used to scan the intergenic sequence. A common approach for the de novo detection of regulatory motifs is to
start from a set of genes known to be similarly regulated, for example by clustering gene expression data, and search for over-represented short sequences in their proximal intergenic
regions. Alternatively, some authors have proposed to represent each intergenic sequence
by its content in short sequences, and to correlate this representation with gene expression
data [1]. Finally, additional information to characterize regulatory motifs can be gained by
comparing the intergenic sequences of orthologous genes, i.e., genes from different species
that have evolved from a common ancestor, because regulatory motifs are more conserved
than non-functional intergenic DNA [2].
We propose in this paper a hierarchy of increasingly complex representations for intergenic sequences. Each representation yields a positive definite kernel between intergenic
sequences. While various motif-based sequence kernels have been described in the literature (e.g., [3, 4, 5]), these kernels typically operate on sequences from a single species,
ignoring relevant information from orthologous sequences. In contrast, our hierarchy of
motif-based kernels accounts for a multiple alignment of orthologous regions, the phylogenetic tree relating the species, and the prior knowledge that relevant sequence patterns
occur in conserved motif blocks. These kernels can be used in the presence of a library
of known transcription factor binding sites, or de novo by iterating over all k-mers of a
given length. In the latter mode, a discriminative classifier built from such a kernel not
only recognizes a given class of regulatory sequences, but as a side effect simultaneously
identifies a collection of discriminative sequence motifs. We demonstrate the utility of the
motif-based multiple alignment kernels by using a collection of aligned intergenic regions
from five yeast species to recognize classes of co-regulated genes.
From a methodological point of view, this paper can be seen as an attempt to incorporate
an increasing amount of prior knowledge into a kernel. In particular, this prior information
takes the form of a probabilistic model describing with increasing accuracy the object we
want to represent. All kernels were designed before any experiment was conducted, and
we then performed an objective empirical evaluation of each kernel without further parameter optimization. In general, classification performance improved as the amount of prior
knowledge increased. This observation supports the notion that tuning a kernel with prior
knowledge is beneficial. However, we observed no improvement in performance following
the last modification of the kernel, highlighting the fact that a richer model of the data does
not always lead to better performance accuracy.
2 Kernels for intergenic sequences
In a complex eukaryotic genome, regulatory switches may occur anywhere within a relatively large genomic region near a given gene. In this work we focus on a well-studied
model organism, the budding yeast Saccharomyces cerevisiae, in which the typical intergenic region is less than 1000 bases long. We refer to the intergenic region upstream
of a yeast gene as its promoter region. Denoting the four-letter set of nucleotides as
$\mathcal{A} = \{A, C, G, T\}$, the promoter region of a gene is a finite-length sequence of nucleotides
$x \in \mathcal{A}^* = \bigcup_{i=0}^{\infty} \mathcal{A}^i$. Given several sequenced organisms, in silico comparison of genes
between organisms often allows the detection of orthologous genes, that is, genes that
evolved from a common ancestor. If the species are evolutionarily close, as are different
yeast strains, then the promoter regions are usually quite similar and can be represented as
a multiple alignment. Each position in this alignment represents one letter in the shared
ancestor's promoter region. Mathematically speaking, a multiple alignment of length $n$ of
$p$ sequences is a sequence $c = c_1, c_2, \ldots, c_n$, where each $c_i \in \bar{\mathcal{A}}^p$, for $i = 1, \ldots, n$, is
a column of $p$ letters in the alphabet $\bar{\mathcal{A}} = \mathcal{A} \cup \{-\}$. The additional letter "$-$" is used
to represent gaps in sequences, which represent insertion or deletion of letters during the
evolution of the sequences.
We are now in the position to describe a family of representations and kernels for promoter
regions, incorporating an increasing amount of prior knowledge about the properties of regulatory motifs. All kernels below are simple inner products between vector representations
of promoter regions. These vector representations are always indexed by a set $\mathcal{M}$ of short
sequences of fixed length $d$, which can either be all $d$-mers, i.e., $\mathcal{M} = \mathcal{A}^d$, or a predefined
library of indexing sequences. A promoter region $P$ (either single sequence or multiple
alignment) is therefore always represented by a vector $\phi_{\mathcal{M}}(P) = (\phi_a(P))_{a \in \mathcal{M}}$.
Motif kernel on a single sequence   The simplest approach to index a single promoter
region $x \in \mathcal{A}^*$ with an alphabet $\mathcal{M}$ is to define
$$\phi_a^{\mathrm{Spectrum}}(x) = n_a(x), \quad \forall a \in \mathcal{M},$$
where $n_a(x)$ counts the number of occurrences of $a$ in $x$. When $\mathcal{M} = \mathcal{A}^d$, the resulting
kernel is the spectrum kernel [3] between single promoter regions.
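For intuition, the spectrum feature map and kernel can be computed by direct d-mer counting. The sketch below is illustrative only (the sequences and function names are ours, not the authors'):

```python
from collections import Counter

def spectrum(x, d):
    """d-mer count vector n_a(x) of a single sequence x."""
    return Counter(x[i:i + d] for i in range(len(x) - d + 1))

def spectrum_kernel(x, y, d):
    """Inner product of the two d-mer count vectors."""
    nx, ny = spectrum(x, d), spectrum(y, d)
    # Counter returns 0 for missing keys, so ny[a] is always defined.
    return sum(c * ny[a] for a, c in nx.items())
```

For example, with d = 2 the sequence ACGT maps to counts AC, CG, GT (each 1), so its kernel value with itself is 3.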
Motif kernel on multiple sequences   When a gene has $p$ orthologs in other species, then a
set of $p$ promoter regions $\{x_1, x_2, \ldots, x_p\} \in (\mathcal{A}^*)^p$, which are expected to contain similar
regulatory motifs, is available. We propose the following representation for such a set:
$$\phi_a^{\mathrm{Summation}}(\{x_1, x_2, \ldots, x_p\}) = \sum_{i=1}^{p} \phi_a^{\mathrm{Spectrum}}(x_i), \quad \forall a \in \mathcal{M}.$$
We call the resulting kernel the summation kernel. It is essentially the spectrum kernel on
the concatenation of the available promoter regions (ignoring, however, k-mers that overlap different sequences in the concatenation). The rationale behind this kernel, compared to
the spectrum kernel, is two-fold. First, if all promoters contain common functional motifs
and randomly varying nonfunctional motifs, then the signal-to-noise ratio of the relevant
regulatory features compared to other irrelevant non-functional features increases by taking
the sum (or mean) of individual feature vectors. Second, even functional motifs representing transcription factor binding sites are known to have some variability in some positions,
and merging the occurrences of a similar motif in different sequences is a way to model
this flexibility in the framework of a vector representation.
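In this counting view, the summation kernel's feature map is simply the element-wise sum of the orthologs' spectra. A minimal sketch (function names are ours, not the paper's):

```python
from collections import Counter

def spectrum(x, d):
    """d-mer count vector of one promoter sequence."""
    return Counter(x[i:i + d] for i in range(len(x) - d + 1))

def summation_features(seqs, d):
    """Sum the spectra of a set of orthologous promoter regions."""
    total = Counter()
    for x in seqs:
        total += spectrum(x, d)
    return total
```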
Marginalized motif kernel on a multiple alignment The summation kernel might suffer
from at least two limitations. First, it does not include any information about the relationships between orthologs, in particular their relative similarities. Suppose for example that
three species are compared, two of them being very similar. Then the promoter regions
of two out of three orthologs would be virtually identical, giving an unjustified double
weight to this duplicated species compared to the third one in the summation kernel. Second, although mutations in functional motifs between different species would correspond
to different short motifs in the summation kernel feature vector, these varying short motifs
might not cover all allowed variations in the functional motifs, especially if the motifs are
extracted from a small number of orthologs. In such cases, probabilistic models such as
weight matrices, which estimate possible variations for each position independently, are
known to make more efficient use of the data.
In order to overcome these limitations, we propose to transform the set of promoter regions
into a multiple alignment. We therefore assume that a fixed number of q species has been
selected, and that a probabilistic model $p(h, c)$, with $h \in \mathcal{A}$ and $c \in \bar{\mathcal{A}}^q$, has been tuned on
these species. By "tuned," we mean that $p(h, c)$ is a distribution that accurately describes
the probability of a given letter h in the common ancestor of the species, together with
the set of letters c at the corresponding position in the set of species. Such distributions
are commonly used in computational genomics, often resulting from the estimation of a
phylogenetic tree model [6]. We also assume that all sets of q promoter regions of groups
of orthologous genes in the q species have been turned into multiple alignments.
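For intuition only, here is a toy version of the per-column model p(h, c). The paper tunes a full phylogenetic tree over the species; the sketch below replaces it with a star phylogeny in which each observed letter descends independently from the ancestor h under a single Jukes-Cantor substitution probability, and it ignores gap letters. The topology, mutation rate, and uniform prior are all simplifying assumptions, not values from the paper.

```python
ALPHABET = "ACGT"

def jc_prob(h, c, mu):
    """Jukes-Cantor: probability that ancestor letter h is observed as c."""
    return 1.0 - mu if c == h else mu / 3.0

def column_joint(h, column, mu=0.1, prior=0.25):
    """p(h, c) for one alignment column under a star phylogeny."""
    p = prior
    for c in column:
        p *= jc_prob(h, c, mu)
    return p

def ancestor_posterior(column, mu=0.1):
    """p(h | c), obtained by normalizing the joint over the four nucleotides."""
    joint = {h: column_joint(h, column, mu) for h in ALPHABET}
    z = sum(joint.values())
    return {h: v / z for h, v in joint.items()}
```

A column in which all species agree gives a sharply peaked ancestor posterior, which is exactly what the marginalized kernels below exploit.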
Given an alignment $c = c_1, c_2, \ldots, c_n$, suppose for the moment that we know the corresponding true sequence of nucleotides of the common ancestor $h = h_1, h_2, \ldots, h_n$. Then
the spectrum of the sequence $h$, that is, $\phi_{\mathcal{M}}^{\mathrm{Spectrum}}(h)$, would be a good summary for the
multiple alignment, and the inner product between two such spectra would be a candidate
kernel between multiple alignments. The sequence h being of course unknown, we propose to estimate its conditional probability given the multiple alignment c, under the model
where all columns are independent and identically distributed according to the evolutionary
model, that is, $p(h|c) = \prod_{i=1}^{n} p(h_i|c_i)$. Under this probabilistic model, it is now possible
to define the representation of the multiple alignment as the expectation of the spectrum
representation of $h$ with respect to this conditional probability, that is:
$$\phi_a^{\mathrm{Marginalized}}(c) = \sum_{h} \phi_a^{\mathrm{Spectrum}}(h)\, p(h|c), \quad \forall a \in \mathcal{M}. \qquad (1)$$
In order to compute this representation, we observe that if $h$ has length $n$ and $a = a_1 \ldots a_d$
has length $d$, then
$$\phi_a^{\mathrm{Spectrum}}(h) = \sum_{i=1}^{n-d+1} \delta(a, h_i \ldots h_{i+d-1}),$$
where $\delta$ is the Kronecker function. Therefore,
$$\phi_a^{\mathrm{Marginalized}}(c) = \sum_{h \in \mathcal{A}^n} \left\{ \left( \sum_{i=1}^{n-d+1} \delta(a, h_i \ldots h_{i+d-1}) \right) \prod_{i=1}^{n} p(h_i|c_i) \right\}
= \sum_{i=1}^{n-d+1} \left( \prod_{j=0}^{d-1} p(a_{j+1}|c_{i+j}) \right).$$
This computation can be performed explicitly by computing $p(a_{j+1}|c_{i+j})$ at each position
$i = 1, \ldots, n$, and performing the sum of the products of these probabilities over a moving
window. We call the resulting kernel the marginalized kernel because it corresponds to the
marginalization of the spectrum kernel under the phylogenetic probabilistic model [7].
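Given the per-column posteriors, this closed form is a sliding-window product. In the sketch below, `post` is a hypothetical list of per-column distributions p(h | c_i); in the paper these would come from the tuned phylogenetic model.

```python
def marginalized_feature(post, a):
    """phi_a(c) = sum over windows i of prod_j post[i+j][a[j]].

    post: list of dicts, post[i][h] = p(h | c_i) for column i.
    a:    the d-mer whose marginalized count is wanted.
    """
    n, d = len(post), len(a)
    total = 0.0
    for i in range(n - d + 1):
        w = 1.0
        for j in range(d):
            w *= post[i + j][a[j]]
        total += w
    return total
```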
Marginalized motif kernel with phylogenetic shadowing The marginalized kernel is
expected to be useful when relevant information is distributed along the entire length of the
sequences analyzed. In the case of promoter regions, however, the relevant information is
more likely to be located within a few short motifs. Because only a small fraction of the
total set of promoter regions lies within such motifs, this information is likely to be lost
when the whole sequence is represented by its spectrum. In order to overcome this limitation, we exploit the observation that relevant motifs are more evolutionarily conserved
on average than the surrounding sequence. This hypothesis has been confirmed by many
studies that show that functional parts, being under more evolutionary pressure, are more
conserved than non-functional ones.
Given a multiple alignment $c$, let us assume (temporarily) that we know which parts are relevant. We can encode this information into a sequence of binary variables $s = s_1 \ldots s_n \in \{0, 1\}^n$,
where $s_i = 1$ means that the $i$th position is relevant, and irrelevant if $s_i = 0$. A
typical sequence for a promoter region consists primarily of 0's, except for a few positions
indicating the position of the transcription factor binding motifs. Let us also assume that
we know the nucleotide sequence h of the common ancestor. Then it would make sense to
use a spectrum kernel based on the spectrum of h restricted to the relevant positions only.
In other words, all positions where si = 0 could be thrown away, in order to focus only on
the relevant positions. This corresponds to defining the features:
$$\phi_a^{\mathrm{Relevant}}(h, s) = \sum_{i=1}^{n-d+1} \delta(a, h_i \ldots h_{i+d-1})\, \delta(s_i, 1) \cdots \delta(s_{i+d-1}, 1), \quad \forall a \in \mathcal{M}.$$
Given only a multiple alignment c, the sequences h and s are not known but can be estimated. This is where the hypothesis that relevant nucleotides are more conserved than
irrelevant nucleotides can be encoded, by using two models of evolution with different
rates of mutations, as in phylogenetic shadowing [2]. Let us therefore assume that we have
a model $p(c|h, s = 0)$ that describes "fast" evolution from an ancestral nucleotide $h$ to a
column $c$ in a multiple alignment, and a second model $p(c|h, s = 1)$ that describes "slow"
evolution. In practice, we take these models to be two classical evolutionary models with
different mutation rates, but any reasonable pair of random models could be used here, if
one had a better model for functional sites, for example. Given these two models of evolution, let us also define a prior probability p(s) that a position is relevant or not (related to
the proportion of relevant parts we expect in a promoter region), and prior probabilities for
the ancestor nucleotide p(h|s = 0) and p(h|s = 1).
The joint probability of being in state s, having an ancestor nucleotide h and a resulting
alignment $c$ is then $p(c, h, s) = p(s)\,p(h|s)\,p(c|h, s)$. Under the probabilistic model where
all columns are independent from each other, that is, $p(h, s|c) = \prod_{i=1}^{n} p(h_i, s_i|c_i)$, we can
now replace (1) by the following features:
$$\phi_a^{\mathrm{Shadow}}(c) = \sum_{h,s} \phi_a^{\mathrm{Relevant}}(h, s)\, p(h, s|c), \quad \forall a \in \mathcal{M}. \qquad (2)$$
Like the marginalized spectrum kernel, this kernel can be computed by computing the
explicit representation of each multiple sequence alignment $c$ as a vector $(\phi_a(c))_{a \in \mathcal{M}}$ as
follows:
$$\phi_a^{\mathrm{Shadow}}(c) = \sum_{h \in \mathcal{A}^n} \sum_{s \in \{0,1\}^n} \left( \sum_{i=1}^{n-d+1} \delta(a, h_i \ldots h_{i+d-1})\, \delta(s_i, 1) \cdots \delta(s_{i+d-1}, 1) \right) \prod_{i=1}^{n} p(h_i, s_i|c_i)
= \sum_{i=1}^{n-d+1} \left( \prod_{j=0}^{d-1} p(h = a_{j+1}, s = 1 | c_{i+j}) \right).$$
The computation can then be carried out by exploiting the observation that each term can
be computed by:
$$p(h, s = 1|c) = \frac{p(s = 1)\,p(h|s = 1)\,p(c|h, s = 1)}{p(s = 0)\,p(c|s = 0) + p(s = 1)\,p(c|s = 1)}.$$
Moreover, it can easily be seen that, like the marginalized kernel, the shadow kernel is the
marginalization of the kernel corresponding to $\phi^{\mathrm{Relevant}}$ with respect to $p(h, s|c)$.
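This per-column posterior can be evaluated directly once the two likelihood tables are known. In the sketch below, the prior on s, the uniform prior on h, and the per-state likelihoods are placeholder inputs, not values used in the paper:

```python
def shadow_posterior(column_lik, prior_s=0.1, prior_h=None):
    """p(h, s = 1 | c) for a single alignment column.

    column_lik[s][h] = p(c | h, s) for this column, with s = 1 the
    slow (conserved) model and s = 0 the fast model.
    """
    if prior_h is None:
        prior_h = {h: 0.25 for h in "ACGT"}
    # p(c | s) = sum_h p(h | s) p(c | h, s); a shared prior p(h) is assumed.
    pc = {s: sum(prior_h[h] * column_lik[s][h] for h in "ACGT")
          for s in (0, 1)}
    z = (1 - prior_s) * pc[0] + prior_s * pc[1]
    return {h: prior_s * prior_h[h] * column_lik[1][h] / z for h in "ACGT"}
```

Note that summing the returned values over h gives p(s = 1 | c), the posterior probability that the column is a conserved, relevant position.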
Incorporating Markov dependencies between positions The probabilistic model used
in the shadow kernel models each position independently from the others. As a result, a
conserved position has the same contribution to the shadow kernel if it is surrounded by
other conserved positions, or by varying positions. In order to encode our prior knowledge
that the pattern of functional / nonfunctional positions along the sequence is likely to be a
succession of short functional regions and longer nonfunctional regions, we propose to replace this probabilistic model by a probabilistic model with a Markov dependency between
successive positions for the variable s, that is, to consider the probability:
$$p^{\mathrm{Markov}}(c, h, s) = p(s_1)\,p(h_1, c_1|s_1) \prod_{i=2}^{n} p(s_i|s_{i-1})\, p(h_i, c_i|s_i).$$
This suggests replacing (2) by
$$\phi_a^{\mathrm{Markov}}(c) = \sum_{h,s} \phi_a(h, s)\, p^{\mathrm{Markov}}(h, s|c), \quad \forall a \in \mathcal{M}.$$
Once again, this feature vector can be computed as a sum of window weights over sequences by
$$\phi_a^{\mathrm{Markov}}(c) = \sum_{i=1}^{n-d+1} p(s_i = 1|c)\, p(h_i = a_1 | c_i, s_i = 1) \prod_{j=1}^{d-1} p(h_{i+j} = a_{j+1}, s_{i+j} = 1 | c_{i+j}, s_{i+j-1} = 1).$$
The main difference with the computation of the shadow kernel is the need to compute the
term $P(s_i = 1|c)$, which can be done using the general sum-product algorithm.
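Since s is a two-state Markov chain, p(s_i = 1 | c) follows from a standard forward-backward pass. The sketch below is generic, not the authors' code; the emission values stand in for p(h_i, c_i | s_i) with the ancestor letter summed out, and are invented for the example:

```python
def posterior_states(emit, trans, init=(0.5, 0.5)):
    """Forward-backward on a two-state chain.

    emit[i][s]  ~ p(c_i | s_i = s), ancestor letter already summed out.
    trans[s][t] = p(s_{i+1} = t | s_i = s).
    Returns p(s_i = 1 | c) for every position i.
    """
    n = len(emit)
    fwd = [[0.0, 0.0] for _ in range(n)]
    for s in (0, 1):
        fwd[0][s] = init[s] * emit[0][s]
    for i in range(1, n):
        for t in (0, 1):
            fwd[i][t] = emit[i][t] * sum(fwd[i - 1][s] * trans[s][t]
                                         for s in (0, 1))
    bwd = [[1.0, 1.0] for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for s in (0, 1):
            bwd[i][s] = sum(trans[s][t] * emit[i + 1][t] * bwd[i + 1][t]
                            for t in (0, 1))
    return [fwd[i][1] * bwd[i][1] /
            (fwd[i][0] * bwd[i][0] + fwd[i][1] * bwd[i][1])
            for i in range(n)]
```

For long alignments, the forward and backward messages should be rescaled at each position to avoid underflow; the sketch omits this for clarity.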
3 Experiments
We measure the utility of our hierarchy of kernels in a cross-validated, supervised learning
framework. As a starting point for the analysis, we use various groups of genes that show
co-expression in a microarray study. Eight gene groups were derived from a study that applied hierarchical clustering to a collection of 79 experimental conditions, including time
series from the diauxic shift, the cell cycle series, and sporulation, as well as various temperature and reducing shocks [8]. We hypothesize that co-expression implies co-regulation
of a given group of genes by a common set of transcription factors. Hence, the corresponding promoter regions should be enriched for a corresponding set of transcription factor
binding motifs. We test the ability of a support vector machine (SVM) classifier to learn to
recapitulate the co-expression classes, based only upon the promoter regions. Our results
show that the SVM performance improves as we incorporate more prior knowledge into
the promoter kernel.
We collected the promoter regions from five closely related yeast species [9, 10]. Promoter
regions from orthologous genes were aligned using ClustalW, discarding promoter regions
that aligned with less than 30% sequence identity relative to the other sequences in the
alignment. This procedure produced 3591 promoter region alignments. For the phylogenetic kernels, we inferred a phylogenetic tree among the five yeast species from alignments
of four highly conserved proteins: MCM2, MCM3, CDC47 and MCM6. The concatenated alignment was analyzed with fastDNAml [11] using the default parameters. The
resulting tree was used in all of our analyses.
SVMs were trained using Gist (microarray.cpmc.columbia.edu/gist) with the default parameters. These include a normalized kernel, and a two-norm soft margin with asymmetric
penalty based upon the ratio of positive and negative class sizes. All kernels were computed by summing over all $4^5$ k-mers of width 5. Each class was recognized in a one-vs-all
fashion. SVM testing was performed using balanced three-fold cross-validation, repeated
five times.
The results of this experiment are summarized in Table 1. For every gene class, the
worst-performing kernel is one of the three simplest kernels: "simple," "summation" or
"marginalization." The mean ROC scores across all eight classes for these three kernels
are 0.733, 0.765 and 0.748. Classification performance improves dramatically using the
shadow kernel with either a small (2) or large (5) ratio of fast-to-slow evolutionary rates.
The mean ROC scores for these two kernels are 0.854 and 0.844. Furthermore, across five
of the eight gene classes, one of the two shadow kernels is the best-performing kernel.
The Markov kernel performs approximately as well as the shadow kernel. We tried six
different parameterizations, as shown in the table, and these achieved mean ROC scores
ranging from 0.822 to 0.850. The differences between the best parameterization of this
kernel ("Markov 5 90/90") and "shadow 2" are not significant. Although further tuning
Table 1: Mean ROC scores for SVMs trained using various kernels to recognize classes
of co-expressed yeast genes. The second row in the table gives the number of genes in each
class. All other rows contain mean ROC scores across balanced three-fold cross-validation,
repeated five times. Standard errors (not shown) are almost uniformly 0.02, with a few
values of 0.03. Values in bold-face are the best mean ROC for the given class of genes. The
classes of genes (columns) are, respectively, ATP synthesis, DNA replication, glycolysis,
mitochondrial ribosome, proteasome, spindle-pole body, splicing and TCA cycle. The
kernels are as described in the text. For the shadow and Markov kernels, the values "2"
and "5" refer to the ratio of fast to slow evolutionary rates. For the Markov kernel, the
values "90" and "99" refer to the self-transition probabilities (times 100) in the conserved
and varying states of the model.
Kernel            ATP    DNA    Glyc   Ribo   Prot   Spin   Splic  TCA    Mean
(# genes)         15     5      17     22     27     11     14     16
single            0.711  0.777  0.814  0.743  0.735  0.716  0.683  0.684  0.733
summation         0.773  0.768  0.824  0.750  0.763  0.756  0.739  0.740  0.764
marginalized      0.799  0.805  0.833  0.729  0.748  0.721  0.676  0.673  0.748
shadow 2          0.881  0.929  0.928  0.840  0.867  0.827  0.787  0.770  0.854
shadow 5          0.889  0.935  0.927  0.819  0.849  0.821  0.766  0.752  0.845
Markov 2 90/90    0.848  0.891  0.908  0.830  0.853  0.801  0.773  0.758  0.833
Markov 2 90/99    0.868  0.911  0.915  0.826  0.850  0.782  0.752  0.735  0.830
Markov 2 99/99    0.869  0.910  0.912  0.816  0.840  0.773  0.737  0.724  0.823
Markov 5 90/90    0.875  0.922  0.924  0.844  0.868  0.814  0.788  0.769  0.851
Markov 5 90/99    0.872  0.916  0.920  0.834  0.858  0.794  0.774  0.755  0.840
Markov 5 99/99    0.868  0.917  0.921  0.830  0.853  0.774  0.751  0.733  0.831
of kernel parameters might yield significant improvement, our results thus far suggest that
incorporating dependencies between adjacent positions does not help very much.
Finally, we test the ability of the SVM to identify sequence regions that correspond to biologically significant motifs. As a gold standard, we use the JASPAR
database (jaspar.cgb.ki.se), searching each class of promoter regions using MONKEY
(rana.lbl.gov/~alan/Monkey.htm) with a p-value threshold of $10^{-4}$. For each gene class,
we identify the three JASPAR motifs that occur most frequently within that class, and we
create a list of all 5-mers that appear within those motif occurrences. Next, we create a corresponding list of 5-mers identified by the SVM. We do this by calculating the hyperplane
weight associated with each 5-mer and retaining the top 20 5-mers for each of the 15 cross-validation runs. We then take the union over all runs to come up with a list of between 40
and 55 top 5-mers for each class. Table 2 indicates that the discriminative 5-mers identified
by the SVM are significantly enriched in 5-mers that appear within biologically significant
motif regions, with significant p-values for all eight gene classes (see caption for details).
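The enrichment test described in the Table 2 caption is a hypergeometric tail probability and can be reproduced by direct enumeration. The function below follows that description; the example counts in the docstring are the ATP column's values from the table:

```python
from math import comb

def hypergeom_pvalue(k, M, K, n):
    """P(X >= k) when n items are drawn without replacement from a pool
    of M items of which K are 'successes'.

    For the ATP class: M = 1006 class 5-mers, K = 180 motif 5-mers,
    n = 46 SVM 5-mers, observed intersection k = 24, and the expected
    intersection is n * K / M = 8.23.
    """
    denom = comb(M, n)
    return sum(comb(K, x) * comb(M - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom
```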
4 Conclusion
We have described and demonstrated the utility of a class of kernels for characterizing
gene regulatory regions. These kernels allow us to incorporate prior knowledge about the
evolution of a set of orthologous sequences and the conservation of transcription factor
binding site motifs. We have also demonstrated that the motifs identified by an SVM trained
using these kernels correspond to biologically significant motif regions. Our future work
will focus on automating the process of agglomerating the identified k-mers into a smaller
set of motif models, and on applying these kernels in combination with gene expression,
protein-protein interaction and other genome-wide data sets.
This work was funded by NIH awards R33 HG003070 and U01 HG003161.
Table 2: SVM features correlate with discriminative motifs. The first row lists the
number of non-redundant 5-mers constructed from high-scoring SVM features. Row two
gives the number of 5-mers constructed from JASPAR motif occurrences in the 5-species
alignments. Row three is a tally of all 5-mers appearing in the sequences making up the
class. The fourth row gives the size of the intersection between the SVM and motif-based
5-mer lists. The final two rows give the expected value and p-value for the intersection
size. The p-value is computed using the hypergeometric distribution by enumerating all
possibilities for the intersection of two sets selected from a larger set given the sizes in the
first three rows.
          ATP      DNA      Glyc     Ribo      Prot     Spin     Splic    TCA
SVM       46       40       55       50        49       43       48       50
Motif     180      68       227      38        148      152      52       104
Class     1006     839      967      973       1001     891      881      995
Inter     24       8        23       18        23       19       14       21
Expect    8.23     3.24     12.91    1.95      7.25     7.34     2.83     5.23
p-value   6.19e-8  1.15e-2  1.44e-3  3.88e-15  3.24e-8  1.74e-5  1.15e-7  2.00e-9
References
[1] D. Y. Chiang, P. O. Brown, and M. B. Eisen. Visualizing associations between genome sequences and gene expression data using genome-mean expression profiles. Bioinformatics, 17(Supp. 1):S49-S55, 2001.
[2] D. Boffelli, J. McAuliffe, D. Ovcharenko, K. D. Lewis, I. Ovcharenko, L. Pachter, and E. M. Rubin. Phylogenetic shadowing of primate sequences to find functional regions of the human genome. Science, 299:1391-1394, 2003.
[3] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In R. B. Altman, A. K. Dunker, L. Hunter, K. Lauderdale, and T. E. Klein, editors, Proceedings of the Pacific Symposium on Biocomputing, pages 564-575, New Jersey, 2002. World Scientific.
[4] X. H-F. Zhang, K. A. Heller, I. Hefter, C. S. Leslie, and L. A. Chasin. Sequence information for the splicing of human pre-mRNA identified by support vector machine classification. Genome Research, 13:2637-2650, 2003.
[5] A. Zien, G. Rätsch, S. Mika, B. Schölkopf, T. Lengauer, and K.-R. Müller. Engineering support vector machine kernels that recognize translation initiation sites. Bioinformatics, 16(9):799-807, 2000.
[6] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis. Cambridge UP, 1998.
[7] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 18:S268-S275, 2002.
[8] M. Eisen, P. Spellman, P. O. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences of the United States of America, 95:14863-14868, 1998.
[9] Paul Cliften, Priya Sudarsanam, Ashwin Desikan, Lucinda Fulton, Bob Fulton, John Majors, Robert Waterston, Barak A. Cohen, and Mark Johnston. Finding functional features in Saccharomyces genomes by phylogenetic footprinting. Science, 301(5629):71-76, 2003.
[10] Manolis Kellis, Nick Patterson, Matthew Endrizzi, Bruce Birren, and Eric S. Lander. Sequencing and comparison of yeast species to identify genes and regulatory elements. Nature, 423(6937):241-254, 2003.
[11] G. J. Olsen, H. Matsuda, R. Hagstrom, and R. Overbeek. fastDNAml: a tool for construction of phylogenetic trees of DNA sequences using maximum likelihood. Comput. Appl. Biosci., 10(1):41-48, 1994.
A Matching Pursuit Approach to
Sparse Gaussian Process Regression
S. Sathiya Keerthi
Yahoo! Research Labs
210 S. DeLacey Avenue
Pasadena, CA 91105
[email protected]
Wei Chu
Gatsby Computational Neuroscience Unit
University College London
London, WC1N 3AR, UK
[email protected]
Abstract
In this paper we propose a new basis selection criterion for building
sparse GP regression models that provides promising gains in accuracy
as well as efficiency over previous methods. Our algorithm is much faster
than that of Smola and Bartlett while, in generalization, it greatly outperforms the information gain approach proposed by Seeger et al., especially
on the quality of predictive distributions.
1 Introduction
Bayesian Gaussian processes provide a promising probabilistic kernel approach to supervised learning tasks. The advantage of Gaussian process (GP) models over non-Bayesian kernel methods, such as support vector machines, comes from the explicit probabilistic formulation that yields predictive distributions for test instances and allows standard Bayesian techniques for model selection. The cost of training GP models is O(n^3), where n is the number of training instances, which results in a huge computational cost for large data sets. Furthermore, when predicting a test case, a GP model requires O(n) cost for computing the mean and O(n^2) cost for computing the variance. These heavy scaling properties obstruct the use of GPs in large scale problems.
Recently, sparse GP models which bring down the complexity of training as well as testing have attracted considerable attention. Williams and Seeger (2001) applied the Nyström method to calculate a reduced rank approximation of the original n x n kernel matrix. Csató and Opper (2002) developed an on-line algorithm to maintain a sparse representation of the GP models. Smola and Bartlett (2001) proposed a forward selection scheme to approximate
the log posterior probability. Candela (2004) suggested a promising alternative criterion by
maximizing the approximate model evidence. Seeger et al. (2003) presented a very fast
greedy selection method for building sparse GP regression models. All of these methods make efforts to select an informative subset of the training instances for the predictive
model. This subset is usually referred to as the set of basis vectors, denoted as I. The
maximal size of I is usually limited by a value dmax. As dmax ≪ n, the sparseness greatly
alleviates the computational burden in both training and prediction of the GP models. The
performance of the resulting sparse GP models crucially depends on the criterion used in
the basis vector selection. Motivated by the ideas of Matching Pursuit (Vincent and Bengio, 2002), we propose a new criterion of greedy forward selection for sparse GP models.
Our algorithm is closely related to that of Smola and Bartlett (2001), but the criterion we
propose is much more efficient. Compared with the information gain method of Seeger
et al. (2003) our approach yields clearly better generalization performance, while essentially having the same algorithm complexity. We focus only on regression in this paper, but
the main ideas are applicable to other supervised learning tasks.
The paper is organized as follows: in Section 2 we present the probabilistic framework
for sparse GP models; in Section 3 we describe our method of greedy forward selection
after motivating it via the previous methods; in Section 4 we discuss some issues in model
adaptation; in Section 5 we report results of numerical experiments that demonstrate the
effectiveness of our new method.
2 Sparse GPs for regression
In regression problems, we are given a training data set composed of n samples. Each sample is a pair of an input vector x_i ∈ R^m and its corresponding target y_i ∈ R. The true function value at x_i is represented as an unobservable latent variable f(x_i) and the target y_i is a noisy measurement of f(x_i). The goal is to construct a predictive model that estimates the relationship x → f(x).
Gaussian process regression. In standard GPs for regression, the latent variables {f(x_i)} are random variables in a zero mean Gaussian process indexed by {x_i}. The prior distribution of {f(x_i)} is a multivariate joint Gaussian, denoted as P(f) = N(f; 0, K), where f = [f(x_1), ..., f(x_n)]^T and K is the n x n covariance matrix whose ij-th element is K(x_i, x_j), K being the kernel function. The likelihood is essentially a model of the measurement noise, which is usually evaluated as a product of independent Gaussian noises, P(y|f) = N(y; f, σ^2 I), where y = [y_1, ..., y_n]^T and σ^2 is the noise variance. The posterior distribution P(f|y) ∝ P(y|f)P(f) is also exactly a Gaussian:

    P(f|y) = N(f; Kα, σ^2 K(K + σ^2 I)^{-1})    (1)

where α = (K + σ^2 I)^{-1} y. For any test instance x, the predictive distribution is N(f(x); μ_x, σ_x^2) where μ_x = k^T (K + σ^2 I)^{-1} y = k^T α, σ_x^2 = K(x, x) − k^T (K + σ^2 I)^{-1} k, and k = [K(x_1, x), ..., K(x_n, x)]^T. The computational cost of training is
O(n^3), which mainly comes from the need to invert the matrix (K + σ^2 I) and obtain the vector α. For doing predictions of a test instance the cost is O(n) to compute the mean and O(n^2) for computing the variance. This heavy scaling with respect to n makes the use of standard GP computationally prohibitive on large datasets.
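As a concrete illustration of these equations, here is a minimal numpy sketch (not code from the paper; the RBF kernel, noise level, and toy data are hypothetical choices):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared-exponential (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_fit_predict(X, y, Xstar, sigma2=0.1):
    # Full GP regression: O(n^3) training cost from the matrix solve,
    # O(n) per test mean and O(n^2) per test variance.
    n = len(X)
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + sigma2 * np.eye(n), y)   # alpha = (K + sigma^2 I)^{-1} y
    k = rbf_kernel(Xstar, X)                             # test-train covariances
    mean = k @ alpha                                     # mu_x = k^T alpha
    V = np.linalg.solve(K + sigma2 * np.eye(n), k.T)
    var = rbf_kernel(Xstar, Xstar).diagonal() - (k * V.T).sum(axis=1)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
mean, var = gp_fit_predict(X, y, X)
```

The single solve against the n x n matrix is the O(n^3) step referred to above.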
Projected latent variables. Seeger et al. (2003) gave a neat method for working with a reduced number of latent variables, laying the foundation for forming sparse GP models. In this section we review their ideas. Instead of assuming n latent variables for all the training instances, sparse GP models assume only d latent variables placed at some chosen basis vectors {x̃_i}, denoted as a column vector f_I = [f(x̃_1), ..., f(x̃_d)]^T. The prior distribution of the sparse GP is a joint Gaussian over f_I only, i.e.,

    P(f_I) = N(f_I; 0, K_I)    (2)

where K_I is the d x d covariance matrix of the basis vectors whose ij-th element is K(x̃_i, x̃_j).
These latent variables are then projected to all the training instances. Under the imposed joint Gaussian prior, the conditional mean at the training instances is K_{I,·}^T K_I^{-1} f_I, where K_{I,·} is a d x n matrix of the covariance functions between the basis vectors and all the training instances. The likelihood can be evaluated by these projected latent variables as follows

    P(y|f_I) = N(y; K_{I,·}^T K_I^{-1} f_I, σ^2 I)    (3)

The posterior is P(f_I|y) = N(f_I; K_I α_I, σ^2 K_I (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_I), where α_I = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y. The predictive distribution at any test instance x is
N(f(x); μ̃_x, σ̃_x^2), where μ̃_x = k̃^T α_I, σ̃_x^2 = K(x, x) − k̃^T K_I^{-1} k̃ + σ^2 k̃^T (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} k̃, and k̃ is a column vector of the covariance functions between the basis vectors and the test instance x, i.e. k̃ = [K(x̃_1, x), ..., K(x̃_d, x)]^T.
While the cost of training the full GP model is O(n^3), the training complexity of sparse GP models is only O(n dmax^2). This corresponds to the cost of forming K_I^{-1}, (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} and α_I. Thus, if dmax is not big, learning on large datasets is feasible via sparse GP models. Also, for these sparse models, prediction for each test instance costs O(dmax) for the mean and O(dmax^2) for the variance.

Generally the basis vectors can be placed anywhere in the input space R^m. Since training instances usually cover the input space of interest quite well, it is quite reasonable to select basis vectors from just the set of training instances. For a given problem dmax is chosen to be as large as possible subject to constraints on computational time in training and/or testing. Then we use some basis selection method to find I of size dmax. This important step is taken up in section 3.
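The projected sparse predictor above can be sketched as follows. This is a hypothetical numpy illustration, not the authors' code; a small jitter term is added to K_I for numerical stability:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_gp_fit(X, y, I, sigma2=0.1, jitter=1e-8):
    # I: indices of the chosen basis vectors (a subset of the training set).
    XI = X[I]
    KI = rbf(XI, XI) + jitter * np.eye(len(I))   # K_I  (d x d)
    KIdot = rbf(XI, X)                           # K_{I,.}  (d x n)
    A = sigma2 * KI + KIdot @ KIdot.T            # sigma^2 K_I + K_{I,.} K_{I,.}^T
    alpha_I = np.linalg.solve(A, KIdot @ y)      # posterior-mean weights
    return XI, KI, A, alpha_I

def sparse_gp_predict(Xstar, XI, KI, A, alpha_I, sigma2=0.1):
    kt = rbf(Xstar, XI)                          # k-tilde for each test point (t x d)
    mean = kt @ alpha_I                          # O(dmax) per test case
    var = (1.0 - (kt * np.linalg.solve(KI, kt.T).T).sum(1)
           + sigma2 * (kt * np.linalg.solve(A, kt.T).T).sum(1))  # O(dmax^2) per case
    return mean, var

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
I = np.arange(0, 200, 10)                        # 20 basis vectors
mean, var = sparse_gp_predict(X, *sparse_gp_fit(X, y, I))
```

Note that training touches only d x d and d x n matrices, which is where the O(n dmax^2) complexity comes from.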
A useful optimization formulation. As pointed out by Smola and Bartlett (2001), it is useful to view the determination of the mean of the posterior as coming from an optimization problem. This viewpoint helps in the selection of basis vectors. The mean of the posterior distribution is exactly the maximum a posteriori (MAP) estimate, and it is possible to give an equivalent parametric representation of the latent variables as f = Kα, where α = [α_1, ..., α_n]^T. The MAP estimate of the full GP is equivalent to minimizing the negative logarithm of the posterior (1):

    min_α φ(α) := (1/2) α^T (σ^2 K + K^T K) α − y^T K α    (4)
Similarly, using f_I = K_I α_I for sparse GP models, the MAP estimate of the sparse GP is equivalent to minimizing the negative logarithm of the posterior, P(f_I|y):

    min_{α_I} φ̃(α_I) := (1/2) α_I^T (σ^2 K_I + K_{I,·} K_{I,·}^T) α_I − y^T K_{I,·}^T α_I    (5)
Suppose α in (4) is composed of two parts, α = [α_I; α_R], where I denotes the set of basis vectors and R denotes the remaining instances. Interestingly, as pointed out by Seeger et al. (2003), the optimization problem (5) is the same as minimizing φ(α) in (4) using α_I only, i.e., with the constraint α_R = 0. In other words, the basis vectors of the sparse GPs can be selected to minimize the negative log-posterior of the full GPs, φ(α) defined as in (4).
3 Selection of basis functions
The most crucial element of the sparse GP approach of the previous section is the choice of I, the set of basis vectors, which we take to be a subset of the training vectors. The cheapest method is to select the basis vectors at random from the training data set. But, such a choice will not work well when dmax is much smaller than n. A principled approach is to select the I that makes the corresponding sparse GP approximate well the posterior distribution of the full GP. The optimization formulation of the previous section is useful here. It would be ideal to choose, among all subsets I of size dmax, the one that gives the best value of φ̃ in (5). But, this requires a combinatorial search that is infeasible for large problems. A practical approach is to do greedy forward selection. This is the approach used in previous methods as well as in our method of this paper.
Before we go into the details of the methods, let us give a brief discussion of the time complexities associated with forward selection. There are two costs involved. (1) There is a basic cost associated with updating of the sparse GP solution, given a sequence of chosen basis functions. Let us refer to this cost as Tbasic. This cost is the same for all forward selection methods, and is O(n dmax^2). (2) Then, depending on the basis selection method, there is the cost associated with basis selection. We will refer to the accumulated value of this cost for choosing all dmax basis functions as Tselection. Forward basis selection methods differ in the way they choose effective basis functions while keeping Tselection small. It is useful to note that the total cost associated with the random basis selection method mentioned earlier is just Tbasic = O(n dmax^2). This cost forms a baseline for comparison.
Smola and Bartlett's method. Consider the typical situation in forward selection where we have a current working set I and we are interested in choosing the next basis vector, x_i. The method of Smola and Bartlett (2001) evaluates each given x_i ∉ I by trying its complete inclusion, i.e., set I' = I ∪ {x_i} and optimize φ(α) using α_{I'} = [α_I; α_i]. Thus, their selection criterion for the instance x_i ∉ I is the decrease in φ(α) that can be obtained by allowing both α_I and α_i as variables to be non-zero. The minimal value of φ(α) can be obtained by solving min_{α_{I'}} φ̃(α_{I'}) defined in (5). This costs O(nd) time for each candidate x_i, where d is the size of the current set I. If all x_i ∉ I need to be tried, it will lead to O(n^2 d) cost. Accumulated till dmax basis functions are added, this leads to a Tselection that has O(n^2 dmax^2) complexity, which is disproportionately higher than Tbasic. Therefore, Smola and Bartlett (2001) resorted to a randomized scheme by considering only κ basis elements randomly chosen from outside I during one basis selection. They used a value of κ = 59. For this randomized method, the complexity of Tselection is O(κ n dmax^2). Although, from a complexity viewpoint, Tbasic and Tselection are the same, it should be noted that the overall cost of the method is about 60 times that of Tbasic.
Seeger et al's information gain method. Seeger et al. (2003) proposed a novel and very cheap heuristic criterion for basis selection. The "informativeness" of an input vector x_i ∉ I is scored by the information gain between the true posterior distribution P(f_{I'}|y) and a posterior approximation Q(f_{I'}|y), where I' denotes the new set of basis vectors after including a new element x_i into the current set I. The posterior approximation Q(f_{I'}|y) ignores the dependencies between the latent variable f(x_i) and the targets other than y_i. Due to this simplification, this value of information gain is computed in O(1) time, given the current predictive model represented by I. Thus, the scores of all instances outside I can be efficiently evaluated in O(n) time, which makes this algorithm almost as fast as using random selection! The potential weakness of this algorithm might be the non-use of the correlation in the remaining instances {x_i : x_i ∉ I}.
Post-backfitting approach. The two methods presented above are extremes in efficiency: in Smola and Bartlett's method Tselection is disproportionately larger than Tbasic while, in Seeger et al's method Tselection is very much smaller than Tbasic. In this section we introduce a moderate method that is effective and whose complexity is in between the two earlier methods. Our method borrows an idea from kernel matching pursuit.

Kernel Matching Pursuit (Vincent and Bengio, 2002) is a sparse method for ordinary least squares that consists of two general greedy sparse approximation schemes, called pre-backfitting and post-backfitting. It is worth pointing out that the same methods were also considered much earlier in Adler et al. (1996). Both methods can be generalized to select the basis vectors for sparse GPs. The pre-backfitting approach is very similar in spirit to Smola and Bartlett's method. Our method is an efficient selection criterion that is based on the post-backfitting idea. Recall that, given the current I, the minimal value of φ(α) when it is optimized using only α_I as variables is equivalent to min_{α_I} φ̃(α_I) as in (5). The minimizer, denoted as ᾱ_I, is given by

    ᾱ_I = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y    (6)
Our scoring criterion for an instance x_i ∉ I is based on optimizing φ(α) by fixing α_I = ᾱ_I and changing α_i only. The one-dimensional minimizer can be easily found as

    α_i* = [K_{i,·}^T (y − K_{I,·}^T ᾱ_I) − σ^2 k̃_i^T ᾱ_I] / [σ^2 K(x_i, x_i) + K_{i,·}^T K_{i,·}]    (7)

where K_{i,·} is the n x 1 matrix of covariance functions between x_i and all the training data, and k̃_i is a d dimensional vector having K(x_j, x_i), x_j ∈ I. The selection score of the instance x_i is the decrease in φ(α) achieved by the one dimensional optimization of α_i, which can be written in closed form as

    Δ_i = (1/2) (α_i*)^2 [σ^2 K(x_i, x_i) + K_{i,·}^T K_{i,·}]    (8)

where α_i* is defined as in (7). Note that a full kernel column K_{i,·} is required and so it costs O(n) time to compute (8). In contrast, for scoring one instance, Smola and Bartlett's method requires O(nd) time and Seeger et al's method requires O(1) time.
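Equations (7) and (8) can be checked numerically as follows (a hypothetical toy setup, not the paper's code): the score returned must equal the actual decrease in φ when α_i is set to α_i*.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 40, 0.25
X = rng.standard_normal((n, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
y = rng.standard_normal(n)

I = [0, 1, 2]                                     # current basis set
KI, KIdot = K[np.ix_(I, I)], K[I]
alpha_I = np.linalg.solve(sigma2 * KI + KIdot @ KIdot.T, KIdot @ y)   # eq. (6)

def phi(alpha):
    # Eq. (4), the objective being decreased by forward selection.
    return 0.5 * alpha @ (sigma2 * K + K @ K) @ alpha - y @ K @ alpha

def score(i):
    # One-dimensional post-backfitting step for candidate x_i (eqs. (7)-(8)).
    Ki = K[i]                                     # full kernel row K_{i,.}: the O(n) part
    ktil = K[I, i]                                # covariances between x_i and the basis set
    den = sigma2 * K[i, i] + Ki @ Ki
    a_star = (Ki @ (y - KIdot.T @ alpha_I) - sigma2 * ktil @ alpha_I) / den
    return a_star, 0.5 * a_star ** 2 * den        # (minimizer, decrease in phi)

a_star, delta = score(10)
```

Since φ is a strictly convex quadratic in α_i, the decrease in (8) is just half the curvature times the squared step.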
Ideally we would like to run over all x_i ∉ I and choose the instance which gives the largest decrease. This will need O(n^2) effort. Summing the cost till dmax basis vectors are selected, we get an overall complexity of O(n^2 dmax), which is much higher than Tbasic. To restrict the overall complexity of Tselection to O(n dmax^2), we resort to a randomization scheme that selects a relatively good one rather than the best. Since it costs only O(n) time to evaluate our selection criterion in (8) for one instance, we can choose the next basis vector from a set of dmax instances randomly selected from outside of I. Such a scheme keeps the overall complexity of Tselection to O(n dmax^2). But, from a practical point of view the scheme is expensive because the selection criterion (8) requires computing a full kernel row K_{i,·} for each instance to be evaluated. As kernel evaluations could be very expensive, we propose a modified scheme to keep the number of such evaluations small.
Let us maintain a matrix cache C of size c x n that contains c rows of the full kernel matrix K. At the beginning of the algorithm (when I is empty) we initialize C by randomly choosing c training instances, computing the full kernel row K_{i,·} for the chosen i's and putting them in the rows of C. Each step corresponding to a new basis vector selection proceeds as follows. First we compute Δ_i for the c instances corresponding to the rows of C and select the instance with the highest score for inclusion in I. Let x_j denote the chosen basis vector. Then we sort the remaining instances (that define C) according to their Δ_i values. Finally, we randomly select κ fresh instances (from outside of I and the vectors that define C) to replace x_j and the κ − 1 cached instances with the lowest score. Thus, in each basis selection step, we compute the criterion scores for c instances, but evaluate full kernel rows only for κ fresh instances. An important advantage of the above scheme is that those basis elements which have very good scores, but are overtaken by another better element in a particular step, continue to remain in C and probably get to be selected in future basis selection steps. Like in Smola and Bartlett's method we use κ = 59. The value of c can be set to be any integer between κ and dmax. For any c in this range, the complexity of Tselection remains at most O(n dmax^2). The above cache scheme is special to our method and cannot be used with Smola and Bartlett's method without unduly increasing its complexity. If available, it is also useful to have an extra cache for storing kernel rows of instances which get discarded in one step, but which get to be considered again in a future step. Smola and Bartlett's method can also gain from such a cache.
4 Model adaptation
In this section we address the problem of model adaptation for a given number of basis functions, dmax. Seeger (2003) and Seeger et al. (2003) give the details together with a very good discussion of various issues associated with gradient based model adaptation. Since the same ideas hold for all basis selection methods, we will not discuss them in detail. The sparse GP model is conditional on the parameters in the kernel function and the Gaussian noise level σ^2, which can all be collected together in θ, the hyperparameter vector. The optimal values of θ can be inferred by minimizing the negative log of the marginal likelihood, ψ(θ) = −log P(y|θ), using gradient based techniques, where P(y|θ) = ∫ P(y|f_I) P(f_I) df_I = N(y | 0, σ^2 I + K_{I,·}^T K_I^{-1} K_{I,·}). One of the problems in doing this is the dependence of I on θ, which makes ψ a non-differentiable function. This problem can be handled by repeating the following alternating steps: (1) fix θ and select I by the given basis selection algorithm; and (2) fix I and do a (short) gradient based adaptation of θ. For the cache-based post-backfitting method of basis selection we also do the following for adding some stability to the model adaptation process. After we do step (2) using some I and obtain a θ, we set the initial kernel cache C using the rows of K_{I,·} at θ.
5 Numerical experiments
In this section, we compare our method against other sparse GP methods to verify the usefulness of our algorithm. To evaluate generalization performance, we utilize the Normalized Mean Square Error (NMSE), given by (1/t) Σ_{i=1}^t (y_i − μ_i)^2 / Var(y), and the Negative Logarithm of Predictive Distribution (NLPD), defined as (1/t) Σ_{i=1}^t −log P(y_i | μ_i, σ_i^2), where t is the number of test cases, and y_i, μ_i and σ_i^2 are, respectively, the target, the predictive mean and the predictive variance of the i-th test case. NMSE uses only the mean, while NLPD measures the quality of predictive distributions as it penalizes over-confident predictions as well as under-confident ones.
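The two metrics can be written directly (a small numpy sketch; the function names are ours):

```python
import numpy as np

def nmse(y, mu):
    # Normalized mean square error: test MSE divided by the target variance.
    return np.mean((y - mu) ** 2) / np.var(y)

def nlpd(y, mu, var):
    # Average negative log density of each target under its Gaussian
    # predictive distribution N(mu_i, var_i); punishes over- and
    # under-confident variances, not just bad means.
    return np.mean(0.5 * np.log(2 * np.pi * var) + 0.5 * (y - mu) ** 2 / var)
```

Predicting the training mean everywhere gives NMSE = 1, so values below 1 indicate that the model explains some of the target variance.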
For all experiments, we use the ARD Gaussian kernel defined by K(x_i, x_j) = κ_0 exp(−Σ_{ℓ=1}^m κ_ℓ (x_{iℓ} − x_{jℓ})^2) + κ_b, where κ_0, κ_ℓ, κ_b > 0 and x_{iℓ} denotes the ℓ-th element of x_i. The ARD parameters {κ_ℓ} give variable weights to input features, which leads to a type of feature selection.
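A sketch of this kernel (hypothetical parameter values; `kl` plays the role of the per-feature ARD weights κ_ℓ):

```python
import numpy as np

def ard_kernel(A, B, k0=1.0, kl=None, kb=0.1):
    # ARD Gaussian kernel: k0 * exp(-sum_l kl[l] * (a_l - b_l)^2) + kb.
    # One weight kl[l] per input dimension; driving kl[l] toward zero
    # makes feature l irrelevant, giving a soft form of feature selection.
    kl = np.ones(A.shape[1]) if kl is None else np.asarray(kl, dtype=float)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * kl).sum(-1)
    return k0 * np.exp(-d2) + kb

X = np.array([[0.0, 0.0], [0.0, 5.0], [1.0, 0.0]])
K = ard_kernel(X, X, kl=[1.0, 0.0])   # second feature switched off
```

With κ_2 = 0, points that differ only in the second feature are treated as identical by the kernel.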
Quality of Basis Selection in KIN40K Data Set. We use the KIN40K data set,1 composed
of 40,000 samples, to evaluate and compare the performance of the various basis selection
criteria. We first trained a full GPR model with the ARD Gaussian kernel on a subset of
2000 samples randomly selected in the dataset. The optimal values of the hyperparameters
that we obtained were fixed and used for all the sparse GP models in this experiment. We
compare the following five basis selection methods:
1. the baseline algorithm (RAND) that selects I at random;
2. the information gain algorithm (INFO) proposed by Seeger et al. (2003);
3. our algorithm described in Section 3 with cache size c = κ = 59 (KAPPA), in which we evaluate the selection scores of κ instances at each step;
4. our algorithm described in Section 3 with cache size c = dmax (DMAX);
5. the algorithm (SB) proposed by Smola and Bartlett (2001) with κ = 59.
We randomly selected 10,000 samples for training, and kept the remaining 30,000 samples
as test cases. For the purpose of studying variability the methods were run on ten such
random partitions. We varied dmax from 100 to 1200. The test performances of the five
methods are presented in Figure 1. From the upper plot of Figure 1 we can see that INFO
yields much worse NMSE results than KAPPA, DMAX and SB, when dmax is less than 600.
When the size is around 100, INFO is even worse than RAND. DMAX is always better than
KAPPA . Interestingly, DMAX is even slightly better than SB when dmax is less than 200.
This is probably because DMAX has a bigger set of basis functions to choose from, than
SB. SB generally yields slightly better results than KAPPA . From the middle plot of Figure
1 we can note that INFO always gives poor NLPD results, even worse than RAND. The
performances of KAPPA, DMAX and SB are close.
The lower plot of Figure 1 gives the CPU time consumed by the five algorithms for training, as a function of dmax, in log-log scale. The scaling exponents of RAND, INFO and SB are

1 The dataset is available at http://www.igi.tugraz.at/aschwaig/data.html.
Figure 1: The variations of test set NMSE, test set NLPD and CPU time (in seconds) for training of the five algorithms as a function of dmax. In the NMSE and NLPD plots, at each value of dmax, the results of the five algorithms are presented as a boxplot group. From left to right, they are RAND (blue), INFO (red), KAPPA (green), DMAX (black), and SB (magenta). Note that the CPU time plot is on a log-log scale.
around 2.0 (i.e., cost is proportional to dmax^2), which is consistent with our analysis. INFO is almost as fast as RAND, while SB is about 60 times slower than INFO. The gap between KAPPA and INFO is the O(κ n dmax) time in computing the score (8) for κ candidates.2 As dmax increases, the cost of KAPPA asymptotically gets close to INFO. The gap between DMAX and KAPPA is the O(n dmax^2 − κ n dmax) cost in computing the score (8) for the additional (dmax − κ) instances. Thus, as dmax increases, the curve of DMAX asymptotically becomes parallel to the curve of INFO. Asymptotically, the ratio of the computational times of DMAX and INFO is only about 3. Thus, unlike SB, which is about 60 times slower than INFO, DMAX is only about 3 times slower than INFO. Thus DMAX is an excellent method for achieving excellent generalization while also being quite efficient.
Model Adaptation on Benchmark Data Sets. Next, we compare the model adaptation abilities of the following three algorithms for dmax = 500.

1. The SB algorithm is applied to build a sparse GPR model with fixed hyperparameters (FIXED-SB). The values of these hyperparameters were obtained by training a standard full GPR model on a manageable subset of 2000 samples randomly selected from the training data. FIXED-SB serves as a baseline.
2. The model adaptation scheme is coupled with the INFO basis selection algorithm (ADAPT-INFO).
3. The model adaptation scheme is coupled with our DMAX basis selection algorithm (ADAPT-DMAX).

2 If we want to take kernel evaluations also into account, the cost of KAPPA is O(m κ n dmax), where m is the number of input variables. Note that INFO does not require any kernel evaluations for computing its selection criterion.
Table 1: Test results of the three algorithms on the seven benchmark regression datasets. The results are the averages over 20 trials, along with the standard deviation. d denotes the number of input features, ntrg denotes the training data size and ntst denotes the test data size. We use bold face to indicate the lowest average value among the results of the three algorithms. A significance marker is used to indicate the cases significantly worse than the winning entry; a p-value threshold of 0.01 in the Wilcoxon rank sum test was used to decide this.
DATASET    d   ntrg    ntst   | NMSE: FIXED-SB   ADAPT-INFO    ADAPT-DMAX   | NLPD: FIXED-SB   ADAPT-INFO    ADAPT-DMAX
BANK8FM    8   4500    3692   |  3.52 ± 0.08    3.54 ± 0.08   3.56 ± 0.09   |  3.11 ± 0.65    1.37 ± 0.34    0.67 ± 0.53
BANK32NH  32   4500    3692   | 48.08 ± 2.92   49.04 ± 1.34  47.41 ± 1.35   | −1.02 ± 0.21   −0.79 ± 0.06   −0.88 ± 0.03
CPUSMALL  12   4500    3692   |  2.45 ± 0.16    2.45 ± 0.15   2.46 ± 0.14   |  5.18 ± 0.61    3.70 ± 0.46    3.04 ± 0.17
CPUACT    21   4500    3692   |  1.58 ± 0.13    1.61 ± 0.14   1.61 ± 0.11   |  4.49 ± 0.26    3.68 ± 0.40    3.09 ± 0.20
CALHOUSE   8  10000   10640   | 22.58 ± 0.34   22.82 ± 0.46  20.02 ± 0.88   | 31.83 ± 3.35   21.20 ± 1.47   13.03 ± 0.30
HOUSE8L    8  10000   12784   | 42.27 ± 2.14   37.30 ± 1.29  35.87 ± 0.94   | 12.06 ± 0.67   12.06 ± 0.07   11.71 ± 0.03
HOUSE16H  16  10000   12784   | 53.45 ± 7.05   45.72 ± 1.15  44.29 ± 0.76   | 12.72 ± 1.69   12.48 ± 0.06   12.13 ± 0.04
We selected seven large regression datasets.3 Each of them is randomly partitioned into training/test splits. For the purpose of analyzing statistical significance, the partition was repeated 20 times independently. Test set performances (NMSE and NLPD) of the three methods on the seven datasets are presented in Table 1. On the four datasets with 4500 training instances, the NMSE results of the three methods are quite comparable. ADAPT-DMAX yields significantly better NLPD results on three of those four datasets. On the three larger datasets with 10,000 training instances, ADAPT-DMAX is significantly better than ADAPT-INFO on both NMSE and NLPD.

We also tested our algorithm on the Outaouais dataset, which consists of 29000 training samples and 20000 test cases whose targets are held by the organizers of the "Evaluating Predictive Uncertainty Challenge".4 The results of NMSE and NLPD we obtained in this blind test are 0.014 and −1.037 respectively, which are much better than the results of other participants.
References

Adler, J., B. D. Rao, and K. Kreutz-Delgado. Comparison of basis selection methods. In Proceedings of the 30th Asilomar conference on signals, systems and computers, pages 252-257, 1996.

Candela, J. Q. Learning with uncertainty - Gaussian processes and relevance vector machines. PhD thesis, Technical University of Denmark, 2004.

Csató, L. and M. Opper. Sparse online Gaussian processes. Neural Computation, The MIT Press, 14:641-668, 2002.

Seeger, M. Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. PhD thesis, University of Edinburgh, July 2003.

Seeger, M., C. K. I. Williams, and N. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on AI and Statistics 9, 2003.

Smola, A. J. and P. Bartlett. Sparse greedy Gaussian process regression. In Leen, T. K., T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 619-625. MIT Press, 2001.

Vincent, P. and Y. Bengio. Kernel matching pursuit. Machine Learning, 48:165-187, 2002.

Williams, C. K. I. and M. Seeger. Using the Nyström method to speed up kernel machines. In Leen, T. K., T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 682-688. MIT Press, 2001.
3 These datasets are available at http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html.
4 The dataset and the results contributed by other participants can be found at the web site of the challenge http://predict.kyb.tuebingen.mpg.de/.
From Lasso regression to Feature vector machine
Fan Li1, Yiming Yang1 and Eric P. Xing1,2
1 LTI and 2 CALD, School of Computer Science, Carnegie Mellon University,
Pittsburgh, PA USA 15213
{hustlf,yiming,epxing}@cs.cmu.edu

Abstract
Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is a promising technique for feature selection.
Its limitation, however, is that it only offers solutions to linear models.
Kernel machines with feature scaling techniques have been studied for
feature selection with non-linear models. However, such approaches require to solve hard non-convex optimization problems. This paper proposes a new approach named the Feature Vector Machine (FVM). It reformulates the standard Lasso regression into a form isomorphic to SVM,
and this form can be easily extended for feature selection with non-linear
models by introducing kernels defined on feature vectors. FVM generates sparse solutions in the nonlinear feature space and it is much more
tractable compared to feature scaling kernel machines. Our experiments
with FVM on simulated data show encouraging results in identifying the
small number of dominating features that are non-linearly correlated to
the response, a task the standard Lasso fails to complete.
1 Introduction
Finding a small subset of most predictive features in a high dimensional feature space is an
interesting problem with many important applications, e.g. in bioinformatics for the study
of the genome and the proteome, and in pharmacology for high throughput drug screening.
Lasso regression ([Tibshirani et al., 1996]) is often an effective technique for shrinkage and
feature selection. The loss function of Lasso regression is defined as:
L = Σ_i ( y_i − Σ_p β_p x_ip )² + λ Σ_p ||β_p||_1
where x_ip denotes the pth predictor (feature) in the ith datum, y_i denotes the value of the response in this datum, and β_p denotes the regression coefficient of the pth feature. The norm-1 regularizer Σ_p ||β_p||_1 in Lasso regression typically leads to a sparse solution in the feature space, which means that the regression coefficients for most irrelevant or redundant features are shrunk to zero. Theoretical analysis in [Ng et al., 2003] indicates that Lasso regression is particularly effective when there are many irrelevant features and only a few training examples.
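As a concrete illustration of this sparsity effect (my own sketch, not code from the paper; it assumes scikit-learn's `Lasso`, whose objective rescales the squared loss by 1/(2n)):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n, k = 50, 100                        # few training examples, many features
X = rng.randn(n, k)
beta_true = np.zeros(k)
beta_true[:3] = [2.0, -1.5, 1.0]      # only 3 features actually matter
y = X @ beta_true + 0.1 * rng.randn(n)

# The norm-1 penalty shrinks most irrelevant coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
nonzero = np.flatnonzero(model.coef_)
print(len(nonzero))                   # a small subset of the 100 features
```

Even with twice as many features as examples, the norm-1 penalty drives nearly all irrelevant coefficients to exactly zero while retaining the informative ones.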
One of the limitations of standard Lasso regression is its assumption of linearity in the feature space. Hence it is inadequate to capture non-linear dependencies from features to responses (output variables). To address this limitation, [Roth, 2004] proposed "generalized Lasso regressions" (GLR) by introducing kernels. In GLR, the loss function is defined as
L = Σ_i ( y_i − Σ_j α_j k(x_i, x_j) )² + λ Σ_i ||α_i||_1
where α_j can be regarded as the regression coefficient corresponding to the jth basis in an instance space (more precisely, a kernel space with its basis defined on all examples), and k(x_i, x_j) represents some kernel function over the "argument" instance x_i and the "basis" instance x_j. The non-linearity can be captured by a non-linear kernel. This loss function typically yields a sparse solution in the instance space, but not in the feature space where the data was originally represented. Thus GLR does not lead to compression of data in the feature space.

[Weston et al., 2000], [Canu et al., 2002] and [Krishnapuram et al., 2003] addressed the limitation from a different angle. They introduced feature scaling kernels in the form of:
K_θ(x_i, x_j) = φ(x_i ∘ θ) · φ(x_j ∘ θ) = K(x_i ∘ θ, x_j ∘ θ)

where x_i ∘ θ denotes the component-wise product between two vectors: x_i ∘ θ = (x_i1 θ_1, ..., x_ip θ_p). For example, [Krishnapuram et al., 2003] used a feature scaling polynomial kernel:

K_θ(x_i, x_j) = ( 1 + Σ_p θ_p x_ip x_jp )^k,

where θ_p = γ_p². With a norm-1 or norm-0 penalizer on θ in the loss function of a feature scaling kernel machine, a sparse solution is supposed to identify the most influential features. Notice that in this formalism the feature scaling vector θ is inside the kernel function, which means that the solution space of θ could be non-convex. Thus, estimating θ in feature scaling kernel machines is a much harder problem than the convex optimization problem in conventional SVM, of which the weight parameters to be estimated are outside of the kernel functions.
What we are seeking for here is an alternative approach that guarantees a sparse solution
in the feature space, that is sufficient for capturing both linear and non-linear relationships
between features and the response variable, and that does not involve parameter optimization inside of kernel functions. The last property is particularly desirable in the sense that
it will allow us to leverage many existing works in kernel machines which have been very
successful in SVM-related research.
We propose a new approach where the key idea is to re-formulate and extend Lasso regression into a form that is similar to SVM except that it generates a sparse solution in the feature space rather than in the instance space. We call our newly formulated and extended Lasso regression the "Feature Vector Machine" (FVM). We will show (in Section 2) that FVM has many interesting properties that mirror SVM. The concepts of support vectors, kernels and slack variables can be easily adapted in FVM. Most importantly, all the parameters we need to estimate for FVM are outside of the kernel functions, ensuring the convexity of the solution space, which is the same as in SVM.¹ When a linear kernel is put to use with no slack variables, FVM reduces to the standard Lasso regression.
¹ Notice that we can not only use FVM to select important features from training data, but also use it to predict the values of response variables for test data (see Section 5). We have shown that we only need convex optimization in the training phase of FVM. In the test phase, FVM makes a prediction for each test example independently. This only involves a one-dimensional optimization problem with respect to the response variable for the test example. Although the optimization in the test phase may be non-convex, it will be relatively easy to solve because it is only one-dimensional. This is the price we pay for avoiding the high-dimensional non-convex optimization in the training phase, which may involve thousands of model parameters.
We notice that [Hochreiter et al., 2004] have recently developed an interesting feature selection technique named "potential SVM", which has the same form as the basic version of FVM (with linear kernel and no slack variables). However, they did not explore the relationship between "potential SVM" and Lasso regression. Furthermore, their method does not work for feature selection tasks with non-linear models, since they did not introduce the concepts of kernels defined on feature vectors.
In section 2, we analyze some geometric similarities between the solution hyper-planes in
the standard Lasso regression and in SVM. In section 3, we re-formulate Lasso regression in an SVM-style form. In this form, all the operations on the training data can be expressed
by dot products between feature vectors. In section 4, we introduce kernels (defined for
feature vectors) to FVM so that it can be used for feature selection with non-linear models.
In section 5, we give some discussions on FVM. In section 6, we conduct experiments and
in section 7 we give conclusions.
2 Geometric parity between the solution hyper-planes of Lasso
regression and SVM
Formally, let X = [x_1, ..., x_N] denote a sample matrix, where each column x_i = (x_1, ..., x_K)^T represents a sample vector defined on K features. A feature vector can be defined as a transposed row in the sample matrix, i.e., f_q = (x_1q, ..., x_Nq)^T (corresponding to the qth row of X). Note that we can write X^T = [f_1, ..., f_K] = F. For convenience, let y = (y_1, ..., y_N)^T denote a response vector containing the responses corresponding to all the samples.
Now consider an example space in which each basis is represented by an x_i in our sample matrix (note that this is different from the space "spanned" by the sample vectors). In this example space, both the features f_q and the response vector y can be regarded as points. It can be shown that the solution of Lasso regression has a very intuitive meaning in the example space: the regression coefficients can be regarded as the weights of feature vectors in the example space; moreover, all the non-zero weighted feature vectors are on two parallel hyper-planes in the example space. These feature vectors, together with the response variable, determine the directions of these two hyper-planes. This geometric view can be drawn from the following recast of the Lasso regression due to [Perkins et al., 2003]:
| Σ_i ( y_i − Σ_p β_p x_ip ) x_iq | ≤ λ/2,  ∀q,
i.e.,  | f_q^T ( y − [f_1, ..., f_K] β ) | ≤ λ/2,  ∀q.    (1)

It is apparent from the above equation that y − [f_1, ..., f_K]β defines the orientation of a separation hyper-plane. It can be shown that equality only holds for non-zero weighted features, and all the zero weighted feature vectors are between the hyper-planes with λ/2 margin (Fig. 1a).
The separating hyper-planes of (hard, linear) SVM have similar properties to the regression hyper-planes described above, although the former are defined in the feature space (in which each axis represents a feature and each point represents a sample) instead of the example space. In an SVM, all the non-zero weighted samples are also on the two λ/2-margin separating hyper-planes (as is the case in Lasso regression), whereas all the zero-weighted samples are outside the pair of hyper-planes (Fig 1b). It is well known that the classification hyper-planes in SVM can be extended to hyper-surfaces by introducing kernels defined for example vectors. In this way, SVM can model non-linear dependencies between samples and the classification boundary. Given the similarity of the
Figure 1: Lasso regression vs. SVM. (a) The solution of Lasso regression in the example
space. X1 and X2 represent two examples. Only features a and d have non-zero weights, and hence are the support features. (b) The solution of SVM in the feature space. Samples X1,
X3 and X5 are in one class and X2, X4, X6 and X8 are in the other. X1 and X2 are the
support vectors (i.e., with non-zero weights).
geometric structures of Lasso regression and SVM, it is natural to pursue in parallel how one can apply similar "kernel tricks" to the feature vectors in Lasso regression, so that its feature selection power can be extended to non-linear models. This is the intention of this paper, and we envisage full leverage of much of the computational/optimization techniques well developed in the SVM community in our task.
3 A re-formulation of Lasso regression akin to SVM
[Hochreiter et al., 2004] have proposed a "potential SVM" as follows:

min_β  (1/2) Σ_i ( Σ_p β_p x_ip )²
s.t.   | Σ_i ( y_i − Σ_p β_p x_ip ) x_iq | ≤ λ/2,  ∀q.    (2)
To clean up a little bit, we rewrite Eq. (2) in linear algebra format:

min_β  (1/2) || [f_1, ..., f_K] β ||²
s.t.   | f_q^T ( y − [f_1, ..., f_K] β ) | ≤ λ/2,  ∀q.    (3)
A quick eyeballing of this formulation reveals that it shares the same constraint as the one that must be satisfied in Lasso regression. Unfortunately, this connection was not further explored in [Hochreiter et al., 2004], e.g., to relate the objective function to that of the Lasso regression, and to extend the objective function using kernel tricks in a way similar to SVM. Here we show that the solution to Eq. (2) is exactly the same as that of a standard Lasso regression. In other words, Lasso regression can be re-formulated as Eq. (2). Then, based on this re-formulation, we show how to introduce kernels to allow feature selection under a non-linear Lasso regression. We refer to the optimization problem defined by Eq. (3), and its kernelized extensions, as the feature vector machine (FVM).
Proposition 1: For a Lasso regression problem min_β Σ_i ( Σ_p x_ip β_p − y_i )² + λ Σ_p |β_p|, if we have β such that: if β_q = 0, then | Σ_i ( Σ_p β_p x_ip − y_i ) x_iq | < λ/2; if β_q < 0, then Σ_i ( Σ_p β_p x_ip − y_i ) x_iq = λ/2; and if β_q > 0, then Σ_i ( Σ_p β_p x_ip − y_i ) x_iq = −λ/2; then β is the solution of the Lasso regression defined above. For convenience, we refer to the aforementioned three conditions on β as the Lasso sandwich.
Proof: see [Perkins et al., 2003].
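The Lasso sandwich is easy to verify numerically. The check below is my own sketch (not from the paper) and uses scikit-learn's `Lasso`, which minimizes (1/(2n))||y − Xβ||² + α||β||₁, so the paper's λ/2 corresponds to n·α:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(1)
n, k = 200, 20
X = rng.randn(n, k)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(n)

alpha = 0.05
beta = Lasso(alpha=alpha, fit_intercept=False,
             tol=1e-12, max_iter=1_000_000).fit(X, y).coef_

corr = X.T @ (y - X @ beta)   # correlation of each feature with the residual
bound = n * alpha             # plays the role of lambda/2

# Non-zero weighted features sit exactly on a margin hyper-plane;
# zero weighted features lie strictly inside the sandwich.
on_margin = np.isclose(np.abs(corr), bound, atol=1e-3)
inside = np.abs(corr) <= bound + 1e-6
print(np.all(on_margin[beta != 0]), np.all(inside[beta == 0]))
```

Only the magnitude of the correlation is checked here, which sidesteps the choice of residual sign convention.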
Proposition 2: For Problem (3), its solution β satisfies the Lasso sandwich.
Sketch of proof: Following the equivalence between the feature matrix F and the sample matrix X (see the beginning of Section 2), Problem (3) can be re-written as:

min_β  (1/2) ||X^T β||²
s.t.   X(X^T β − y) − (λ/2) e ≤ 0
       X(X^T β − y) + (λ/2) e ≥ 0,    (4)
where e is a one-vector of K dimensions. Following the standard constrained optimization
procedure, we can derive the dual of this optimization problem. The Lagrange L is given
by
L = (1/2) β^T X X^T β − α_+^T ( X(X^T β − y) + (λ/2) e ) + α_−^T ( X(X^T β − y) + (λ/2) e )
where α_+ and α_− are K × 1 vectors with positive elements. The optimizer satisfies:

∇_β L = X X^T β − X X^T (α_+ − α_−) = 0

Suppose the data matrix X has been pre-processed so that the feature vectors are centered and normalized. In this case the elements of X X^T reflect the correlation coefficients of feature pairs and X X^T is non-singular. Thus we know β = α_+ − α_− is the solution of this loss function. For any element β_q > 0, obviously α_+q should be larger than zero.
From the KKT condition, we know Σ_i ( y_i − Σ_p β_p x_ip ) x_iq = −λ/2 holds at this time. For the same reason we can get that when β_q < 0, α_−q should be larger than zero, thus Σ_i ( y_i − Σ_p β_p x_ip ) x_iq = λ/2 holds. When β_q = 0, α_+q and α_−q must both be zero (it is easy to see from the KKT condition that they cannot both be non-zero), thus from the KKT condition, both Σ_i ( y_i − Σ_p β_p x_ip ) x_iq > −λ/2 and Σ_i ( y_i − Σ_p β_p x_ip ) x_iq < λ/2 hold now, which means | Σ_i ( y_i − Σ_p β_p x_ip ) x_iq | < λ/2 at this time.
Theorem 3: Problem (3) is equivalent to Lasso regression.

Proof: Follows from Proposition 1 and Proposition 2.
4 Feature kernels
In many cases, the dependencies between feature vectors are non-linear. Analogous to the
SVM, here we introduce kernels that capture such non-linearity. Note that unlike SVM, our
kernels are defined on feature vectors instead of the sampled vectors (i.e., the rows rather
than the columns in the data matrix). Such kernels can also allow us to easily incorporate
certain domain knowledge into the classifier.
Suppose that two feature vectors f_p and f_q have a non-linear dependency relationship. In the absence of linear interaction between f_p and f_q in the original space, we assume that they can be mapped to some (higher-dimensional, possibly infinite-dimensional) space via a transformation φ(·), so that φ(f_p) and φ(f_q) interact linearly, i.e., via a dot product φ(f_p)^T φ(f_q). We introduce the kernel K(f_p, f_q) = φ(f_p)^T φ(f_q) to represent the outcome of this operation.
Replacing f with φ(f) in Problem (3), we have

min_β  (1/2) Σ_{p,q} β_p β_q K(f_p, f_q)
s.t.   ∀q, | Σ_p β_p K(f_q, f_p) − K(f_q, y) | ≤ λ/2    (5)
Now, in Problem (5), we no longer have φ(·), which means we do not have to work in the transformed feature space, which could be high- or infinite-dimensional, to capture the nonlinearity of features. The kernel K(·, ·) can be any symmetric positive semi-definite matrix. When domain knowledge from experts is available, it can be incorporated into the choice of kernel (e.g., based on the distribution of feature values). When domain knowledge is not available, we can use some general kernels that can detect non-linear dependencies without any distribution assumptions. In the following we give one such example.
One possible kernel is the mutual information [Cover et al., 1991] between two feature vectors: K(f_p, f_q) = MI(f_p, f_q). This kernel requires a pre-processing step to discretize the elements of the feature vectors because they are continuous in general. In this paper, we discretize the continuous variables according to their ranks over the different examples. Suppose we have N examples in total. Then for each feature, we sort its values in these N examples. The first m values (the smallest m values) are assigned scale 1. The m + 1 to 2m values are assigned scale 2. This process is iterated until all the values are assigned their corresponding scales. It is easy to see that in this way, we can guarantee that for any two features p and q, K(f_p, f_p) = K(f_q, f_q), which means the feature vectors are normalized and have the same length in the φ space (residing on a unit sphere centered at the origin).

Mutual information kernels have several good properties. For example, the kernel is symmetric (i.e., K(f_p, f_q) = K(f_q, f_p)), non-negative, and can be normalized. It also has an intuitive interpretation related to the redundancy between features. Therefore, a non-linear feature selection using generalized Lasso regression with this kernel yields human-interpretable results.
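A minimal implementation of this rank-discretized mutual-information kernel (my own sketch, not the authors' code; mutual information in nats, plain numpy):

```python
import numpy as np

def discretize_by_rank(f, n_scales=10):
    """Assign each value of feature vector f to one of n_scales equal-sized rank bins."""
    ranks = np.argsort(np.argsort(f))        # ranks 0 .. N-1, assuming no ties
    return ranks * n_scales // len(f)

def mi_kernel(fp, fq, n_scales=10):
    """K(fp, fq): mutual information of the rank-discretized feature vectors."""
    a = discretize_by_rank(fp, n_scales)
    b = discretize_by_rank(fq, n_scales)
    joint = np.zeros((n_scales, n_scales))
    np.add.at(joint, (a, b), 1)              # joint histogram of the two features
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])))

rng = np.random.RandomState(0)
f1 = rng.rand(500)
f2 = np.sin(10 * f1)     # deterministically (non-linearly) dependent on f1
f3 = rng.rand(500)       # independent of f1

print(mi_kernel(f1, f2) > mi_kernel(f1, f3))
```

Because the bins are equal-sized, every feature has the same self-similarity K(f, f) = log(n_scales), matching the normalization property claimed above.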
5 Some extensions and discussions about FVM
As we have shown, FVM is a straightforward feature selection algorithm for nonlinear features captured in a kernel; the selection can be done by solving a standard SVM-style problem in the feature space, which yields an optimal vector β of which most elements are zero. It turns out that the same procedure also seamlessly leads to a Lasso-style regularized nonlinear regression capable of predicting the response given data in the original space.

In the prediction phase, all we have to do is to keep the trained β fixed, and turn the optimization problem (5) into an analogous one that optimizes over the response y. Specifically, given a new sample x_t of unknown response, our sample matrix X grows by one column, X ← [X, x_t], which means all our feature vectors get one more dimension. We denote the newly elongated features by F' = {f'_q, q ∈ A} (note that A is the pruned index set corresponding to features whose weight β_q is non-zero). Let y' denote the elongated response vector due to the newly given sample: y' = (y_1, ..., y_N, y_t)^T. It can be shown that
the optimum response y_t can be obtained by solving the following optimization problem²:

min_{y_t}  K(y', y') − 2 Σ_{p∈A} β_p K(y', f'_p)    (6)
When we replace the kernel function K with a linear dot product, FVM reduces to Lasso regression. Indeed, in this special case, it is easy to see from Eq. (6) that y_t = Σ_{p∈A} β_p x_tp, which is exactly how Lasso regression would predict the response. In this case one predicts y_t according to β and x_t without using the training data X. However, when a more complex kernel is used, solving Eq. (6) is not always trivial. In general, to predict y_t, we need not only x_t and β, but also the non-zero weighted features extracted from the training data.
² For simplicity we omit details here, but as a rough sketch, note that Eq. (5) can be reformed as

min_β  || φ(y') − Σ_p β_p φ(f'_p) ||² + λ Σ_p ||β_p||_1.

Replacing the optimization argument β with y' and dropping terms irrelevant to y_t, we arrive at Eq. (6).
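A quick numerical sanity check of this reduction (my own sketch): with a linear kernel K(a, b) = a^T b, Eq. (6) becomes ||y'||² − 2 Σ_p β_p y'^T f'_p, a parabola in y_t whose minimizer is exactly y_t = Σ_p β_p x_tp:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.RandomState(0)
N, K = 30, 5
X = rng.randn(K, N)                  # rows are the K feature vectors f_p
beta = np.array([1.0, -0.5, 0.0, 2.0, 0.0])
y = beta @ X                         # training responses (noise-free for clarity)
x_t = rng.randn(K)                   # new test sample

def eq6(y_t, kernel=np.dot):
    """Objective of Eq. (6), built from the elongated vectors y' and f'_p."""
    y_ext = np.append(y, y_t)
    F_ext = np.hstack([X, x_t[:, None]])
    return kernel(y_ext, y_ext) - 2 * sum(
        beta[p] * kernel(y_ext, F_ext[p]) for p in range(K))

# Minimize Eq. (6) over the single unknown y_t.
y_t_star = minimize_scalar(eq6).x
print(np.isclose(y_t_star, beta @ x_t))
```

Swapping `kernel` for a non-linear function gives the general prediction problem, which is no longer guaranteed to have this closed-form solution.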
As in SVM, we can introduce slack variables into FVM to define a "soft" feature surface. But due to space limitations, we omit the details here. Essentially, most of the methodologies developed for SVM can be easily adapted to FVM for nonlinear feature selection.
6 Experiments
We test FVM on a simulated dataset with 100 features and 500 examples. The response variable y in the simulated data is generated by a highly nonlinear rule:

y = sin(10 · f_1 − 5) + 4 · √(1 − f_2²) − 3 · f_3 + ε.

Here features f_1 and f_3 are random variables following a uniform distribution in [0, 1]; feature f_2 is a random variable uniformly distributed in [−1, 1]; and ε represents Gaussian noise. The other 97 features f_4, f_5, ..., f_100 are conditionally independent of y given the three features f_1, f_2 and f_3. In particular, f_4, ..., f_33 are all generated by the rule f_j = 3 · f_1 + ε; f_34, ..., f_72 are all generated by the rule f_j = sin(10 · f_2) + ε; and the remaining features (f_73, ..., f_100) simply follow a uniform distribution in [0, 1]. Fig. 2 shows our data projected into the space spanned by f_1, f_2 and y.
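This dataset is straightforward to reproduce (my own sketch; the paper does not state the noise variance, so `sigma = 0.1` is an assumption):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 500
sigma = 0.1   # assumed noise level; not specified in the paper

f1 = rng.uniform(0, 1, n)
f2 = rng.uniform(-1, 1, n)
f3 = rng.uniform(0, 1, n)
y = np.sin(10 * f1 - 5) + 4 * np.sqrt(1 - f2**2) - 3 * f3 + sigma * rng.randn(n)

F = np.empty((n, 100))
F[:, 0], F[:, 1], F[:, 2] = f1, f2, f3
for j in range(3, 33):               # f4 .. f33: linear in f1
    F[:, j] = 3 * f1 + sigma * rng.randn(n)
for j in range(33, 72):              # f34 .. f72: non-linear in f2
    F[:, j] = np.sin(10 * f2) + sigma * rng.randn(n)
for j in range(72, 100):             # f73 .. f100: pure noise
    F[:, j] = rng.uniform(0, 1, n)

print(F.shape)  # (500, 100)
```

Only columns 0-2 carry independent information about y; the 67 distractors in columns 3-71 are redundant copies of f_1 and f_2, and the rest are noise.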
We use a mutual information kernel for our FVM. For each feature, we sort its values across the examples and use the ranks to discretize these values into 10 scales (thus each scale corresponds to 50 data points). An FVM can be solved by quadratic programming, but more efficient solutions exist. [Perkins et al., 2003] proposed a fast grafting algorithm to solve Lasso regression, which is a special case of FVM when a linear kernel is used. In our implementation, we extend the idea of the fast grafting algorithm to FVM with more general kernels. The only difference is that, each time we need to calculate Σ_i x_ip x_iq, we calculate K(f_p, f_q) instead. We found that the fast grafting algorithm is very efficient in our case because it exploits the sparsity of the solution of FVM.
We apply both standard Lasso regression and FVM with the mutual information kernel to this dataset. The value of the regularization parameter λ can be tuned to control the number of non-zero weighted features. In our experiment, we tried two choices of λ, for both FVM and the standard Lasso regression. In one case, we set λ such that only 3 non-zero weighted features are selected; in another case, we relaxed it a bit and allowed 10 features. The results are very encouraging. As shown in Fig. 3, under the stringent λ, FVM successfully identified the three correct features, f_1, f_2 and f_3, whereas Lasso regression missed f_1 and f_2, which are non-linearly correlated with y. Even when λ was relaxed, Lasso regression still missed the right features, whereas FVM was very robust.
Figure 2: The responses y and the two features f1 and f2 in our simulated data. Two graphs
from different angles are plotted to show the distribution more clearly in 3D space.
7 Conclusions
In this paper, we proposed a novel non-linear feature selection approach named FVM,
which extends standard Lasso regression by introducing kernels on feature vectors. FVM
Figure 3: Results of FVM and the standard Lasso regression on this dataset. The X axis represents the feature IDs and the Y axis represents the weights assigned to features. The two left graphs show the case when 3 features are selected by each algorithm and the two right graphs show the case when 10 features are selected. From the bottom left graph, we can see that FVM successfully identified f_1, f_2 and f_3 as the three non-zero weighted features. From the top left graph, we can see that Lasso regression missed f_1 and f_2, which are non-linearly correlated with y. The two right graphs show similar patterns.
has many interesting properties that mirror the well-known SVM, and can therefore leverage many computational advantages of the latter approach. Our experiments with FVM
on highly nonlinear and noisy simulated data show encouraging results, in which it can
correctly identify the small number of dominating features that are non-linearly correlated
to the response variable, a task the standard Lasso fails to complete.
References

[Canu et al., 2002] Canu, S. and Grandvalet, Y. Adaptive Scaling for Feature Selection in SVMs. NIPS 15, 2002.

[Hochreiter et al., 2004] Hochreiter, S. and Obermayer, K. Gene Selection for Microarray Data. In Kernel Methods in Computational Biology, pp. 319-355, MIT Press, 2004.

[Krishnapuram et al., 2003] Krishnapuram, B. et al. Joint classifier and feature optimization for cancer diagnosis using gene expression data. The Seventh Annual International Conference on Research in Computational Molecular Biology (RECOMB) 2003, ACM Press, April 2003.

[Ng et al., 2003] Ng, A. Feature selection, L1 vs L2 regularization, and rotational invariance. ICML 2004.

[Perkins et al., 2003] Perkins, S., Lacker, K. and Theiler, J. Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space. JMLR 2003, 1333-1356.

[Roth, 2004] Roth, V. The Generalized LASSO. IEEE Transactions on Neural Networks (2004), Vol. 15, No. 1.

[Tibshirani et al., 1996] Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B (1996), 58, No. 1, 267-288.

[Cover et al., 1991] Cover, T.M. and Thomas, J.A. Elements of Information Theory. New York: John Wiley & Sons Inc (1991).

[Weston et al., 2000] Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T. and Vapnik, V. Feature Selection for SVMs. NIPS 13, 2000.
Principles of real-time computing with feedback
applied to cortical microcircuit models
Wolfgang Maass, Prashant Joshi
Institute for Theoretical Computer Science
Technische Universitaet Graz
A-8010 Graz, Austria
maass,[email protected]
Eduardo D. Sontag
Department of Mathematics
Rutgers, The State University of New Jersey
Piscataway, NJ 08854-8019, USA
[email protected]
Abstract
The network topology of neurons in the brain exhibits an abundance of
feedback connections, but the computational function of these feedback
connections is largely unknown. We present a computational theory that
characterizes the gain in computational power achieved through feedback
in dynamical systems with fading memory. It implies that many such
systems acquire through feedback universal computational capabilities
for analog computing with a non-fading memory. In particular, we show
that feedback enables such systems to process time-varying input streams
in diverse ways according to rules that are implemented through internal
states of the dynamical system. In contrast to previous attractor-based
computational models for neural networks, these flexible internal states
are high-dimensional attractors of the circuit dynamics, that still allow
the circuit state to absorb new information from online input streams. In
this way one arrives at novel models for working memory, integration of
evidence, and reward expectation in cortical circuits. We show that they
are applicable to circuits of conductance-based Hodgkin-Huxley (HH)
neurons with high levels of noise that reflect experimental data on in-vivo conditions.
1
Introduction
Quite demanding real-time computations with fading memory¹ can be carried out by
generic cortical microcircuit models [1]. But many types of computations in the brain, for
¹ A map (or filter) F from input- to output streams is defined to have fading memory if its current output at time t depends (up to some precision ε) only on values of the input u during some finite time interval [t − T, t]. In formulas: F has fading memory if there exists for every ε > 0 some δ > 0 and T > 0 so that |(F u)(t) − (F ũ)(t)| < ε for any t ∈ R and any input functions u, ũ with
example computations that involve memory or persistent internal states, cannot be modeled
by such fading memory systems. On the other hand concrete examples of artificial neural
networks [2] and cortical microcircuit models [3] suggest that their computational power
can be enlarged through feedback from trained readouts. Furthermore the brain is known to
have an abundance of feedback connections on several levels: within cortical areas, where
pyramidal cells typically have in addition to their long projecting axon a number of local
axon collaterals, between cortical areas, and between cortex and subcortical structures. But
the computational role of these feedback connections has remained open. We present here
a computational theory which characterizes the gain in computational power that a fading
memory system can acquire through feedback from trained readouts, both in the idealized
case without noise and in the case with noise. This theory simultaneously characterizes
the potential gain in computational power resulting from training a few neurons within a
generic recurrent circuit for a specific task. Applications of this theory to cortical microcircuit models provide a new way of explaining the possibility of real-time processing of
afferent input streams in the light of learning-induced internal circuit states that might represent, for example, working memory or rules for the timing of behavior. Further details on these results can be found in [4].
2
Computational Theory
Recurrent circuits of neurons are from a mathematical perspective special cases of dynamical systems. The subsequent mathematical results show that a large variety of dynamical
systems, in particular also neural circuits, can overcome in the presence of feedback the
computational limitations of a fading memory ? without necessarily falling into the chaotic
regime. In fact, feedback endows them with universal capabilities for analog computing,
in a sense that can be made precise in the following way (see Fig. 1A-C for an illustration):
Theorem 2.1 A large class Sn of systems of differential equations of the form
x′_i(t) = f_i(x_1(t), ..., x_n(t)) + g_i(x_1(t), ..., x_n(t)) · v(t),   i = 1, ..., n    (1)
are in the following sense universal for analog computing:
It can respond to an external input u(t) with the dynamics of any nth order differential
equation of the form
z^(n)(t) = G(z(t), z′(t), z″(t), ..., z^(n−1)(t)) + u(t)    (2)

(for arbitrary smooth functions G : R^n → R) if the input term v(t) is replaced by a suitable memoryless feedback function K(x_1(t), ..., x_n(t), u(t)), and if a suitable memoryless readout function h(x_1(t), ..., x_n(t)) is applied to its internal state ⟨x_1(t), ..., x_n(t)⟩.
Also the dynamic responses of all systems consisting of several higher order differential
equations of the form (2) can be simulated by fixed systems of the form (1) with a corresponding number of feedbacks.
The class Sn of dynamical systems that become through feedback universal for analog
computing subsumes2 systems of the form
n
X
x0i (t) = ??i xi (t) + ? (
aij ? xj (t)) + bi ? v(t) , i = 1, . . . , n
(3)
j=1
‖u(τ) − ũ(τ)‖ < δ for all τ ∈ [t − T, t]. This is a characteristic property of all filters that can be approximated by an integral over the input stream u, or more generally by Volterra or Wiener series.
² For example, if the λ_i are pairwise different and a_ij = 0 for all i, j, and all b_i are nonzero; fewer restrictions are needed if more than one feedback to the system (3) can be used.
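As a concrete illustration of system (3) (a sketch, not code from the paper): under the simplifying conditions of footnote 2 (pairwise-different λ_i, a_ij = 0, nonzero b_i), the equations can be integrated with a plain Euler scheme. All parameter values below are invented for the example, and σ is taken to be tanh, one common "standard activation function".

```python
import numpy as np

# Hypothetical parameters for system (3); nothing here is from the paper.
rng = np.random.default_rng(0)
n, dt, steps = 5, 1e-3, 2000
lam = rng.uniform(1.0, 3.0, n)   # pairwise-different leak rates lambda_i
A = np.zeros((n, n))             # a_ij = 0, as in the footnote's example
b = rng.uniform(0.5, 1.5, n)     # all b_i nonzero

def v(t):
    """Example external input stream."""
    return np.sin(2 * np.pi * 2.0 * t)

x = np.zeros(n)
for k in range(steps):
    t = k * dt
    # Euler step of x'_i = -lambda_i x_i + sigma(sum_j a_ij x_j) + b_i * v(t)
    x = x + dt * (-lam * x + np.tanh(A @ x) + b * v(t))
print(x)  # state after 2 s of simulated time
```

Because the leak terms dominate, the state stays bounded for any bounded input, which is the regime the theorem is concerned with.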
Figure 1: Universal computational capability acquired through feedback according to Theorem 2.1. (A) A fixed circuit C with dynamics (1). (B) An arbitrary given nth order
dynamical system (2) with external input u(t). (C) If the input v(t) to circuit C is replaced
by a suitable feedback K(x(t), u(t)), then this fixed circuit C can simulate the dynamic
response z(t) of the arbitrarily given system shown in B, for any input stream u(t).
that are commonly used to model the temporal evolution of firing rates in neural circuits
(σ is some standard activation function). If the activation function σ is also applied to the term v(t) in (3), the system (3) can still simulate arbitrary differential equations (2) with bounded inputs u(t) and bounded responses z(t), ..., z^(n−1)(t).
Note that according to [5] all Turing machines can be simulated by systems of differential
equations of the form (2). Hence the systems (1) become through feedback also universal
for digital computing. A proof of Theorem 2.1 is given in [4].
It has been shown that additive noise, even with an arbitrarily small bounded amplitude,
reduces the non-fading memory capacity of any recurrent neural network to some finite
number of bits [6, 7]. Hence such a network can no longer simulate arbitrary Turing machines. But feedback can still endow noisy fading memory systems with the maximum
possible computational power within this a-priori limitation. The following result shows
that in principle any finite state machine (= deterministic finite automaton), in particular
any Turing machine with tapes of some arbitrary but fixed finite length, can be emulated by
a fading memory system with feedback, in spite of noise in the system.
Theorem 2.2 Feedback allows linear and nonlinear fading memory systems, even in the
presence of additive noise with bounded amplitude, to employ the computational capability
and non-fading states of any given finite state machine (in addition to their fading memory)
for real-time processing of time varying inputs.
The precise formalization and the proof of this result (see [4]) are technically rather involved, and cannot be given in this abstract. A key method of the proof, which makes
sure that noise does not get amplified through feedback, is also applied in the subsequent
computer simulations of cortical microcircuit models. There the readout functions K that
provide feedback values K(x(t)) are trained to assume values which cancel the impact of
errors or imprecision in the values K(x(s)) of this feedback for immediately preceding
time steps s < t.
3
Application to Generic Circuits of Noisy Neurons
We tested this computational theory on circuits consisting of 600 integrate-and-fire (I&F)
neurons and circuits consisting of 600 conductance-based HH neurons, in either case with
a rather high level of noise that reflects experimental data on in-vivo conditions [8]. In
addition we used models for dynamic synapses whose individual mixture of paired-pulse
depression and facilitation is based on experimental data [9, 10]. Sparse connectivity between neurons with a biologically realistic bias towards short connections was generated by
a probabilistic rule, and synaptic parameters were randomly chosen, depending on the type
of pre-and postsynaptic neurons, in accordance with these empirical data (see [1] or [4]
for details). External inputs and feedback from readouts were connected to populations of
neurons within the circuit, with randomly varying connection strengths. The current circuit
state x(t) was modeled by low-pass filtered spike trains from all neurons in the circuit (with
a time constant of 30 ms, modeling time constants of receptors and membrane of potential
readout neurons). Readout functions K(x(t)) were modeled by weighted sums w ? x(t)
Figure 2: State-dependent real-time processing of 4 independent input streams in a generic cortical
microcircuit model. (A) 4 input streams, consisting each of 8 spike trains generated by Poisson
processes with randomly varying rates ri (t), i = 1, . . . , 4 (rates plotted in (B); all rates are given
in Hz). The 4 input streams and the feedback were injected into disjoint but densely interconnected
subpopulations of neurons in the circuit. (C) Resulting firing activity of 100 out of the 600 I&F
neurons in the circuit. Spikes from inhibitory neurons marked in gray. (D) Target activation times of
the high-dimensional attractor (gray shading), spike trains of 2 of the 8 I&F neurons that were trained
to create the high-dimensional attractor by sending their output spike trains back into the circuit,
and average firing rate of all 8 neurons (lower trace). (E and F) Performance of linear readouts
that were trained to switch their real-time computation task depending on the current state of the
high-dimensional attractor: output 2·r_3(t) instead of r_3(t) if the high-dimensional attractor is on (E), output r_3(t) + r_4(t) instead of |r_3(t) − r_4(t)| if the high-dimensional attractor is on (F). (G) Performance of a linear readout that was trained to output r_3(t) − r_4(t), showing that another linear
readout from the same circuit can simultaneously carry out nonlinear computations that are invariant
to the current state of the high-dimensional attractor.
whose weights w were trained during 200 s of simulated biological time to minimize the
mean squared error with regard to desired target output functions K. After training these
weights w were fixed, and the performance of the otherwise generic circuit was evaluated
for new input streams u (with new input rates drawn from the same distribution) that had
not been used for training. It was sufficient to use just linear functions K that transformed
the current circuit state x(t) into a feedback K(x(t)), confirming the predictions of [1]
and [2] that the recurrent circuit automatically assumes the role of a kernel (in the sense of
machine learning) that creates nonlinear combinations of recent inputs.
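The readout training described above reduces to ordinary least squares: collect the low-pass-filtered state vectors x(t) as rows of a matrix and solve for the weights w minimizing the mean squared error to the target. A minimal sketch, with synthetic stand-in data since the actual circuit states are not reproduced here:

```python
import numpy as np

# Synthetic stand-in for the recorded circuit: rows of X play the role of the
# low-pass filtered state vectors x(t); w_true and the noise level are invented.
rng = np.random.default_rng(1)
T, N = 1000, 50
X = rng.normal(size=(T, N))
w_true = rng.normal(size=N)
target = X @ w_true + 0.1 * rng.normal(size=T)   # desired readout K(x(t))

# Least-squares fit of w . x(t) to the target; the small ridge term is an
# added assumption for numerical stability, not part of the paper's setup.
ridge = 1e-6
w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
mse = np.mean((X @ w - target) ** 2)
print(mse)
```

After training, w is fixed and the readout K(x(t)) = w · x(t) is evaluated on fresh inputs, mirroring the train/test split used in the simulations.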
We found that computer simulations of such generic cortical microcircuit models confirm
the theoretical prediction that feedback from suitably trained readouts enables complex
state-dependent real-time processing of a fairly large number of diverse input spike trains
within a single circuit (all results shown are for test inputs that had not been used for
training). Readout neurons could be trained to turn a high-dimensional attractor on or
off in response to particular signals in 2 of the 4 independent input streams (Fig. 2D).
The target value for K(x(t)) during training was the currently desired activity-state of the
high-dimensional attractor, where x(t) resulted from giving already tentative spike trains
that matched this target value as feedback into the circuit. These neurons were trained
to represent in their firing activity at any time the information in which of input streams
1 or 2 a burst had most recently occurred. If it occurred most recently in stream 1, they
were trained to fire at 40 Hz, and not to fire otherwise. Thus these neurons were required
to represent the non-fading state of a very simple finite state machine, demonstrating in a
simple example the validity of Theorem 2.2.
The weights w of these readout neurons were determined by a sign-constrained linear
regression, so that weights from excitatory (inhibitory) presynaptic neurons were automatically positive (negative). Since these readout neurons had the same properties as neurons
within the circuit, this computer simulation also provided a first indication of the gain in
real-time processing capability that can be achieved by suitable training of a few spiking
neurons within an otherwise randomly connected recurrent circuit. Fig. 2 shows that other
readouts from the same circuit (that do not provide feedback) can be trained to amplify their
response to one of the input streams (Fig. 2E), or even switch their computational function
(Fig. 2F) if the high-dimensional attractor is in the on-state, thereby providing a model for
the way in which internal circuit states can change the ?program? for its online processing.
Continuous high-dimensional attractors that hold a time-varying analog value (instead of
a discrete state) through globally distributed activity within the circuit can be created in
the same way through feedback. In fact, several such high-dimensional attractors can coexist within the same circuit, see Fig. 3B,C,D. This gives rise to a model (Fig. 3) that
could explain how timing of behavior and reward expectation are learnt and controlled by
neural microcircuits on a behaviorally relevant large time scale. In addition Fig. 4 shows
that a continuous high-dimensional attractor that is created through feedback provides a
new model for a neural integrator, and that the current value of this neural integrator can
be combined within the same circuit and in real-time with variables extracted from timevarying analog input streams.
This learning-induced generation of high-dimensional attractors through feedback provides
a new model for the emergence of persistent firing in cortical circuits that does not rely
on especially constructed circuits, neurons, or synapses, and which is consistent with high
noise (see Fig. 4G for the quite realistic trial-to-trial variability in this circuit of HH neurons
with background noise according to [8]). This learning based model is also consistent with
the surprising plasticity that has recently been observed even in quite specialized neural
integrators [11]. Its robustness can be traced back to the fact that readouts can be trained to
correct errors in their previous feedback. Furthermore such error correction is not restricted
to linear computational operations, since the inherent kernel property of generic recurrent
circuits allows even linear readouts to carry out nonlinear computations on firing rates
(Fig. 2G). Whereas previous models for discrete or continuous attractors in recurrent neural
circuits required that the whole dynamics of such circuit was entrained by the attractor, our
new model predicts that persistent firing states can co-exist with other high-dimensional
attractors and with responses to time-varying afferent inputs within the same circuit. Note
that such attractors can equivalently be generated by training (instead of readouts) a few
neurons within an otherwise generic cortical microcircuit model.
Figure 3: Representation of time for behaviorally relevant time spans in a generic cortical microcircuit model. (A) Afferent circuit input, consisting of a cue in one channel (gray) and random spikes
(freshly drawn for each trial) in the other channels. (B) Response of 100 neurons from the same
circuit as in Fig. 2, which has here two co-existing high-dimensional attractors. The autonomously
generated periodic bursts with a periodic frequency of about 8 Hz are not related to the task, and
readouts were trained to become invariant to them. (C and D) Feedback from two linear readouts
that were simultaneously trained to create and control two high-dimensional attractors. One of them
was trained to decay in 400 ms (C), and the other in 600 ms (D) (scale in nA is the average current
injected by feedback into a randomly chosen subset of neurons in the circuit). (E) Response of the
same neurons as in (B), for the same circuit input, but with feedback from a different linear readout
that was trained to create a high-dimensional attractor that increases its activity and reaches a plateau
600 ms after the occurrence of the cue in the input stream. (F) Feedback from the linear readout that
creates this continuous high-dimensional attractor.
4
Discussion
We have demonstrated that persistent memory and online switching of real-time processing can be implemented in generic cortical microcircuit models by training a few neurons
Figure 4: A model for analog real-time computation on external and internal variables in a generic
cortical microcircuit (consisting of 600 conductance-based HH neurons). (A and B) Two input
streams as in Fig. 2; their firing rates r1 (t), r2 (t) are shown in (B). (C) Resulting firing activity
of 100 neurons in the circuit. (D) Performance of a neural integrator, generated by feedback from
a linear readout that was trained to output at any time t an approximation CA(t) of the integral ∫_0^t (r_1(s) − r_2(s)) ds over the difference of both input rates. Feedback values were injected as input currents into a randomly chosen subset of neurons in the circuit. Scale in nA shows average strength
of feedback currents (also in panel H). (E) Performance of linear readout that was trained to output 0
as long as CA(t) stayed below 1.35 nA, and to then output r_2(t) until the value of CA(t) dropped below 0.45 nA (i.e., in this test run during the shaded time periods). (F) Performance of a linear readout trained to output r_1(t) − CA(t), i.e. a combination of external and internal variables, at any time
t (both r1 and CA normalized into the range [0, 1]). (G) Response of a randomly chosen neuron in
the circuit for 10 repetitions of the same experiment (with input spike trains generated by Poisson
processes with the same time-course of firing rates), showing biologically realistic trial-to-trial variability. (H) Activity traces of a continuous attractor as in (D), but in 8 different trials for 8 different
fixed values of r1 and r2 (shown on the right). The resulting traces are very similar to the temporal
evolution of firing rates of neurons in area LIP that integrate sensory evidence (see Fig.5A in [12]).
(within or outside of the circuit) through very simple learning processes (linear regression, or alternatively, with some loss in performance, perceptron learning). The resulting high-dimensional attractors can be made noise-robust through training, thereby overcoming the
inherent brittleness of constructed attractors. The high dimensionality of these attractors,
which is caused by the small number of synaptic weights that are fixed for their creation,
allows the circuit state to move in or out of other attractors, and to absorb new information
from online inputs, while staying within such high-dimensional attractor. The resulting
virtually unlimited computational capability of fading memory circuits with feedback can
be explained on the basis of the theoretical results that were presented in section 2.
Acknowledgments
Helpful comments from Wulfram Gerstner, Stefan Haeusler, Herbert Jaeger, Konrad Koerding, Henry Markram, Gordon Pipa, Misha Tsodyks, and Tony Zador are gratefully acknowledged. Written under partial support by the Austrian Science Fund FWF, project #
S9102-N04, project # IST2002-506778 (PASCAL) and project # FP6-015879 (FACETS)
of the European Union.
References

[1] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.

[2] H. Jaeger and H. Haas. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304:78–80, 2004.

[3] P. Joshi and W. Maass. Movement generation with circuits of spiking neurons. Neural Computation, 17(8):1715–1738, 2005.

[4] W. Maass, P. Joshi, and E. D. Sontag. Computational aspects of feedback in neural circuits. Submitted for publication, 2005. Available online as #168 from http://www.igi.tugraz.at/maass/.

[5] M. S. Branicky. Universal computation and other capabilities of hybrid and continuous dynamical systems. Theoretical Computer Science, 138:67–100, 1995.

[6] M. Casey. The dynamics of discrete-time computation with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8:1135–1178, 1996.

[7] W. Maass and P. Orponen. On the effect of analog noise in discrete-time analog computations. Neural Computation, 10:1071–1095, 1998.

[8] A. Destexhe, M. Rudolph, and D. Pare. The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci., 4(9):739–751, 2003.

[9] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323–5328, 1998.

[10] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273–278, 2000.

[11] G. Major, R. Baker, E. Aksay, B. Mensh, H. S. Seung, and D. W. Tank. Plasticity and tuning by visual feedback of the stability of a neural integrator. Proc. Natl. Acad. Sci., 101(20):7739–7744, 2004.

[12] M. E. Mazurek, J. D. Roitman, J. Ditterich, and M. N. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13(11):1257–1269, 2003.
Fast Krylov Methods for N-Body Learning
Yang Wang
School of Computing Science
Simon Fraser University
[email protected]
Nando de Freitas
Department of Computer Science
University of British Columbia
[email protected]
Dustin Lang
Department of Computer Science
University of Toronto
[email protected]
Maryam Mahdaviani
Department of Computer Science
University of British Columbia
[email protected]
Abstract
This paper addresses the issue of numerical computation in machine
learning domains based on similarity metrics, such as kernel methods,
spectral techniques and Gaussian processes. It presents a general solution strategy based on Krylov subspace iteration and fast N-body learning methods. The experiments show significant gains in computation and
storage on datasets arising in image segmentation, object detection and
dimensionality reduction. The paper also presents theoretical bounds on
the stability of these methods.
1 Introduction
Machine learning techniques based on similarity metrics have gained wide acceptance over
the last few years. Spectral clustering [1] is a typical example. Here one forms a Laplacian matrix L = D^(−1/2) W D^(−1/2), where the entries of W measure the similarity between data points x_i ∈ X, i = 1, ..., N. For example, a popular choice is to set the entries of W to

w_ij = e^(−(1/σ) ‖x_i − x_j‖²)

where σ is a user-specified parameter. D is a normalizing diagonal matrix with entries d_i = Σ_j w_ij. The clusters can be found by running, say, K-means on the eigenvectors of L. K-means generates better clusters on this nonlinear embedding of the data provided one
adopts a suitable similarity metric.
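The pipeline just described can be sketched in a few lines. The two-blob toy data, σ = 1, and the sign-based cluster assignment (standing in for the K-means step) are all illustrative assumptions, not the paper's experiments:

```python
import numpy as np

# Toy data: two well-separated Gaussian blobs of 20 points each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
sigma = 1.0

# Similarity matrix w_ij = exp(-||x_i - x_j||^2 / sigma)
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq / sigma)

# Normalized matrix L = D^(-1/2) W D^(-1/2)
d = W.sum(axis=1)
L = W / np.sqrt(np.outer(d, d))

# Leading eigenvectors give the nonlinear embedding; K-means would normally be
# run on them. For two well-separated blobs the second eigenvector already
# splits the clusters by sign.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, -2] > 0).astype(int)
print(labels)
```

On larger or less separated data one would keep the top k eigenvectors and cluster their rows with K-means, as the text describes.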
The list of machine learning domains where one forms a covariance or similarity matrix
(be it W, D^(−1)W, or D − W) is vast and includes ranking on nonlinear manifolds [2],
semi-supervised and active learning [3], Gaussian processes [4], Laplacian eigen-maps [5],
stochastic neighbor embedding [6], multi-dimensional scaling, kernels on graphs [7] and
many other kernel methods for dimensionality reduction, feature extraction, regression and
classification. In these settings, one is interested in either inverting the similarity matrix
or finding some of its eigenvectors. The computational cost of both of these operations
is O(N³) while the storage requirement is O(N²). These costs are prohibitively large in
applications where one encounters massive quantities of data points or where one is interested in real-time solutions such as spectral image segmentation for mobile robots [8]. In
this paper, we present general numerical techniques for reducing the computational cost
to O(N log N), or even O(N) in specific cases, and the storage cost to O(N). These
reductions are achieved by combining Krylov subspace iterative solvers (such as Arnoldi,
Lanczos, GMRES and conjugate gradients) with fast kernel density estimation (KDE) techniques (such as fast multipole expansions, the fast Gauss transform and dual tree recursions
[9, 10, 11]).
Specific Krylov methods have been applied to kernel problems. For example, [12] uses
Lanczos for spectral clustering and [4] uses conjugate gradients for Gaussian processes.
However, the use of fast KDE methods, in particular fast multipole methods, to further
accelerate these techniques has only appeared in the context of interpolation [13] and our
paper on semi-supervised learning [8]. Here, we go for a more general exposition and
present several new examples, such as fast nonlinear embeddings and fast Gaussian processes. More importantly, we attack the issue of stability of these methods. Fast KDE
techniques have guaranteed error bounds. However, if these techniques are used inside iterative schemes based on orthogonalization of the Krylov subspace, there is a danger that
the errors might grow over iterations. In practice, good behaviour has been observed. In
Section 4, we present theoretical results that explain these observations and shed light on
the behaviour of these algorithms. Before doing so, we begin with a very brief review of
Krylov solvers and fast KDE methods.
2 Krylov subspace iteration
This section is a compressed overview of Krylov subspace iteration. The main message is
that Krylov methods are very efficient algorithms for solving linear systems and eigenvalue
problems, but they require a matrix vector multiplication at each iteration. In the next
section, we replace this expensive matrix-vector multiplication with a call to fast KDE
routines. Readers happy with this message and familiar with Krylov methods, such as
conjugate gradients and Lanczos, can skip the rest of this section.
For ease of presentation, let the similarity matrix be simply A = W ∈ ℝ^{N×N}, with
entries a_{ij} = a(x_i, x_j). (One can easily handle other cases, such as A = D^{−1}W and
A = D − W.) Typical measures of similarity include the polynomial kernel
a(x_i, x_j) = (x_i x_j^T + b)^p, the Gaussian kernel
a(x_i, x_j) = e^{−(x_i − x_j)(x_i − x_j)^T / σ} and the sigmoid kernel
a(x_i, x_j) = tanh(κ x_i x_j^T − δ),
where x_i x_j^T denotes a scalar inner product. Our goal is to solve linear systems
Ax = b and (possibly generalized) eigenvalue problems Ax = ?x. The former arise,
for example, in semi-supervised learning and Gaussian processes, while the latter arise in
spectral clustering and dimensionality reduction. One could attack these problems with
naive iterative methods such as the power method, Jacobi and Gauss-Seidel [14]. The
problem with these strategies is that the estimate x^{(t)}, at iteration t, only depends on the
previous estimate x^{(t−1)}. Hence, these methods typically take too many iterations to
converge. It is well accepted in the numerical computation field that Krylov methods [14,
15], which make use of the entire history of solutions {x^{(1)}, . . . , x^{(t−1)}}, converge at a
faster rate.
The intuition behind Krylov subspace methods is to use the history of the solutions we have
already computed. We formulate this intuition in terms of projecting an N -dimensional
problem onto a lower-dimensional subspace. Given a matrix A and a vector b, the associated Krylov matrix is:

    K = [b  Ab  A²b  . . . ].
The Krylov subspaces are the spaces spanned by the column vectors of this matrix. In
order to find a new estimate of x(t) we could project onto the Krylov subspace. However,
K is a poorly conditioned matrix. (As in the power method, Aᵗb converges to the
eigenvector corresponding to the largest eigenvalue of A.) We therefore need to construct
a well-conditioned orthogonal matrix Q^{(t)} = [q^{(1)} · · · q^{(t)}], with q^{(i)} ∈ ℝ^N, that spans
the Krylov space. That is, the leading t columns of K and Q span the same space. This is
easily done using the QR-decomposition of K [14], yielding the following Arnoldi relation
(augmented Schur factorization):

    AQ^{(t)} = Q^{(t+1)} H̃^{(t)},
where H̃^{(t)} is the augmented Hessenberg matrix:

    H̃^{(t)} =
        [ h_{1,1}    h_{1,2}    h_{1,3}    ...      h_{1,t}
          h_{2,1}    h_{2,2}    h_{2,3}    ...      h_{2,t}
             0       h_{3,2}      .        ...      h_{3,t}
             :          .          .         .         :
             0         ...        0     h_{t,t−1}   h_{t,t}
             0         ...        0         0      h_{t+1,t} ]

The eigenvalues of the smaller (t + 1) × t Hessenberg matrix approximate the eigenvalues
of A as t increases. These eigenvalues can be computed efficiently by applying the Arnoldi
relation recursively as shown in Figure 1. (If A is symmetric, then H̃ is tridiagonal and
we obtain the Lanczos algorithm.) Notice that the matrix-vector multiplication v = Aq is
the expensive step in the Arnoldi algorithm. Most Krylov algorithms resemble the Arnoldi
algorithm in this. To solve systems of equations, we can minimize either the residual
Arnoldi:
    Initialization: b = arbitrary, q^{(1)} = b/‖b‖
    FOR t = 1, 2, 3, . . .
        v = Aq^{(t)}
        FOR j = 1, . . . , t
            h_{j,t} = q^{(j)T} v
            v = v − h_{j,t} q^{(j)}
        h_{t+1,t} = ‖v‖
        q^{(t+1)} = v/h_{t+1,t}

GMRES:
    Initialization: q^{(1)} = b/‖b‖
    FOR t = 1, 2, 3, . . .
        Perform step t of the Arnoldi algorithm
        y^{(t)} = min_y ‖H̃^{(t)} y − ‖b‖ i‖
        Set x^{(t)} = Q^{(t)} y^{(t)}

Figure 1: The Arnoldi (left) and GMRES (right) algorithms.
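A minimal NumPy sketch of the Arnoldi iteration of Figure 1 (dense arithmetic, no breakdown handling; an illustration, not the paper's implementation) can be checked directly against the Arnoldi relation AQ^{(t)} = Q^{(t+1)} H̃^{(t)}:

```python
import numpy as np

def arnoldi(A, b, t):
    """t steps of Arnoldi.  Returns Q of shape (N, t+1) with orthonormal
    columns and the augmented Hessenberg matrix H of shape (t+1, t),
    satisfying A @ Q[:, :t] == Q @ H."""
    N = b.shape[0]
    Q = np.zeros((N, t + 1))
    H = np.zeros((t + 1, t))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(t):
        v = A @ Q[:, k]                  # the expensive matrix-vector product
        for j in range(k + 1):           # Gram-Schmidt against previous q's
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
b = rng.standard_normal(40)
Q, H = arnoldi(A, b, 8)
print(np.allclose(A @ Q[:, :8], Q @ H))             # Arnoldi relation holds
print(np.allclose(Q.T @ Q, np.eye(9), atol=1e-8))   # columns are orthonormal
```

The inner loop is modified Gram-Schmidt; for long runs a practical solver would add reorthogonalization and breakdown checks.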
r^{(t)} ≜ b − Ax^{(t)}, leading to the GMRES and MINRES algorithms, or the A-norm of the error, leading
to conjugate gradients (CG) [14]. GMRES, MINRES and CG apply to general, symmetric,
and SPD matrices, respectively. For ease of presentation, we focus on the GMRES algorithm.
At step t of GMRES, we approximate the solution by the vector in the Krylov subspace
x^{(t)} ∈ K^{(t)} that minimizes the norm of the residual. Since x^{(t)} is in the Krylov subspace,
it can be written as a linear combination of the columns of the Krylov matrix K^{(t)}. Our
problem therefore reduces to finding the vector y ∈ ℝ^t that minimizes ‖AK^{(t)} y − b‖.
As before, stability considerations force us to use the QR decomposition of K^{(t)}. That is,
instead of using a linear combination of the columns of K^{(t)}, we use a linear combination
of the columns of Q^{(t)}. So our least squares problem becomes y^{(t)} = min_y ‖AQ^{(t)} y − b‖.
Since AQ^{(t)} = Q^{(t+1)} H̃^{(t)}, we only need to solve a problem of dimension (t + 1) × t:
y^{(t)} = min_y ‖Q^{(t+1)} H̃^{(t)} y − b‖. Keeping in mind that the columns of the projection
matrix Q are orthonormal, we can rewrite this least squares problem as min_y ‖H̃^{(t)} y −
Q^{(t+1)T} b‖. We start the iterations with q^{(1)} = b/‖b‖ and hence Q^{(t+1)T} b = ‖b‖ i,
where i is the unit vector with a 1 in the first entry. The final form of our least squares
problem at iteration t is:

    y^{(t)} = min_y ‖H̃^{(t)} y − ‖b‖ i‖,

with solution x^{(t)} = Q^{(t)} y^{(t)}. The algorithm is shown in Figure 1. The least squares
problem of size (t + 1) × t to compute y^{(t)} can be solved in O(t) steps using Givens
rotations [14]. Notice again that the expensive step in each iteration is the matrix-vector
product v = Aq. This is true also of CG and other Krylov methods.
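To make the role of the matrix-vector product explicit, the following GMRES sketch takes the matvec as a function argument; this is exactly the hook where a fast KDE routine can be substituted for the dense product. The small least-squares problem is solved with `lstsq` for brevity, in place of the O(t) Givens-rotation update:

```python
import numpy as np

def gmres(matvec, b, t):
    """Minimal GMRES sketch.  `matvec` is any function returning A @ q,
    so an approximate (fast KDE) product can be dropped in."""
    N = b.shape[0]
    beta = np.linalg.norm(b)
    Q = np.zeros((N, t + 1))
    H = np.zeros((t + 1, t))
    Q[:, 0] = b / beta
    for k in range(t):
        v = matvec(Q[:, k])
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    # y^(t) = argmin_y ||H y - ||b|| i||,  then  x^(t) = Q^(t) y^(t)
    rhs = np.zeros(t + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return Q[:, :t] @ y

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 60))
A = M @ M.T + 60 * np.eye(60)        # well-conditioned SPD test matrix
b = rng.standard_normal(60)
x = gmres(lambda q: A @ q, b, 40)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b) < 1e-8)
```

Because only `matvec` touches A, the O(N²) storage and arithmetic of the dense kernel matrix never appear when an O(N) approximate product is used.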
One important property of the Arnoldi relation is that the residuals are orthogonal to the
space spanned by the columns of V = Q^{(t+1)} H̃^{(t)}. That is,

    V^T r^{(t)} = H̃^{(t)T} Q^{(t+1)T} (b − Q^{(t+1)} H̃^{(t)} y^{(t)}) = H̃^{(t)T} ‖b‖ i − H̃^{(t)T} H̃^{(t)} y^{(t)} = 0
In the following section, we introduce methods to speed up the matrix-vector product v =
Aq. These methods will incur, at most, a pre-specified (tolerance) error e^{(t)} at iteration
t. Later, we present theoretical bounds on how these errors affect the residuals and the
orthogonality of the Krylov subspace.
3 Fast KDE
The expensive step in Krylov methods is the operation v = Aq^{(t)}. This step requires that
we solve two O(N²) kernel estimates:

    v_i = Σ_{j=1}^{N} q_j^{(t)} a(x_i, x_j),    i = 1, 2, . . . , N.
It is possible to reduce the storage and computational cost to O(N) at the expense of a
small specified error tolerance ε, say 10⁻⁶, using the fast Gauss transform (FGT) algorithm
[16, 17]. This algorithm is an instance of more general fast multipole methods for solving
N-body interactions [9]. The FGT applies when the problem is low dimensional, say
x_k ∈ ℝ³. However, to attack larger dimensions one can adopt clustering-based partitions
as in the improved fast Gauss transform (IFGT) [10].
Fast multipole methods tend to work only in low dimensions and are specific to the choice
of similarity metric. Dual tree recursions based on KD-trees and ball trees [11, 18] overcome these difficulties, but on average cost O(N log N ). Due to space constraints, we can
only mention these techniques here, but refer the reader to [18] for a thorough comparison.
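As a toy illustration of a tolerance-controlled matvec (assuming a Gaussian kernel): dropping kernel entries below a data-dependent cutoff bounds each entry of v by an error of at most ε, which mimics the guarantee of FGT and dual-tree methods, although this sketch keeps the O(N²) cost and does not reproduce their asymptotic savings:

```python
import numpy as np

def kernel_matvec(X, q, sigma):
    """Exact O(N^2) Gaussian-kernel sum: v_i = sum_j q_j exp(-||x_i - x_j||^2 / sigma)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma) @ q

def kernel_matvec_eps(X, q, sigma, eps):
    """Toy eps-accurate substitute for a fast KDE routine: kernel entries
    below a cutoff are zeroed, so each entry of v is off by at most
    sum_j |q_j| * cutoff = eps.  (A real FGT or dual-tree method achieves
    the same per-call guarantee in O(N) or O(N log N).)"""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / sigma)
    cutoff = eps / np.abs(q).sum()
    K[K < cutoff] = 0.0
    return K @ q

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 3))
q = rng.standard_normal(300)
v = kernel_matvec(X, q, sigma=0.5)
v_eps = kernel_matvec_eps(X, q, sigma=0.5, eps=1e-6)
print(np.max(np.abs(v - v_eps)) <= 1e-6)   # per-entry error within tolerance
```

The per-call error of such an approximate product is exactly the quantity e^{(t)} bounded in the stability analysis of the next section.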
4 Stability results
The problem with replacing the matrix-vector multiplication at each iteration of the Krylov
methods is that we do not know how the errors accumulate over successive iterations. In
this section, we will derive bounds that describe what factors influence these errors. In
particular, the bounds will state what properties of the similarity metric and measurable
quantities affect the residuals and the orthogonality of the Krylov subspaces.
Several papers have addressed the issue of Krylov subspace stability [19, 20, 21]. Our
approach follows from [21]. For presentation purposes, we focus on the GMRES algorithm.
Let e^{(t)} denote the errors introduced in the approximate matrix-vector multiplication at
each iteration of Arnoldi. For the purposes of upper-bounding, this is the tolerance of the
fast KDE methods. Then, the fast KDE methods change the Arnoldi relation to:

    AQ^{(t)} + E^{(t)} = [Aq^{(1)} + e^{(1)}, . . . , Aq^{(t)} + e^{(t)}] = Q^{(t+1)} H̃^{(t)},

where E^{(t)} = [e^{(1)}, . . . , e^{(t)}]. The new true residuals are therefore:

    r^{(t)} = b − Ax^{(t)} = b − AQ^{(t)} y^{(t)} = b − Q^{(t+1)} H̃^{(t)} y^{(t)} + E^{(t)} y^{(t)}

and r̃^{(t)} = b − Q^{(t+1)} H̃^{(t)} y^{(t)} are the measured residuals.
We need to ensure two bounds when using fast KDE methods in Krylov iterations. First,
the measured residuals r̃^{(t)} should not deviate too far from the true residuals r^{(t)}. Second,
deviations from orthogonality should be upper-bounded. Let us address the first question.
The deviation in residuals is given by

    ‖r̃^{(t)} − r^{(t)}‖ = ‖E^{(t)} y^{(t)}‖.

Let y^{(t)} = [y_1, . . . , y_t]^T. Then, this deviation satisfies:

    ‖r̃^{(t)} − r^{(t)}‖ = ‖ Σ_{k=1}^{t} y_k e^{(k)} ‖ ≤ Σ_{k=1}^{t} |y_k| ‖e^{(k)}‖.    (1)
The deviation from orthogonality can be upper-bounded in a similar fashion:

    ‖V^T r^{(t)}‖ = ‖H̃^{(t)T} Q^{(t+1)T} (r̃^{(t)} + E^{(t)} y^{(t)})‖ = ‖H̃^{(t)T} Q^{(t+1)T} E^{(t)} y^{(t)}‖ ≤ ‖H̃^{(t)}‖ Σ_{k=1}^{t} |y_k| ‖e^{(k)}‖    (2)
The following lemma provides a relation between the y_k and the measured residuals r̃^{(k−1)}.

Lemma 1. [21, Lemma 5.1] Assume that t iterations of the inexact Arnoldi method have
been carried out. Then, for any k = 1, . . . , t,

    |y_k| ≤ (1 / σ_t(H̃^{(t)})) ‖r̃^{(k−1)}‖    (3)

where σ_t(H̃^{(t)}) denotes the t-th singular value of H̃^{(t)}.
The proof of the lemma follows from the QR decomposition of H̃^{(t)}; see [15, 21]. This
lemma, in conjunction with equations (1) and (2), allows us to establish the main theoretical
result of this section:

Proposition 1. Let ε > 0. If for every k ≤ t we have

    ‖e^{(k)}‖ < (σ_t(H̃^{(t)}) / t) (1 / ‖r̃^{(k−1)}‖) ε,

then ‖r̃^{(t)} − r^{(t)}‖ < ε. Moreover, if

    ‖e^{(k)}‖ < (σ_t(H̃^{(t)}) / (t ‖H̃^{(t)}‖)) (1 / ‖r̃^{(k−1)}‖) ε,

then ‖V^T r^{(t)}‖ < ε.
Proof: First, we have

    ‖r̃^{(t)} − r^{(t)}‖ ≤ Σ_{k=1}^{t} |y_k| ‖e^{(k)}‖ < Σ_{k=1}^{t} (‖r̃^{(k−1)}‖ / σ_t(H̃^{(t)})) (σ_t(H̃^{(t)}) / t) (ε / ‖r̃^{(k−1)}‖) = ε,

and similarly, ‖V^T r^{(t)}‖ ≤ ‖H̃^{(t)}‖ Σ_{k=1}^{t} |y_k| ‖e^{(k)}‖ < ε.
Proposition 1 tells us that in order to keep the residuals bounded while ensuring bounded
deviations from orthogonality at iteration k, we need to monitor the eigenvalues of H̃^{(t)} and
the measured residuals r̃^{(k−1)}. Of course, we have no access to H̃^{(t)}. However, monitoring
the residuals is of practical value. If the residuals decrease, we can increase the tolerance of
the fast KDE algorithms and vice versa. The bounds do lead to a natural way of constructing
adaptive algorithms for setting the tolerance of the fast KDE algorithms.
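A minimal sketch of such an adaptive schedule, reading the per-iteration error budget directly off Proposition 1; since σ_t(H̃^{(t)}) is not available during the run, it is treated here as a user-supplied estimate (a hypothetical interface, not the paper's):

```python
def kde_tolerance(measured_residual, sigma_t, t, eps):
    """Per-iteration error budget suggested by Proposition 1: the KDE call
    at step k may incur error up to (sigma_t(H~)/t) * eps / ||r~^(k-1)||,
    so as the measured residual shrinks, the allowed tolerance grows.
    `sigma_t` is a (hypothetical) running lower-bound estimate of the
    smallest singular value of the Hessenberg matrix."""
    return (sigma_t / t) * eps / measured_residual

# the allowed KDE tolerance grows as the measured residual decreases
print(kde_tolerance(1e-3, 0.5, 20, 1e-6) > kde_tolerance(1e-1, 0.5, 20, 1e-6))
```

In words: early iterations, where the residual is large, need accurate products, while later iterations can relax the KDE tolerance and become cheaper.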
[Figure 2 panels (a)-(d), with right plot: running time (seconds) versus data set size (number of features) for the NAIVE, CG and CG-DT methods.]
Figure 2: Figure (a) shows a test image from the PASCAL database. Figure (b) shows the
SIFT features extracted from the image. Figure (c) shows the positive feature predictions
for the label ?car?. Figure (d) shows the centroid of the positive features as a black dot. The
plot on the right shows the computational gains obtained by using fast Krylov methods.
5 Experimental results
The results of this section demonstrate that significant computational gains may be obtained by combining fast KDE methods with Krylov iterations. We present results in three
domains: spectral clustering and image segmentation [1, 12], Gaussian process regression
[4] and stochastic neighbor embedding [6].
5.1 Gaussian processes with large dimensional features
In this experiment we use Gaussian processes to predict the labels of 128-dimensional
SIFT features [22] for the purposes of object detection and localization as shown in Figure 2. There are typically thousands of features per image, so it is of paramount importance
to generate fast predictions. The hard computational task here involves inverting the covariance matrix of the Gaussian process. The figure shows that it is possible to do this
efficiently, under the same ROC error, by combining conjugate gradients [4] with dual
trees.
5.2 Spectral clustering and image segmentation
We applied spectral clustering to color image segmentation, a generalized eigenvalue problem. The types of segmentations obtained are shown in Figure 3. There are no perceptible
differences between them. We observed that fast Krylov methods run approximately twice
as fast as the Nystrom method. One should note that the result of Nystrom depends on the
quality of sampling, while fast N-body methods enable us to work directly with the full
matrix, so the solution is less sensitive. Once again, fast KDE methods lead to significant
computational improvements over Krylov algorithms (Lanczos in this case).
5.3 Stochastic neighbor embedding
Our final example is again a generalized eigenvalue problem arising in dimensionality reduction. We use the stochastic neighbor embedding algorithm of [6] to project two 3-D
structures to 2-D, as shown in Figure 4. Again, we observe significant computational improvements.
[Figure 3, right plot: running time (seconds, log scale) versus N (500 to 5000) for Lanczos, IFGT and Dual Tree.]
Figure 3: (left) Segmentation results (order: original image, IFGT, dual trees and Nystrom)
and (right) computational improvements obtained in spectral clustering.
6 Conclusions
We presented a general approach for combining Krylov solvers and fast KDE methods
to accelerate machine learning techniques based on similarity metrics. We demonstrated
some of the methods on several datasets and presented results that shed light on the stability
and convergence properties of these methods. One important point to make is that these
methods work better when there is structure in the data. There is no computational gain
if there is no statistical information in the data. This is a fascinating relation between
computation and statistical information, which we believe deserves further research and
understanding. One question is how we can design preconditioners in order to improve the
convergence behavior of these algorithms. Another important avenue for further research
is the application of the bounds presented in this paper in the design of adaptive algorithms.
Acknowledgments
We would like to thank Arnaud Doucet, Firas Hamze, Greg Mori and Changjiang Yang.
References
[1] A Y Ng, M I Jordan, and Y Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856, 2001.
[2] D Zhou, J Weston, A Gretton, O Bousquet, and B Scholkopf. Ranking on data manifolds. In Advances in Neural Information Processing Systems, 2004.
[3] X Zhu, J Lafferty, and Z Ghahramani. Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning, pages 912–919, 2003.
[4] M N Gibbs. Bayesian Gaussian processes for regression and classification. PhD Thesis, University of Cambridge, 1997.
[5] M Belkin and P Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[6] G Hinton and S Roweis. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems, pages 833–840, 2002.
[7] A Smola and R Kondor. Kernels and regularization on graphs. In Computational Learning Theory, pages 144–158, 2003.
[Figure 4 panels: true manifold, sampled data, SNE embedding and SNE-with-IFGT embedding for the S-curve and Swiss-roll datasets, with running time (seconds, log scale) versus N (1000 to 5000) for SNE and SNE with IFGT.]
Figure 4: Examples of embedding on S-curve and Swiss-roll datasets.
[8] M Mahdaviani, N de Freitas, B Fraser, and F Hamze. Fast computational methods for visually
guided robots. In IEEE International Conference on Robotics and Automation, 2004.
[9] L Greengard and V Rokhlin. A fast algorithm for particle simulations. Journal of Computational Physics, 73:325–348, 1987.
[10] C Yang, R Duraiswami, N A Gumerov, and L S Davis. Improved fast Gauss transform and efficient kernel density estimation. In International Conference on Computer Vision, Nice, 2003.
[11] A Gray and A Moore. Rapid evaluation of multiple density models. In Artificial Intelligence and Statistics, 2003.
[12] J Shi and J Malik. Normalized cuts and image segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 731–737, 1997.
[13] R K Beatson, J B Cherrie, and C T Mouat. Fast fitting of radial basis functions: Methods based on preconditioned GMRES iteration. Advances in Computational Mathematics, 11:253–270, 1999.
[14] J W Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[15] Y Saad. Iterative Methods for Sparse Linear Systems. The PWS Publishing Company, 1996.
[16] L Greengard and J Strain. The fast Gauss transform. SIAM Journal on Scientific and Statistical Computing, 12(1):79–94, 1991.
[17] B J C Baxter and G Roussos. A new error estimate of the fast Gauss transform. SIAM Journal on Scientific Computing, 24(1):257–259, 2002.
[18] D Lang, M Klaas, and N de Freitas. Empirical testing of fast kernel density estimation algorithms. Technical Report TR-2005-03, Department of Computer Science, UBC, 2005.
[19] G H Golub and Q Ye. Inexact preconditioned conjugate gradient method with inner-outer iteration. SIAM Journal on Scientific Computing, 21:1305–1320, 1999.
[20] G W Stewart. Backward error bounds for approximate Krylov subspaces. Linear Algebra and its Applications, 340:81–86, 2002.
[21] V Simoncini and D B Szyld. Theory of inexact Krylov subspace methods and applications to scientific computing. SIAM Journal on Scientific Computing, 25:454–477, 2003.
[22] D G Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
Cristian Sminchisescu1,2,3 Atul Kanujia3 Zhiguo Li3 Dimitris Metaxas3
1
TTI-C, 1497 East 50th Street, Chicago, IL, 60637, USA
2
University of Toronto, Department of Computer Science, Canada
3
Rutgers University, Department of Computer Science, USA
[email protected], {kanaujia,zhli,dnm}@cs.rutgers.edu
Abstract
We present a conditional temporal probabilistic framework for reconstructing 3D human motion in monocular video based on descriptors encoding image silhouette observations. For computational efficiency we
restrict visual inference to low-dimensional kernel induced non-linear
state spaces. Our methodology (kBME) combines kernel PCA-based
non-linear dimensionality reduction (kPCA) and Conditional Bayesian
Mixture of Experts (BME) in order to learn complex multivalued predictors between observations and model hidden states. This is necessary
for accurate, inverse, visual perception inferences, where several probable, distant 3D solutions exist due to noise or the uncertainty of monocular perspective projection. Low-dimensional models are appropriate
because many visual processes exhibit strong non-linear correlations in
both the image observations and the target, hidden state variables. The
learned predictors are temporally combined within a conditional graphical model in order to allow a principled propagation of uncertainty. We
study several predictors and empirically show that the proposed algorithm positively compares with techniques based on regression, Kernel
Dependency Estimation (KDE) or PCA alone, and gives results competitive to those of high-dimensional mixture predictors at a fraction of their
computational cost. We show that the method successfully reconstructs
the complex 3D motion of humans in real monocular video sequences.
1
Introduction and Related Work
We consider the problem of inferring 3D articulated human motion from monocular video.
This research topic has applications for scene understanding including human-computer interfaces, markerless human motion capture, entertainment and surveillance. A monocular
approach is relevant because in real-world settings the human body parts are rarely completely observed even when using multiple cameras. This is due to occlusions form other
people or objects in the scene. A robust system has to necessarily deal with incomplete,
ambiguous and uncertain measurements. Methods for 3D human motion reconstruction
can be classified as generative and discriminative. They both require a state representation,
namely a 3D human model with kinematics (joint angles) or shape (surfaces or joint positions) and they both use a set of image features as observations for state inference. The
computational goal in both cases is the conditional distribution for the model state given
image observations.
Generative model-based approaches [6, 16, 14, 13] have been demonstrated to flexibly reconstruct complex unknown human motions and to naturally handle problem constraints.
However it is difficult to construct reliable observation likelihoods due to the complexity
of modeling human appearance. This varies widely due to different clothing and deformation, body proportions or lighting conditions. Besides being somewhat indirect, the
generative approach further imposes strict conditional independence assumptions on the
temporal observations given the states in order to ensure computational tractability. Due
to these factors inference is expensive and produces highly multimodal state distributions
[6, 16, 13]. Generative inference algorithms require complex annealing schedules [6, 13]
or systematic non-linear search for local optima [16] in order to ensure continuing tracking.
These difficulties motivate the advent of a complementary class of discriminative algorithms [10, 12, 18, 2], that approximate the state conditional directly, in order to simplify
inference. However, inverse, observation-to-state multivalued mappings are difficult to
learn (see e.g. fig. 1a) and a probabilistic temporal setting is necessary. In an earlier paper
[15] we introduced a probabilistic discriminative framework for human motion reconstruction. Because the method operates in the originally selected state and observation spaces
that can be task generic, therefore redundant and often high-dimensional, inference is more
expensive and can be less robust. To summarize, reconstructing 3D human motion in a
Figure 1: (a, Left) Example of 180° ambiguity in predicting 3D human poses from silhouette image features (center). It is essential that multiple plausible solutions (e.g. F_1 and
F_2) are correctly represented and tracked over time. A single state predictor will either
average the distant solutions or zig-zag between them, see also tables 1 and 2. (b, Right) A
conditional chain model. The local distributions p(y_t|y_{t−1}, z_t) or p(y_t|z_t) are learned as
in fig. 2. For inference, the predicted local state conditional is recursively combined with
the filtered prior, c.f. (1).
conditional temporal framework poses the following difficulties: (i) The mapping between
temporal observations and states is multivalued (i.e. the local conditional distributions to be
learned are multimodal), therefore it cannot be accurately represented using global function
approximations. (ii) Human models have multivariate, high-dimensional continuous states
of 50 or more human joint angles. The temporal state conditionals are multimodal which
makes efficient Kalman filtering algorithms inapplicable. General inference methods (particle filters, mixtures) have to be used instead, but these are expensive for high-dimensional
models (e.g. when reconstructing the motion of several people that operate in a joint state
space). (iii) The components of the human state and of the silhouette observation vector exhibit strong correlations, because many repetitive human activities like walking or running
have low intrinsic dimensionality. It appears wasteful to work with high-dimensional states
of 50+ joint angles. Even if the space were truly high-dimensional, predicting correlated
state dimensions independently may still be suboptimal.
In this paper we present a conditional temporal estimation algorithm that restricts visual
inference to low-dimensional, kernel induced state spaces. To exploit correlations among
observations and among state variables, we model the local, temporal conditional distributions using ideas from Kernel PCA [11, 19] and conditional mixture modeling [7, 5],
here adapted to produce multiple probabilistic predictions. The corresponding predictor is
referred to as a Conditional Bayesian Mixture of Low-dimensional Kernel-Induced Experts
(kBME). By integrating it within a conditional graphical model framework (fig. 1b), we
can exploit temporal constraints probabilistically. We demonstrate that this methodology is
effective for reconstructing the 3D motion of multiple people in monocular video. Our contribution w.r.t. [15] is a probabilistic conditional inference framework that operates over a
non-linear, kernel-induced low-dimensional state spaces, and a set of experiments (on both
real and artificial image sequences) that show how the proposed framework positively compares with powerful predictors based on KDE, PCA, or with the high-dimensional models
of [15] at a fraction of their cost.
2
Probabilistic Inference in a Kernel Induced State Space
We work with conditional graphical models with a chain structure [9], as shown in fig. 1b.
These have continuous temporal states y_t, t = 1 . . . T, and observations z_t. For compactness,
we denote joint states Y_t = (y_1, y_2, . . . , y_t) or joint observations Z_t = (z_1, . . . , z_t).
Learning and inference are based on local conditionals: p(y_t|z_t) and p(y_t|y_{t−1}, z_t), with
y_t and z_t being low-dimensional, kernel induced representations of some initial model
Inference is performed in a low-dimensional, non-linear, kernel induced latent state space
(see fig. 1b and fig. 2 and (1)). For display or error reporting, we compute the original
conditional p(x|r), or a temporally filtered version p(xt |Rt ), Rt = (r1 , r2 , . . . , rt ), using
a learned pre-image state map [3].
2.1
Density Propagation for Continuous Conditional Chains
For online filtering, we compute the optimal distribution p(y_t|Z_t) for the state y_t, conditioned on observations Z_t up to time t. The filtered density can be recursively derived
as:

    p(y_t|Z_t) = ∫_{y_{t−1}} p(y_t|y_{t−1}, z_t) p(y_{t−1}|Z_{t−1})    (1)
We compute (1) using a conditional mixture for p(y_t|y_{t-1}, z_t) (a Bayesian mixture of experts, cf. §2.2) and the prior p(y_{t-1}|Z_{t-1}), each having, say, M components. We integrate the M² pairwise products of Gaussians analytically. The means of the expanded posterior are clustered and the centers are used to initialize a reduced M-component Kullback-Leibler approximation that is refined using gradient descent [15]. The propagation rule (1) is similar to the one used for discrete state labels [9], but here we work with multivariate continuous state spaces and represent the local multimodal state conditionals using kBME (fig. 2), not log-linear models [9] (these would require intractable normalization). This complex continuous model rules out inference based on Kalman filtering or dynamic programming [9].
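For intuition, the prediction step (1) has a closed form when the experts are linear-Gaussian. The sketch below is ours, not the paper's: it uses 1-D states and holds the gates fixed (in the full model they depend on the input), expanding an M-component prior through an M-component conditional into the M²-component mixture that the clustering step then reduces back to M components.

```python
import numpy as np

def propagate(prior_w, prior_mu, prior_var, gates, slopes, noise_var):
    """One prediction step of eq. (1) for 1-D states: the prior is a mixture
    sum_i w_i N(mu_i, v_i), and the dynamics is a mixture of linear-Gaussian
    experts y_t = a_j * y_{t-1} + n_j, n_j ~ N(0, q_j), with fixed gates g_j.
    Marginalizing y_{t-1} analytically yields an M*M-component mixture."""
    new_w, new_mu, new_var = [], [], []
    for wi, mi, vi in zip(prior_w, prior_mu, prior_var):
        for gj, aj, qj in zip(gates, slopes, noise_var):
            new_w.append(wi * gj)
            new_mu.append(aj * mi)             # E[a y + n] = a mu
            new_var.append(aj ** 2 * vi + qj)  # Var[a y + n] = a^2 v + q
    return np.array(new_w), np.array(new_mu), np.array(new_var)
```

The paper then clusters the M² means and refines a reduced M-component approximation under a KL objective; that reduction step is omitted here.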
2.2 Learning Bayesian Mixtures over Kernel Induced State Spaces (kBME)
In order to model conditional mappings between low-dimensional non-linear spaces we
rely on kernel dimensionality reduction and conditional mixture predictors. The authors of
KDE [19] propose a powerful structured unimodal predictor. This works by decorrelating
the output using kernel PCA and learning a ridge regressor between the input and each
decorrelated output dimension.
Our procedure is also based on kernel PCA but takes into account the structure of the studied visual problem, where both inputs and outputs are likely to be low-dimensional and the mapping between them multivalued. The output variables x_i are projected onto the column vectors of the principal space in order to obtain their principal coordinates y_i. A
[Figure 2 diagram: the input r ∈ R ⊂ R^r and the output x ∈ X ⊂ R^x are lifted by kernel maps φ_r and φ_x into feature spaces F_r and F_x; kernel PCA projects them onto the principal subspaces P(F_r) and P(F_x), giving z and y; a conditional mixture p(y|z) links the two, and a pre-image map recovers x from y, so that p(x|r) ≡ p(x|y).]
Figure 2: The learned low-dimensional predictor, kBME, for computing p(x|r) ≡ p(x_t|r_t), ∀t. (We similarly learn p(x_t|x_{t-1}, r_t), with input (x, r) instead of r; here we illustrate only p(x|r) for clarity.) The input r and the output x are decorrelated using kernel PCA to obtain z and y respectively. The kernels used for the input and output are φ_r and φ_x, with induced feature spaces F_r and F_x, respectively. Their principal subspaces obtained by kernel PCA are denoted by P(F_r) and P(F_x), respectively. A conditional Bayesian mixture of experts p(y|z) is learned using the low-dimensional representation (z, y). Using learned local conditionals of the form p(y_t|z_t) or p(y_t|y_{t-1}, z_t), temporal inference can be efficiently performed in a low-dimensional kernel-induced state space (see e.g. (1) and fig. 1b). For visualization and error measurement, the filtered density, e.g. p(y_t|Z_t), can be mapped back to p(x_t|R_t) using the pre-image, cf. (3).
similar procedure is performed on the inputs r_i to obtain z_i. In order to relate the reduced feature spaces of z and y (P(F_r) and P(F_x)), we estimate a probability distribution over mappings from training pairs (z_i, y_i). We use a conditional Bayesian mixture of experts (BME) [7, 5] in order to account for ambiguity when mapping similar, possibly identical reduced feature inputs to very different feature outputs, as is common in our problem (fig. 1a). This gives a model that is a conditional mixture of low-dimensional kernel-induced experts (kBME):
p(y|z) = Σ_{j=1}^{M} g(z|λ_j) N(y|W_j z, Ω_j)   (2)
where g(z|λ_j) is a softmax function parameterized by λ_j, and (W_j, Ω_j) are the parameters and the output covariance of expert j, here a linear regressor. As in many Bayesian settings [17, 5], the weights of the experts and of the gates, W_j and λ_j, are controlled by hierarchical priors, typically Gaussians with 0 mean, having inverse variance hyperparameters controlled by a second level of Gamma distributions. We learn this model using a double-loop EM and employ ML-II type approximations [8, 17] with greedy (weight) subset selection [17, 15].
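A minimal numeric sketch of evaluating the conditional (2) (illustrative only; the function and variable names are ours, and the experts are stored as dense arrays):

```python
import numpy as np

def kbme_density(y, z, Lam, W, Omega):
    """Evaluate p(y|z) = sum_j g(z|lam_j) N(y | W_j z, Omega_j)  (eq. 2).
    Lam: (M, d_z) gate parameters; W: (M, d_y, d_z) expert weights;
    Omega: (M, d_y, d_y) expert output covariances."""
    logits = Lam @ z                       # softmax gate activations
    g = np.exp(logits - logits.max())
    g /= g.sum()                           # gates sum to one
    p = 0.0
    for j in range(len(g)):
        mu = W[j] @ z                      # expert j's linear prediction
        d = y - mu
        cov = Omega[j]
        norm = np.sqrt((2 * np.pi) ** len(y) * np.linalg.det(cov))
        p += g[j] * np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm
    return p
```

With zero gate parameters and zero expert weights, the mixture collapses to a single standard normal, which is an easy sanity check.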
Finally, the kBME algorithm requires the computation of pre-images in order to recover the state distribution x from its image y ∈ P(F_x). This is a closed-form computation for polynomial kernels of odd degree. For more general kernels, optimization or learning (regression-based) methods are necessary [3]. Following [3, 19], we use a sparse Bayesian kernel regressor to learn the pre-image. This is based on training data (x_i, y_i):

p(x|y) = N(x|A φ_y(y), Σ)   (3)
with parameters and covariances (A, Σ). Since temporal inference is performed in the low-dimensional kernel-induced state space, the pre-image function needs to be calculated only for visualizing results or for the purpose of error reporting. Propagating the result from the reduced feature space P(F_x) to the output space X produces a Gaussian mixture with M elements, having coefficients g(z|λ_j) and components N(x|A φ_y(W_j z), A J_{φ_y} Ω_j J_{φ_y}^T A^T + Σ), where J_{φ_y} is the Jacobian of the mapping φ_y.
3 Experiments
We run experiments on both real image sequences (fig. 5 and fig. 6) and on sequences where
silhouettes were artificially rendered. The prediction error is reported in degrees (for mixtures of experts, this is w.r.t. the most probable one, but see also fig. 4a), normalized per joint angle, per frame. The models are learned using standard cross-validation. Pre-images are learned using kernel regressors and have average error 1.7°.
Training Set and Model State Representation: For training we gather pairs of 3D human
poses together with their image projections, here silhouettes, using the graphics package
Maya. We use realistically rendered computer graphics human surface models which we
animate using human motion capture [1]. Our original human representation (x) is based
on articulated skeletons with spherical joints and has 56 skeletal d.o.f. including global
translation. The database consists of 8000 samples of human activities including walking,
running, turns, jumps, gestures in conversations, quarreling and pantomime.
Image Descriptors: We work with image silhouettes obtained using statistical background
subtraction (with foreground and background models). Silhouettes are informative for pose
estimation although prone to ambiguities (e.g. the left / right limb assignment in side views)
or occasional lack of observability of some of the d.o.f. (e.g. 180° ambiguities in the global
azimuthal orientation for frontal views, e.g. fig. 1a). These are multiplied by intrinsic forward / backward monocular ambiguities [16]. As observations r, we use shape contexts
extracted on the silhouette [4] (5 radial, 12 angular bins, size range 1/8 to 3 on log scale).
The features are computed at different scales and sizes for points sampled on the silhouette. To work in a common coordinate system, we cluster all features in the training set
into K = 50 clusters. To compute the representation of a new shape feature (a point on the
silhouette), we "project" onto the common basis by (inverse distance) weighted voting into
the cluster centers. To obtain the representation (r) for a new silhouette we regularly sample 200 points on it and add all their feature vectors into a feature histogram. Because the
representation uses overlapping features of the observation, the elements of the descriptor
are not independent. However, a conditional temporal framework (fig. 1b) flexibly accommodates this.
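The voting step can be sketched as follows (an illustrative reading of the text: the exact vote normalization is not specified in the paper, so here each sampled point contributes one unit of mass, split across centers by inverse distance):

```python
import numpy as np

def silhouette_descriptor(features, centers, eps=1e-6):
    """Soft-assign each local shape feature to K cluster centers by
    inverse-distance weighted voting, accumulating a K-bin histogram.
    features: (n_points, d) local descriptors sampled on the silhouette;
    centers: (K, d) cluster centers learned from the training set."""
    hist = np.zeros(centers.shape[0])
    for f in features:
        d = np.linalg.norm(centers - f, axis=1)
        w = 1.0 / (d + eps)       # inverse-distance votes
        hist += w / w.sum()       # each point contributes one unit of mass
    return hist
```

With 200 points sampled per silhouette, the resulting histogram plays the role of the observation r in the text.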
For experiments, we use Gaussian kernels for the joint angle feature space and dot-product kernels for the observation feature space. We learn state conditionals for p(y_t|z_t) and p(y_t|y_{t-1}, z_t) using 6 dimensions for the joint angle kernel-induced state space and 25 dimensions for the observation-induced feature space, respectively. In fig. 3b we show an evaluation of the efficacy of our kBME predictor for different dimensions in the joint angle kernel-induced state space (the observation feature space dimension is here 50). On the analyzed dancing sequence, which involves complex motions of the arms and the legs, the non-linear model significantly outperforms alternative PCA methods and gives good predictions for compact, low-dimensional models.¹
In tables 1 and 2, as well as fig. 4, we perform quantitative experiments on artificially
rendered silhouettes. 3D ground truth joint angles are available and this allows a more
¹ Running times: On a Pentium 4 PC (3 GHz, 2 GB RAM), a full-dimensional BME model with 5 experts takes 802s to train p(x_t|x_{t-1}, r_t), whereas a kBME (including the pre-image) takes 95s to train p(y_t|y_{t-1}, z_t). The prediction time is 13.7s for BME and 8.7s (including the pre-image cost of 1.04s) for kBME. The integration in (1) takes 2.67s for BME and 0.31s for kBME. The speed-up for kBME is significant and likely to increase with original models having higher dimensionality.
[Figure 3 plots: (a) number of clusters (log scale) vs. degree of multimodality (1-8); (b) prediction error (log scale) vs. number of dimensions (0-60) for kBME, KDE-RVM, PCA-BME and PCA-RVM.]
Figure 3: (a, Left) Analysis of "multimodality" for a training set. The input z_t dimension is 25, the output y_t dimension is 6, both reduced using kPCA. We cluster independently in (y_{t-1}, z_t) and y_t using many clusters (2100) to simulate small input perturbations, and we histogram the y_t clusters falling within each cluster in (y_{t-1}, z_t). This gives intuition on the degree of ambiguity in modeling p(y_t|y_{t-1}, z_t) for small perturbations in the input. (b, Right) Evaluation of dimensionality reduction methods for an artificial dancing sequence (models trained on 300 samples). The kBME is our model (§2.2), whereas the KDE-RVM is a KDE model learned with a Relevance Vector Machine (RVM) [17] feature space map. PCA-BME and PCA-RVM are models where the mapping between feature spaces (obtained using PCA) is learned using a BME and an RVM, respectively. The non-linearity is significant. Kernel-based methods outperform PCA and give low prediction error for 5-6d models.
systematic evaluation. Notice that the kernelized low-dimensional models generally outperform the PCA ones. At the same time, they give results competitive to the ones of
high-dimensional BME predictors, while being lower-dimensional and therefore significantly less expensive for inference, e.g. the integral in (1).
In fig. 5 and fig. 6 we show human motion reconstruction results for two real image sequences. Fig. 5 shows the good quality reconstruction of a person performing an agile
jump. (Given the missing observations in a side view, 3D inference for the occluded body
parts would not be possible without using prior knowledge!) For this sequence we do inference using conditionals having 5 modes and reduced 6d states. We initialize tracking using
p(y_t|z_t), whereas for inference we use p(y_t|y_{t-1}, z_t) within (1). In the second sequence
in fig. 6, we simultaneously reconstruct the motion of two people mimicking domestic activities, namely washing a window and picking an object. Here we do inference over a
product, 12-dimensional state space consisting of the joint 6d state of each person. We
obtain good 3D reconstruction results, using only 5 hypotheses. Notice however, that the
results are not perfect, there are small errors in the elbow and the bending of the knee for
the subject at the l.h.s., and in the different wrist orientations for the subject at the r.h.s.
This reflects the bias of our training set.
                   KDE-RR    RVM   KDE-RVM    BME   kBME
Walk and turn       10.46   4.95      7.57   4.27   4.69
Conversation         7.95   4.96      6.31   4.15   4.79
Run and turn left    5.22   5.02      6.25   5.01   4.92
Table 1: Comparison of average joint angle prediction error for different models. All kPCA-based models use 6 output dimensions. Testing is done on 100 video frames for each sequence; the inputs are artificially generated silhouettes, not in the training set. 3D joint angle ground truth is used for evaluation. KDE-RR is a KDE model with ridge regression (RR) for the feature space mapping; KDE-RVM uses an RVM. BME uses a Bayesian mixture of experts with no dimensionality reduction. kBME is our proposed model. kPCA-based methods use kernel regressors to compute pre-images.
[Figure 4 plots: (a) "Expert Prediction": frequency of being closest to ground truth vs. expert number (1-5); (b) frequency of being close to ground truth vs. current expert (1-5), split by the 1st-5th most probable previous output.]
Figure 4: (a, Left) Histogram showing the accuracy of various expert predictors: how
many times the expert ranked as the k-th most probable by the model (horizontal axis) is
closest to the ground truth. The model is consistent (the most probable expert indeed is
the most accurate most frequently), but occasionally less probable experts are better. (b,
Right) Histograms show the dynamics of p(y_t|y_{t-1}, z_t), i.e. how the probability mass is
redistributed among experts between two successive time steps, in a conversation sequence.
                     KDE-RR    RVM   KDE-RVM   BME   kBME
Walk and turn back     7.59    6.9      7.15   3.6   3.72
Run and turn          17.7    16.8     16.08   8.2   8.01
Table 2: Joint angle prediction error computed for two complex sequences with walks, runs
and turns, thus more ambiguity (100 frames). Models have 6 state dimensions. Unimodal
predictors average competing solutions. kBME has significantly lower error.
Figure 5: Reconstruction of a jump (selected frames). Top: original image sequence. Middle: extracted silhouettes. Bottom: 3D reconstruction seen from a synthetic viewpoint.
4 Conclusion
We have presented a probabilistic framework for conditional inference in latent kernel-induced low-dimensional state spaces. Our approach has the following properties: (a)
Figure 6: Reconstructing the activities of 2 people operating in a 12-d state space (each
person has its own 6d state). Top: original image sequence. Bottom: 3D reconstruction
seen from a synthetic viewpoint.
Accounts for non-linear correlations among input or output variables, by using kernel non-linear dimensionality reduction (kPCA); (b) Learns probability distributions over mappings between low-dimensional state spaces using conditional Bayesian mixtures of experts, as required for accurate prediction. In the resulting low-dimensional kBME predictor, ambiguities and multiple solutions common in visual, inverse perception problems are accurately represented. (c) Works in a continuous, conditional temporal probabilistic setting and offers a formal management of uncertainty. We show comparisons that demonstrate how the proposed approach outperforms regression, PCA or KDE alone for reconstructing 3D human motion in monocular video. In future work we will investigate scaling aspects for large training sets and alternative structured prediction methods.
References
[1] CMU Human Motion DataBase. Online at http://mocap.cs.cmu.edu/search.html, 2003.
[2] A. Agarwal and B. Triggs. 3d human pose from silhouettes by Relevance Vector Regression.
In CVPR, 2004.
[3] G. Bakir, J. Weston, and B. Scholkopf. Learning to find pre-images. In NIPS, 2004.
[4] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape
contexts. PAMI, 24, 2002.
[5] C. Bishop and M. Svensen. Bayesian mixtures of experts. In UAI, 2003.
[6] J. Deutscher, A. Blake, and I. Reid. Articulated Body Motion Capture by Annealed Particle
Filtering. In CVPR, 2000.
[7] M. Jordan and R. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural
Computation, (6):181–214, 1994.
[8] D. MacKay. Bayesian interpolation. Neural Computation, 4(5):720–736, 1992.
[9] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information
extraction and segmentation. In ICML, 2000.
[10] R. Rosales and S. Sclaroff. Learning Body Pose Via Specialized Maps. In NIPS, 2002.
[11] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[12] G. Shakhnarovich, P. Viola, and T. Darrell. Fast Pose Estimation with Parameter Sensitive
Hashing. In ICCV, 2003.
[13] L. Sigal, S. Bhatia, S. Roth, M. Black, and M. Isard. Tracking Loose-limbed People. In CVPR,
2004.
[14] C. Sminchisescu and A. Jepson. Generative Modeling for Continuous Non-Linearly Embedded
Visual Inference. In ICML, pages 759–766, Banff, 2004.
[15] C. Sminchisescu, A. Kanaujia, Z. Li, and D. Metaxas. Discriminative Density Propagation for
3D Human Motion Estimation. In CVPR, 2005.
[16] C. Sminchisescu and B. Triggs. Kinematic Jump Processes for Monocular 3D Human Tracking.
In CVPR, volume 1, pages 69–76, Madison, 2003.
[17] M. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. JMLR, 2001.
[18] C. Tomasi, S. Petrov, and A. Sastry. 3d tracking = classification + interpolation. In ICCV, 2003.
[19] J. Weston, O. Chapelle, A. Elisseeff, B. Scholkopf, and V. Vapnik. Kernel dependency estimation. In NIPS, 2002.
Noisy Overcomplete Channels
Eizaburo Doi1 , Doru C. Balcan2 , & Michael S. Lewicki1,2
1
Center for the Neural Basis of Cognition,
2
Computer Science Department,
Carnegie Mellon University, Pittsburgh, PA 15213
{edoi,dbalcan,lewicki}@cnbc.cmu.edu
Abstract
Biological sensory systems are faced with the problem of encoding a
high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as
coding and transmitting a multi-dimensional, analog signal over a set of
noisy channels. Previously, we have shown that robust, overcomplete
codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis
that characterizes the optimal linear coder and decoder for one- and twodimensional data. The analysis allows for an arbitrary number of coding
units, thus including both under- and over-complete representations, and
provides a number of important insights into optimal coding strategies.
In particular, we show how the form of the code adapts to the number
of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of highdimensional image data and show that these codes are substantially more
robust compared against other image codes such as ICA and wavelets.
1 Introduction
In neural systems, the representational capacity of a single neuron is estimated to be as
low as 1 bit/spike [1, 2]. The characteristics of the optimal coding strategy under such
conditions, however, remain an open question. Recent efficient coding models for sensory
coding such as sparse coding and ICA have provided many insights into visual sensory
coding (for a review, see [3]), but those models made the implicit assumption that the
representational capacity of individual neurons was infinite. Intuitively, such a limit on
representational precision should strongly influence the form of the optimal code. In particular, it should be possible to increase the number of limited capacity units in a population
to form a more precise representation of the sensory signal. However, to the best of our
knowledge, such a code has not been characterized analytically, even in the simplest case.
Here we present a theoretical analysis of this problem for one- and two-dimensional data
for arbitrary numbers of units. For simplicity, we assume that the encoder and decoder
are both linear, and that the goal is to minimize the mean squared error (MSE) of the
reconstruction. In contrast to our previous report, which examined noisy overcomplete
representations [4], the cost function does not contain a sparsity prior. This simplification
makes the cost depend up to second order statistics, making it analytically tractable while
preserving the robustness to noise.
2 The model
To define our model, we assume that the data is N-dimensional, has zero mean and covariance matrix Σ_x, and define two matrices W ∈ R^{M×N} and A ∈ R^{N×M}. For each data point x, its representation r in the model is the linear transform of x through matrix W, perturbed by the additive noise (i.e., channel noise) n ∼ N(0, σ_n² I_M):

r = Wx + n = u + n.   (1)
We refer to W as the encoding matrix and its row vectors as encoding vectors. The reconstruction of a data point from its representation is simply the linear transform of the latter, using matrix A:

x̂ = Ar = AWx + An.   (2)
We refer to A as the decoding matrix and its column vectors as decoding vectors. The term
AWx in eq. 2 determines how the reconstruction depends on the data, while An reflects
the channel noise in the reconstruction. When there is no channel noise (n = 0), AW = I
is equivalent to perfect reconstruction. A graphical description of this system is shown in
Fig. 1.
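The encode/decode pipeline of eqs. (1)-(2) is easy to state in code. The sketch below also checks the noiseless perfect-reconstruction condition AW = I by taking A to be the pseudoinverse of a full-column-rank W (illustrative only; the paper optimizes A and W jointly rather than using the pseudoinverse):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, sn = 2, 6, 0.1                    # data dim, units (overcomplete: M > N), noise std
W = rng.standard_normal((M, N))         # encoding matrix (rows = encoding vectors)
A = np.linalg.pinv(W)                   # a decoder with AW = I_N (W has full column rank)

x = rng.standard_normal(N)              # a data point
u = W @ x                               # noiseless representation, eq. (1)
r = u + sn * rng.standard_normal(M)     # noisy representation r = Wx + n
x_hat = A @ r                           # reconstruction, eq. (2)
```

Without channel noise (n = 0), this decoder reconstructs x exactly, matching the remark that AW = I is equivalent to perfect reconstruction.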
[Figure 1 diagram: data x → encoder W → noiseless representation u → (+ channel noise n) → noisy representation r → decoder A → reconstruction x̂.]
Figure 1: Diagram of the model.
The goal of the system is to form an accurate representation of the data that is robust to the presence of channel noise. We quantify the accuracy of the reconstruction by the mean squared error (MSE) over a set of data. The error of each sample is ε = x − x̂ = (I_N − AW)x − An, and the MSE is expressed in matrix form:

E(A, W) = tr{(I_N − AW) Σ_x (I_N − AW)^T} + σ_n² tr{AA^T},   (3)

where we used E = ⟨ε^T ε⟩ = tr(⟨ε ε^T⟩). Note that, due to the MSE objective along with the zero-mean assumption, the optimal solution depends solely on second-order statistics of the data and the noise.
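As a sanity check (ours, not part of the paper), the matrix form (3) can be verified against its term-by-term expansion, E = tr(Σ_x) − 2 tr(AWΣ_x) + tr(AWΣ_xW^TA^T) + σ_n² tr(AA^T), using small random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, sn2 = 2, 4, 0.5                  # data dim, number of units, noise variance
W = rng.standard_normal((M, N))        # encoder
A = rng.standard_normal((N, M))        # decoder
B = rng.standard_normal((N, N))
Sx = B @ B.T                           # a valid (symmetric PSD) data covariance

# eq. (3): E(A, W) = tr{(I - AW) Sx (I - AW)^T} + sn2 tr{A A^T}
R = np.eye(N) - A @ W
E_matrix = np.trace(R @ Sx @ R.T) + sn2 * np.trace(A @ A.T)

# expanded form, using tr(Sx W^T A^T) = tr(A W Sx) for symmetric Sx
E_expanded = (np.trace(Sx) - 2 * np.trace(A @ W @ Sx)
              + np.trace(A @ W @ Sx @ W.T @ A.T) + sn2 * np.trace(A @ A.T))
```

Both expressions agree for any A, W and symmetric Σ_x, confirming that the objective depends only on the second-order statistics.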
Since the SNR is limited in the neural representation [1, 2], we assume that each coding unit has a limited variance ⟨u_i²⟩ = σ_u², so that the SNR is limited to the same constant value γ² = σ_u²/σ_n². As the channel capacity of information is defined by C = (1/2) ln(γ² + 1), this is equivalent to limiting the capacity of each unit to the same level. We will refer to this constraint as the channel capacity constraint.

Now our problem is to minimize eq. 3 under the channel capacity constraint. To solve it, we will include this constraint in the parametrization of W. Let Σ_x = EDE^T be the eigenvalue decomposition of the data covariance matrix, and denote S = D^{1/2} = diag(√λ_1, · · · , √λ_M), where λ_i ≡ D_ii are the eigenvalues of Σ_x. As we will see shortly, it is convenient to define V ≡ WES/σ_u; then the condition ⟨u_i²⟩ = σ_u² implies that

VV^T = C_u = ⟨uu^T⟩/σ_u²,   (4)

where C_u is the correlation matrix of the representation u. Now the problem is formulated as a constrained optimization: finding the parameters that satisfy eq. 4 and minimize E.
3 The optimal solutions and their characteristics

In this section we analyze the optimal solutions in some simple cases, namely for 1-dimensional (1-D) and 2-dimensional (2-D) data.

3.1 1-D data
In the 1-D case the MSE (eq. 3) is expressed as

E = σ_x² (1 − aw)² + σ_n² ‖a‖₂²,   (5)

where σ_x² = Σ_x ∈ R, a = A ∈ R^{1×M}, and w = W ∈ R^{M×1}. By solving the necessary condition for the minimum, ∂E/∂a = 0, with the channel capacity constraint (eq. 4), the entries of the optimal solutions are

w_i = ± σ_u/σ_x,   a_i = (1/w_i) · γ²/(M γ² + 1),   (6)

and the smallest value of the MSE is

E = σ_x² / (M γ² + 1).   (7)
This minimum depends on the SNR (γ²) and on the number of units (M), and it is monotonically decreasing with respect to both. Furthermore, we can compensate for a decrease in SNR by an increase in the number of units. Note that the a_i are responsible for this adaptive behavior, as the w_i do not vary with either γ² or M in the 1-D case. The second term in eq. 5 leads the optimal a to have as small a norm as possible, while the first term prevents it from being arbitrarily small. The optimum is given by the best trade-off between them.
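The closed-form solution (6)-(7) is easy to verify numerically; the sketch below (illustrative, with σ_u² fixed to 1 to set the scale) plugs the optimal a and w into eq. (5):

```python
import numpy as np

def mse_1d(a, w, sx2, sn2):
    """Eq. (5): E = sx2 (1 - a.w)^2 + sn2 ||a||^2."""
    return sx2 * (1.0 - a @ w) ** 2 + sn2 * (a @ a)

def optimal_1d(M, sx2, sn2, su2=1.0):
    """Eq. (6): w_i = su/sx and a_i = (1/w_i) g2/(M g2 + 1), g2 = su2/sn2."""
    g2 = su2 / sn2
    w = np.full(M, np.sqrt(su2 / sx2))
    a = (1.0 / w) * g2 / (M * g2 + 1.0)
    return a, w
```

For σ_x² = 2 and σ_n² = 1 (so γ² = 1), the resulting MSE equals σ_x²/(Mγ² + 1) as in eq. (7), i.e. 2/(M+1), and any perturbation of the decoder a increases it.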
3.2 2-D data
In the 2-D case, the channel capacity constraint (eq. 4) restricts V such that the row vectors of V should lie on the unit circle. Therefore V can be parameterized as

V = [cos θ_1, sin θ_1; · · · ; cos θ_M, sin θ_M]  (rows stacked),   (8)

where θ_i ∈ [0, 2π) is the angle between the i-th row of V and the principal eigenvector of the data, e_1 (E = [e_1, e_2], λ_1 ≥ λ_2 > 0). The necessary condition for the minimum, ∂E/∂A = O, implies

A = σ_u E S V^T (σ_u² VV^T + σ_n² I_M)^{−1}.   (9)

Using eqs. 8 and 9, the MSE can be expressed as

E = [ (λ_1 + λ_2)((M/2)γ² + 1) − (γ²/2)(λ_1 − λ_2) Re(Z) ] / [ ((M/2)γ² + 1)² − (γ⁴/4)|Z|² ],   (10)
where by definition

Z = Σ_{k=1}^{M} z_k = Σ_{k=1}^{M} [cos(2θ_k) + i sin(2θ_k)].   (11)
Now the problem has been reduced to simply finding a complex number Z that minimizes E. Note that Z defines the θ_k in V, which in turn define W (by definition; see eq. 4) and A (eq. 9). In the following we analyze the problem in two complementary cases: when the data variance is isotropic (i.e., λ_1 = λ_2), and when it is anisotropic (λ_1 > λ_2). As we will see, the solutions are qualitatively different in these two cases.
3.2.1 Isotropic case
Isotropy of the data variance implies λ_1 = λ_2 ≡ σ_x², and (without loss of generality) E = I, which simplifies the MSE (eq. 10) as

E = 2σ_x² ((M/2)γ² + 1) / [ ((M/2)γ² + 1)² − (γ⁴/4)|Z|² ].   (12)

Therefore, E is minimized whenever |Z|² is minimized.
If M = 1, |Z|² = |z_1|² is always 1 by definition (eq. 11), yielding the optimal solutions

W = (σ_u/σ_x) V,   A = (σ_x/σ_u) · γ²/(γ² + 1) · V^T,   (13)
where V = V(θ_1), ∀θ_1 ∈ [0, 2π). Eq. 13 means that the orientation of the encoding and decoding vectors is arbitrary, and that the length of those vectors is adjusted exactly as in the 1-D case (eq. 6 with M = 1; Fig. 2). The minimum MSE is given by
E = σ_x²/(γ² + 1) + σ_x².   (14)
The first term is the same as in the 1-D case (eq. 7 with M = 1), corresponding to the error
component along the axis that the encoding/decoding vectors represent, while the second
term is the whole data variance along the axis orthogonal to the encoding/decoding vectors,
along which no reconstruction is made.
If M ≥ 2, there exists a set of angles θ_k for which |Z|² is 0. This can be verified by representing Z in the complex plane (Z-diagram in Fig. 2) and observing that there is always a configuration of connected, unit-length bars that starts from, and ends up at, the origin, thus indicating that Z = |Z|² = 0. Accordingly, the optimal solution is

W = (σ_u/σ_x) V,   A = (σ_x/σ_u) · γ²/((M/2)γ² + 1) · V^T,   (15)
where the optimal V = V(?1 , ? ? ? , ?M ) is given by such ?1 , . . . , ?M for which Z = 0.
Specifically, if M = 2, then z1 and z2 must be antiparallel but are not otherwise constrained,
making the pair of decoding vectors (and that of encoding vectors) orthogonal, yet free to
rotate. Note that both the encoding and the decoding vectors are parallel to the rows of
V (eq. 15), and the angle of zk from the real axis is twice as large as that of ak (or wk ).
Likewise, if M = 3, the decoding vectors should be evenly distributed yet still free to rotate;
if M = 4, the four vectors should just be two pairs of orthogonal vectors (not necessarily
evenly distributed); if M ? 5, there is no obvious regularity. With Z = 0, the MSE is
minimized as
E =
2?x2
.
+1
M 2
2 ?
(16)
The minimum MSE (eq. 16) depends on the SNR (σ²) and the overcompleteness ratio (M/N) in exactly the same manner as explained in the 1-D case (eq. 7), considering that in both cases the numerator is the data variance, tr(Σ_x). We present examples in Fig. 2: given M = 2, the reconstruction gets worse when the SNR is lowered from 10 to 1; however, the reconstruction can be improved by increasing the number of units for a fixed SNR (σ² = 1). Just as in the 1-D case, the norm of the decoding vectors gets smaller with increasing M or decreasing σ², as explicitly described by eq. 15.
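A small illustration of the M ≥ 2 isotropic case (our sketch, assuming the parametrization above): evenly spaced angles θ_k = kπ/M spread the 2θ_k uniformly around the circle, so Z vanishes and the MSE of eq. 12 reaches the minimum of eq. 16.

```python
import numpy as np

def Z_of(thetas):
    """Z = sum_k exp(2i * theta_k) (eq. 11)."""
    return np.sum(np.exp(2j * np.asarray(thetas)))

def mse_isotropic(thetas, sx2, s2):
    """Eq. 12: MSE for isotropic data with variance sx2 and SNR s2."""
    M = len(thetas)
    Z = Z_of(thetas)
    return 2 * sx2 * (M * s2 / 2 + 1) / ((M * s2 / 2 + 1) ** 2
                                         - (s2 ** 2 / 4) * np.abs(Z) ** 2)

# evenly spaced angles: 2*theta_k covers the circle uniformly, so Z = 0
M, sx2, s2 = 3, 1.0, 1.0
thetas = np.pi * np.arange(M) / M
E_min = mse_isotropic(thetas, sx2, s2)   # equals 2*sx2 / (M*s2/2 + 1), eq. 16
```

Any other configuration of angles with Z = 0 (e.g. two orthogonal pairs for M = 4) gives the same minimum, reflecting the degeneracy described in the text.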
[Figure 2 panels: M = 1 (σ² = 1); M = 2 (σ² = 10 and σ² = 1); M = 3, 4, 5 (σ² = 1); rows show Variance, Encoding, Decoding, and Z-Diagram.]

Figure 2: The optimal solutions for isotropic data. M is the number of units and σ² is the SNR in the representation. "Variance" shows the variance ellipses for the data (gray) and the reconstruction (magenta). For perfect reconstruction, the two ellipses should overlap. "Encoding" and "Decoding" show encoding vectors (red) and decoding vectors (blue), respectively. The gray vectors show the principal axes of the data, e_1 and e_2. "Z-Diagram" represents Z = Σ_k z_k (eq. 11) in the complex plane, where each unit-length bar corresponds to a z_k, and the end point indicated by "×" represents the coordinates of Z. The set of green dots in a plot corresponds to optimal values of Z; when this set reduces to a single dot, the optimal Z is unique. In general there could be multiple configurations of bars for a single Z, implying multiple equivalent solutions of A and W for a given Z. For M = 2 and σ² = 10, we drew with gray dotted bars an example of Z that is not optimal (corresponding encoding and decoding vectors not shown).
3.2.2
Anisotropic case
In the anisotropic condition λ1 > λ2, the MSE (eq. 10) is minimized when Z = Re(Z) ≥ 0 for a fixed value of |Z|². Therefore, the problem is reduced to seeking a real value Z = y ∈ [0, M] that minimizes

E = [(λ1 + λ2)((M/2)σ² + 1) − (σ²/2)(λ1 − λ2) y] / [((M/2)σ² + 1)² − (σ⁴/4) y²].    (17)
If M = 1, then y = cos 2θ_1 from eq. 11, and therefore E in eq. 17 is minimized iff θ_1 = 0, yielding the optimal solutions

W = (σ_u/√λ1) e_1ᵀ,   A = (√λ1/σ_u) · σ²/(σ² + 1) · e_1.    (18)

In contrast to the isotropic case with M = 1, the encoding and decoding vectors are specified along the principal axis (e_1) as illustrated in Fig. 3. The minimum MSE is

E = λ1/(σ² + 1) + λ2.    (19)

This is the same form as in the isotropic case (eq. 14) except that the first term is now related to the variance along the principal axis, λ1, by which the encoding/decoding vectors can most effectively be utilized for representing the data, while the second term is specified as the data variance along the minor axis, λ2, by which the loss of reconstruction is mostly minimized. Note that this is a similar mechanism of dimensionality reduction as in PCA.
If M ≥ 2, then we can derive the optimal y from the necessary condition for the minimum, dE/dy = 0, which yields

[ (√λ1 − √λ2)/(√λ1 + √λ2) · (M + 2/σ²) − y ] · [ (√λ1 + √λ2)/(√λ1 − √λ2) · (M + 2/σ²) − y ] = 0.    (20)

Let σ_c² denote the SNR critical point, where

σ_c² = (√(λ1/λ2) − 1)/M.    (21)
If σ² ≥ σ_c², then eq. 20 has a root within its domain [0, M],

y = (√λ1 − √λ2)/(√λ1 + √λ2) · (2/σ² + M),    (22)

with y = M if σ² = σ_c². Accordingly, the optimal solutions are given by

W = V diag(σ_u/√λ1, σ_u/√λ2) Eᵀ,   A = (√λ1 + √λ2)/(2σ_u) · σ²/((M/2)σ² + 1) · E Vᵀ,    (23)

where the optimal V = V(θ_1, …, θ_M) is given by the Z-diagram as illustrated in Fig. 3, which we will describe shortly. The minimum MSE is given by

E = (√λ1 + √λ2)²/2 · 1/((M/2)σ² + 1).    (24)

Note that eqs. 23–24 reduce to eqs. 15–16 if λ1 = λ2.
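The optimal y of eq. 22 can be cross-checked against a brute-force minimization of eq. 17, and the resulting minimum against eq. 24. The sketch below (ours, not from the paper) uses the eigenvalues of Fig. 3:

```python
import numpy as np

def mse_y(y, lam1, lam2, s2, M):
    """Eq. 17: MSE as a function of the real-valued Z = y."""
    num = (lam1 + lam2) * (M * s2 / 2 + 1) - (s2 / 2) * (lam1 - lam2) * y
    den = (M * s2 / 2 + 1) ** 2 - (s2 ** 2 / 4) * y ** 2
    return num / den

lam1, lam2, M, s2 = 1.87, 0.13, 3, 1.0          # eigenvalues from Fig. 3
r1, r2 = np.sqrt(lam1), np.sqrt(lam2)
sc2 = (np.sqrt(lam1 / lam2) - 1) / M            # critical SNR, eq. 21
y_opt = (r1 - r2) / (r1 + r2) * (2 / s2 + M)    # eq. 22 (valid since s2 >= sc2)

# brute-force minimization over [0, M] agrees with eq. 22,
# and the attained minimum matches eq. 24
ys = np.linspace(0.0, M, 200001)
y_num = ys[np.argmin(mse_y(ys, lam1, lam2, s2, M))]
E_min = mse_y(y_opt, lam1, lam2, s2, M)
E_eq24 = (r1 + r2) ** 2 / 2 / (M * s2 / 2 + 1)
```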
If the SNR is smaller than σ_c², then dE/dy = 0 does not have a root within the domain. However, dE/dy is always negative, and hence E decreases monotonically on [0, M]. The minimum is therefore obtained when y = M, yielding the optimal solutions

W = (σ_u/√λ1) 1_M e_1ᵀ,   A = (√λ1/σ_u) · σ²/(Mσ² + 1) · e_1 1_Mᵀ,    (25)

where 1_M = (1, …, 1)ᵀ ∈ R^M, and the minimum is given by

E = λ1/(Mσ² + 1) + λ2.    (26)

Note that E takes the same form as for M = 1 (eq. 19) except that we can now decrease the error by increasing the number of units. To summarize, if the representational resource is too limited either by M or by σ², the best strategy is to represent only the principal axis.
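The degenerate regime can be illustrated the same way (our sketch): for σ² below the critical point of eq. 21, eq. 17 decreases monotonically on [0, M] and its value at y = M coincides with eq. 26.

```python
import numpy as np

def mse_y(y, lam1, lam2, s2, M):
    """Eq. 17: MSE as a function of the real-valued Z = y."""
    num = (lam1 + lam2) * (M * s2 / 2 + 1) - (s2 / 2) * (lam1 - lam2) * y
    den = (M * s2 / 2 + 1) ** 2 - (s2 ** 2 / 4) * y ** 2
    return num / den

lam1, lam2, M = 1.87, 0.13, 2
sc2 = (np.sqrt(lam1 / lam2) - 1) / M     # ~1.40 for these eigenvalues
s2 = 1.0                                  # below the critical SNR: degenerate regime
ys = np.linspace(0.0, M, 10001)
E_vals = mse_y(ys, lam1, lam2, s2, M)
# E decreases monotonically on [0, M]; the minimum sits at y = M and equals eq. 26
E_at_M = mse_y(float(M), lam1, lam2, s2, M)
E_eq26 = lam1 / (M * s2 + 1) + lam2
```

This reproduces the condition of Fig. 3, where M = 2 with σ² = 1 is the only degenerate panel.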
Now we describe the optimal solutions using the Z-diagram (Fig. 3). First, the optimal solutions differ depending on the SNR. If σ² > σ_c², the optimal Z is a certain point between 0 and M on the real axis. Specifically, for M = 2 the optimal configuration of the unit-length connected bars is unique (up to flipping about the x-axis), meaning that the encoding/decoding vectors are symmetric about the principal axis; for M ≥ 3, there are infinitely many configurations of the bars starting from the origin and ending at the optimal Z, and nothing can be added about their regularity. If σ² ≤ σ_c², the optimal Z is M, and the optimal configuration is obtained only when all the bars align on the real axis. In this case, the encoding/decoding vectors are all parallel to the principal axis (e_1), as described by eq. 25. Such a degenerate representation is unique to the anisotropic case and is determined by σ_c² (eq. 21). We can
[Figure 3 panels: M = 1, 2, 3, and 8 at various SNRs (σ² = 10, 2, 1); rows show Variance, Encoding, Decoding, and Z-Diagram.]

Figure 3: The optimal solutions for anisotropic data. Notations are as in Fig. 2. We set λ1 = 1.87 and λ2 = 0.13. σ² > σ_c² holds for all M ≥ 2 but the one with M = 2 and σ² = 1.
avoid the degeneration either by increasing the SNR (e.g., Fig. 3, M = 2 with different σ²) or by increasing the number of units (σ² = 1 with different M).

Also, the optimal solutions for the overcomplete representation are, in general, not obtained by simple replication (except in the degenerate case). For example, for σ² = 1 in Fig. 3, the optimal solution for M = 8 is not identical to the replication of the optimal solution for M = 2, and we can formally prove this by using eq. 22.

For M = 1 and for the degenerate case, where only one axis in two-dimensional space is represented, the optimal strategy is to preserve information along the principal axis at the cost of losing all information along the minor axis. Such a biased representation is also found in the non-degenerate case. We can see in Fig. 3 that the data along the principal axis is more accurately reconstructed than that along the minor axis; if there were no bias, the ellipse for the reconstruction would be similar to that of the data. More precisely, we can prove that the error ratio along e_1 is smaller than that along e_2 at the ratio of √λ2 : √λ1 (note the switch of the subscripts), which describes the representation bias toward the main axis.
4
Application to image coding
In the case of high-dimensional data we can employ an algorithm similar to the one in [4] to numerically compute optimal solutions that minimize the MSE subject to the channel capacity constraint. Fig. 4 presents the performance of our model when applied to image coding in the presence of channel noise. The data were 8 × 8 pixel blocks taken from a large image, and for comparison we considered representations with M = 64 ("1×") and 512 ("8×") units, respectively. As for the channel capacity, each unit has 1.0 bit precision as in the neural representation [1]. The robust coding model shows a dramatic reduction in the reconstruction error when compared to alternatives such as ICA and wavelet codes. This underscores the importance of taking into account the channel capacity constraint for better understanding the neural representation.
[Figure 4 panels: Original; ICA (32.5% error); Wavelets (34.8%); Robust Coding 1× (3.8%); Robust Coding 8× (0.6%).]

Figure 4: Reconstruction using one-bit channel capacity representations. To ensure that all models had the same precision of 1.0 bit for each coefficient, we added Gaussian noise to the coefficients of the ICA and "Daubechies 9/7" wavelet codes as in the robust coding. For each representation, we displayed the percentage error of the reconstruction. The results are consistent using other images, block sizes, or wavelet filters.
5
Discussion
In this study we measured the accuracy of the reconstruction by the MSE. An alternative measure could be, as in [5, 3], the mutual information I(x, x̂) between the data and the reconstruction. However, we can prove that this measure does not yield optimal solutions for the robust coding problem. Assuming the data is Gaussian and the representation is complete, we can prove that the mutual information is upper-bounded,

I(x, x̂) = (1/2) ln det(σ² V Vᵀ + I_N) ≤ (N/2) ln(σ² + 1),    (27)

with equality iff V Vᵀ = I, i.e., when the representation u is whitened (see eq. 4). This result holds even for anisotropic data, which is different from the optimal MSE code that can employ correlated, or even degenerate, representations. As ICA is one form of whitening, the results in Fig. 4 demonstrate the suboptimality of whitening in the MSE sense.
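The bound of eq. 27 is easy to verify numerically (a sketch, ours): for a complete code with unit-norm rows, a generic non-orthogonal V gives strictly less mutual information than the whitened case V Vᵀ = I, which attains the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
N, s2 = 4, 2.0

# a complete code (M = N) with unit-norm rows, i.e. per-unit capacity constraint
V = rng.standard_normal((N, N))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def mutual_info(VVt):
    """I(x, x_hat) = 0.5 * ln det(s2 * V V^T + I_N), eq. 27."""
    sign, logdet = np.linalg.slogdet(s2 * VVt + np.eye(N))
    return 0.5 * logdet

I_corr = mutual_info(V @ V.T)          # a generic (correlated) representation
I_white = mutual_info(np.eye(N))       # whitened representation, V V^T = I
bound = N / 2 * np.log(s2 + 1)         # right-hand side of eq. 27
```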
The optimal MSE code over noisy channels was examined previously in [6] for N-dimensional data. However, there the capacity constraint was defined for a population, and only the case of undercomplete codes was examined. In the model studied here, motivated by the neural representation, the capacity constraint is imposed on individual units. Furthermore, the model allows for an arbitrary number of units, which provides a way to arbitrarily improve the robustness of the code using a population code. The theoretical analysis for one- and two-dimensional cases quantifies the amount of error reduction as a function of the SNR and the number of units, along with the data covariance matrix. Finally, our numerical results for higher-dimensional image data demonstrate a dramatic improvement in the robustness of the code over both conventional transforms such as wavelets and representations optimized for statistical efficiency such as ICA.
References
[1] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience, 2:947–957, 1999.
[2] N. K. Dhingra and R. G. Smith. Spike generator limits efficiency of information transfer in a retinal ganglion cell. Journal of Neuroscience, 24:2914–2922, 2004.
[3] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, 2001.
[4] E. Doi and M. S. Lewicki. Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in NIPS, volume 17, pages 377–384. MIT Press, 2005.
[5] J. J. Atick and A. N. Redlich. What does the retina know about natural scenes? Neural Computation, 4:196–210, 1992.
[6] K. I. Diamantaras, K. Hornik, and M. G. Strintzis. Optimal linear compression under unreliable representation and robust PCA neural models. IEEE Trans. Neur. Netw., 10(5):1186–1195, 1999.
Bayesian model learning in
human visual perception
Gergő Orbán
Collegium Budapest
Institute for Advanced Study
2 Szentháromság utca, Budapest,
1014 Hungary
[email protected]
Richard N. Aslin
Department of Brain and Cognitive
Sciences, Center for Visual Science
University of Rochester
Rochester, New York 14627, USA
[email protected]
József Fiser
Department of Psychology and
Volen Center for Complex Systems
Brandeis University
Waltham, Massachusetts 02454, USA
[email protected]
Máté Lengyel
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR
United Kingdom
[email protected]
Abstract
Humans make optimal perceptual decisions in noisy and ambiguous
conditions. Computations underlying such optimal behavior have been
shown to rely on probabilistic inference according to generative models
whose structure is usually taken to be known a priori. We argue that
Bayesian model selection is ideal for inferring similar and even more
complex model structures from experience. We find in experiments that
humans learn subtle statistical properties of visual scenes in a completely
unsupervised manner. We show that these findings are well captured by
Bayesian model learning within a class of models that seek to explain
observed variables by independent hidden causes.
1
Introduction
There is a growing number of studies supporting the classical view of perception as probabilistic inference [1, 2]. These studies demonstrated that human observers parse sensory
scenes by performing optimal estimation of the parameters of the objects involved [3, 4, 5].
Even single neurons in primary sensory cortices have receptive field properties that seem to
support such a computation [6]. A core element of this Bayesian probabilistic framework is
an internal model of the world, the generative model, that serves as a basis for inference. In
principle, inference can be performed on several levels: the generative model can be used
for inferring the values of hidden variables from observed information, but also the model
itself may be inferred from previous experience [7].
Most previous studies testing the Bayesian framework in human psychophysical experiments used highly restricted generative models of perception, usually consisting of a few
observed and latent variables, of which only a limited number of parameters needed to
be adjusted by experience. More importantly, the generative models considered in these
studies were tailor-made to the specific psychophysical task presented in the experiment.
Thus, it remains to be shown whether more flexible, "open-ended" generative models are
used and learned by humans during perception.
Here, we use an unsupervised visual learning task to show that a general class of generative models, sigmoid belief networks (SBNs), perform similarly to humans (also reproducing paradoxical aspects of human behavior), when not only the parameters of these
models but also their structure is subject to learning. Crucially, the applied Bayesian model
learning embodies the Automatic Occam's Razor (AOR) effect that selects the models that
are "as simple as possible, but no simpler". This process leads to the extraction of independent causes that efficiently and sufficiently account for sensory experience, without a
pre-specification of the number or complexity of potential causes.
In section 2, we describe the experimental protocol we used in detail. Next, the mathematical framework is presented that is used to study model learning in SBNs (Section 3). In
Section 4, experimental results on human performance are compared to the prediction of
our Bayes-optimal model learning in the SBN framework. All the presented human experimental results were reproduced and had identical roots in our simulations: the modal model
developed latent variables corresponding to the unknown underlying causes that generated
the training scenes.
In Section 5, we discuss the implications of our findings. Although structure and parameter learning are not fundamentally different computations in Bayesian inference, we argue
that the natural integration of these two kinds of learning lead to a behavior that accounts
for human data which cannot be reproduced in some simpler alternative learning models
with parameter but without structure learning. Given the recent surge of biologically plausible neural network models performing inference in belief networks we also point out
challenges that our findings present for future models of probabilistic neural computations.
2
Experimental paradigm
Human adult subjects were trained and then tested in an unsupervised learning paradigm
with a set of complex visual scenes consisting of 6 of 12 abstract unfamiliar black shapes
arranged on a 3x3 (Exp 1) or 5x5 (Exps 2-4) white grid (Fig. 1, left panel). Unbeknownst to
subjects, various subsets of the shapes were arranged into fixed spatial combinations (combos) (doublets, triplets, quadruplets, depending on the experiment). Whenever a combo
appeared on a training scene, its constituent shapes were presented in an invariant spatial
arrangement, and in no scenes elements of a combo could appear without all the other elements of the same combo also appearing. Subjects were presented with 100?200 training
scenes, each scene was presented for 2 seconds with a 1-second pause between scenes. No
specific instructions were given to subjects prior to training, they were only asked to pay
attention to the continuous sequence of scenes.
The test phase consisted of 2AFC trials, in which two arrangements of shapes were shown
sequentially in the same grid that was used in the training, and subjects were asked which
of the two scenes was more familiar based on the training. One of the presented scenes
was either a combo that was actually used for constructing the training set (true combo), or
a part of it (embedded combo) (e.g., a pair of adjacent shapes from a triplet or quadruplet
combo). The other scene consisted of the same number of shapes as the first scene in
an arrangement that might or might not have occurred during training, but was in fact a
mixture of shapes from different true combos (mixture combo).
Here four experiments are considered that assess various aspects of human observational
[Figure 1, right panel: a two-layer network with latent variables x_1, x_2 (biases w_x1, w_x2) connected by weights w_11, w_12, w_22, w_23, w_24 to observed variables y_1, …, y_4 (biases w_y1, …, w_y4).]

Figure 1: Experimental design (left panel) and explanation of graphical model parameters (right panel).
learning, the full set of experiments are presented elsewhere [8, 9]. Each experiment was
run with 20 naïve subjects.
1. Our first goal was to establish that humans are sensitive to the statistical structure of visual experience, and use this experience for judging familiarity. In the
baseline experiment 6 doublet combos were defined, three of which were presented simultaneously in any given training scene, allowing 144 possible scenes
[8]. Because the doublets were not marked in any way, subjects saw only a group
of random shapes arranged on a grid. The occurrence frequency of doublets and
individual elements was equal across the set of scenes, allowing no obvious bias
to remember any element more than others. In the test phase a true and a mixture
doublet were presented sequentially in each 2AFC trial. The mixture combo was
presented in a spatial position that had never appeared before.
2. In the previous experiment the elements of mixture doublets occurred together
fewer times than elements of real doublets, thus a simple strategy based on tracking co-occurrence frequencies of shape-pairs would be sufficient to distinguish
between them. The second, frequency-balanced experiment tested whether humans are sensitive to higher-order statistics (at least cross-correlations, which are
co-occurrence frequencies normalized by the respective individual occurrence frequencies).
The structure of Experiment 1 was changed so that while the 6 doublet combo architecture remained, their appearance frequency became non-uniform introducing
frequent and rare combos. Frequent doublets were presented twice as often as rare
ones, so that certain mixture doublets consisting of shapes from frequent doublets
appeared just as often as rare doublets. Note, that the frequency of the constituent
shapes of these mixture doublets was higher than that of rare doublets. The training session consisted of 212 scenes, each scene being presented twice. In the test
phase, the familiarity of both single shapes and doublet combos was tested. In the
doublet trials, rare combos with low appearance frequency but high correlations
between elements were compared to mixed combos with higher element and equal
pair appearance frequency, but lower correlations between elements.
3. The third experiment tested whether human performance in this paradigm can
be fully accounted for by learning cross-correlations. Here, four triplet combos
were formed and presented with equal occurrence frequencies. 112 scenes were
presented twice to subjects. In the test phase two types of tests were performed. In
the first type, the familiarity of a true triplet and a mixture triplet was compared,
while in the second type doublets consisting of adjacent shapes embedded in a
triplet combo (embedded doublet) were tested against mixture doublets.
4. The fourth experiment compared directly how humans treat embedded and independent (non-embedded) combos of the same spatial dimensions. Here two
quadruplet combos and two doublet combos were defined and presented with
equal frequency. Each training scene consisted of six shapes, one quadruplet and
one doublet. 120 such scenes were constructed. In the test phase three types of
tests were performed. First, true quadruplets were compared to mixture quadruplets; next, embedded doublets were compared to mixture doublets, finally true
doublets were compared to mixture doublets.
3
Modeling framework
The goal of Bayesian learning is to ?reverse-engineer? the generative model that could have
generated the training data. Because of inherent ambiguity and stochasticity assumed by the
generative model itself, the objective is to establish a probability distribution over possible
models. Importantly, because models with parameter spaces of different dimensionality are
compared, the likelihood term (Eq. 3) will prefer the simplest model (in our case, the one
with fewest parameters) that can effectively account for (generate) the training data due to
the AOR effect in Bayesian model comparison [7].
Sigmoid belief networks The class of generative models we consider is that of two-layer sigmoid belief networks (SBNs, Fig. 1). The same modelling framework has been successfully applied to animal learning in classical conditioning [10, 11]. The SBN architecture assumes that the state of observed binary variables (y_j, in our case: shapes being present or absent in a training scene) depends through a sigmoidal activation function on the state of a set of hidden binary variables (x), which are not directly observable:

P(y_j = 1 | x, w_m, m) = (1 + exp(−Σ_i w_ij x_i − w_yj))⁻¹    (1)

where w_ij describes the (real-valued) influence of hidden variable x_i on observed variable y_j, w_yj determines the spontaneous activation bias of y_j, and m indicates the model structure, including the number of latent variables and the identity of the observeds they can influence (the w_ij weights that are allowed to have non-zero value).
Observed variables are independent conditioned on the latents (i.e., any correlation between them is assumed to be due to shared causes), and latent variables are marginally independent and have Bernoulli distributions parametrised by w_x:

P(y | x, w_m, m) = Π_j P(y_j | x, w_m, m),   P(x | w_m, m) = Π_i (1 + exp((−1)^{x_i} w_xi))⁻¹    (2)

Finally, scenes (y^(t)) are assumed to be iid samples from the same generative distribution, and so the probability of the training data (D) given a specific model is:
P(D | w_m, m) = Π_t P(y^(t) | w_m, m) = Π_t Σ_x P(x | w_m, m) Π_j P(y_j^(t) | x, w_m, m)    (3)
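Eq. 3 can be evaluated exactly for small models by enumerating all latent configurations. The sketch below (ours, not from the paper; the weights describe two hypothetical "combo" causes) shows that scenes consistent with the combo structure receive a higher likelihood than mixture scenes:

```python
import numpy as np
from itertools import product

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def log_likelihood(D, W, w_x, w_y):
    """log P(D | w_m, m), eq. 3, by exhaustively summing over latent states
    (tractable only for small numbers of latents)."""
    n_lat = len(w_x)
    total = 0.0
    for y in D:
        y = np.asarray(y)
        p_scene = 0.0
        for x_bits in product([0, 1], repeat=n_lat):
            x = np.asarray(x_bits)
            p_x = np.prod(np.where(x == 1, sigmoid(w_x), 1.0 - sigmoid(w_x)))  # eq. 2
            p_y1 = sigmoid(x @ W + w_y)                                        # eq. 1
            p_scene += p_x * np.prod(np.where(y == 1, p_y1, 1.0 - p_y1))
        total += np.log(p_scene)
    return total

W = np.array([[12.0, 12.0, 0.0, 0.0],     # hypothetical combo structure
              [0.0, 0.0, 12.0, 12.0]])
w_x, w_y = np.zeros(2), -6.0 * np.ones(4)
ll_true = log_likelihood([[1, 1, 0, 0], [0, 0, 1, 1]], W, w_x, w_y)
ll_mix = log_likelihood([[1, 0, 1, 0], [0, 1, 0, 1]], W, w_x, w_y)
```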
The "true" generative model that was actually used for generating training data in the experiments (Section 2) is closely related to this model, with the combos corresponding to
latent variables. The main difference is that here we ignore the spatial aspects of the task,
i.e. only the occurrence of a shape matters but not where it appears on the grid. Although in
general, space is certainly not a negligible factor in vision, human behavior in the present
experiments depended on the fact of shape-appearances sufficiently strongly so that this
simplification did not cause major confounds in our results.
A second difference between the model and the human experiments was that in the experiments, combos were not presented completely randomly, because the number of combos
per scene was fixed (and not binomially distributed as implied by the model, Eq. 2). Nevertheless, our goal was to demonstrate the use of a general-purpose class of generative
models, and although truly independent causes are rare in natural circumstances, always a
fixed number of them being present is even more so. Clearly, humans are able to capture
dependences between latent variables, and these should be modeled as well ([12]). Similarly, for simplicity we also ignored that subsequent scenes are rarely independent (Eq. 3)
in natural vision.
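To make the generative process concrete, here is a minimal ancestral sampler (our sketch, not from the paper): two hypothetical "doublet" causes over four shapes, using link weights of 12 and observed biases of −6, the typical scale assumed in this paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sample_scene(W, w_x, w_y, rng):
    """One ancestral sample from the two-layer SBN of eqs. 1-2.
    W[i, j] is the weight w_ij from latent i to observed j;
    w_x are the latent biases, w_y the observed biases."""
    x = (rng.random(len(w_x)) < sigmoid(w_x)).astype(int)          # Bernoulli latents
    y = (rng.random(len(w_y)) < sigmoid(x @ W + w_y)).astype(int)  # eq. 1
    return x, y

# two hypothetical 'doublet' causes over four shapes
W = np.array([[12.0, 12.0, 0.0, 0.0],
              [0.0, 0.0, 12.0, 12.0]])
rng = np.random.default_rng(1)
scenes = [sample_scene(W, np.zeros(2), -6.0 * np.ones(4), rng)[1]
          for _ in range(2000)]
```

In the sampled scenes, shapes driven by the same latent cause almost always appear together, mimicking the combo structure of the training stimuli.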
Training Establishing the posterior probability of any given model is straightforward using Bayes' rule:

P(w_m, m | D) ∝ P(D | w_m, m) P(w_m, m)    (4)

where the first term is the likelihood of the model (Eq. 3), and the second term is the prior distribution of models. Prior distributions for the weights were: P(w_ij) = Laplace(12, 2), P(w_xi) = Laplace(0, 2), P(w_yj) = δ(−6) (i.e., the biases of the observed variables were fixed at −6). The prior over model structure preferred simple models and was such that the distributions of the number of latents and of the number of links conditioned on the number of latents were both Geometric(0.1). The effect of this preference is "washed out" with increasing training length as the likelihood term (Eq. 3) sharpens.
Testing When asked to compare the familiarity of two scenes (y^A and y^B) in the testing phase, the optimal strategy for subjects would be to compute the posterior probability of both scenes based on the training data

P(y^Z | D) = Σ_m ∫ dw_m Σ_x P(y^Z, x | w_m, m) P(w_m, m | D)    (5)

and always (i.e., with probability one) choose the one with the higher probability. However, as a phenomenological model of all kinds of possible sources of noise (sensory noise, model noise, etc.) we chose a soft threshold function for computing choice probability:

P(choose A) = (1 + exp(−β log [P(y^A | D) / P(y^B | D)]))⁻¹    (6)

and used β = 1 (β = ∞ corresponds to the optimal strategy).
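The choice rule of eq. 6 is simply a logistic function of the log probability ratio; a minimal sketch (ours):

```python
import numpy as np

def p_choose_A(log_pA, log_pB, beta=1.0):
    """Eq. 6: probability of choosing scene A, a logistic function
    of the log ratio of the two scene probabilities under the model."""
    return 1.0 / (1.0 + np.exp(-beta * (log_pA - log_pB)))
```

With β = 1 this reproduces the soft decisions used in the simulations, while large β approaches the deterministic optimal choice.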
Note that when computing the probability of a test scene, we seek the probability that
exactly the given scene was generated by the learned model. This means that we require
not only that all the shapes that are present in the test scene are present in the generated
data, but also that all the shapes that are absent from the test scene are absent from the
generated data. A different scheme, in which only the presence but not the absence of the
shapes need to be matched (i.e. absent observeds are marginalized out just as latents are in
Eq. 5) could also be pursued, but the results of the embedding experiments (Exp. 3 and 4,
see below) discourage it.
The model posterior in Eq. 4 is analytically intractable, therefore an exchange reversible-jump Markov chain Monte Carlo sampling method [10, 13, 14] was applied, which ensured
fair sampling from a model space containing subspaces of differing dimensionality, and
integration over this posterior in Eq. 5 was approximated by a sum over samples.
4
Results
Pilot studies were performed with reduced training datasets in order to test the performance
of the model learning framework. First, we trained the model on data consisting of 8 observed variables ("shapes"). The 8 "shapes" were partitioned into three "combos" of different
sizes (5, 2, 1), two of which were presented simultaneously in each training trial.

Figure 2: Bayesian learning in sigmoid belief networks. Left panel: MAP model of a 30-trial-long training with 8 observed variables and 3 combos. Latent variables of the MAP
model reflect the relationships defined by the combos. Right panel: Increasing model
complexity with increasing training experience. Average number of latent variables (±SD)
in the model posterior distribution as a function of the length of training data was obtained
by marginalizing Eq. 4 over weights w.

The AOR
effect in Bayesian model learning should select the model structure that is of just the right
complexity for describing the data. Accordingly, after 30 trials, the maximum a posteriori (MAP) model had three latents corresponding to the underlying "combos" (Fig. 2, left
panel). Early on in training, simpler model structures dominated because of the prior preference for low latent and link numbers, but due to the simple structure of the training data
the likelihood term won over in as few as 10 trials, and the model posterior converged to
the true generative model (Fig. 2, right panel, gray line). Importantly, presenting more data
with the same statistics did not encourage the fitting of over-complicated model structures.
On the other hand, if data was generated using more "combos" (4 "doublets"), model
learning converged to a model with a correspondingly higher number of latents (Fig. 2,
right panel, black line).
In the baseline experiment (Experiment 1), human subjects were trained with six equal-sized doublet combos and were shown to recognize true doublets over mixture doublets
(Fig. 3, first column). When the same training data was used to compute the choice probability in 2AFC tests with model learning, true doublets were reliably preferred over mixture
doublets. Also, the MAP model showed that the discovered latent variables corresponded
to the combos generating the training data (data not shown).
In Experiment 2, we sought to answer whether the statistical learning demonstrated in Experiment 1 relied solely on co-occurrence frequencies, or used something more sophisticated, such as at least cross-correlations between shapes.
Bayesian model learning, as well as humans, could distinguish rare doublet combos from mixtures of frequent doublets (Fig. 3, second column) despite their balanced
co-occurrence frequencies. Furthermore, although in this comparison rare doublet combos
were preferred, both humans and the model learned about the frequencies of their constituent shapes and preferred constituent single shapes of frequent doublets over those of
rare doublets. Nevertheless, it should be noted that while humans showed greater preference for frequent singlets than for rare doublets, our simulations predicted an opposite
trend¹.
We were interested in whether the performance of humans could be fully accounted for by
the learning of cross-correlations, or whether they demonstrated more sophisticated computations.

¹This discrepancy between theory and experiments may be explained by Gestalt effects in human
vision that would strongly prefer the independent processing of constituent shapes, due to their clear
spatial separation in the training scenes. The reconciliation of such Gestalt effects with pure statistical
learning is the target of further investigations.
Figure 3: Comparison of human and model performance in four experiments. Bars show
percent "correct" values (choosing a true or embedded combo over a mixture combo, or a
frequent singlet over a rare singlet) for human experiments (average over subjects ±SEM),
and "correct" choice probabilities (Eq. 6) for computer simulations. Sngls: single shapes;
dbls: doublet combos; trpls: triplet combos; e-d dbls: embedded doublet combos; qpls:
quadruple combos; idbls: independent doublet combos.
In Experiment 3, training data was composed of triplet combos, and besides testing true
triplets against mixture triplets, we also tested embedded doublets (pairs of shapes from
the same triplet) against mixture doublets (pairs of shapes from different triplets). If learning depended only on cross-correlations, we would expect to see similar performance on these
two types of tests. In contrast, human performance was significantly different for triplets
(true triplets were preferred) and doublets (embedded and mixture doublets were not distinguished) (Fig. 3, third column). This may be seen as Gestalt effects at work: once
the "whole" triplet is learned, its constituent parts (the embedded doublets) lose their significance. Our model reproduced this behavior and provided a straightforward explanation:
latent-to-observed weights (w_ij) in the MAP model were so strong that whenever a latent
was switched on it could almost only produce triplets; doublets were therefore created by
spontaneous independent activation of observeds, which thus produced embedded and mixture doublets with equal chance. In other words, doublets were seen as mere noise under
the MAP model.
The fourth experiment explicitly tested whether embedded combos and equal-sized independent real combos are distinguished, and whether it was not merely size effects that prevented the recognition
of embedded small structures in the previous experiment. Both human experiments and
Bayesian model selection demonstrated that quadruple combos as well as stand-alone doublets were reliably recognized (Fig. 3, fourth column), while embedded doublets were not.
5 Discussion
We demonstrated that humans flexibly yet automatically learn complex generative models
in visual perception. Bayesian model learning has been implicated in several domains of
high-level human cognition, from causal reasoning [15] to concept learning [16]. Here we
showed it at work already at a pre-verbal stage.
We emphasized the importance of learning the structure of the generative model, not only
its parameters, even though it is quite clear that the two cannot be formally distinguished.
Nevertheless, we have two good reasons to believe that structure learning is indeed important in our case. (1) Sigmoid belief networks identical to ours but without structure learning
have been shown to perform poorly on a task closely related to ours [17], Földiák's bar test
[18]. More complicated models will of course be able to produce identical results, but we
think our model framework has the advantage of being intuitively simple: it seeks to find
the simplest possible explanation for the data, assuming that it was generated by independent causes. (2) Structure learning allows Occam's automatic razor to come into play. This is
computationally expensive, but together with the generative model class we use it provides a
neat and highly efficient way to discover "independent components" in the data. We experienced difficulties with other models [17] developed for similar purposes when trying to
reproduce our experimental findings.
Our approach is very much in the tradition that sees the finding of independent causes behind sensory data as one of the major goals of perception [2]. Although neural network
models that can produce such computations exist [6, 19], none of these does model selection. Very recently, several models have been proposed for doing inference in belief networks [20, 21], but parameter learning, let alone structure learning, proved to be non-trivial
in them. Our results highlight the importance of considering model structure learning in
neural models of Bayesian inference.
Acknowledgements

We were greatly motivated by the earlier work of Aaron Courville and Nathaniel Daw
[10, 11], and hugely benefited from several useful discussions with them. We would also
like to thank the insightful comments of Peter Dayan, Maneesh Sahani, Sam Roweis, and
Zoltán Szatmáry on an earlier version of this work. This work was supported by the IST-FET1940 program (GO), NIH research grant HD-37082 (RNA, JF), and the Gatsby Charitable
Foundation (ML).
References

[1] Helmholtz HLF. Treatise on Physiological Optics. New York: Dover, 1962.
[2] Barlow HB. Vision Res 30:1561, 1990.
[3] Ernst MO, Banks MS. Nature 415:429, 2002.
[4] Körding KP, Wolpert DM. Nature 427:244, 2004.
[5] Kersten D, et al. Annu Rev Psychol 55, 2004.
[6] Olshausen BA, Field DJ. Nature 381:607, 1996.
[7] MacKay DJC. Network: Comput Neural Syst 6:469, 1995.
[8] Fiser J, Aslin RN. Psych Sci 12:499, 2001.
[9] Fiser J, Aslin RN. J Exp Psychol Gen, in press.
[10] Courville AC, et al. In NIPS 16, Cambridge, MA, 2004. MIT Press.
[11] Courville AC, et al. In NIPS 17, Cambridge, MA, 2005. MIT Press.
[12] Hinton GE, et al. In Artificial Intelligence and Statistics, Barbados, 2005.
[13] Green PJ. Biometrika 82:711, 1995.
[14] Iba Y. Int J Mod Phys C 12:623, 2001.
[15] Tenenbaum JB, Griffiths TL. In NIPS 15, 35, Cambridge, MA, 2003. MIT Press.
[16] Tenenbaum JB. In NIPS 11, 59, Cambridge, MA, 1999. MIT Press.
[17] Dayan P, Zemel R. Neural Comput 7:565, 1995.
[18] Földiák P. Biol Cybern 64:165, 1990.
[19] Dayan P, et al. Neural Comput 7:1022, 1995.
[20] Rao RP. Neural Comput 16:1, 2004.
[21] Deneve S. In NIPS 17, Cambridge, MA, 2005. MIT Press.
Distributions of Infinite Networks
Ricky Der
Department of Mathematics
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Daniel Lee
Department of Electrical Engineering
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
A general analysis of the limiting distribution of neural network functions
is performed, with emphasis on non-Gaussian limits. We show that with
i.i.d. symmetric stable output weights, and more generally with weights
distributed from the normal domain of attraction of a stable variable, that
the neural functions converge in distribution to stable processes. Conditions are also investigated under which Gaussian limits do occur when
the weights are independent but not identically distributed. Some particularly tractable classes of stable distributions are examined, and the
possibility of learning with such processes.
1 Introduction
Consider the model

f_n(x) = (1/s_n) Σ_{j=1}^{n} v_j h(x; u_j) ≡ (1/s_n) Σ_{j=1}^{n} v_j h_j(x)        (1)
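For concreteness, model (1) can be simulated directly. The following sketch is ours, not the paper's: it draws one random network function, choosing sign-function hidden units and standard-normal hidden parameters as an arbitrary concrete instance, with Gaussian (α = 2) or Cauchy (α = 1) output weights:

```python
import numpy as np

def sample_network(x, n, alpha=2.0, seed=0):
    """Draw one random function f_n(x) = n**(-1/alpha) * sum_j v_j * sgn(a_j + u_j * x)
    from model (1), with i.i.d. standard-normal hidden parameters (a_j, u_j) and
    output weights v_j that are Gaussian for alpha=2 or Cauchy for alpha=1."""
    rng = np.random.default_rng(seed)
    a, u = rng.standard_normal(n), rng.standard_normal(n)
    v = rng.standard_normal(n) if alpha == 2.0 else rng.standard_cauchy(n)
    h = np.sign(a[None, :] + u[None, :] * np.asarray(x, dtype=float)[:, None])
    return (h @ v) / n ** (1.0 / alpha)

f = sample_network(np.linspace(-3.0, 3.0, 200), n=5000, alpha=2.0)
```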
which can be viewed as a multi-layer perceptron with input x, hidden functions h, weights
u_j, output weights v_j, and s_n a sequence of normalizing constants. The work of Radford
Neal [1] showed that, under certain assumptions on the parameter priors {v_j, h_j}, the distribution over the implied network functions f_n converged to that of a Gaussian process
in the large-network limit n → ∞. The main ingredient of this derivation was an
invocation of the classical Central Limit Theorem (CLT).
While one cavalierly speaks of "the" central limit theorem, there are in actuality many different CLTs, of varying generality and effect. All are concerned with the limits of suitably
normalised sums of independent random variables (or where some condition is imposed so
that no one variable dominates the sum¹), but the limits themselves differ greatly: Gaussian,
stable, infinitely divisible, or, discarding the infinitesimal assumption, none of these. It follows that in general, the asymptotic process for (1) may not be Gaussian. The following
questions then arise: what is the relationship between choices of distributions on the model
priors and the asymptotic distribution over the induced neural functions? Under what conditions does the Gaussian approximation hold? If there do exist non-Gaussian limit points,
is it possible to construct analogous generalizations of Gaussian process regression?

¹Typically called an infinitesimal condition; see [4].
Previous work on these problems consists mainly in Neal's publication [1], which established that when the output weights v_j are finite-variance and i.i.d., the limiting distribution
is a Gaussian process. Additionally, it was shown that when the weights are i.i.d. symmetric stable (SS), the first-order marginal distributions of the functions are also SS. Unfortunately, no mathematical analysis was presented to show that the higher-order distributions
converged, though empirical evidence was suggestive of that hypothesis. Moreover, the
exact form of the higher-dimensional distributions remained elusive.
This paper conducts a further investigation of these questions, with concentration on the
cases where the weight priors can be 1) of infinite variance, and 2) non-i.i.d. Such assumptions fall outside the ambit of the classical CLT, but are amenable to more general limit
methods. In Section 2, we give a general classification of the possible limiting processes
that may arise under an i.i.d. assumption on output weights distributed from a certain class
(roughly speaking, those weights with tails asymptotic to a power law), and provide
explicit formulae for all the joint distribution functions. As a byproduct, Neal's preliminary
analysis is completed, a full multivariate prescription attained, and the convergence of the
finite-dimensional distributions proved. The subsequent section considers non-i.i.d. priors,
specifically independent priors where the "identically distributed" assumption is discarded.
An example where a finite-variance non-Gaussian process acts as a limit point for a nontrivial infinite network is presented, followed by an investigation of conditions under which
the Gaussian approximation is valid, via the Lindeberg-Feller theorem. Finally, we raise
the possibility of replacing network models with the processes themselves for learning applications: here, motivated by the foregoing limit theorems, the set of stable processes
forms a natural generalization of the Gaussian case. Classes of stable stochastic processes
are examined where the parameterizations are particularly simple, as well as preliminary
applications to the nonlinear regression problem.
2 Neural Network Limits
Referring to (1), we make the following assumptions: h_j(x) ≡ h(x; u_j) are uniformly
bounded in x (as for instance occurs if h is associated with some fixed nonlinearity), and
{u_j} is an i.i.d. sequence, so that h_j(x) are i.i.d. for fixed x, and independent of {v_j}.
With these assumptions, the choice of output priors v_j will tend to dictate large-network
behavior, independently of u_j. In the sequel, we restrict ourselves to functions f_n(x):
R → R, as the respective proofs for the generalizations of x and f_n to higher-dimensional
spaces are routine. Finally, all random variables are assumed to be of zero mean whenever
first moments exist. For brevity, we only present sketches of proofs.
2.1 Limits with i.i.d. priors
The Gaussian distribution has the feature that if X_1 and X_2 are statistically independent
copies of the Gaussian variable X, then their linear combination is also Gaussian, i.e.
aX_1 + bX_2 has the same distribution as cX + d for some c and d. More generally, the
stable distributions [5], [6, Chap. 17] are defined to be the set of all distributions satisfying
the above "closure" property. If one further demands symmetry of the distribution, then
they must have characteristic function φ(t) = exp(−σ^α |t|^α), for parameters σ > 0 (called
the spread), and 0 < α ≤ 2, termed the index. Since the characteristic functions are not
generally twice differentiable at t = 0, their variances are infinite, the Gaussian distribution
being the only finite-variance stable distribution, associated to index α = 2.
The attractive feature of stable variables, by definition, is closure under the formation of linear combinations: the linear combination of any two independent stable variables is another
stable variable of the same index. Moreover, the stable distributions are attraction points
of distributions under a linear combiner operator, and indeed, the only such distributions in
the following sense: if {Y_j} are i.i.d., and a_n + (1/s_n) Σ_{j=1}^{n} Y_j converges in distribution to X,
then X must be stable [5]. This fact already has consequences for our network model (1),
and implies that, under i.i.d. priors v_j, and assuming (1) converges at all, convergence
can occur only to stable variables, for each x.
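The closure property pins down how spreads combine: from φ(t) = exp(−|σt|^α), the sum aX_1 + bX_2 of independent SαS variables of common spread σ is again SαS with spread σ(|a|^α + |b|^α)^{1/α}. A one-line check (ours, not from the paper):

```python
def combined_spread(a, b, sigma, alpha):
    """Spread of a*X1 + b*X2 for independent symmetric alpha-stable X1, X2
    of common spread sigma; read off from the characteristic function
    exp(-|sigma * t| ** alpha)."""
    return sigma * (abs(a) ** alpha + abs(b) ** alpha) ** (1.0 / alpha)
```

For α = 2 (Gaussian) spreads add in quadrature; for α = 1 (Cauchy) they add linearly.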
Multivariate analogues are defined similarly: we say a random vector X is (strictly) stable
if, for every a, b ∈ R, there exists a constant c such that aX_1 + bX_2 = cX, where X_i are
independent copies of X and the equality is in distribution. A symmetric stable random
vector is one which is stable and for which the distribution of X is the same as −X. The
following important classification theorem gives an explicit Fourier-domain description of
all multivariate symmetric stable distributions:
Theorem 1 (Kuelbs [5]). X is a symmetric α-stable vector if and only if it has characteristic
function

φ(t) = exp( − ∫_{S^{d−1}} |⟨t, s⟩|^α dΓ(s) )        (2)

where Γ is a finite measure on the unit (d − 1)-sphere S^{d−1}, and 0 < α ≤ 2.
Remark: (2) remains unchanged replacing Γ by the symmetrized measure Γ̄ = ½(Γ(A) + Γ(−A)), for all Borel sets A. In this case, the (unique) symmetrized measure Γ̄ is called
the spectral measure of the stable random vector X.
Finally, stable processes are defined as indexed sets of random variables whose finite-dimensional distributions are (multivariate) stable.
First we establish the following preliminary result.
Lemma 1. Let v be a symmetric stable random variable of index 0 < α ≤ 2, and spread
σ > 0. Let h be independent of v and E|h|^α < ∞. If y = hv, and {y_i} are i.i.d. copies of
y, then S_n = n^{−1/α} Σ_{i=1}^{n} y_i converges in distribution to an α-stable variable with characteristic function φ(t) = exp{−|σt|^α E|h|^α}.

Proof. This follows by computing the characteristic function φ_{S_n}, then using standard
theorems in measure theory (e.g. [4]) to obtain lim_{n→∞} log φ_{S_n}(t) = −|σt|^α E|h|^α.
Now we can state the first network convergence theorem.

Proposition 1. Let the network (1) have symmetric stable i.i.d. weights v_j of index
0 < α ≤ 2 and spread σ. Then f_n(x) = n^{−1/α} Σ_{j=1}^{n} v_j h_j(x) converges in distribution to a
symmetric α-stable process f(x) as n → ∞. The finite-dimensional stable distribution of
(f(x_1), ..., f(x_d)), where x_i ∈ R, has characteristic function:

φ(t) = exp( −σ^α E_h |⟨t, h⟩|^α )        (3)

where h = (h(x_1), ..., h(x_d)), and h(x) is a random variable with the common distribution (across j) of h_j(x). Moreover, if h = (h(x_1), ..., h(x_d)) has joint probability density
p(h) = p(rs), with s on the sphere S^{d−1} and r the radial component of h, then the finite
measure Γ corresponding to the multivariate stable distribution of (f(x_1), ..., f(x_d)) is
given by

dΓ(s) = ( ∫_0^∞ r^{α+d−1} p(rs) dr ) ds        (4)

where ds is Lebesgue measure on S^{d−1}.

Proof. It suffices to show that every finite-dimensional distribution of f(x) converges
to a symmetric multivariate stable characteristic function. We have Σ_{i=1}^{d} t_i f_n(x_i) =
n^{−1/α} Σ_{j=1}^{n} v_j Σ_{i=1}^{d} t_i h_j(x_i) for constants {x_1, ..., x_d} and (t_1, ..., t_d) ∈ R^d. An application of Lemma 1 proves the statement. The relation between the expectation in (3) and
the stable spectral measure (4) is derived from a change of variable to spherical coordinates
in the d-dimensional space of h.
Remark: When α = 2, the exponent in the characteristic function (3) is a quadratic form
in t, and (3) becomes the usual Gaussian multivariate distribution.
The above proposition is the rigorous completion of Neal's analysis, and gives the explicit
form of the asymptotic process under i.i.d. SS weights. More generally, we can consider
output weights from the normal domain of attraction of index α, which, roughly, consists
of those densities whose tails are asymptotic to |x|^{−(α+1)}, 0 < α < 2 [6, pg. 547]. With a
similar proof to the previous theorem, one establishes

Proposition 2. Let network (1) have i.i.d. weights v_j from the normal domain of attraction
of an SS variable with index α, spread σ. Then f_n(x) = n^{−1/α} Σ_{j=1}^{n} v_j h_j(x) converges in
distribution to a symmetric α-stable process f(x), with the joint characteristic functions
given as in Proposition 1.
2.1.1 Example: Distributions with step-function priors
Let h(x) = sgn(a + ux), where a and u are independent Gaussians with zero mean. From
(3) it is clear that the limiting network function f(x) is a constant (in law, hence almost
surely) as |x| → ∞, so that the interesting behavior occurs in some "central region"
|x| < k. Neal in [1] has shown that when the output weights v_j are Gaussian, the
choice of the signum nonlinearity for h gives rise to local Brownian motion in the central
regime.
There is a natural generalization of the Brownian process within the context of symmetric stable processes, called symmetric α-stable Lévy motion. It is characterised by an
indexed sequence {w_t : t ∈ R} satisfying i) w_0 = 0 almost surely, ii) independent increments, and iii) w_t − w_s distributed symmetric α-stable with spread σ = |t − s|^{1/α}. As
we shall now show, the choice of a step-function nonlinearity for h and symmetric α-stable
priors for v_j leads to locally Lévy stable motion, which provides a theoretical explanation for
the empirical observations in [1].
Fix two nearby positions x and y, and select σ = 1 for notational simplicity. From (3)
the random variable f(x) − f(y) is symmetric stable with spread parameter [E_h |h(x) −
h(y)|^α]^{1/α}. For step inputs, |h(x) − h(y)| is non-zero only when the step located at −a/u
falls between x and y. For small |x − y| we approximate the density of this event as uniform,
so that E_h |h(x) − h(y)|^α ∝ |x − y|. Hence locally, the increment f(x) − f(y) is a
symmetric stable variable with spread proportional to |x − y|^{1/α}, which is condition (iii)
of Lévy motion. Next let us demonstrate that the increments are independent. Consider the
vector (f(x_1) − f(x_2), f(x_2) − f(x_3), ..., f(x_{n−1}) − f(x_n)), where x_1 < x_2 < ... < x_n.
Its joint characteristic function in the variables t_1, ..., t_{n−1} can be calculated to be

φ(t_1, ..., t_{n−1}) = exp( −E_h |t_1 (h(x_1) − h(x_2)) + ··· + t_{n−1} (h(x_{n−1}) − h(x_n))|^α )        (5)

The disjointness of the intervals (x_{i−1}, x_i) implies that the only events which have nonzero probability within the range [x_1, x_n] are the events |h(x_i) − h(x_{i−1})| = 2 for some i,
and zero for all other indices. Letting p_i denote the probabilities of those events, (5) reads

φ(t_1, ..., t_{n−1}) = exp( −2^α (p_1 |t_1|^α + ··· + p_{n−1} |t_{n−1}|^α) )        (6)

which describes a vector of independent α-stable random variables, as the characteristic
function splits. Thus the limiting process has independent increments.
5
10000
5000
0
0
?5
?5000
0
2000
4000 6000
(a)
8000 10000
150
0
5000
(b)
10000
5000
100
0
50
?5000
0
?50
0
2000
4000 6000
(c)
8000 10000
?10000
0
2000
4000 6000
(d)
8000 10000
Figure 1: Sample functions: (a) i.i.d. Gaussian, (b) i.i.d. Cauchy, (c) Brownian motion, (d) L?evy
Cauchy-Stable motion.
Pn
Cauchy i.i.d. processes wn and their ?integrated? versions i=1 wi , simulating the L?evy
motions. The sudden jumps in the Cauchy motion arise from the presence of strong outliers
in the respective Cauchy i.i.d. process, which would correspond, in the network, to hidden
units with heavy weighting factors vj .
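The four panels of Fig. 1 can be reproduced in a few lines (a sketch; seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gauss = rng.standard_normal(n)    # (a) i.i.d. Gaussian process
cauchy = rng.standard_cauchy(n)   # (b) i.i.d. Cauchy process
brownian = np.cumsum(gauss)       # (c) simulated Brownian motion
levy = np.cumsum(cauchy)          # (d) simulated Cauchy-stable Levy motion
```

The occasional huge Cauchy draw is what produces the sudden jumps in the Lévy path.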
2.2 Limits with non-i.i.d. priors
We begin with an interesting example, which shows that if the "identically distributed"
assumption for the output weights is dispensed with, the limiting distribution of (1) can
attain a non-stable (and non-Gaussian) form. Take v_j to be independent random variables
with P(v_j = 2^{−j}) = P(v_j = −2^{−j}) = 1/2. The characteristic functions can easily be
computed as E[e^{itv_j}] = cos(t/2^j). Now recall the Viète formula:

∏_{j=1}^{n} cos(t/2^j) = sin t / (2^n sin(t/2^n))        (7)

Taking n → ∞ shows that the limiting characteristic function is a sinc function, which
corresponds to the uniform density. Selecting the signum nonlinearity for h, it is not
difficult to show, with estimates on the tail of the product (7), that all finite-dimensional
distributions of the neural process f_n(x) = Σ_{j=1}^{n} v_j h_j(x) converge, so that f_n converges
in distribution to a random process whose first-order distributions are uniform².
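The Viète identity (7) and its sinc limit are easy to verify numerically:

```python
import math

t, n = 2.0, 25
prod = 1.0
for j in range(1, n + 1):
    prod *= math.cos(t / 2 ** j)                        # left-hand side of (7)
closed = math.sin(t) / (2 ** n * math.sin(t / 2 ** n))  # right-hand side of (7)
sinc = math.sin(t) / t                                  # limiting characteristic function
```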
What conditions are required on independent, but not necessarily identically distributed
priors vj for convergence to the Gaussian? This question is answered by the classical
Lindeberg-Feller theorem.
Theorem 2. Central Limit Theorem (Lindeberg-Feller) [4]. Let $v_j$ be a sequence of
independent random variables, each with zero mean and finite variance, define
$s_n^2 = \mathrm{var}[\sum_{j=1}^{n} v_j]$, and assume $s_1 \neq 0$. Then the sequence $\frac{1}{s_n}\sum_{j=1}^{n} v_j$ converges in distribution to an $N(0, 1)$ variable, if

$$\lim_{n \to \infty} \frac{1}{s_n^2} \sum_{j=1}^{n} \int_{|v| \geq \epsilon s_n} v^2 \, dF_{v_j}(v) = 0 \qquad (8)$$

for each $\epsilon > 0$, and where $F_{v_j}$ is the distribution function for $v_j$.

² An intuitive proof is as follows: one thinks of $\sum_j v_j$ as a binary expansion of real numbers in
$[-1, 1]$; the prescription of the probability laws for the $v_j$ implies all such expansions are equiprobable,
manifesting in the uniform distribution.
Condition (8) is called the Lindeberg condition, and imposes an "infinitesimal" requirement
on the sequence $\{v_j\}$ in the sense that no one variable is allowed to dominate the sum. This
theorem can be used to establish the following non-i.i.d. network convergence result.

Proposition 3. Let the network (1) have independent finite-variance weights $v_j$. Defining
$s_n^2 = \mathrm{var}[\sum_{j=1}^{n} v_j]$, if the sequence $\{v_j\}$ is Lindeberg then $f_n(x) = \frac{1}{s_n}\sum_{j=1}^{n} v_j h_j(x)$
converges in distribution to a Gaussian process $f(x)$ of mean zero and covariance function
$C(f(x), f(y)) = E[h(x)h(y)]$ as $n \to \infty$, where $h(x)$ is a variable with the common
distribution of the $h_j(x)$.
Proof. Fix a finite set of points $\{x_1, \ldots, x_k\}$ in the input space, and look at the joint distribution $(f_n(x_1), \ldots, f_n(x_k))$. We want to show these variables are jointly Gaussian
in the limit as $n \to \infty$, by showing that every linear combination of the components converges in distribution to a Gaussian distribution. Fixing $k$ constants $\alpha_i$, we
have $\sum_{i=1}^{k} \alpha_i f_n(x_i) = \frac{1}{s_n} \sum_{j=1}^{n} v_j \sum_{i=1}^{k} \alpha_i h_j(x_i)$. Define $\eta_j = \sum_{i=1}^{k} \alpha_i h_j(x_i)$, and
$\tilde{s}_n^2 = \mathrm{var}(\sum_{j=1}^{n} v_j \eta_j) = (E\eta^2) s_n^2$, where $\eta$ is a random variable with the common distribution of the $\eta_j$. Then for some $c > 0$:

$$\frac{1}{\tilde{s}_n^2} \sum_{j=1}^{n} \int_{|v_j \eta_j| \geq \epsilon \tilde{s}_n} |v_j(\omega)\eta_j(\omega)|^2 \, dP(\omega) \;\leq\; \frac{c^2}{E\eta^2 \, s_n^2} \sum_{j=1}^{n} \int_{|v_j| \geq \epsilon (E\eta^2)^{1/2} s_n / c} |v_j(\omega)|^2 \, dP(\omega)$$

The right-hand side can be made arbitrarily small, from the Lindeberg assumption on $\{v_j\}$,
hence $\{v_j \eta_j\}$ is Lindeberg, from which the theorem follows. The covariance function is
easy to calculate.
Corollary 1. If the output weights $\{v_j\}$ are a uniformly bounded sequence of independent
random variables, and $\lim_{n \to \infty} s_n = \infty$, then $f_n(x)$ in (1) converges in distribution to a
Gaussian process.

The preceding corollary, besides giving an easily verifiable condition for Gaussian limits,
demonstrates that the non-Gaussian convergence in the example initialising Section 2.2 was
made possible precisely because the weights $v_j$ decayed sufficiently quickly with $j$, with
the result that $\lim_n s_n < \infty$.
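Corollary 1 can also be checked numerically. The sketch below uses a made-up family of bounded, non-identically distributed weights (uniform variables with cycling ranges, so that $s_n \to \infty$) and confirms that the normalized sum looks standard normal; the distribution family and sizes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_terms, n_samples = 500, 20_000

# Independent, NON-identically distributed bounded weights:
# v_j ~ Uniform(-c_j, c_j) with c_j cycling through 1, 2, 3, so s_n grows.
c = 1.0 + np.arange(n_terms) % 3
v = rng.uniform(-1.0, 1.0, size=(n_samples, n_terms)) * c
s_n = np.sqrt(np.sum(c**2 / 3.0))    # standard deviation of the sum
z = v.sum(axis=1) / s_n

# z should be close to N(0, 1): check mean, variance, and excess kurtosis.
mean, var = z.mean(), z.var()
kurt = np.mean(z**4) / var**2 - 3.0
print(mean, var, kurt)
```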
3 Learning with Stable Processes
One of the original reasons for focusing machine learning interest on Gaussian processes
consisted in the fact that they act as limit points of suitably constructed parametric models
[2], [3]. The problem of learning a regression function, which was previously tackled by
Bayesian inference on a modelling neural network, could be reconsidered by directly placing a Gaussian process prior on the fitting functions themselves. Yet already in early papers
introducing the technique, reservations had been expressed concerning such wholesale replacement [2]. Gaussian processes did not seem to capture the richness of finite neural
networks; for one, the dependencies between multiple outputs of a network vanished in
the Gaussian limit.
Consider the simplest regression problem, that of the estimation of a state process $u(x)$
from observations $y(x_i)$, under the model

$$y(x) = u(x) + \epsilon(x) \qquad (9)$$
Figure 2: Scatter plots of bivariate symmetric $\alpha$-stable distributions with discrete spectral measures.
Top row: $\alpha = 1.5$; bottom row: $\alpha = 0.5$. Left to right: (a) $H$ = identity, (b) $H$ a rotation, (c) $H$ a
$2 \times 3$ matrix with columns $(-1/16, \sqrt{3}/16)^T$, $(0, 1)^T$, $(1/16, \sqrt{3}/16)^T$.
where $\epsilon(x)$ is noise independent of $u$. The obvious generalization of Gaussian process regression involves the placement of a stable process prior of index $\alpha$ on $u$, and setting $\epsilon$ as
i.i.d. stable noise of the same index. Then the observations $y$ also form a stable process
of index $\alpha$. Two advantages come with such generalization. First, the use of a heavy-tailed distribution for $\epsilon$ will tend to produce more robust regression estimates, relative to
the Gaussian case; this robustness can be additionally controlled by the stability parameter
$\alpha$. Secondly, a glance at the classification of Theorem 1 indicates that the correlation structure of stable vectors (hence processes) is significantly richer than that of the Gaussian; the
space of n-dimensional stable vectors is already characterised by a whole space of measures, rather than an $n \times n$ covariance matrix. The use of such priors on the data $u$ affords
a significant broadening in the number of interesting dependency relationships that may be
assumed.
An understanding of the dependency structure of multivariate stable vectors can be first
broached by considering the following basic class. Let $v$ be a vector of i.i.d. symmetric
stable variables of the same index, and let $H$ be a matrix of appropriate dimension so that
$x = Hv$ is well-defined. Then $x$ has a symmetric stable characteristic function, where
the spectral measure $\Gamma$ in Theorem 1 is discrete, i.e. concentrated on a finite number of
points. Divergences in the correlation structure are readily apparent even within this class.
In the Gaussian case, there is no advantage in the selection of non-square matrices $H$, since
the distribution of $x$ can always be obtained by a square mixing matrix $\tilde{H}$ with the same
number of rows as $H$. Not so when $\alpha < 2$, for then the characteristic function for $x$ in
general possesses $n$ fundamental discontinuities in higher-order derivatives, where $n$ is the
number of columns of $H$. Furthermore, in the square case, replacement of $H$ with $HR$,
where $R$ is any rotation matrix, leaves the distribution invariant when $\alpha = 2$; for non-Gaussian stable vectors, the mixing matrices $H$ and $H'$ give rise to the same distribution
only when $|H^{-1} H'|$ is a permutation matrix, where $|\cdot|$ is defined component-wise. Figure
2 illustrates the variety of dependency structures which can be attained as $H$ is changed.
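Stable vectors of this kind are easy to simulate. The sketch below uses the standard Chambers-Mallows-Stuck construction for symmetric $\alpha$-stable variables and an arbitrary illustrative mixing matrix $H$ (not the one used for Figure 2):

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(3)
n, alpha = 100_000, 1.5

# x = H v with i.i.d. symmetric stable v gives a discrete spectral measure.
H = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
v = sym_stable(alpha, (3, n), rng)
x = H @ v          # bivariate symmetric 1.5-stable sample, shape (2, n)
print(x.shape)
```

Sanity checks on the sampler: at $\alpha = 1$ the formula reduces to $\tan U$ (standard Cauchy, median of $|X|$ equal to 1), and at $\alpha = 2$ it yields an $N(0, 2)$ variable.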
A number of techniques already exist in the statistical literature for the estimation of the
spectral measure (and hence the mixing H) of multivariate stable vectors from empirical
data. The infinite-dimensional generalization of the above situation gives rise to the set
of stable processes produced as time-varying filtered versions of i.i.d. stable noise, and
similar to the Gaussian process, are parameterized by a centering (mean) function $\mu(x)$ and
a bivariate filter function $h(x, s)$ encoding dependency information. Another simple family
of stable processes consists of the so-called sub-Gaussian processes. These are processes
defined by $u(x) = A^{1/2} G(x)$ where $A$ is a totally right-skew $\alpha/2$-stable variable [5], and
$G$ a Gaussian process of mean zero and covariance $K$. The result is a symmetric $\alpha$-stable
random process with finite-dimensional characteristic functions of the form

$$\phi(t) = \exp\left(-\tfrac{1}{2}\, |\langle t, Kt \rangle|^{\alpha/2}\right) \qquad (10)$$

The sub-Gaussian processes are then completely parameterized by the statistics of the subordinating Gaussian process $G$. Even more, they have the following linear regression property [5]: if $Y_1, \ldots, Y_n$ are jointly sub-Gaussian, then

$$E[Y_n | Y_1, \ldots, Y_{n-1}] = a_1 Y_1 + \cdots + a_{n-1} Y_{n-1}. \qquad (11)$$
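Sub-Gaussian sampling can be sketched directly from the definition. For $\alpha = 1$ the subordinator $A$ is a totally right-skewed $1/2$-stable (Lévy) variable, which can be sampled as $1/Z^2$ for standard normal $Z$, so no special stable sampler is needed (the choice $\alpha = 1$ and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

# A ~ Levy(0, 1) is totally right-skewed 1/2-stable, and equals 1/Z^2 for
# standard normal Z. Then u = sqrt(A) * G is symmetric alpha-stable with
# alpha = 2 * (1/2) = 1, i.e. standard Cauchy marginals.
Z = rng.standard_normal(n)
A = 1.0 / Z**2                 # the 1/2-stable subordinator
G = rng.standard_normal(n)     # the subordinated Gaussian (i.i.d. here)
u = np.sqrt(A) * G             # = G / |Z|: a ratio of normals, i.e. Cauchy

# The median of |u| for a standard Cauchy is exactly 1.
print(np.median(np.abs(u)))
```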
Unfortunately, the regression is somewhat trivial, because a calculation shows that the coefficients of regression {ai } are the same as the case where Yi are assumed jointly Gaussian!
Indeed, this curious property appears anytime the variables take the form Y = BG, for
any fixed scalar random variable B and Gaussian vector G. It follows that the predictive
mean estimates for (10) employing sub-Gaussian priors are identical to the estimates under
a Gaussian hypothesis. On the other hand, the conditional distribution of $Y_n | Y_1, \ldots, Y_{n-1}$
differs greatly from the Gaussian, and is neither stable nor symmetric about its conditional
mean in general. From Fig. 2 one even sees that the conditional distribution may be multimodal, in which case the predictive mean estimates are not particularly valuable. More
useful are MAP estimates, which in the Gaussian scenario coincide with the conditional
mean. In any case, regression on stable processes suggests the need to compute and investigate the entire a posteriori probability law.
The main thrust of our foregoing results indicate that the class of possible limit points of
network functions is significantly richer than the family of Gaussian processes, even under relatively restricted (e.g. i.i.d.) hypotheses. Gaussian processes are the appropriate
models of large networks with finite variance priors in which no one component dominates
another, but when the finite variance assumption is discarded, stable processes become
the natural limit points. Non-stable processes can be obtained with the appropriate choice
of non-i.i.d. parameter priors, even in an infinite network. Our discussion of the stable
process regression problem has principally been confined to an exposition of the basic theoretical issues and principles involved, rather than to algorithmic procedures. Nevertheless,
since simple closed-form expressions exist for the characteristic functions, the predictive
probability laws can all in principle be computed with multi-dimensional Fourier transform
techniques. Stable variables form mathematically natural generalisations of the Gaussian,
with some fundamental, but compelling, differences which suggest additional variety and
flexibility in learning applications.
References
[1] R. Neal, Bayesian Learning for Neural Networks. New York: Springer-Verlag, 1996.
[2] D. MacKay. Introduction to Gaussian Processes. Extended lecture notes, NIPS 1997.
[3] M. Seeger, Gaussian Processes for Machine Learning. International Journal of Neural Systems
14(2), 2004, 69-106.
[4] C. Burrill, Measure, Integration and Probability. New York: McGraw-Hill, 1972.
[5] G. Samorodnitsky & M. Taqqu, Stable Non-Gaussian Random Processes. New York: Chapman
& Hall, 1994.
[6] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2. New York: John
Wiley & Sons, 1966.
Grajski and Merzenich
Neural Network Simulation of Somatosensory Representational Plasticity
Kamil A. Grajski
Ford Aerospace
San Jose, CA 95161-9041
[email protected]
Michael M. Merzenich
Coleman Laboratories
UC San Francisco
San Francisco, CA 94143
ABSTRACT
The brain represents the skin surface as a topographic map in the
somatosensory cortex. This map has been shown experimentally to
be modifiable in a use-dependent fashion throughout life. We
present a neural network simulation of the competitive dynamics
underlying this cortical plasticity by detailed analysis of receptive
field properties of model neurons during simulations of skin coactivation, cortical lesion, digit amputation and nerve section.
1 INTRODUCTION
Plasticity of adult somatosensory cortical maps has been demonstrated experimentally
in a variety of maps and species (Kaas, et al., 1983; Wall, 1988). This report focuses
on modelling primary somatosensory cortical plasticity in the adult monkey.
We model the long-term consequences of four specific experiments, taken in pairs.
With the first pair, behaviorally controlled stimulation of restricted skin surfaces (Jenkins, et al., 1990) and induced cortical lesions (Jenkins and Merzenich, 1987), we
demonstrate that Hebbian-type dynamics is sufficient to account for the inverse relationship between cortical magnification (area of cortical map representing a unit area
of skin) and receptive field size (skin surface which when stimulated excites a cortical
unit) (Sur, et al., 1980; Grajski and Merzenich, 1990). These results are obtained with
several variations of the basic model. We conclude that relying solely on cortical
magnification and receptive field size will not disambiguate the contributions of each
of the myriad circuits known to occur in the brain. With the second pair, digit amputation (Merzenich, et al., 1984) and peripheral nerve cut (without regeneration) (Merzenich, et al., 1983), we explore the role of local excitatory connections in the model cortex (Grajski, submitted).
Previous models have focused on the self-organization of topographic maps in general
(Willshaw and von der Malsburg, 1976; Takeuchi and Amari, 1979; Kohonen, 1982;
among others). Ritter and Schulten (1986) specifically addressed somatosensory plasticity using a variant of Kohonen's self-organizing mapping. Recently, Pearson, et al.,
(1987), using the framework of the Group Selection Hypothesis, have also modelled
aspects of nonnal and reorganized somatosensory plasticity.
Elements of the present study have been published elsewhere (Grajski and Merzenich,
1990).
2 THE MODEL
2.1 ARCHITECTURE
The network consists of three hierarchically organized two-dimensional layers (Skin, Subcortical, Cortical) shown
in Figure 1A.

Figure 1: Network architecture.
The divergence of projections from a single skin site to subcortex (SC) and its subsequent projection to cortex (C) is shown at left: Skin (S) to SC, 5 x 5; SC to C, 7 x 7.
S is "partitioned" into three 15 x 5 "digits" Left, Center and Right. The standard S
stimulus used in all simulations is shown lying on digit Left. The projection from C
to SC E and I cells is shown at right. Each node in the SC and C layers contains an
excitatory (E) and inhibitory cell (I) as shown in Figure 1B. In C, each E cell forms
excitatory connections with a 5 x 5 patch of I cells; each I cell forms inhibitory connections with a 7 x 7 patch of E cells. In SC, these connections are 3 x 3 and 5 x 5,
respectively. In addition, in C only, E cells form excitatory connections with a 5 by 5
patch of E cells. The spatial relationship of E and I cell projections for the central
node is shown at left (C E to E shown in light gray, C I to E shown in black).
2.2 DYNAMICS
The model neuron is the same for all E and I cells: an RC-time constant membrane
which is depolarized and (additively) hyperpolarized by linearly weighted connections:
$$\dot{u}_i^{C,E} = -\tau u_i^{C,E} + \sum_j v_j^{SC,E} w_{ij}^{C,E:SC,E} + \sum_j v_j^{C,E} w_{ij}^{C,E:C,E} - \sum_j v_j^{C,I} w_{ij}^{C,E:C,I}$$

$$\dot{u}_i^{C,I} = -\tau u_i^{C,I} + \sum_j v_j^{C,E} w_{ij}^{C,I:C,E}$$

$$\dot{u}_i^{SC,E} = -\tau u_i^{SC,E} + \sum_j o_j^{S} w_{ij}^{SC,E:S} + \sum_j v_j^{C,E} w_{ij}^{SC,E:C,E} - \sum_j v_j^{SC,I} w_{ij}^{SC,E:SC,I}$$

$$\dot{u}_i^{SC,I} = -\tau u_i^{SC,I} + \sum_j v_j^{SC,E} w_{ij}^{SC,I:SC,E} + \sum_j v_j^{C,E} w_{ij}^{SC,I:C,E}$$

$u_i^{X,Y}$ - membrane potential for unit i of type Y on layer X; $v_i^{X,Y}$ - firing rate for unit i
of type Y on layer X; skin units $o_i^S$ are OFF (=0) or ON (=1); $\tau$ - membrane time
constant (with respect to unit time); $w_{ij}^{x,y:X,Y}$ - connection to unit i of postsynaptic type y on postsynaptic layer x from unit j of presynaptic type Y on presynaptic layer X. Each summation term is normalized by the number of incoming connections (corrected for planar boundary conditions) contributing to the term. Each unit
converts membrane potential to a continuous-valued output value $v_i$ via a sigmoidal
function representing an average firing rate ($\beta$ = 4.0):

$$v_i = \begin{cases} \frac{1}{2}\left(1 + \tanh\left(\beta\left(u_i - \frac{1}{2}\right)\right)\right), & u_i \geq 0.02 \\ 0, & u_i < 0.02 \end{cases}$$
2.3 SYNAPTIC PLASTICITY
Synaptic strength is modified in three ways: a.) activity-dependent change; b.) passive
decay; and c.) normalization. In the activity-dependent and passive decay terms, $w_{ij}$ is the
connection from cell j to cell i; $\tau_w$ = 0.01, 0.005 - time constants for passive
synaptic decay; $\alpha$ = 0.05 - the maximum activity-dependent step change; $v_i, v_j$ - pre- and
post-synaptic output values, respectively. Further modification occurs by a multiplicative normalization performed over the incoming connections for each cell. The normalization is such that the summed total strength of incoming connections is R:

$$\frac{1}{N_i} \sum_j w_{ij} = R$$

$N_i$ - number of incoming connections for cell i; $w_{ij}$ - connection from cell j to cell i;
$R$ = 2.0 - the total resource available to cell i for redistribution over its incoming connections.
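One relaxation-plus-learning step of the kind described above can be sketched as follows. This is a hypothetical miniature, not the paper's implementation: the layer sizes, input pattern, Euler step size, and iteration count are made up, and only the sigmoid, threshold, Hebbian step, decay, and resource constants follow the text.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pre, n_post = 25, 16          # made-up miniature layer sizes

tau, beta, theta = 1.0, 4.0, 0.02       # membrane constant, gain, threshold
alpha_hebb, tau_w, R = 0.05, 0.005, 2.0 # Hebb step, weight decay, resource

W = rng.normal(2.0, np.sqrt(0.2), size=(n_post, n_pre))  # initial strengths
u = np.zeros(n_post)                                     # membrane potentials

def output(u):
    """Sigmoidal firing rate with the 0.02 output threshold."""
    v = 0.5 * (1.0 + np.tanh(beta * (u - 0.5)))
    return np.where(u >= theta, v, 0.0)

v_pre = rng.random(n_pre) < 0.2      # a sparse binary "skin" input pattern
for _ in range(50):                  # relax to steady state (Euler steps)
    u += 0.1 * (-tau * u + W @ v_pre / n_pre)
v_post = output(u)

# Hebbian change plus passive decay, then multiplicative normalization so
# each cell's mean incoming strength equals the resource R.
W += alpha_hebb * np.outer(v_post, v_pre) - tau_w * W
W *= R / W.mean(axis=1, keepdims=True)
print(W.mean(axis=1))   # every row mean equals R, up to float rounding
```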
2.4 MEASURING CORTICAL MAGNIFICATION, RECEPTIVE FIELD AREA
Cortical magnification is measured by "mapping" the network, e.g., noting which 3x3
skin patch most strongly drives each cortical E cell. The number of cortical nodes
driven maximally by the same skin site is the cortical magnification for that skin site.
Receptive field size for a C (SC) layer E cell is estimated by stimulating all possible
3x3 skin patches (169) and noting the peak response. Receptive field size is defined as
the number of 3x3 skin patches which drive the unit at ≥50% of its peak response.
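Both measurements reduce to simple array operations. The sketch below uses a random response matrix purely to fix shapes; in the actual model the responses would come from relaxing the network for each of the 169 skin patches:

```python
import numpy as np

rng = np.random.default_rng(8)
n_cortex, n_sites = 64, 169   # cortical E cells; 13 x 13 grid of 3x3 patches

# Hypothetical responses[c, s]: steady-state response of cortical cell c
# when skin patch s is stimulated (random here, for illustration only).
responses = rng.random((n_cortex, n_sites))

# Cortical magnification: how many cortical cells are driven maximally
# by each skin site.
best_site = responses.argmax(axis=1)
magnification = np.bincount(best_site, minlength=n_sites)

# Receptive field size: number of patches driving a cell at >= 50% of peak.
peak = responses.max(axis=1, keepdims=True)
rf_size = (responses >= 0.5 * peak).sum(axis=1)
print(magnification.sum(), rf_size.min())
```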
3 SIMULATIONS
3.1 FORMATION OF THE TOPOGRAPHIC MAP ENTAILS REFINEMENT OF SYNAPTIC PATTERNING
The location of individual connections is fixed by topographic projection; initial
strengths are drawn from a Gaussian distribution (μ = 2.0, σ² = 0.2). Standard-sized
skin patches are stimulated in random sequence with no double-digit stimulation.
(Mapping includes tests for double-digit receptive fields.) For each patch, the network
is allowed to reach steady-state while the plasticity rule is ON. Synaptic strengths are
then renonnalized. Refinement continues until two conditions are met: a.) fewer than
5% of all E cells change their receptive field location; and b.) receptive field areas (using the 50% criterion) change by no more than ?1 unit area for 95% of E cells. (See
Figures 2 and 3 in Merzenich and Grajski, 1990; Grajski, submitted ).
3.2 RESTRICTED SKIN STIMULATION GIVES INCREASED MAGNIFICATION, DECREASED RECEPTIVE FIELD SIZE
Jenkins, et al., (1990) describe a behavioral experiment which leads to cortical somatotopic reorganization. Monkeys are trained to maintain contact with a rotating disk
situated such that only the tips of one or two of their longest digits are stimulated.
Monkeys are required to maintain this contact for a specified period of time in order
to receive food reward. Comparison of pre- and post-stimulation maps (or the latter
with maps obtained after varying periods without disk stimulation) reveal up to nearly
3-fold differences in cortical magnification and reduction in receptive field size for
stimulated skin.
We simulate the above experiment by extending the refinement process described
above, but with the probability of stimulating a restricted skin region increased 5:1.
(See Grajski and Merzenich (1990), Figure 4.) Figure 2 illustrates the change in size
(left) and synaptic patterning (right) for a single representative cortical receptive field.
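The biased stimulation schedule can be sketched with a weighted random choice; the number of candidate sites and the restricted block below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
n_sites, n_trials = 39, 10_000   # hypothetical candidate patch positions

# Co-activation protocol sketch: a restricted block of skin sites is
# stimulated with 5:1 odds relative to the rest.
restricted = np.zeros(n_sites, dtype=bool)
restricted[15:20] = True
p = np.where(restricted, 5.0, 1.0)
p /= p.sum()
schedule = rng.choice(n_sites, size=n_trials, p=p)

# Each restricted site should be drawn about 5x as often as a control site.
freq = np.bincount(schedule, minlength=n_sites) / n_trials
ratio = freq[restricted].mean() / freq[~restricted].mean()
print(ratio)   # close to 5
```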
Figure 2: Representative co-activation induced receptive field changes. a.) Cortical RF, pre- and post-coactivation; b.) incoming synaptic strengths (skin to subcortex, subcortex to cortex), shown on a low-to-high grayscale.

3.3 AN INDUCED, FOCAL CORTICAL LESION GIVES DECREASED MAGNIFICATION, INCREASED RECEPTIVE FIELD SIZE
The inverse magnification rule predicts that a decrease in cortical magnification is accompanied by an increase in receptive field areas. Jenkins, et al., (1987) confirmed
this hypothesis by inducing focal cortical lesions in the representation of restricted
hand surfaces, e.g. a single digit. Changes included: a.) a re-emergence of a representation of the skin formerly represented in the now lesioned zone in the intact surrounding cortex; b.) the new representation is at the expense of cortical magnification of
skin originally represented in those regions; so that c.) large regions of the map contain neurons with abnormally large receptive fields.
We simulate this experiment by eliminating the incoming and outgoing connections of
the cortical layer region representing the middle digit The refinement process
described above is continued under these new conditions until topographic map and
receptive field size measures converge. The re-emergence of representation and
changes in distributions of receptive field areas are shown in Grajski and Merzenich
(1990), Figure 5. Figure 3 below illustrates the change in size and location of a
representative (sub) cortical receptive field.
3.4 SEVERAL MODEL VARIANTS REPRODUCE THE INVERSE MAGNIFICATION RULE
Repeating the above simulations using networks with no descending projections or using networks with no descending and no cortical mutual exciatation, yields largely
nonnal topography and co-activation results. Restricting plasticity to excitatory pathways alone also yields qualitatively similar results. (Studies with a two-layer network
Neural Network Simulation of Somatosensory Representational Plasticity
yield qualitatively similar results.) Thus, the refinement and co-activation experiments
alone are insufficient to discriminate fundamental differences between network variants.
Figure 3: Representative cortical lesion induced receptive field changes. Panels show cortical and sub-cortical receptive fields, pre- and post-cortical lesion.
3.5 MUTUALLY EXCITATORY LOCAL CORTICAL CONNECTIONS MAY BE CRITICAL FOR SIMULATING EFFECTS OF DIGIT AMPUTATION AND NERVE SECTION
The role of lateral excitation in the cortical layer is made clearer through simulations
of nerve section and digit amputation experiments (Merzenich, et al., 1983; Merzenich, et at. 1984; see also Wall, 1988). The feature of interest here is the cortical distance over which reorganization is observed. Following cessation of peripheral input
from digit 3, for example. the surrounding representations (digits 2 and 4) expand into
the now silenced zone. Not only expansion is observed. Neurons in the surrounding
representations up to several 100's of microns distant from the silenced zone shift
their receptive fields. The shift is such that the receptive field covers skin sites closer
to the silenced skin.
The deafferentation experiment is simulated by eliminating the connection between the
skin layer CENTER digit (central 1/3) and SC layers and then proceeding with
refinement with the usual convergence checks. Simulations are run for three network
architectures. The "full" model is that described above. Two other models strip the
descending and both descending and lateral excitatory connections, respectively.
Figure 4 shows features of reorganization: the conversion of initially silenced zones, or
refinement of initially large, low amplitude fields to normal-like fields (a-c). Importantly, the receptive field farthest away from the initially silenced representation (d)
undergoes a shift towards the deafferented skin. The shift is comprised of a translation in the receptive field peak location as well as an increase (below the 50% amplitude threshold. but increases range 25 - 200%) in the regions surrounding the peak and
facing the silenced cortical zone (shown in light shading). Only the "full" model
evolves expanded and shifted representations. These results are preliminary in that no
parameter adjustments are made in the other networks to coax a result. It may simply
be a matter of not enough excitation in the other cases. Nevertheless, these results
show that local cortical excitation can contribute critical activity for reorganization.
Figure 4: Summary of immediate and long-term post-amputation effects. Panels a.)-d.) show receptive fields in the normal state, immediately post-amputation, and long-term post-amputation.
4 CONCLUSION
We have shown that a.) Hebbian-type dynamics is sufficient to account for the quantitative inverse relationship between cortical magnification and receptive field size; and
b.) cortical magnification and receptive field size alone are insufficient to distinguish
between model variants.
Are these results just "so much biological detail?" No. The inverse magnificationreceptive field rule applies nearly universally in (sub)cortical topographic maps; it
reflects a fundamental principle of brain organization. For instance, experiments revealing the operation of mechanisms possibly similar to those modelled above have
been observed in the visual system. Wurtz, et al., (1990) have observed that following
chemically induced focal lesions in visual area MT, surviving neurons' visual receptive field area increased. For a review of use-dependent receptive field plasticity in
the auditory system see Weinberger, et al., (1990).
Research in computational neuroscience has long drawn on principles of topographic
organization. Recent advances include those by Linsker (1989), providing a theoretical (optimization) framework for map formation and those studies linking concepts related to localized receptive fields with adaptive nets (Moody and Darken, 1989; see
Barron, this volume). The experimental and modelling issues discussed here offer an
opportunity to sustain and further enhance the synergy inherent in this area of computational neuroscience.
4.0.1 Acknowledgements
This research supported by NIH grants (to MMM) NSI0414 and GM07449, Hearing
Research Inc., the Coleman Fund and the San Diego Supercomputer Center. KAG
gratefully acknowledges helpful discussions with Terry Allard, Bill Jenkins, John Pearson, Gregg Recanzone and especially Ken Miller.
4.0.2 References
Grajski, K. A. and M. M. Merzenich. (1990). Hebb-type dynamics is sufficient to account for the inverse magnification rule in cortical somatotopy. In Press. Neural
Computation. Vol. 2. No. 1.
Neural Network Simulation of Somatosensory Representational Plasticity
Jenkins. W. M. and M. M. Merzenich. (1987). Reorganization of neocortical representations after brain injury. In: Progress in Brain Research. Vol. 71. Seil, F. J., et
al., Eds. Elsevier. pgs.249-266.
Jenkins, W. M., et al., (1990). Functional reorganization of primary somatosensory
cortex in adult owl monkeys after behaviorally controlled tactile stimulation. J. Neurophys. In Press.
Kaas, J. H., M. M. Merzenich and H. P. Killackey. (1983). The reorganization of
somatosensory cortex following peripheral nerve damage in adult and developing
mammals. Ann. Rev. Neursci. 6:325-356.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps.
Biol. Cyb. 43:59-69.
Linsker, R. (1989). How to generate ordered maps by maximizing the mutual information between input and output signals. IBM Research Report No. RC 14624
Merzenich, M. M., J. H. Kaas, J. T. Wall, R. J. Nelson, M. Sur and D. J. Felleman.
(1983). Topographic reorganization of somatosensory cortical areas 3b and 1 in adult
monkeys following restricted deafferentation. Neuroscience. 8: 1:33-55.
Merzenich, M. M., R. J. Nelson, M. P. Stryker, M. Cynader, J. M Zook and A.
Schoppman. (1984). Somatosensory cortical map changes following digit amputation
in adult monkeys. J. Compo Neurol. 244:591-605.
Moody, J. and C. J. Darken. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation 1:281-294.
Pearson, J. C., L. H. Finkel and G. M. Edelman. (1987). Plasticity in the organization of adult cerebral cortical maps. J. Neurosci. 7:4209-4223.
Ritter, H. and K. Schulten. (1986). On the stationary state of Kohonen's self-organizing sensory mapping. Biol. Cyb. 54:99-106.
Sur, M., M. M. Merzenich and J. H. Kaas. (1980). Magnification, receptive-field area
and "hypercolumn" size in areas 3b and 1 of somatosensory cortex in owl monkeys.
J. Neurophys. 44:295-311.
Takeuchi, A. and S. Amari. (1979). Formation of topographic maps and columnar
microstructures in nerve fields. Biol. Cyb. 35:63-72.
Wall, J. T. (1988). Variable organization in cortical maps of the skin as an indication
of the lifelong adaptive capacities of circuits in the mammalian brain. Trends in Neurosci. 11:12:549-557.
Weinberger, N. M., et al., (1990). Retuning auditory cortex by learning: A preliminary model of receptive field plasticity. Concepts in Neuroscience. In Press.
Willshaw, D. J. and C. von der Malsburg. (1976). How patterned neural connections
can be set up by self-organization. Proc. R. Soc. Lond. B. 194:431-445.
Wurtz, R., et al. (1990). Motion to movement: Cerebral cortical visual processing for
pursuit eye movements. In: Signal and sense: Local and global order in perceptual
maps. Gall, E. W., Ed. Wiley: New York. In Press.
Mixture Modeling by Affinity Propagation
Brendan J. Frey and Delbert Dueck
University of Toronto
Software and demonstrations available at www.psi.toronto.edu
Abstract
Clustering is a fundamental problem in machine learning and has been
approached in many ways. Two general and quite different approaches
include iteratively fitting a mixture model (e.g., using EM) and linking together pairs of training cases that have high affinity (e.g., using spectral
methods). Pair-wise clustering algorithms need not compute sufficient
statistics and avoid poor solutions by directly placing similar examples
in the same cluster. However, many applications require that each cluster
of data be accurately described by a prototype or model, so affinity-based
clustering ? and its benefits ? cannot be directly realized. We describe a
technique called ?affinity propagation?, which combines the advantages
of both approaches. The method learns a mixture model of the data by
recursively propagating affinity messages. We demonstrate affinity propagation on the problems of clustering image patches for image segmentation and learning mixtures of gene expression models from microarray data. We find that affinity propagation obtains better solutions than
mixtures of Gaussians, the K-medoids algorithm, spectral clustering and
hierarchical clustering, and is both able to find a pre-specified number
of clusters and is able to automatically determine the number of clusters.
Interestingly, affinity propagation can be viewed as belief propagation
in a graphical model that accounts for pairwise training case likelihood
functions and the identification of cluster centers.
1
Introduction
Many machine learning tasks involve clustering data using a mixture model, so that the
data in each cluster is accurately described by a probability model from a pre-defined,
possibly parameterized, set of models [1]. For example, words can be grouped according to
common usage across a reference set of documents, and segments of speech spectrograms
can be grouped according to similar speaker and phonetic unit. As researchers increasingly
confront more challenging and realistic problems, the appropriate class-conditional models
become more sophisticated and much more difficult to optimize.
By marginalizing over hidden variables, we can still view many hierarchical learning problems as mixture modeling, but the class-conditional models become complicated and nonlinear. While such class-conditional models may more accurately describe the problem at
hand, the optimization of the mixture model often becomes much more difficult. Exact
computation of the data likelihoods may not be feasible and exact computation of the sufficient statistics needed to update parameterized models may not be feasible. Further, the
complexity of the model and the approximations used for the likelihoods and the sufficient
statistics often produce an optimization surface with a large number of poor local minima.
A different approach to clustering ignores the notion of a class-conditional model, and
links together pairs of data points that have high affinity. The affinity or similarity (a real
number in [0, 1]) between two training cases gives a direct indication of whether they should
be in the same cluster. Hierarchical clustering and its Bayesian variants [2] are a popular
affinity-based clustering technique, whereby a binary tree is constructed greedily from the
leaves to the root, by recursively linking together pairs of training cases with high affinity.
Another popular method uses a spectral decomposition of the normalized affinity matrix
[4]. Viewing affinities as transition probabilities in a random walk on data points, modes
of the affinity matrix correspond to clusters of points that are isolated in the walk [3, 5].
We describe a new method that, for the first time to our knowledge, combines the advantages of model-based clustering and affinity-based clustering. Unlike previous techniques
that construct and learn probability models of transitions between data points [6, 7], our
technique learns a probability model of the data itself. Like affinity-based clustering,
our algorithm directly examines pairs of nearby training cases to help ascertain whether
or not they should be in the same cluster. However, like model-based clustering, our
technique uses a probability model that describes the data as a mixture of class-conditional
distributions. Our method, called "affinity propagation", can be viewed as the sum-product
algorithm or the max-product algorithm in a graphical model describing the mixture model.
2
A greedy algorithm: K-medoids
The first step in obtaining the benefit of pair-wise training case comparisons is to replace
the parameters of the mixture model with pointers into the training data. A similar representation is used in K-medians clustering or K-medoids clustering, where the goal is
to identify K training cases, or exemplars, as cluster centers. Exact learning is known to
be NP-hard (c.f. [8]), but a hard-decision algorithm can be used to find approximate solutions. While the algorithm makes greedy hard decisions for the cluster centers, it is a useful
intermediate step in introducing affinity propagation.
For training cases x1 , . . . , xN , suppose the likelihood of training case xi given that training
case $x_k$ is its cluster center is $P(x_i \mid x_i \in x_k)$ (e.g., a Gaussian likelihood would have the
form $e^{-(x_i - x_k)^2/2\sigma^2}/\sqrt{2\pi\sigma^2}$). Given the training data, this likelihood depends only on
i and k, so we denote it by Lik . Lii is set to the Bayesian prior probability that xi is a
cluster center. Initially, K training cases are chosen as exemplars, e.g., at random. Denote
the current set of cluster center indices by K and the index of the current cluster center
for xi by si . K-medoids iterates between assigning training cases to exemplars (E step),
and choosing a training case as the new exemplar for each cluster (M step). Assuming for
simplicity that the mixing proportions are equal and denoting the responsibility likelihood
ratio by rik = P (xi |xi in xk )/P (xi |xi not in xk )1 , the updates are
E step:
For $i = 1, \ldots, N$:
  For $k \in \mathcal{K}$: $r_{ik} \leftarrow L_{ik} / \sum_{j: j \neq k} L_{ij}$
  $s_i \leftarrow \arg\max_{k \in \mathcal{K}} r_{ik}$
Greedy M step:
For $k \in \mathcal{K}$: replace $k$ in $\mathcal{K}$ with $\arg\max_{j: s_j = k} \prod_{i: s_i = k} L_{ij}$
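As a concrete illustration of the E/M iteration above (a sketch under our own conventions, not the authors' code; `k_medoids` and its arguments are hypothetical names):

```python
import numpy as np

def k_medoids(L, K, n_iters=20, seed=0):
    """Greedy K-medoids on a likelihood matrix L[i, k] = P(x_i | x_i in x_k).

    Sketch of the E/M iteration described above; L[i, i] plays the role of
    the exemplar prior L_ii.
    """
    N = L.shape[0]
    rng = np.random.default_rng(seed)
    centers = list(rng.choice(N, size=K, replace=False))  # initial exemplars
    s = np.empty(N, dtype=int)
    for _ in range(n_iters):
        # E step: responsibility r_ik = L_ik / sum_{j != k} L_ij, maximized
        # over the current exemplar set.
        for i in range(N):
            r = [L[i, k] / (L[i].sum() - L[i, k] + 1e-300) for k in centers]
            s[i] = centers[int(np.argmax(r))]
        # Greedy M step: within each cluster, pick the member j maximizing the
        # product of likelihoods of the cluster's members (sum of logs).
        new_centers = []
        for k in centers:
            members = np.flatnonzero(s == k)
            if members.size == 0:  # degenerate case: keep the old center
                new_centers.append(k)
                continue
            scores = [np.log(L[members, j] + 1e-300).sum() for j in members]
            new_centers.append(int(members[int(np.argmax(scores))]))
        if set(new_centers) == set(centers):
            break
        centers = new_centers
    return centers, s
```

On two well-separated 1-D clusters with Gaussian likelihoods, the M step pulls the centers apart even from a poor initialization, though (as the text notes) it can still get stuck in local minima.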
This algorithm nicely replaces parameter-to-training case comparisons with pair-wise
training case comparisons. However, in the greedy M step, specific training cases are
chosen as exemplars. By not searching over all possible combinations of exemplars, the
algorithm will frequently find poor local minima. We now introduce an algorithm that
does approximately search over all possible combinations of exemplars.
1
Note that using the traditional definition of responsibility, $r_{ik} \propto L_{ik}/\sum_j L_{ij}$, will give the
same decisions as using the likelihood ratio.
3
Affinity propagation
The responsibilities in the greedy K-medoids algorithm can be viewed as messages that are
sent from training cases to potential exemplars, providing soft evidence of the preference
for each training case to be in each exemplar. To avoid making hard decisions for the
cluster centers, we introduce messages called "availabilities". Availabilities are sent from
exemplars to training cases and provide soft evidence of the preference for each exemplar
to be available as a center for each training case.
Responsibilities are computed using likelihoods and availabilities, and availabilities are
computed using responsibilities, recursively. We refer to both responsibilities and availabilities as affinities and we refer to the message-passing scheme as affinity propagation.
Here, we explain the update rules; in the next section, we show that affinity propagation
can be derived as the sum-product algorithm in a graphical model describing the mixture
model. Denote the availability sent from candidate exemplar xk to training case xi by aki .
Initially, these messages are set equal, e.g., aki = 1 for all i and k. Then, the affinity
propagation update rules are recursively applied:
Responsibility updates:
$$r_{ik} \leftarrow L_{ik} \Big/ \sum_{j: j \neq k} a_{ij} L_{ij}$$
Availability updates:
$$a_{kk} \leftarrow \prod_{j: j \neq k} (1 + r_{jk}) - 1$$
$$a_{ki} \leftarrow 1 \Big/ \Big( r_{kk}^{-1} \prod_{j: j \neq k,\, j \neq i} (1 + r_{jk})^{-1} + 1 - \prod_{j: j \neq k,\, j \neq i} (1 + r_{jk})^{-1} \Big)$$
The first update rule is quite similar to the update used in EM, except the likelihoods used
to normalize the responsibilities are modulated by the availabilities of the competing exemplars. In this rule, the responsibility of a training case xi as its own cluster center, rii ,
is high if no other exemplars are highly available to xi and if xi has high probability under
the Bayesian prior, Lii .
The second update rule also has an intuitive explanation. The availability of a training
case xk as its own exemplar, akk , is high if at least one other training case places high
responsibility on xk being an exemplar. The availability of xk as a exemplar for xi , aki
is high if the self-responsibility $r_{kk}$ is high ($1/r_{kk} - 1$ approaches $-1$), but is decreased if other training cases compete in using $x_k$ as an exemplar (the term $1/r_{kk} - 1$ is scaled down if $r_{jk}$ is large for some other training case $x_j$).
Messages may be propagated in parallel or sequentially. In our implementation, each candidate exemplar absorbs and emits affinities
in parallel, and the centers are ordered according to the sum of their likelihoods, i.e. $\sum_i L_{ik}$. Direct implementation of the above propagation rules gives an $N^2$-time algorithm, but affinities need only be propagated between $i$ and
k if Lik > 0. In practice, likelihoods below some threshold can be set to zero, leading to a
sparse graph on which affinities are propagated.
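A single round of these updates can be written with numpy as follows (an illustrative sketch, not the authors' implementation; `ap_update` and the matrix layout are our own choices):

```python
import numpy as np

def ap_update(L, A, tiny=1e-300):
    """One round of the affinity-propagation updates above (illustrative sketch).

    L[i, k]: likelihood of case i under candidate exemplar k (L[k, k] = prior).
    A[i, k]: current availability of exemplar k for case i.
    Returns the new responsibilities R and availabilities A_new.
    """
    AL = A * L
    # r_ik <- L_ik / sum_{j != k} a_ij L_ij
    R = L / (AL.sum(axis=1, keepdims=True) - AL + tiny)
    one_plus = 1.0 + R                                  # entry [j, k] is 1 + r_jk
    excl_k = one_plus.prod(axis=0) / np.diag(one_plus)  # prod_{j != k} (1 + r_jk)
    # q[i, k] = prod_{j != k, j != i} (1 + r_jk)
    q = excl_k[None, :] / one_plus
    inv_q = 1.0 / q
    # a_ki <- 1 / ( r_kk^{-1} q^{-1} + 1 - q^{-1} )
    A_new = 1.0 / (inv_q / np.diag(R)[None, :] + 1.0 - inv_q)
    # a_kk <- prod_{j != k} (1 + r_jk) - 1
    np.fill_diagonal(A_new, excl_k - 1.0)
    return R, A_new
```

Iterating `ap_update` (in practice often with damping between successive availabilities to avoid oscillation) and then assigning each case to the exemplar with highest responsibility reproduces the message-passing scheme.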
Affinity propagation accounts for a Bayesian prior pdf on the exemplars and is able to
automatically search over the appropriate number of exemplars. (Note that the number of
exemplars is not pre-specified in the above updates.) In applications where a particular
number of clusters is desired, the update rule for the responsibilities (in particular, the selfresponsibilities rkk , which determine the availabilities of the exemplars) can be modified,
as described in the next section. Later, we describe applications where K is pre-specified
and where K is automatically selected by affinity propagation.
Figure 1: Affinity propagation can be viewed as belief propagation in this factor graph.

The affinity propagation update rules can be derived as an instance of the sum-product ("loopy BP") algorithm in a graphical model. Using $s_i$ to denote the index of the exemplar
for $x_i$, the product of the likelihoods of the training cases and the priors on the exemplars is $\prod_{i=1}^{N} L_{i s_i}$. (If $s_i = i$, $x_i$ is an exemplar with a priori pdf $L_{ii}$.) The set of hidden
variables s1 , . . . , sN completely specifies the mixture model, but not all configurations of
these variables are allowed: si = k (xi in cluster xk ) implies sk = k (xk is an exemplar)
and $s_k = k$ ($x_k$ is an exemplar) implies $s_i = k$ for some $i \neq k$ (some other training case is
in cluster xk ). The global indicator function for the satisfaction of these constraints can be
written $\prod_{k=1}^{N} f_k(s_1, \ldots, s_N)$, where $f_k$ is the constraint for candidate cluster $x_k$:
$$f_k(s_1, \ldots, s_N) = \begin{cases} 0 & \text{if } s_k = k \text{ and } s_i \neq k \text{ for all } i \neq k \\ 0 & \text{if } s_k \neq k \text{ and } s_i = k \text{ for some } i \neq k \\ 1 & \text{otherwise.} \end{cases}$$
Thus, the joint distribution of the mixture model and data factorizes as follows:
$$P = \prod_{i=1}^{N} L_{i s_i} \prod_{k=1}^{N} f_k(s_1, \ldots, s_N).$$
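To make the constraints concrete, the indicator $f_k$ and the factorized joint can be checked directly (a 0-indexed sketch; `f_k` and `joint` are our own helper names, not the authors' code):

```python
import numpy as np

def f_k(s, k):
    """Constraint for candidate cluster x_k: 1 iff s is consistent at k."""
    others_point_to_k = any(s[i] == k for i in range(len(s)) if i != k)
    if s[k] == k and not others_point_to_k:
        return 0  # x_k claims exemplar status but no other case joins it
    if s[k] != k and others_point_to_k:
        return 0  # some case points to x_k, yet x_k is not an exemplar
    return 1

def joint(L, s):
    """P = prod_i L[i, s_i] * prod_k f_k(s): zero for invalid configurations."""
    p = 1.0
    for i, si in enumerate(s):
        p *= L[i, si]
    for k in range(len(s)):
        p *= f_k(s, k)
    return p
```

With uniform likelihoods, only configurations obeying both implications above get nonzero probability.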
The factor graph [10] in Fig. 1 describes this factorization. Each black box corresponds to
a term in the factorization, and it is connected to the variables on which the term depends.
While exact inference in this factor graph is NP-hard, approximate inference algorithms can
be used to infer the s variables. It is straightforward to show that the updates for affinity
propagation correspond to the message updates for the sum-product algorithm or loopy
belief propagation (see [10] for a tutorial). The responsibilities correspond to messages sent from the $s$'s to the $f$'s, while the availabilities correspond to messages sent from the $f$'s to the $s$'s. If the goal is to find $K$ exemplars, an additional constraint $g(s_1, \ldots, s_N) = [K = \sum_{k=1}^{N} [s_k = k]]$ can be included, where $[\,\cdot\,]$ indicates Iverson's notation ($[\text{true}] = 1$ and $[\text{false}] = 0$). Messages can be propagated through this function in linear time, by
implementing it as a Markov chain that accumulates exemplar counts.
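The hard constraint $g$ itself is just an Iverson bracket on the exemplar count (a 0-indexed sketch with our own function name; passing messages through it efficiently is exactly the count-accumulating chain mentioned above):

```python
def g(s, K):
    """Iverson bracket [K == number of self-exemplars in s] (0-indexed sketch)."""
    n_exemplars = sum(1 for k, sk in enumerate(s) if sk == k)
    return 1 if n_exemplars == K else 0
```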
Max-product affinity propagation. Max-product affinity propagation can be derived
as an instance of the max-product algorithm, instead of the sum-product algorithm. The
update equations for the affinities are modified and maximizations are used instead of
summations. An advantage of max-product affinity propagation is that the algorithm is
invariant to multiplicative constants in the log-likelihoods.
4
Image segmentation
A sensible model-based approach to image segmentation is to imagine that each patch in
the image originates from one of a small number of prototype texture patches. The main
difficulty is that in addition to standard additive or multiplicative pixel-level noise, another
prevailing form of noise is due to transformations of the image features, and in particular
translations.
Figure 2: Segmentation of non-aligned gray-scale characters. Patches clustered by affinity propagation and K-medoids are colored according to classification (centers shown below solutions). Affinity propagation achieves a near-best score compared to 1000 runs of K-medoids.

Pair-wise affinity-based techniques, and in particular spectral clustering, have been employed with some success [4, 9], with the main disadvantage being that without an underlying model there is no sound basis for selecting good class representatives. Having a model with
class representatives enables efficient synthesis (generation) of patches, and classification
of test patches ? requiring only K comparisons (to class centers) rather than N comparisons
(to training cases).
We present results for segmenting two image types. First, as a toy example, we segment
an image containing many noisy examples of the letters "N", "I", "P" and "S" (see Fig. 2).
The original image is gray-scale with resolution $216 \times 240$ and intensities ranging from 0 (background color, white) to 1 (foreground color, black). Each training case $x_i$ is a $24 \times 24$ image patch and $x_i^m$ is the $m$th pixel in the patch. To account for translations, we include a hidden 2-D translation variable $T$. The match between patch $x_i$ and patch $x_k$ is measured by $\sum_m x_i^m f^m(x_k, T)$, where $f(x_k, T)$ is the patch obtained by applying a 2-D translation $T$ plus cropping to patch $x_k$. $f^m$ is the $m$th pixel in the translated, cropped patch. This
metric is used in the likelihood function:
$$L_{ik} \propto \sum_T p(T)\, e^{\left(\sum_m x_i^m f^m(x_k, T)\right)/\lambda \bar{x}_i} \approx e^{\max_T \left(\sum_m x_i^m f^m(x_k, T)\right)/\lambda \bar{x}_i},$$
where $\bar{x}_i = \frac{1}{24^2}\sum_m x_i^m$ is used to normalize the match by the amount of ink in $x_i$. $\lambda$ controls how strictly $x_i$ should match $x_k$ to have high likelihood. Max-product affinity propagation is independent of the choice of $\lambda$, and for sum-product affinity propagation we quite arbitrarily chose $\lambda = 1$. The exemplar priors $L_{kk}$ were set to $\mathrm{median}_{i,\, k \neq i}\, L_{ik}$.
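The translation-invariant match $\max_T \sum_m x_i^m f^m(x_k, T)$ can be computed by brute force over a small window of shifts; this is an illustrative sketch (the shift range and helper name are our own choices, not from the paper):

```python
import numpy as np

def best_match(xi, xk, max_shift=2):
    """max over 2-D translations T of sum_m xi[m] * f^m(xk, T).

    f(xk, T) shifts patch xk by T and crops; pixels shifted out of range
    are treated as 0 (background).
    """
    H, W = xk.shape
    best = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.zeros_like(xk)
            # destination and source slices for a (dy, dx) shift with cropping
            ys = slice(max(dy, 0), H + min(dy, 0))
            xs = slice(max(dx, 0), W + min(dx, 0))
            yd = slice(max(-dy, 0), H + min(-dy, 0))
            xd = slice(max(-dx, 0), W + min(-dx, 0))
            shifted[ys, xs] = xk[yd, xd]
            best = max(best, float((xi * shifted).sum()))
    return best
```

For the likelihood above, `best_match(xi, xk)` would be divided by the ink normalizer and exponentiated.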
We cut the image in Fig. 2 into a $9 \times 10$ grid of non-overlapping $24 \times 24$ patches, computed
the pair-wise likelihoods, and clustered them into K = 4 classes using the greedy EM
algorithm (randomly chosen initial exemplars) and affinity propagation. (Max-product and
sum-product affinity propagation yielded identical results.) We then took a much larger set
of overlapping patches, classified them into the 4 categories, and then colored each pixel in
the image according to the most frequent class for the pixel. The results are shown in Fig. 2.
While affinity propagation is deterministic, the EM algorithm depends on initialization. So,
we ran the EM algorithm 1000 times and in Fig. 2 we plot the cumulative distribution of
the log P scores obtained by EM. The score for affinity propagation is also shown, and
achieves near-best performance (98th percentile).
We next analyzed the more natural $192 \times 192$ image shown in Fig. 3. Since there is no natural background color, we use mean-squared pixel differences in HSV color space to measure similarity between the $24 \times 24$ patches:
$$L_{ik} \propto e^{-\beta \min_T \sum_{m \in W} \left(x_i^m - f^m(x_k, T)\right)^2},$$
where $W$ is the set of indices corresponding to a $16 \times 16$ window centered in the patch and $f^m(x_k, T)$ is the same as above. As before, we arbitrarily set $\beta = 1$ and $L_{kk}$ to $\mathrm{median}_{i,\, k \neq i}\, L_{ik}$.
Figure 3: Segmentation results for several methods applied to a natural image. For methods
other than affinity propagation, many parameter settings were tried and the best segmentation selected. The histograms show the percentile in score achieved by affinity propagation
compared to 1000 runs of greedy EM, for different random training sets.
We cut the image in Fig. 3 into an $8 \times 8$ grid of non-overlapping $24 \times 24$ patches and clustered them into K = 6 classes using affinity propagation (both forms), greedy EM in our model, spectral clustering (using a normalized L-matrix based on a set of $29 \times 29$ overlapping patches), and mixtures of Gaussians.$^2$ For greedy EM, the affinity propagation algorithms, and mixtures of Gaussians, we then chose all possible $24 \times 24$ overlapping
patches and calculated the likelihoods of them given each of the 6 cluster centers, classifying each patch according to its maximum likelihood.
Fig. 3 shows the segmentations for the various methods, where the central pixel of each
patch is colored according to its class. Again, affinity propagation achieves a solution that
is near-best compared to one thousand runs of greedy EM.
5
Learning mixtures of gene models
Currently, an important problem in genomics research is the discovery of genes and gene
variants that are expressed as messenger RNAs (mRNAs) in normal tissues. In a recent
study [11], we used DNA-based techniques to identify 837,251 possible exons (?putative
exons?) in the mouse genome. For each putative exon, we used an Agilent microarray
probe to measure the amount of corresponding mRNA that was present in each of 12 mouse
tissues. Each 12-D vector, called an ?expression profile?, can be viewed as a feature vector
indicating the putative exon?s function. By grouping together feature vectors for nearby
probes, we can detect genes and variations of genes. Here, we compare affinity propagation
with hierarchical clustering, which was previously used to find gene structures [12].
Fig. 4a shows a normalized subset of the data and gives three examples of groups of nearby
2
For spectral clustering, we tried $\beta$ = 0.5, 1 and 2, and for each of these tried clustering using 6, 8, 10, 12 and 14 eigenvectors. We then visually picked the best segmentation ($\beta$ = 1, 10 eigenvectors).
The eigenvector features were clustered using EM in a mixture of Gaussians and out of 10 trials,
the solution with highest likelihood was selected. For mixtures of Gaussians applied directly to the
image patches, we picked the model with highest likelihood in 10 trials.
Figure 4: (a) A normalized subset of 837,251 tissue expression profiles (mRNA level versus tissue) for putative exons from the mouse genome (most profiles are much noisier
than these). (b) The true exon detection rate (in known genes) versus the false discovery
rate, for affinity propagation and hierarchical clustering.
feature vectors that are similar enough to provide evidence of gene units. The actual data
is generally much noisier, and includes multiplicative noise (exon probe sensitivity can
vary by two orders of magnitude), correlated additive noise (a probe can cross-hybridize in
a tissue-independent manner to background mRNA sources), and spurious additive noise
(due to a noisy measurement procedure and biological effects such as alternative splicing).
To account for noise, false putative exons, and the distance between exons in the same
gene, we used the following likelihood function:
$$L_{ij} = \alpha e^{-\alpha|i-j|}\left[\, q\, p_0(x_i) + (1-q) \int p(y, z, \sigma)\, \frac{e^{-\frac{1}{2\sigma^2}\sum_{m=1}^{12}\left(x_i^m - (y\, x_j^m + z)\right)^2}}{\sqrt{2\pi\sigma^2}^{\,12}}\, dy\, dz\, d\sigma \right]$$
$$\approx \alpha e^{-\alpha|i-j|}\left[\, q\, p_0(x_i) + (1-q) \max_{y, z, \sigma} p(y, z, \sigma)\, \frac{e^{-\frac{1}{2\sigma^2}\sum_{m=1}^{12}\left(x_i^m - (y\, x_j^m + z)\right)^2}}{\sqrt{2\pi\sigma^2}^{\,12}} \right],$$
where $x_i^m$ is the expression level for the $m$th tissue in the $i$th probe (in genomic order).
We found that in this application, the maximum is a sufficiently good approximation to
the integral. The distribution over the distance between probes in the same gene, $|i - j|$, is assumed to be geometric with parameter $\alpha$. $p_0(x_i)$ is a background distribution that
accounts for false putative exons and q is the probability of a false putative exon within a
gene. We assumed $y$, $z$ and $\sigma$ are independent and uniformly distributed.$^3$ The Bayesian prior probability that $x_k$ is an exemplar is set to $\theta \cdot p_0(x_k)$, where $\theta$ is a control knob used
to vary the sensitivity of the system.
Because of the term $\alpha e^{-\alpha|i-j|}$ and the additional assumption that genes on the same strand do not overlap, it is not necessary to propagate affinities between all $837{,}251^2$ pairs of training cases. We assume $L_{ij} = 0$ for $|i - j| > 100$, in which case it is not necessary to propagate affinities between $x_i$ and $x_j$. The assumption that genes do not overlap implies that if $s_i = k$, then $s_j = k$ for $j \in \{\min(i, k), \ldots, \max(i, k)\}$. It turns out that
this constraint causes the dependence structure in the update equations for the affinities to
reduce to a chain, so affinities need only be propagated forward and backward along the
genome. After affinity propagation is used to automatically select the number of mixture
3
Based on the experimental procedure and a set of previously-annotated genes (RefSeq), we estimated $\alpha = 0.05$, $q = 0.7$, $y \in [0.025, 40]$, $z \in [-\zeta, \zeta]$ (where $\zeta = \max_{i,m} x_i^m$), $\sigma \in (0, \zeta]$. We
used a mixture of Gaussians for p0 (xi ), which was learned from the entire training set.
components and identify the mixture centers and the probes that belong to them (genes),
each probe xi is labeled as an exon or a non-exon depending on which of the two terms in
the above likelihood function ($q \cdot p_0(x_i)$ or the large term to its right) is larger.
Fig. 4b shows the fraction of exons in known genes detected by affinity propagation
versus the false detection rate. The curve is obtained by varying the sensitivity parameter,
$\theta$. The false detection rate was estimated by randomly permuting the order of the
probes in the training set, and applying affinity propagation. Even for quite low false
discovery rates, affinity propagation identifies over one third of the known exons. Using
a variety of metrics, including the above metric, we also used hierarchical clustering
to detect exons. The performance of hierarchical clustering using the metric with
highest sensitivity is also shown. Affinity propagation has significantly higher sensitivity, e.g., achieving a five-fold increase in true detection rate at a false detection rate of 0.4%.
6
Computational efficiency
The following table compares the MATLAB execution times of our implementations of
the methods we compared on the problems we studied. For methods that first compute
a likelihood or affinity matrix, we give the timing of this computation first. Techniques
denoted by "*" were run many times to obtain the shown results, but the given time is for
a single run.
         Affinity Prop    K-medoids*      Spec Clust*     MOG EM*   Hierarch Clust
NIPS     12.9 s + 2.0 s   12.9 s + .2 s   -               -         -
Dog      12.0 s + 1.5 s   12.0 s + 0.1 s  12.0 s + 29 s   3.3 s     -
Genes    16 m + 43 m      -               -               -         16 m + 28 m
7
Summary
An advantage of affinity propagation is that the update rules are deterministic, quite simple,
and can be derived as an instance of the sum-product algorithm in a factor graph. Using
challenging applications, we showed that affinity propagation obtains better solutions (in
terms of percentile log-likelihood, visual quality of image segmentation and sensitivity-to-specificity) than other techniques, including K-medoids, spectral clustering, Gaussian
mixture modeling and hierarchical clustering.
To our knowledge, affinity propagation is the first algorithm to combine advantages of
pair-wise clustering methods that make use of bottom-up evidence and model-based
methods that seek to fit top-down global models to the data.
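As an illustration of the deterministic update rules referred to above, the sketch below implements the standard responsibility/availability messages of affinity propagation. This follows the now-common formulation of the algorithm; the damping constant, iteration count, and toy data are assumptions for this example, not values taken from the paper.

```python
import numpy as np

def affinity_propagation(S, n_iter=200, damping=0.5):
    """Toy affinity propagation: S[i, k] is the similarity of point i to
    candidate exemplar k; S[k, k] is the exemplar preference."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    for _ in range(n_iter):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())        # keep r(k,k) as-is
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()             # a(k,k) takes no min with 0
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)               # exemplar chosen by each point

# two well-separated 1-D clusters; negative squared distance as similarity
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
np.fill_diagonal(S, np.median(S))                 # a common preference choice
labels = affinity_propagation(S)
```

On this toy input the points in each cluster agree on a single exemplar drawn from their own cluster.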
References
[1] CM Bishop. Neural Networks for Pattern Recognition. Oxford University Press, NY, 1995.
[2] KA Heller, Z Ghahramani. Bayesian hierarchical clustering. ICML, 2005.
[3] M Meila, J Shi. Learning segmentation by random walks. NIPS 14, 2001.
[4] J Shi, J Malik. Normalized cuts and image segmentation. Proc CVPR, 731-737, 1997.
[5] A Ng, M Jordan, Y Weiss. On spectral clustering: Analysis and an algorithm. NIPS 14, 2001.
[6] N Shental, A Zomet, T Hertz, Y Weiss. Pairwise clustering and graphical models. NIPS 16, 2003.
[7] R Rosales, BJ Frey. Learning generative models of affinity matrices. Proc UAI, 2003.
[8] M Charikar, S Guha, A Tardos, DB Shmoys. A constant-factor approximation algorithm for the k-median problem. J Comp and Sys Sci, 65:1, 129-149, 2002.
[9] J Malik et al. Contour and texture analysis for image segmentation. IJCV 43:1, 2001.
[10] FR Kschischang, BJ Frey, H-A Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans Info Theory 47:2, 498-519, 2001.
[11] BJ Frey, QD Morris, M Robinson, TR Hughes. Finding novel transcripts in high-resolution genome-wide microarray data using the GenRate model. Proc RECOMB 2005, 2005.
[12] D. D. Shoemaker et al. Experimental annotation of the human genome using microarray technology. Nature 409, 922-927, 2001.
Value Function Approximation with Diffusion
Wavelets and Laplacian Eigenfunctions
Sridhar Mahadevan
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Mauro Maggioni
Program in Applied Mathematics
Department of Mathematics
Yale University
New Haven, CT 06511
[email protected]
Abstract
We investigate the problem of automatically constructing efficient representations or basis functions for approximating value functions based
on analyzing the structure and topology of the state space. In particular, two novel approaches to value function approximation are explored
based on automatically constructing basis functions on state spaces that
can be represented as graphs or manifolds: one approach uses the eigenfunctions of the Laplacian, in effect performing a global Fourier analysis
on the graph; the second approach is based on diffusion wavelets, which
generalize classical wavelets to graphs using multiscale dilations induced
by powers of a diffusion operator or random walk on the graph. Together,
these approaches form the foundation of a new generation of methods for
solving large Markov decision processes, in which the underlying representation and policies are simultaneously learned.
1 Introduction
Value function approximation (VFA) is a well-studied problem: a variety of linear and
nonlinear architectures have been studied, which are not automatically derived from the
geometry of the underlying state space, but rather handcoded in an ad hoc trial-and-error
process by a human designer [1]. A new framework for VFA called proto-reinforcement
learning (PRL) was recently proposed in [7, 8, 9]. Instead of learning task-specific value
functions using a handcoded parametric architecture, agents learn proto-value functions, or
global basis functions that reflect intrinsic large-scale geometric constraints that all value
functions on a manifold [11] or graph [3] adhere to, using spectral analysis of the self-adjoint Laplace operator. This approach also yields new control learning algorithms called representation policy iteration (RPI) where both the underlying representations (basis functions) and policies are simultaneously learned. Laplacian eigenfunctions also provide ways
representation policy iteration (RPI) where both the underlying representations (basis functions) and policies are simultaneously learned. Laplacian eigenfunctions also provide ways
of automatically decomposing state spaces since they reflect bottlenecks and other global
geometric invariants.
In this paper, we extend the earlier Laplacian approach in a new direction using the recently
proposed diffusion wavelet transform (DWT), which is a compact multi-level representation of Markov diffusion processes on manifolds and graphs [4, 2]. Diffusion wavelets
provide an interesting alternative to global Fourier eigenfunctions for value function approximation, since they encapsulate all the traditional advantages of wavelets: basis functions have compact support, and the representation is inherently hierarchical since it is
based on multi-resolution modeling of processes at different spatial and temporal scales.
2 Technical Background
This paper uses the framework of spectral graph theory [3] to build basis representations
for smooth (value) functions on graphs induced by Markov decision processes. Given
any graph G, an obvious but poor choice of representation is the "table-lookup" orthonormal encoding, where φ(i) = [0 . . . i . . . 0] is the encoding of the ith node in the graph.
This representation does not reflect the topology of the specific graph under consideration. Polynomials are another popular choice of orthonormal basis functions [5], where
φ(s) = [1 s . . . s^k] for some fixed k. This encoding has two disadvantages: it is numerically unstable for large graphs, and is dependent on the ordering of vertices. In this paper,
we outline a new approach to the problem of building basis functions on graphs using
Laplacian eigenfunctions and diffusion wavelets.
A finite Markov decision process (MDP) M = (S, A, P^a_{ss'}, R^a_{ss'}) is defined as a finite set
of states S, a finite set of actions A, a transition model P^a_{ss'} specifying the distribution over
future states s' when an action a is performed in state s, and a corresponding reward model
R^a_{ss'} specifying a scalar cost or reward [10]. A state value function is a mapping S → R,
or equivalently a vector in R^{|S|}. Given a policy π : S → A mapping states to actions,
its corresponding value function V^π specifies the expected long-term discounted sum of
rewards received by the agent in any given state s when actions are chosen using the policy.
Any optimal policy π* defines the same unique optimal value function V*, which satisfies
the nonlinear constraints

    V*(s) = max_a Σ_{s'} P^a_{ss'} ( R^a_{ss'} + γ V*(s') )
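As an illustration, the Bellman optimality constraints above can be solved by simple value iteration; the 3-state MDP below is a toy example whose transitions and rewards are assumptions for this sketch, not taken from the paper.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# toy transition model P[a, s, s'] and reward model R[a, s, s']
P = np.zeros((n_actions, n_states, n_states))
P[0] = [[1, 0, 0], [1, 0, 0], [0, 1, 0]]   # action 0: move left (stick at 0)
P[1] = [[0, 1, 0], [0, 0, 1], [0, 0, 1]]   # action 1: move right (stick at 2)
R = np.zeros((n_actions, n_states, n_states))
R[1, 1, 2] = 1.0                           # reward for entering state 2 from 1

V = np.zeros(n_states)
for _ in range(200):
    # V(s) <- max_a sum_s' P^a_{ss'} (R^a_{ss'} + gamma V(s'))
    Q = (P * (R + gamma * V[None, None, :])).sum(axis=2)   # Q[a, s]
    V = Q.max(axis=0)
```

Because the Bellman backup is a γ-contraction, the iterates converge to the unique V* satisfying the constraints above.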
For any MDP, any policy induces a Markov chain that partitions the states into classes:
transient states are visited initially but not after a finite time, and recurrent states are visited
infinitely often. In ergodic MDPs, the set of transient states is empty. The construction of
basis functions below assumes that the Markov chain induced by a policy is a reversible
random walk on the state space. While some policies may not induce such Markov chains,
the set of basis functions learned from a reversible random walk can still be useful in
approximating value functions for (reversible or non-reversible) policies. In other words,
the construction of the basis functions can be considered an off-policy method: just as
in Q-learning where the exploration policy differs from the optimal learned policy, in the
proposed approach the actual MDP dynamics may induce a different Markov chain than the
one analyzed to build representations. Reversible random walks greatly simplify spectral
analysis since such random walks are similar to a symmetric operator on the state space.
2.1 Smooth Functions on Graphs and Value Function Representation
We assume the state space can be modeled as a finite undirected weighted graph (G, E, W),
but the approach generalizes to Riemannian manifolds. We define x ∼ y to mean an edge
between x and y, and the degree of x to be d(x) = Σ_{x∼y} w(x, y). D will denote the
diagonal matrix defined by D_xx = d(x), and W the matrix defined by W_xy = w(x, y) =
w(y, x). The L2 norm of a function on G is ||f||_2^2 = Σ_{x∈G} |f(x)|^2 d(x). The gradient
of a function is ∇f(i, j) = w(i, j)(f(i) − f(j)) if there is an edge e connecting i to j, and 0
otherwise. The smoothness of a function on a graph can be measured by the Sobolev norm

    ||f||_{H2}^2 = ||f||_2^2 + ||∇f||_2^2 = Σ_x |f(x)|^2 d(x) + Σ_{x∼y} |f(x) − f(y)|^2 w(x, y) .    (1)
The first term in this norm controls the size (in terms of L2 -norm) for the function f , and
the second term controls the size of the gradient. The smaller ||f ||H2 , the smoother is f .
We will assume that the value functions we consider have small H2 norms, except at a
few points, where the gradient may be large. Important variations exist, corresponding to
different measures on the vertices and edges of G.
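As a concrete check of these definitions, the sketch below computes the degree-weighted L2 norm, the gradient term, and the Sobolev norm of Eq. (1) on a small weighted path graph; the graph and function values are arbitrary illustrations.

```python
import numpy as np

# a small weighted undirected path graph on 4 vertices
W = np.array([[0, 1, 0, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = W.sum(axis=1)                       # degrees d(x)
f = np.array([0.0, 1.0, 1.0, 3.0])      # a function on the vertices

l2_sq = np.sum(f ** 2 * d)              # ||f||_2^2 = sum_x |f(x)|^2 d(x)
grad_sq = 0.0                           # sum over edges of w(x,y)(f(x)-f(y))^2
for x in range(4):
    for y in range(x + 1, 4):
        grad_sq += W[x, y] * (f[x] - f[y]) ** 2
sobolev_sq = l2_sq + grad_sq            # ||f||_{H2}^2 as in Eq. (1)
```

The gradient term also equals the quadratic form f^T (D − W) f of the combinatorial Laplacian introduced in Section 3, which ties smoothness directly to the operator used later.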
Classical techniques, such as value iteration and policy iteration [10], represent value functions using an orthonormal basis (e_1, . . . , e_{|S|}) for the space R^{|S|} [1]. For a fixed precision ε, a value function V^π can be approximated as

    ||V^π − Σ_{i∈S(ε)} α_i e_i|| ≤ ε

with α_i = ⟨V^π, e_i⟩ since the e_i's are orthonormal, and the approximation is measured
in some norm, such as L2 or H2. The goal is to obtain representations in which the index
set S(ε) in the summation is as small as possible, for a given approximation error ε. This
hope is well founded at least when V^π is smooth or piecewise smooth, since in this case it
should be compressible in some well chosen basis {e_i}.
3 Function Approximation using Laplacian Eigenfunctions
The combinatorial Laplacian L [3] is defined as

    Lf(x) = Σ_{y∼x} w(x, y)(f(x) − f(y)) = (D − W)f .

Often one considers the normalized Laplacian 𝓛 = D^{−1/2}(D − W)D^{−1/2}, which has spectrum
in [0, 2]. This Laplacian is related to the notion of smoothness as above, since
⟨f, Lf⟩ = Σ_x f(x) Lf(x) = Σ_{x,y} w(x, y)(f(x) − f(y))^2 = ||∇f||_2^2, which should be compared
with (1). Functions that satisfy the equation Lf = 0 are called harmonic. The Spectral
Theorem can be applied to L (or 𝓛), yielding a discrete set of eigenvalues 0 ≤ λ_0 ≤ λ_1 ≤
. . . ≤ λ_i ≤ . . . and a corresponding orthonormal basis of eigenfunctions {φ_i}_{i≥0}, solutions to
the eigenvalue problem Lφ_i = λ_i φ_i.
The eigenfunctions of the Laplacian can be viewed as an orthonormal basis of global
Fourier smooth functions that can be used for approximating any value function on a
graph. These basis functions capture large-scale features of the state space, and are particularly sensitive to "bottlenecks", a phenomenon widely studied in Riemannian geometry
and spectral graph theory [3]. Observe that φ_i satisfies ||∇φ_i||_2^2 = λ_i. In fact, the variational characterization of eigenvectors shows that φ_i is the normalized function orthogonal
to φ_0, . . . , φ_{i−1} with minimal ||∇φ_i||_2. Hence the projection of a function f on S onto the
top k eigenvectors of the Laplacian is the smoothest approximation to f, in the sense of the
norm in H2. A potential drawback of Laplacian approximation is that it detects only global
smoothness, and may poorly approximate a function which is not globally smooth but only
piecewise smooth, or with different smoothness in different regions. These drawbacks are
addressed in the context of analysis with diffusion wavelets, and in fact partly motivated
their construction.
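As an illustrative sketch (using a path graph and the standard inner product rather than the degree-weighted one), projecting a smooth function onto the lowest-order Laplacian eigenvectors gives approximations whose error shrinks as more eigenfunctions are used:

```python
import numpy as np

n = 50
W = np.zeros((n, n))                    # unweighted path graph (1-D state space)
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian D - W

lam, phi = np.linalg.eigh(L)            # eigenvalues ascending, orthonormal columns
f = np.exp(-0.1 * np.arange(n))         # smooth function standing in for a value function

def project(k):
    """Projection of f onto the k smoothest eigenfunctions."""
    B = phi[:, :k]
    return B @ (B.T @ f)

err5 = np.linalg.norm(f - project(5))
err20 = np.linalg.norm(f - project(20))
```

The first eigenvalue is zero (the constant harmonic function on a connected graph), and the projection error decreases monotonically in k.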
4 Function Approximation using Diffusion Wavelets
Diffusion wavelets were introduced in [4, 2], in order to perform a fast multiscale analysis
of functions on a manifold or graph, generalizing wavelet analysis and associated signal
processing techniques (such as compression or denoising) to functions on manifolds and
graphs. They allow the fast and accurate computation of high powers of a Markov chain
P on the manifold or graph, including direct computation of the Green's function (or fundamental matrix) of the Markov chain, (I − P)^{−1}, which can be used to solve Bellman's
equation. Here, "fast" means that the number of operations required is O(|S|), up to logarithmic factors.

DiffusionWaveletTree (H_0, Φ_0, J, ε):
// H_0: symmetric conjugate to random walk matrix, represented on the basis Φ_0
// Φ_0: initial basis (usually Dirac's δ-function basis), one function per column
// J: number of levels to compute
// ε: precision
for j from 0 to J do,
  1. Compute sparse factorization H_j ≈ Q_j R_j, with Q_j orthogonal.
  2. Φ_{j+1} ← Q_j = H_j R_j^{−1} and H_{j+1} := [H^{2^{j+1}}]_{Φ_{j+1}}^{Φ_{j+1}} ← R_j R_j^*.
  3. Compute sparse factorization I − Φ_{j+1} Φ_{j+1}^* = Q'_j R'_j, with Q'_j orthogonal.
  4. Ψ_{j+1} ← Q'_j.
end

Figure 1: Pseudo-code for constructing a Diffusion Wavelet Tree
Space constraints permit only a brief description of the construction of diffusion wavelet
trees. More details are provided in [4, 2]. The input to the algorithm is a "precision"
parameter ε > 0, and a weighted graph (G, E, W). We can assume that G is connected,
otherwise we can consider each connected component separately. The construction is based
on using the natural random walk P = D^{−1}W on a graph and its powers to "dilate", or
"diffuse", functions on the graph, and then defining an associated coarse-graining of the
graph. We symmetrize P by conjugation and take powers to obtain

    H^t = D^{1/2} P^t D^{−1/2} = (D^{−1/2} W D^{−1/2})^t = (I − 𝓛)^t = Σ_{i≥0} (1 − λ_i)^t φ_i(·) φ_i(·)    (2)

where {λ_i} and {φ_i} are the eigenvalues and eigenfunctions of the Laplacian as above.
Hence the eigenfunctions of H^t are again φ_i and the i-th eigenvalue is (1 − λ_i)^t. We assume
that H^1 is a sparse matrix, and that the spectrum of H^1 has rapid decay.
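Equation (2) can be verified numerically on a small random graph (an illustration, not part of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
d = W.sum(axis=1)
Dm = np.diag(d ** -0.5)
Lnorm = np.eye(n) - Dm @ W @ Dm          # normalized Laplacian
H = np.eye(n) - Lnorm                    # H^1 = D^{-1/2} W D^{-1/2}
lam, phi = np.linalg.eigh(Lnorm)

t = 5
Ht = np.linalg.matrix_power(H, t)
# right-hand side of Eq. (2): sum_i (1 - lambda_i)^t phi_i phi_i^T
spectral = sum((1 - l) ** t * np.outer(v, v) for l, v in zip(lam, phi.T))
```

The same H^t also equals the conjugated power D^{1/2} P^t D^{-1/2} of the natural random walk P = D^{-1} W.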
A diffusion wavelet tree consists of orthogonal diffusion scaling functions Φ_j that are
smooth bump functions, with some oscillations, at scale roughly 2^j (measured with respect
to geodesic distance, for small j), and orthogonal wavelets Ψ_j that are smooth localized oscillatory functions at the same scale. The scaling functions Φ_j span a subspace V_j, with the
property that V_{j+1} ⊆ V_j, and the span of Ψ_j, W_j, is the orthogonal complement of V_{j+1} into
V_j. This is achieved by using the dyadic powers H^{2^j} as "dilations", to create smoother
and wider (always in a geodesic sense) "bump" functions (which represent densities for the
symmetrized random walk after 2^j steps), and orthogonalizing and downsampling appropriately to transform sets of "bumps" into orthonormal scaling functions.
Computationally (Figure 1), we start with the basis Φ_0 = I and the matrix H_0 := H^1,
sparse by assumption, and construct an orthonormal basis of well-localized functions for
its range (the space spanned by the columns), up to precision ε, through a variation of
the Gram-Schmidt orthonormalization scheme, described in [4]. In matrix form, this is a
sparse factorization H_0 ≈ Q_0 R_0, with Q_0 orthonormal. Notice that H_0 is |G| × |G|,
but in general Q_0 is |G| × |G^(1)| and R_0 is |G^(1)| × |G|, with |G^(1)| ≤ |G|. In fact
|G^(1)| is approximately equal to the number of singular values of H_0 larger than ε. The
columns of Q_0 are an orthonormal basis of scaling functions Φ_1 for the range of H_0, written
as a linear combination of the initial basis Φ_0. We can now write H_0^2 on the basis Φ_1:
H_1 := [H^2]_{Φ_1}^{Φ_1} = Q_0^* H_0 H_0 Q_0 = R_0 R_0^*, where we used H_0 = H_0^*. This is a compressed
representation of H_0^2 acting on the range of H_0, and it is a |G^(1)| × |G^(1)| matrix. We
proceed by induction: at scale j we have an orthonormal basis Φ_j for the rank of H^{2^j − 1},
up to precision jε, represented as a linear combination of elements in Φ_{j−1}. This basis
contains |G^(j)| functions, where |G^(j)| is comparable with the number of eigenvalues λ_j of
H_0 such that λ_j^{2^j − 1} ≥ ε. We have the operator H^{2^j} represented on Φ_j by a |G^(j)| × |G^(j)|
matrix H_j, up to precision jε. We compute a sparse decomposition of H_j ≈ Q_j R_j, and
obtain the next basis Φ_{j+1} = Q_j = H_j R_j^{−1} and represent H^{2^{j+1}} on this basis by the
matrix H_{j+1} := [H^{2^{j+1}}]_{Φ_{j+1}}^{Φ_{j+1}} = Q_j^* H_j H_j Q_j = R_j R_j^*.
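One level of this compression can be sketched numerically; here a truncated SVD stands in for the sparse rank-revealing factorization of the actual construction, and the toy operator and precision are assumptions for this example:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy symmetric operator with rapidly decaying spectrum
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))
lam = 0.5 ** np.arange(40)               # eigenvalues 1, 1/2, 1/4, ...
H0 = Q @ np.diag(lam) @ Q.T

eps = 1e-3
U, s, Vt = np.linalg.svd(H0)
r = int(np.sum(s > eps))                 # numerical rank |G^(1)| at precision eps
Phi1 = U[:, :r]                          # orthonormal basis for the range of H0
H1 = Phi1.T @ (H0 @ H0) @ Phi1           # H^2 compressed onto the new basis
```

The compressed H1 is a small r × r matrix that represents H0^2 on the range of H0 up to the chosen precision.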
Wavelet bases for the spaces W_j can be built analogously by factorizing I_{V_j} − Q_{j+1} Q_{j+1}^*,
which is the orthogonal projection on the complement of V_{j+1} into V_j. The spaces can
be further split to obtain wavelet packets [2]. A Fast Diffusion Wavelet Transform allows expanding in O(n) (where n is the number of vertices) computations any function
in the wavelet, or wavelet packet, basis, and efficiently searching for the most suitable basis
set. Diffusion wavelets and wavelet packets are a very efficient tool for representation and
approximation of functions on manifolds and graphs [4, 2], generalizing to these general
spaces the nice properties of wavelets that have been so successfully applied to similar tasks
in Euclidean spaces.
Diffusion wavelets allow computing H^{2^k} f for any fixed f, in order O(kn). This is nontrivial because while the matrix H is sparse, large powers of it are not, and the computation
H · (H · . . . · (H(Hf)) . . .) involves 2^k matrix-vector products. As a notable consequence,
this yields a fast algorithm for computing the Green's function, or fundamental matrix,
associated with the Markov process H, via

    (I − H^1)^{−1} f = Σ_{k≥0} H^k f = Π_{k≥0} (I + H^{2^k}) f .

In a similar way one can compute (I − P)^{−1}. For large classes of Markov chains we can
perform this computation in time O(n), in a direct (as opposed to iterative) fashion. This is
remarkable since in general the matrix (I − H^1)^{−1} is full and only writing down the entries
would take time O(n^2). It is the multiscale compression scheme that allows us to efficiently
represent (I − H^1)^{−1} in compressed form, taking advantage of the smoothness of the entries
of the matrix. This is discussed in general in [4]. We use this approach to develop a faster
policy evaluation step for solving MDPs, described in [6].
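The dyadic-product identity behind this claim, Σ_{k=0}^{2^K − 1} H^k = Π_{j<K} (I + H^{2^j}), can be checked directly; the sketch below is a dense toy verification of the algebra, not the compressed multiscale computation itself:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.random((8, 8)); M = (M + M.T) / 2
H = 0.9 * M / np.linalg.norm(M, 2)       # symmetric with spectral radius 0.9 < 1

K = 20
approx = np.eye(8)
P2 = H.copy()
for _ in range(K):
    approx = approx @ (np.eye(8) + P2)   # multiply in (I + H^{2^j})
    P2 = P2 @ P2                         # square: H^{2^j} -> H^{2^{j+1}}
green = np.linalg.inv(np.eye(8) - H)     # (I - H)^{-1}
```

With K squarings the product captures the first 2^K terms of the Neumann series, so for a contraction the result matches the Green's function to machine precision.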
5 Experiments
Figure 2 contrasts Laplacian eigenfunctions and diffusion wavelet basis functions in a three
room grid world environment. Laplacian eigenfunctions were produced by solving Lf =
λf, where L is the combinatorial Laplacian, whereas diffusion wavelet basis functions
were produced using the algorithm described in Figure 1. The input to both methods is an
undirected graph, where edges connect states reachable through a single (reversible) action.
Such graphs can be easily learned from a sample of transitions, such as that generated by
RL agents while exploring the environment in early phases of policy learning. Note how
the intrinsic multi-room environment is reflected in the Laplacian eigenfunctions. The
Laplacian eigenfunctions are globally defined over the entire state space, whereas diffusion
wavelet basis functions are progressively more compact at higher levels, beginning at the
lowest level with the table-lookup representation, and converging at the highest level to
basis functions similar to Laplacian eigenfunctions. Figure 3 compares the approximations
produced in a two-room grid world MDP with 630 states. These experiments illustrate
the superiority of diffusion wavelets: in the first experiment (top row), diffusion wavelets
handily outperform Laplacian eigenfunctions because the function is highly nonlinear near
Figure 2: Examples of Laplacian eigenfunctions (left) and diffusion wavelet basis functions
(right) computed using the graph Laplacian on a complete undirected graph of a deterministic grid world environment with reversible actions.
the goal, but mostly linear elsewhere. The eigenfunctions contain a lot of ripples in the flat
region causing a large residual error. In the second experiment (bottom row), Laplacian
eigenfunctions work significantly better because the value function is globally smooth.
Even here, the superiority of diffusion wavelets is clear.
[Figure 3 plots omitted: panels show the value functions, their approximations, and log-scale least-squares error curves for diffusion wavelet packets (WP) versus Laplacian eigenfunctions (Eig).]
Figure 3: Left column: value functions in a two room grid world MDP, where each room
has 21 × 15 states connected by a door in the middle of the common wall. Middle two
columns: approximations produced by 5 diffusion wavelet bases and Laplacian eigenfunctions. Right column: least-squares approximation error (log scale) using up to 200 basis
functions (bottom curve: diffusion wavelets; top curve: Laplacian eigenfunctions). In the
top row, the value function corresponds to a random walk. In the bottom row, the value
function corresponds to the optimal policy.
5.1 Control Learning using Representation Policy Iteration
This section describes results of using the automatically generated basis functions inside
a control learning algorithm, in particular the Representation Policy Iteration (RPI) algorithm [8]. RPI is an approximate policy iteration algorithm where the basis functions
φ(s, a), handcoded in other methods such as LSPI [5], are learned from a random walk of
transitions by computing the graph Laplacian and then computing the eigenfunctions or the
diffusion wavelet bases as described above. One striking property of the eigenfunction and
diffusion wavelet basis functions is their ability to reflect nonlinearities arising from "bottlenecks" in the state space. Figure 4 contrasts the value function approximation produced
by RPI using Laplacian eigenfunctions with that produced by a polynomial approximator.
The polynomial approximator yields a value function that is "blind" to the nonlinearities
produced by the walls in the two room grid world MDP.
Figure 4: This figure compares the value functions produced by RPI using Laplacian
eigenfunctions with that produced by LSPI using a polynomial approximator in a two
room grid world MDP with a "bottleneck" region representing the door connecting the two
rooms. The Laplacian basis functions on the left clearly capture the nonlinearity arising
from the bottleneck, whereas the polynomial approximator on the right smooths the value
function across the walls as it is "blind" to the large-scale geometry of the environment.
Table 1 compares the performance of diffusion wavelets and Laplacian eigenfunctions using RPI on the classic chain MDP from [5]. Here, an initial random walk of 5000 steps
was carried out to generate the basis functions in a 50 state chain. The chain MDP is a
sequential open (or closed) chain of varying number of states, where there are two actions
for moving left or right along the chain. In the experiments shown, a reward of 1 was provided in states 10 and 41. Given a fixed k, the encoding φ(s) of a state s for Laplacian
eigenfunctions is the vector comprised of the values of the k lowest-order eigenfunctions
at state s. For diffusion wavelets, all the basis functions at level k were evaluated at state
s to produce the encoding.
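The chain-MDP encoding just described can be sketched as follows; the target function fitted here is an illustrative curve peaked at the rewarded states 10 and 41, not the true value function, and the RPI/LSPI machinery itself is omitted:

```python
import numpy as np

n = 50
W = np.zeros((n, n))                     # open 50-state chain
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam, phi = np.linalg.eigh(L)             # smoothest eigenfunctions first

def encode(state, k):
    """phi(state): the k lowest-order eigenfunctions evaluated at that state."""
    return phi[state, :k]

# illustrative target peaked near the rewarded states 10 and 41
s = np.arange(n)
v = np.exp(-0.2 * np.abs(s - 10)) + np.exp(-0.2 * np.abs(s - 41))

def fit_error(k):
    B = phi[:, :k]                       # least-squares fit in the k-dim basis
    return np.linalg.norm(v - B @ np.linalg.lstsq(B, v, rcond=None)[0])
```

Increasing k enlarges the basis and monotonically reduces the least-squares fit error, mirroring the trade-off between convergence time and policy quality observed in Table 1.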
Method        #Trials  Error      Method           #Trials  Error
RPI DF (5)    4.4      2.4        LSPI RBF (6)     3.8      20.8
RPI DF (14)   6.8      4.8        LSPI RBF (14)    4.4      2.8
RPI DF (19)   8.2      0.6        LSPI RBF (26)    6.4      2.8
RPI Lap (5)   4.2      3.8        LSPI Poly (5)    4.2      4
RPI Lap (15)  7.2      3          LSPI Poly (15)   1        34.4
RPI Lap (25)  9.4      2          LSPI Poly (25)   1        36
Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using handcoded polynomial and radial basis functions on a
50 state chain graph MDP.
Each row reflects the performance of either RPI using learned basis functions or LSPI with
a handcoded basis function (values in parentheses indicate the number of basis functions
used for each architecture). The two numbers reported are steps to convergence and the
error in the learned policy (number of incorrect actions), averaged over 5 runs. Laplacian
and diffusion wavelet basis functions provide a more stable performance at both the low
end and at the higher end, as compared to the handcoded basis functions. As the number of
basis functions are increased, RPI with Laplacian basis functions takes longer to converge,
but learns a more accurate policy. Diffusion wavelets converge slower as the number of
basis functions is increased, giving the best results overall with 19 basis functions. Unlike
Laplacian eigenfunctions, the policy error is not monotonically decreasing as the number
of bases functions is increased. This result is being investigated. LSPI with RBF is unstable
at the low end, converging to a very poor policy for 6 basis functions. LSPI with a 5 degree
polynomial approximator works reasonably well, but its performance noticeably degrades
at higher degrees, converging to a very poor policy in one step for k = 15 and k = 25.
6 Future Work
We are exploring many extensions of this framework, including extensions to factored
MDPs, approximating action value functions as well as large state spaces by exploiting
symmetries defined by a group of automorphisms of the graph. These enhancements will
facilitate efficient construction of eigenfunctions and diffusion wavelets. For large state
spaces, one can randomly subsample the graph, construct the eigenfunctions of the Laplacian or the diffusion wavelets on the subgraph, and then interpolate these functions using
the Nyström approximation and related low-rank linear algebraic methods. In experiments
on the classic inverted pendulum control task, the Nyström approximation yielded excellent results compared to radial basis functions, learning a more stable policy with a smaller
number of samples.
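One common form of the Nyström extension (an assumption here; the paper does not spell out its variant) uses the eigenvalue relation Pφ = μφ to extend an eigenfunction to a vertex x from its neighbors, φ(x) = (1/μ) Σ_y P(x, y) φ(y). A sketch:

```python
import numpy as np

n = 30
W = np.zeros((n, n))                     # unweighted path graph
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
d = W.sum(axis=1)
P = W / d[:, None]                       # natural random walk D^{-1} W

mu, psi = np.linalg.eig(P)               # P psi_i = mu_i psi_i
order = np.argsort(-mu.real)
mu, psi = mu.real[order], psi.real[:, order]

i, x = 2, 15                             # a nonconstant eigenfunction; a "held-out" vertex
nystrom = (P[x] @ psi[:, i]) / mu[i]     # recover psi_i(x) from neighboring values
```

In an actual subsampled setting the sum runs only over sampled neighbors, giving an O(1)-per-vertex interpolation of each eigenfunction.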
Acknowledgements
This research was supported in part by a grant from the National Science Foundation IIS-0534999.
Efficient Estimation of OOMs
Herbert Jaeger, Mingjie Zhao, Andreas Kolling
International University Bremen
Bremen, Germany
h.jaeger|m.zhao|[email protected]
Abstract
A standard method to obtain stochastic models for symbolic time series
is to train state-emitting hidden Markov models (SE-HMMs) with the
Baum-Welch algorithm. Based on observable operator models (OOMs),
in the last few months a number of novel learning algorithms for similar purposes have been developed: (1,2) two versions of an "efficiency sharpening" (ES) algorithm, which iteratively improves the statistical efficiency of a sequence of OOM estimators, (3) a constrained gradient descent ML estimator for transition-emitting HMMs (TE-HMMs). We give
an overview on these algorithms and compare them with SE-HMM/EM
learning on synthetic and real-life data.
1 Introduction
Stochastic symbol sequences with memory effects are frequently modelled by training hidden Markov models with the Baum-Welch variant of the EM algorithm. More specifically,
state-emitting HMMs (SE-HMMs) are standardly employed, which emit observable events
from hidden states. Known weaknesses of HMM training with Baum-Welch are long runtimes and proneness to getting trapped in local maxima.
Over the last few years, an alternative to HMMs has been developed, observable operator models (OOMs). The class of processes that can be described by (finite-dimensional)
OOMs properly includes the processes that can be described by (finite-dimensional)
HMMs. OOMs identify the observable events a of a process with linear observable operators τ_a acting on a real vector space of predictive states w [1]. A basic learning algorithm for OOMs [2] estimates the observable operators τ_a by solving a linear system of
learning equations. The learning algorithm is constructive, fast and yields asymptotically
correct estimates. Two problems that so far prevented OOMs from practical use were (i)
poor statistical efficiency, (ii) the possibility that the obtained models might predict negative "probabilities" for some sequences. In the last few months the first problem has been very
satisfactorily solved [2]. In this novel approach to learning OOMs from data we iteratively
construct a sequence of estimators whose statistical efficiency increases, which led us to
call the method efficiency sharpening (ES).
Another, somewhat neglected class of stochastic models is transition-emitting HMMs (TE-HMMs). TE-HMMs fall in between SE-HMMs and OOMs w.r.t. expressiveness. TE-HMMs are equivalent to OOMs whose operator matrices are non-negative. Because TE-HMMs are frequently referred to as Mealy machines (actually a misnomer, because originally Mealy machines are not probabilistic but only non-deterministic), we have started
to call non-negative OOMs "Mealy OOMs" (MOOMs). We use either name according to
the way the models are represented. A variant of Baum-Welch has recently been described
for TE-HMMs [3]. We have derived an alternative constrained log gradient (CLG) learning algorithm for MOOMs, which performs a constrained gradient descent on the log likelihood surface in the log model parameter space of MOOMs.
In this article we give a compact introduction to the basics of OOMs (Section 2), outline
the new ES and CLG algorithms (Sections 3 and 4), and compare their performance on a
variety of datasets (Section 5). In the conclusion (Section 6) we also provide a pointer to a
Matlab toolbox.
2 Basics of OOMs
Let (Ω, 𝒜, P, (X_n)_{n≥0}), or (X_n) for short, be a discrete-time stochastic process with values in a finite symbol set O = {a_1, …, a_M}. We will consider only stationary processes here for notational simplicity; OOMs can equally model nonstationary processes. An m-dimensional OOM for (X_n) is a structure A = (ℝ^m, (τ_a)_{a∈O}, w_0), where each observable operator τ_a is a real-valued m × m matrix and w_0 ∈ ℝ^m is the starting state, provided that for any finite sequence a_i0 … a_in it holds that

    P(X_0 = a_i0, …, X_n = a_in) = 1_m τ_ain ⋯ τ_ai0 w_0,    (1)
where 1m always denotes a row vector of units of length m (we drop the subscript if it is
clear from the context). We will use the shorthand notation ā to denote a generic sequence and τ_ā to denote the concatenation of the corresponding operators in reverse order, which would condense (1) into P(ā) = 1 τ_ā w_0.
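Equation (1) is straightforward to exercise numerically. The sketch below builds a small hypothetical 2-dimensional, 2-symbol OOM (illustrative values only, not a model used in this paper) and evaluates P(ā) by applying the operators in temporal order, which builds the product of (1) right to left; the probabilities of all strings of a fixed length then sum to 1.

```python
import itertools
import numpy as np

# A hypothetical 2-dimensional OOM over O = {'a', 'b'}; entries are
# non-negative and the column sums of tau_a + tau_b equal 1, so this
# is a valid (Mealy) OOM by construction.
tau = {'a': np.array([[0.5, 0.1],
                      [0.1, 0.2]]),
       'b': np.array([[0.2, 0.3],
                      [0.2, 0.4]])}
mu = tau['a'] + tau['b']
w0 = np.array([4/7, 3/7])       # stationary: mu @ w0 == w0, and 1 w0 = 1

def prob(seq, tau, w0):
    """P(a_i0 ... a_in) = 1_m tau_ain ... tau_ai0 w0, Eq. (1)."""
    w = w0
    for s in seq:               # left-to-right in time ...
        w = tau[s] @ w          # ... builds the product right-to-left
    return w.sum()              # multiplying by 1_m = summing components

p3 = {s: prob(s, tau, w0) for s in itertools.product('ab', repeat=3)}
```

Since the operators are entrywise non-negative and their sum is column-stochastic, conditions (i)-(iii) of (2) below hold here by construction.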
Conversely, if a structure A = (ℝ^m, (τ_a)_{a∈O}, w_0) satisfies

    (i) 1 w_0 = 1,   (ii) 1 (Σ_{a∈O} τ_a) = 1,   (iii) ∀ ā ∈ O*: 1 τ_ā w_0 ≥ 0,    (2)

(where O* denotes the set of all finite sequences over O), then there exists a process whose distribution is described by A via (1). The process is stationary iff (Σ_{a∈O} τ_a) w_0 = w_0.
Conditions (i) and (ii) are easy to check, but no efficient criterion is known to decide
whether the non-negativity criterion (iii) holds for a structure A (for recent progress in
this problem, which is equivalent to a problem of general interest in linear algebra, see
[4]). Models A learnt from data tend to marginally violate (iii) ? this is the unresolved
non-negativity problem in the theory of OOMs.
The state w_ā of an OOM after an initial history ā is obtained by normalizing τ_ā w_0 to unit component sum via w_ā = τ_ā w_0 / 1 τ_ā w_0.
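This state-update rule can be written out in a few lines. The sketch below (a hypothetical 2-dimensional toy model with illustrative values) computes w_ā by normalized operator application and derives the next-symbol probabilities P(a|ā) = 1 τ_a w_ā, which follow directly from (1).

```python
import numpy as np

# Hypothetical 2-dimensional OOM over O = {'a', 'b'} (illustrative values).
tau = {'a': np.array([[0.5, 0.1], [0.1, 0.2]]),
       'b': np.array([[0.2, 0.3], [0.2, 0.4]])}
w0 = np.array([4/7, 3/7])

def state(history, tau, w0):
    """w_abar: operator product applied to w0, normalized to unit sum."""
    w = w0
    for s in history:
        w = tau[s] @ w
    return w / w.sum()

def next_symbol_probs(w, tau):
    """P(a | abar) = 1 tau_a w_abar for every symbol a."""
    return {a: (t @ w).sum() for a, t in tau.items()}

w_ab = state('ab', tau, w0)
p_next = next_symbol_probs(w_ab, tau)
```

Chaining these two functions yields the probability of any continuation, e.g. P('aba') = P('ab') · P('a' | 'ab').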
A fundamental (and nontrivial) theorem for OOMs characterizes equivalence of two OOMs. Two m-dimensional OOMs A = (ℝ^m, (τ_a)_{a∈O}, w_0) and Ã = (ℝ^m, (τ̃_a)_{a∈O}, w̃_0) are defined to be equivalent if they generate the same probability distribution according to (1). By the equivalence theorem, A is equivalent to Ã if and only if there exists a transformation matrix ϱ of size m × m, satisfying 1ϱ = 1, such that τ̃_a = ϱ τ_a ϱ^{-1} for all symbols a.
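The equivalence theorem is easy to check numerically: conjugating the operators with any invertible ϱ with unit column sums, and transforming the starting state to ϱ w_0 (the natural companion transformation, though not spelled out above), leaves all sequence probabilities unchanged. The values below are hypothetical.

```python
import itertools
import numpy as np

# Hypothetical 2-dimensional OOM over O = {'a', 'b'}.
tau = {'a': np.array([[0.5, 0.1], [0.1, 0.2]]),
       'b': np.array([[0.2, 0.3], [0.2, 0.4]])}
w0 = np.array([4/7, 3/7])

# Any invertible rho with unit column sums (1 rho = 1) will do.
rho = np.array([[0.9, 0.3],
                [0.1, 0.7]])
rho_inv = np.linalg.inv(rho)
tau_t = {a: rho @ t @ rho_inv for a, t in tau.items()}  # conjugated operators
w0_t = rho @ w0                                         # transformed start state

def prob(seq, ops, start):
    w = start
    for s in seq:
        w = ops[s] @ w
    return w.sum()
```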
We mentioned in the Introduction that OOM states represent the future probability distribution of the process. This can be algebraically captured in the notion of characterizers. Let A = (ℝ^m, (τ_a)_{a∈O}, w_0) be an OOM for (X_n) and choose k such that κ = |O|^k ≥ m. Let b̄_1, …, b̄_κ be the alphabetical enumeration of O^k. Then an m × κ matrix C is a characterizer of length k for A iff 1C = 1 (that is, C has unit column sums) and

    ∀ ā ∈ O*: w_ā = C (P(b̄_1|ā) ⋯ P(b̄_κ|ā))′,    (3)

where ′ denotes the transpose and P(b̄|ā) is the conditional probability that the process continues with b̄ after an initial history ā. It can be shown [2] that every OOM has characterizers of length k for suitably large k. Intuitively, a characterizer "bundles" the length-k future distribution into the state vector by projection.
If two equivalent OOMs A, Ã are related by τ̃_a = ϱ τ_a ϱ^{-1}, and C is a characterizer for A, it is easy to check that ϱC is a characterizer for Ã.
We conclude this section by explaining the basic learning equations. An analysis of (1) reveals that for any state w_ā and operator τ_a of an OOM it holds that

    τ_a w_ā = P(a|ā) w_āa,    (4)

where āa is the concatenation of ā with a. The vectors w_ā and P(a|ā) w_āa thus form an argument-value pair for τ_a. Let ā_1, …, ā_l be a finite sequence of finite sequences over O, and let V = (w_ā1 ⋯ w_āl) be the matrix containing the corresponding state vectors. Let again C be an m × κ sized characterizer of length k and b̄_1, …, b̄_κ be the alphabetical enumeration of O^k. Let V̄ = (P(b̄_i|ā_j)) be the κ × l matrix containing the conditional continuation probabilities of the initial sequences ā_j by the sequences b̄_i. It is easy to see that V = C V̄. Likewise, let W_a = (P(a|ā_1) w_ā1a ⋯ P(a|ā_l) w_āla) contain the vectors corresponding to the rhs of (4), and let W̄_a = (P(a b̄_i|ā_j)) be the analog of V̄. It is easily verified that W_a = C W̄_a. Furthermore, by construction it holds that τ_a V = W_a.
A linear operator on ℝ^m is uniquely determined by l ≥ m argument-value pairs provided there are at least m linearly independent argument vectors in these pairs. Thus, if a characterizer C is found such that V = C V̄ has rank m, the operators τ_a of an OOM characterized by C are uniquely determined by V and the matrices W̄_a via τ_a = W_a V^† = C W̄_a (C V̄)^†, where † denotes the pseudo-inverse. Now, given a training sequence S, the conditional continuation probabilities P(b̄_i|ā_j), P(a b̄_i|ā_j) that make up V̄, W̄_a can be estimated from S by an obvious counting scheme, yielding estimates P̂(b̄_i|ā_j), P̂(a b̄_i|ā_j) for making up the count-based estimates V̂ and Ŵ_a, respectively. This leads to the general form of the OOM learning equations:

    τ̂_a = C Ŵ_a (C V̂)^†.    (5)

In words, to learn an OOM from S, first fix a model dimension m, a characterizer C, and indicative sequences ā_1, …, ā_l; then construct the estimates V̂ and Ŵ_a by frequency counting, and finally use (5) to obtain estimates of the operators. This estimation procedure is asymptotically correct in the sense that, if the training data were generated by an m-dimensional OOM in the first place, this generator will almost surely be perfectly recovered as the size of the training data goes to infinity. The reason for this is that the estimates V̂ and Ŵ_a converge almost surely to V̄ and W̄_a. The starting state can be recovered from the estimated operators by exploiting (Σ_{a∈O} τ_a) w_0 = w_0, or directly from C and V̂ (see [2] for details).
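The learning equations can be sanity-checked by feeding in exact continuation probabilities instead of counted estimates, which corresponds to the infinite-data limit. In the sketch below (a hypothetical 2-symbol toy generator; the identity matrix serves as one convenient characterizer with unit column sums; k = 1 and l = κ = m = 2), the operators recovered via (5) form an OOM equivalent to the generator and reproduce all sequence probabilities.

```python
import itertools
import numpy as np

# Hypothetical 2-dimensional generator OOM over O = {'a', 'b'}.
tau = {'a': np.array([[0.5, 0.1], [0.1, 0.2]]),
       'b': np.array([[0.2, 0.3], [0.2, 0.4]])}
w0 = np.array([4/7, 3/7])
O = ['a', 'b']

def prob(seq):
    """P via Eq. (1): apply operators in temporal order, sum components."""
    w = w0
    for s in seq:
        w = tau[s] @ w
    return w.sum()

# Indicative sequences abar_j and characterizing sequences bbar_i of
# length k = 1, so l = kappa = m = 2.
abars = bbars = O
Vbar = np.array([[prob(aj + bi) / prob(aj) for aj in abars] for bi in bbars])
Wbar = {a: np.array([[prob(aj + a + bi) / prob(aj) for aj in abars]
                     for bi in bbars]) for a in O}

C = np.eye(2)                       # unit column sums, i.e. 1C = 1
tau_hat = {a: C @ Wbar[a] @ np.linalg.pinv(C @ Vbar) for a in O}
# Starting state from Eq. (3) with the empty history: C (P(bbar_i))_i
w0_hat = C @ np.array([prob(bi) for bi in bbars])

def prob_hat(seq):
    w = w0_hat
    for s in seq:
        w = tau_hat[s] @ w
    return w.sum()
```

With finite data one would replace the exact probabilities by counted estimates, which is exactly where the choice of C starts to matter for model variance.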
3 The ES Family of Learning Algorithms
All learning algorithms based on (5) are asymptotically correct (which EM algorithms are
not, by the way), but their statistical efficiency (model variance) depends crucially on (i)
the choice of indicative sequences ā_1, …, ā_l and (ii) the characterizer C (assuming that
the model dimension m is determined by other means, e.g. by cross-validation). We will
first address (ii) and describe an iterative scheme to obtain characterizers that lead to a low
model variance.
The choice of C has a twofold impact on model variance. First, the pseudoinverse operation in (5) blows up variation in C V̂ depending on the matrix condition number of this matrix. Thus, C should be chosen such that the condition number of C V̂ gets close to 1. This strategy was pioneered in [5], who obtained the first halfway statistically satisfactory learning procedures. In contrast, here we set out from the second mechanism by which C influences model variance, namely, choose C such that the variance of C V̂ itself is minimized.
We need a few algebraic preparations. First, observe that if some characterizer C is used with (5), obtaining a model Â, and ϱ is an OOM equivalence transformation, then if C̃ = ϱC is used with (5), the obtained model is an equivalent version of Â (transformed via ϱ).
Furthermore, it is easy to see [2] that two characterizers C1, C2 characterize the same OOM iff C1 V̄ = C2 V̄. We call two characterizers similar if this holds, and write C1 ∼ C2. Clearly C1 ∼ C2 iff C2 = C1 + G for some G satisfying G V̄ = 0 and 1G = 0. That is, the similarity equivalence class of some characterizer C is the set {C + G | G V̄ = 0, 1G = 0}. Together with the first observation this implies that we may confine our search for "good" characterizers to a single (and arbitrary) such equivalence class of characterizers. Let C0 in the remainder be a representative of an arbitrarily chosen similarity class whose members all characterize A.
In [2] it is explained that the variance of C V̂ is monotonically tied to Σ_{i=1,…,κ; j=1,…,l} ‖P(ā_j b̄_i) w_āj − C(:,i)‖², where C(:,i) is the i-th column of C.
This observation allows us to determine an optimal (minimal variance of C V̂ within the equivalence class of C0) characterizer Copt as the solution to the following minimization problem:

    Copt = C0 + Gopt, where
    Gopt = arg min_G Σ_{i=1,…,κ; j=1,…,l} ‖P(ā_j b̄_i) w_āj − (C0 + G)(:,i)‖²    (6)

under the constraints G V̄ = 0 and 1G = 0. This problem can be analytically solved
[2] and has a surprising and beautiful solution, which we now explain. In a nutshell, Copt is composed column-wise by certain states of a time-reversed version of A. We describe time-reversal of OOMs in more detail. Given an OOM A = (ℝ^m, (τ_a)_{a∈O}, w_0) with an induced probability distribution P_A, its reverse OOM A^r = (ℝ^m, (τ_a^r)_{a∈O}, w_0^r) is characterized by a probability distribution P_{A^r} satisfying

    ∀ a_0 ⋯ a_n ∈ O*: P_A(a_0 ⋯ a_n) = P_{A^r}(a_n ⋯ a_0).    (7)

A reverse OOM can be easily computed from the "forward" OOM as follows. If A = (ℝ^m, (τ_a)_{a∈O}, w_0) is an OOM for a stationary process, and w_0 has no zero entry, then

    A^r = (ℝ^m, (D τ_a′ D^{-1})_{a∈O}, w_0)    (8)

is a reverse OOM to A, where ′ denotes the transpose and D = diag(w_0) is a diagonal matrix with w_0 on its diagonal.
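The reversal formulas (7) and (8) can be verified numerically. The sketch below (hypothetical toy values) computes τ_a^r = D τ_a′ D^{-1}, with ′ the transpose and D = diag(w_0), and checks the mirror-image property (7) exhaustively on short strings.

```python
import itertools
import numpy as np

# Hypothetical stationary 2-dimensional OOM over O = {'a', 'b'};
# w0 has no zero entry, as (8) requires.
tau = {'a': np.array([[0.5, 0.1], [0.1, 0.2]]),
       'b': np.array([[0.2, 0.3], [0.2, 0.4]])}
w0 = np.array([4/7, 3/7])

D = np.diag(w0)
D_inv = np.diag(1.0 / w0)
tau_rev = {a: D @ t.T @ D_inv for a, t in tau.items()}  # Eq. (8)

def prob(seq, ops, start):
    w = start
    for s in seq:
        w = ops[s] @ w
    return w.sum()
```

The tested property is exactly (7): the reverse model assigns to each mirrored string the probability the forward model assigns to the original.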
Now let b̄_1, …, b̄_κ again be the sequences employed in V̄. Let A^r = (ℝ^m, (τ_a^r)_{a∈O}, w_0) be the reverse OOM to A, which was characterized by C0. Furthermore, for b̄_i = b_1 … b_k let w^r_b̄i = τ^r_b1 ⋯ τ^r_bk w_0 / 1 τ^r_b1 ⋯ τ^r_bk w_0. Then it holds that C̄ = (w^r_b̄1 ⋯ w^r_b̄κ) is a characterizer for an OOM equivalent to A. C̄ can effectively be transformed into a characterizer C^r for A by C^r = ϱ^r C̄, where

    ϱ^r = ( C̄ (1τ_b̄1 ; ⋯ ; 1τ_b̄κ) )^{-1},    (9)

the κ × m matrix in parentheses stacking the row vectors 1τ_b̄i. We call C^r the reverse characterizer of A, because it is composed from the states of a reverse OOM to A. The analytical solution to (6) turns out to be [2]

    Copt = C^r.    (10)

To summarize, within a similarity class of characterizers, the one which minimizes model variance is the (unique) reverse characterizer in this class. It can be cheaply computed from the "forward" OOM via (8) and (9). This analytical finding suggests the following generic,
iterative procedure to obtain characterizers that minimize model variance:
1. Setup. Choose a model dimension m and a characterizer length k. Compute the estimates V̂ and Ŵ_a from the training string S.
2. Initialization. Estimate an initial model Â^(0) with some "classical" OOM estimation method (a refined such method is detailed out in [2]).
3. Efficiency sharpening iteration. Assume that Â^(n) is given. Compute its reverse characterizer Ĉ^{r(n+1)}. Use this in (5) to obtain a new model estimate Â^(n+1).
4. Termination. Terminate when the training log-likelihoods of the models Â^(n) appear to settle on a plateau.
The rationale behind this scheme is that the initial model Â^(0) is obtained essentially from an uninformed, ad hoc characterizer, for which one has to expect a large model variation and thus (on average) a poor Â^(0). However, the characterizer Ĉ^{r(1)} obtained from the reversed Â^(0) is no longer uninformed but shaped by a reasonable reverse model. Thus the estimator producing Â^(1) can be expected to produce a model closer to the correct one due to its improved efficiency, etc. Notice that this does not guarantee a convergence
of models, nor any monotonic development of any performance parameter in the obtained
model sequence. In fact, the training log likelihood of the model sequence typically shoots
to a plateau level in about 2 to 5 iterations, after which it starts to jitter about this level,
only slowly coming to rest, or even not stabilizing at all; it is sometimes observed that the
log likelihood enters a small-amplitude oscillation around the plateau level. An analytical
understanding of the asymptotic learning dynamics cannot currently be offered.
We have developed two specific instantiations of the general ES learning scheme, differentiated by the set of indicative sequences used. The first simply uses l = κ, ā_1, …, ā_l = b̄_1, …, b̄_κ, which leads to a computationally very cheap iterated recomputation of (5) with updated reverse characterizers. We call this the "poor man's" ES algorithm.
The statistical efficiency of the poor man's ES algorithm is impaired by the fact that only the
counting statistics of subsequences of length 2k are exploited. The other ES instantiation
exploits the statistics of all subsequences in the original training string. It is technically
rather involved and rests on a suffix tree (ST) representation of S. We can only give a
coarse sketch here (details in [2]). In each iteration, the current reverse model is run backwards through S and the obtained reverse states are additively collected bottom-up in the
nodes of the ST. From the ST nodes the collected states are then harvested into matrices corresponding directly to C V̂ and C Ŵ_a; that is, an explicit computation of the reverse characterizer is not required. This method incurs a computational load per iteration which
is somewhat lower than Baum-Welch for SE-HMMs (because only a backward pass of the
current model has to be computed), plus the required initial ST construction which is linear
in the size of S.
4 The CLG Algorithm
We must be very brief here due to space limitations. The CLG algorithm will be detailed
out in a future paper. It is an iterative update scheme for the matrix parameters [τ̂_a]_ij of a MOOM. This scheme is analytically derived as gradient descent in the model log
likelihood surface over the log space of these matrix parameters, observing constraints
of non-negativity of these parameters and the general OOM constraints (i) and (ii) from
Eqn. (2). Note that the constraint (iii) from (2) is automatically satisfied in MOOMs.
We skip the derivation of the CLG scheme and describe only its "mechanics". Let S = s_1 … s_N be the training string and for 1 ≤ k ≤ N define ā_k = s_1 … s_k, b̄_k = s_{k+1} … s_N. Define for some m-dimensional OOM and a ∈ O (with w_ā0 = w_0)

    w_āk = τ_sk w_ā(k−1) / 1 τ_sk w_ā(k−1),
    y_a = Σ_{k: s_k = a} (1τ_b̄k)′ (w_ā(k−1))′ / ((1τ_b̄k w_āk)(1τ_sk w_ā(k−1))),
    y_0 = max_{i,j,a} {[y_a]_ij},   [y_a′]_{i,j} = [y_a]_{i,j} / y_0.    (11)

Then the update equation is

    [τ̂_a^+]_ij = λ_j · [τ̂_a]_ij · [y_a′]_ij^η,    (12)

where τ̂_a^+ is the new estimate of τ̂_a, the λ_j are normalization parameters determined by the constraint (ii) from Eqn. (2), and η is a learning rate which here unconventionally appears in the exponent because the gradient descent is carried out in the log parameter space.
Note that by (12), [τ̂_a^+]_ij remains non-negative if [τ̂_a]_ij is. This update scheme is derived in a way that is unrelated to the derivation of the EM algorithm; to our surprise we found that for η = 1, (12) is equivalent to the Baum-Welch algorithm for TE-HMMs. However, significantly faster convergence is achieved with non-unit η; in the experiments carried out so far a value close to 2 was heuristically found to work best.
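One CLG sweep can be sketched compactly: a forward pass collects the normalized states w_āk, a backward pass the row vectors 1τ_b̄k, the matrices y_a are accumulated from their outer products, and (12) rescales each operator multiplicatively with column-wise renormalization. This is our reading of (11)-(12) as a minimal illustration (hypothetical toy model, unscaled backward products, so suitable only for short strings), not the authors' Matlab implementation; with η = 1 each sweep coincides with a Baum-Welch step, so the training log likelihood should not decrease.

```python
import numpy as np

O = ['a', 'b']
# Hypothetical 2-dimensional MOOM: non-negative operators whose sum has
# unit column sums (constraint (ii)), with its stationary starting state.
tau_true = {'a': np.array([[0.5, 0.1], [0.1, 0.2]]),
            'b': np.array([[0.2, 0.3], [0.2, 0.4]])}
w0 = np.array([4/7, 3/7])

def log_likelihood(S, tau, w0):
    ll, w = 0.0, w0
    for s in S:
        p = (tau[s] @ w).sum()      # P(s | prefix) = 1 tau_s w
        ll += np.log(p)
        w = tau[s] @ w / p          # normalized state w_abar_k
    return ll

def clg_step(S, tau, w0, eta):
    m, N = len(w0), len(S)
    W = [w0]                        # forward: w_abar_0 .. w_abar_N
    for s in S:
        v = tau[s] @ W[-1]
        W.append(v / v.sum())
    beta = [np.ones(m)]             # backward: beta_k = 1 tau_bbar_k
    for s in reversed(S):
        beta.append(beta[-1] @ tau[s])
    beta.reverse()                  # beta[k] now matches w_abar_k
    y = {a: np.zeros((m, m)) for a in O}
    for k in range(1, N + 1):
        s = S[k - 1]
        denom = (beta[k] @ W[k]) * (tau[s] @ W[k - 1]).sum()
        y[s] += np.outer(beta[k], W[k - 1]) / denom
    y0 = max(y[a].max() for a in O)
    new = {a: tau[a] * (y[a] / y0) ** eta for a in O}   # update (12) ...
    colsum = sum(new[a] for a in O).sum(axis=0)
    return {a: new[a] / colsum for a in O}              # ... with lambda_j

rng = np.random.default_rng(0)
S, w = [], w0                       # sample a training string of length 120
for _ in range(120):
    p = np.array([(tau_true[a] @ w).sum() for a in O])
    s = O[rng.choice(2, p=p / p.sum())]
    S.append(s)
    w = tau_true[s] @ w / (tau_true[s] @ w).sum()

pert = {a: tau_true[a] + 0.05 for a in O}               # perturbed start model
cs = sum(pert[a] for a in O).sum(axis=0)
model = {a: pert[a] / cs for a in O}
lls = [log_likelihood(S, model, w0)]
for _ in range(5):                  # eta = 1: Baum-Welch-equivalent sweeps
    model = clg_step(S, model, w0, eta=1.0)
    lls.append(log_likelihood(S, model, w0))
```

For longer strings the backward products 1τ_b̄k underflow and would have to be rescaled, as in standard scaled Baum-Welch implementations.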
5 Numerical Comparisons
We compared the poor man's ES algorithm, the suffix-tree based ES algorithm, the CLG algorithm, and the standard SE-HMM/Baum-Welch method on four different types of data,
which were generated by (a) randomly constructed, 10-dimensional, 5-symbol SE-HMMs,
(b) randomly constructed, 10-dimensional, 5-symbol MOOMs, (c) a 3-dimensional, 2-symbol OOM which is not equivalent to any HMM or MOOM (the "probability clock" process [2]), (d) a belletristic text (Mark Twain's short story "The 1,000,000 Pound Note").
For each of (a) and (b), 40 experiments were carried out with freshly constructed generators per experiment; a training string of length 1000 and a test string of length 10000
was produced from each generator. For (c), likewise 40 experiments were carried out with
freshly generated training/testing sequences of the same lengths as before; here, however, the
generator was identical for all experiments. For (a)–(c), the results reported below are
averaged numbers over the 40 experiments. For the (d) dataset, after preprocessing which
shrunk the number of different symbols to 27, the original string was sorted sentence-wise
into a training and a testing string, each of length ≈ 21000 (details in [2]).
The following settings were used with the various training methods. (i) The poor man's
ES algorithm was used with a length k = 2 of indicative sequences on all datasets. Two
ES iterations were carried out and the model of the last iteration was used to compute the
reported log likelihoods. (ii) For the suffix-tree based ES algorithm, on datasets (a)–(c), likewise two ES iterations were done and the model from the iteration with the lowest (reverse) training LL was used for reporting. On dataset (d), 4 ES iterations were called and similarly the model with the best reverse training LL was chosen. (iii) In the MOOM studies, a learning rate of η = 1.85 was used. Iterations were stopped when two consecutive
training LLs differed by less than 5e-5% or after 100 iterations. (iv) For HMM/Baum-Welch training, the public-domain implementation provided by Kevin Murphy was used. Iterations were stopped after 100 steps or if LLs differed by less than 1e-5%. All computations were done in Matlab on 2 GHz PCs except the HMM training on dataset (d), which was done on a 330 MHz machine (the reported CPU times were scaled by 330/2000 to make them comparable with the other studies). Figure 1 shows the training and testing log likelihoods as well as the CPU times for all methods and datasets.
Figure 1: Findings for datasets (a)–(d). In each panel, the left y-axis shows log likelihoods for training and testing (testing LL normalized to training string length) and the right y-axis measures the log10 of CPU times. HMM models are documented in solid/black lines, poor man's ES models in dotted/magenta lines, suffix-tree ES models in broken/blue, and MOOMs in dash-dotted/red lines. The thickest lines in each panel show training LL, the thinnest CPU time, and intermediate testing LL. The x-axes indicate model dimension. On dataset (c), no results of the poor man's algorithm are given because the learning equations became ill-conditioned for all but the lowest dimensions.
Some comments on Fig. 1. (1) The CPU times roughly exhibit an even log spread over
almost 2 orders of magnitude, in the order poor man's (fastest), suffix-tree ES, CLG, Baum-Welch. (2) CLG has the lowest training LL throughout, which needs an explanation
because the proper OOMs trained by ES are more expressive. Apparently the ES algorithm
does not lead to local ML optima; otherwise suffix-tree ES models should show the lowest training LL. (3) On HMM-generated data (a), Baum-Welch HMMs can play out their
natural bias for this sort of data and achieve a lower test error than the other methods. (4)
On the MOOM data (b), the test LL of MOOM/CLG and OOM/poor man models of dimension 2 equals the best HMM/Baum-Welch test LL which is attained at a dimension of
4; the OOM/suffix-tree test LL at dimension 2 is superior to the best HMM test LL. (5) On
the "probability clock" data (c), the suffix-tree ES trained OOMs surpassed the non-OOM
models in test LL, with the optimal value obtained at the (correct) model dimension 3. This
comes as no surprise because these data come from a generator that is incommensurable
with either HMMs or MOOMs. (6) On the large empirical dataset (d) the CLG/MOOMs
have by a fair margin the highest training LL, but the test LL quickly drops to unacceptable
lows. It is hard to explain this by overfitting, considering the complexity and the size of
the training string. The other three types of models are evenly ordered in both training and
testing error from HMMs (poorest) to suffix-tree ES trained OOMs. Overfitting does not
occur up to the maximal dimension investigated. Depending on whether one wants a very
fast algorithm with good, or a fast algorithm with very good train/test LL, one here would
choose the poor man's or the suffix-tree ES algorithm as the winner. (7) One detail in panel
(d) needs an explanation. The CPU time for the suffix-tree ES has an isolated peak for the
smallest dimension. This is earned by the construction of the suffix tree, which was built
only for the smallest dimension and re-used later.
6 Conclusion
We presented, in a sadly condensed fashion, three novel learning algorithms for
symbol dynamics. A detailed treatment of the Efficiency Sharpening algorithm is
given in [2], and a Matlab toolbox for it can be fetched from http://www.faculty.iu-bremen.de/hjaeger/OOM/OOMTool.zip. The numerical investigations reported here were
done using this toolbox. Our numerical simulations demonstrate that there is an altogether
new world of faster and often statistically more efficient algorithms for sequence modelling than Baum-Welch/SE-HMMs. The topics that we will address next in our research
group are (i) a mathematical analysis of the asymptotic behaviour of the ES algorithms,
(ii) online adaptive versions of these algorithms, and (iii) versions of the ES algorithms for
nonstationary time series.
References
[1] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representation of state. In Advances in Neural Information Processing Systems 14 (Proc. NIPS 01), pages 1555–1561, 2001. http://www.eecs.umich.edu/~baveja/Papers/psr.pdf.
[2] H. Jaeger, M. Zhao, K. Kretzschmar, T. Oberstein, D. Popovici, and A. Kolling. Learning observable operator models via the ES algorithm. In S. Haykin, J. Principe, T. Sejnowski, and
J. McWhirter, editors, New Directions in Statistical Signal Processing: from Systems to Brains,
chapter 20. MIT Press, to appear in 2005.
[3] H. Xue and V. Govindaraju. Stochastic models combining discrete symbols and continuous
attributes in handwriting recognition. In Proc. DAS 2002, 2002.
[4] R. Edwards, J.J. McDonald, and M.J. Tsatsomeros. On matrices with common invariant cones
with applications in neural and gene networks. Linear Algebra and its Applications, in press,
2004 (online version). http://www.math.wsu.edu/math/faculty/tsat/files/emt.pdf.
[5] K. Kretzschmar. Learning symbol sequences with Observable Operator Models. GMD Report
161, Fraunhofer Institute AIS, 2003. http://omk.sourceforge.net/files/OomLearn.pdf.
Modeling Memory Transfer and Savings in
Cerebellar Motor Learning
Naoki Masuda
RIKEN Brain Science Institute
Wako, Saitama 351-0198, Japan
[email protected]
Shun-ichi Amari
RIKEN Brain Science Institute
Wako, Saitama 351-0198, Japan
[email protected]
Abstract
There is a long-standing controversy on the site of the cerebellar motor
learning. Different theories and experimental results suggest that either
the cerebellar flocculus or the brainstem learns the task and stores the
memory. With a dynamical system approach, we clarify the mechanism
of transferring the memory generated in the flocculus to the brainstem
and that of so-called savings phenomena. The brainstem learning must
comply with a sort of Hebbian rule depending on Purkinje-cell activities.
In contrast to earlier numerical models, our model is simple but it accommodates explanations and predictions of experimental situations as
qualitative features of trajectories in the phase space of synaptic weights,
without fine parameter tuning.
1 Introduction
The cerebellum is involved in various types of motor learning. As schematically shown in
Fig. 1, the cerebellum is composed of the cerebellar cortex and the cerebellar nuclei (we
depict the vestibular nucleus VN in Fig. 1). There are two main pathways linking external
input from the mossy fibers (mf ) to motor outputs, which originate from the cerebellar
nuclei. The pathway that relays the mossy fibers directly to the cerebellar nuclei is called
the direct pathway. Each nucleus cell receives about 10^4 mossy fiber synapses.
The pathway involving the mossy fibers, the granule cells (gr), the parallel fibers (pl), and
the Purkinje cells (Pr) in the flocculo-nodular lobes of the cerebellar cortex, is called the
indirect pathway. Because the Purkinje cells, which are the sole source of output from
the cerebellar cortex, are GABAergic, firing rates of the nuclei are suppressed when this
pathway is active. The indirect pathway also includes recurrent collaterals terminating
on various types of inhibitory cells. Another anatomical feature of the indirect pathway
is that climbing fibers (Cm in Fig. 1) from the inferior olive (IO) innervate on Purkinje
cells. Taking into account the huge mass of intermediate computational units in the indirect
pathway, or the granule cells, Marr conjectured that the cerebellum operates as a perceptron
with high computational power [8]. The climbing fibers were thought to induce long-term
potentiation (LTP) of pl-P r synapses to reinforce the signal transduction. Albus claimed
that long-term depression (LTD) rather than LTP should occur so that the Purkinje cells
inhibit the nuclei [2]. The climbing fibers were thought to serve as teaching lines that
convey error-correcting signals.
[Figure 1 diagram: mossy-fiber input u feeds the granule cells (gr), which compute x = Au; the Purkinje cell (Pr) receives x through parallel fibers (pl) with plastic weights w and outputs y = wx = wAu; climbing fibers (Cm) from the inferior olive (IO) carry the error e = ru - z; the vestibular nucleus (VN) combines the direct pathway (plastic weights v) with the inhibitory Purkinje output to produce z = vu - y.]
Figure 1: Architecture of the VOR model.
The vestibulo-ocular reflex (VOR) is a standard benchmark for exploring synaptic substrates of cerebellar motor learning. The VOR is a short-latency reflex eye movement that
stabilizes images on the retina during head movement. Motion of the head drives eye movements in the opposite direction. When a subject wears a prism, adaptation of the VOR gain
occurs for image stabilization. In this context, in vivo experiments confirmed that the LTD
hypothesis is correct (reviewed in [6]). However, the cerebellum is not the only site of
convergence of visual and vestibular signals. The learning scheme depending only on the
indirect pathway is called the flocculus hypothesis. An alternative is the brainstem hypothesis in which synaptic plasticity is assumed to occur in the direct pathway (mf ? V N )
[12]. This idea is supported by experimental evidence that flocculus shutdown after 3 days
of VOR adaptation does not impair the motor memory [7]. Moreover, in other experiments,
plasticity of the Purkinje cells in response to vestibular inputs, as required in the flocculus
hypothesis, really occurs but in the direction opposite to that predicted by the flocculus
hypothesis [5, 12]. Also, LTP of the mf -V N synapses, which is necessary to implement
the brainstem hypothesis [3], has been suggested in experiments [14].
Relative contributions of the flocculus mechanism and the brainstem mechanism to motor
learning remain illusive [3, 5, 9]. The same controversy exists regarding the mechanism of
associative eyelid conditioning [9, 10, 11]. Related is the distinction between short-term
and long-term plasticities. Many of the experiments in favor of the flocculus hypothesis are
concerned with short-term learning, whereas plasticity involving the vestibular nuclei is
suggested to be functional in the long term. Short-term motor memory in the flocculus may
eventually be transferred to the brainstem. This is termed the memory transfer hypothesis
[9]. Medina and Mauk proposed a numerical model and examined what types of brainstem
learning rules are compatible with memory transfer [10]. They concluded that the brainstem plasticity should be driven by coincident activities of the Purkinje cells and the mossy
fibers. The necessity of Hebbian type of learning in the direct pathway is also supported
by another numerical model [13]. We propose a much simpler model to understand the
essential mechanism of memory transfer without fine parameter manipulations.
Another goal of this work is to explain savings of learning. Savings are observed in natural
learning tasks. Because animals can be trained just for a limited amount of time per day,
the task period and the rest period, of e.g. 1 day, alternate. Performance is improved during
the task period, and it degrades during the rest period (in the dark). However, when the
alternation is repeated, the performance is enhanced more rapidly and progressively in later
sessions [7] (also, S. Nagao, private communication). The flocculus may be responsible
for daily rapid learning and forgetting, and the brainstem may underlie gradual memory
consolidation [11]. While our target phenomenon of interest is the VOR, the proposed
model is fairly general.
2 Model
Looking at Fig. 1, let us denote by u ∈ R^m the external input to the mossy fibers. It is propagated to the granule cells via synaptic connectivity represented by an n-by-m matrix A, where presumably n ≫ m. The output of the granule cells, x ≡ Au ∈ R^n, is received by the Purkinje-cell layer. For simplicity, we assume just one Purkinje cell whose output is written as y ≡ wx, where w ∈ R^{1×n}. Since pl-Pr synapses are excitatory, the elements of w are positive. The direct pathway (mf → VN) is defined by a plastic connection matrix v ∈ R^{1×m}. The output to the VOR actuator is given by z = vu - y = vu - wAu, which is the output of the sole neuron of the cerebellar nuclei. This form of z takes into account that the contribution of the indirect pathway is inhibitory and that of the direct pathway is excitatory.
The animal learns to adapt z as close as possible to the desirable motor output ru. For a large (resp. small) desirable gain r, the correct direction of synaptic changes is the decrease (resp. increase) in w and the increase (resp. decrease) in v [5]. The learning error e ≡ ru - z is carried by the climbing fibers and projects onto the Purkinje cell, which enables supervised learning [6]. The LTD of w occurs when the parallel-fiber input and the climbing-fiber input are simultaneously large [6, 9]. Since we can write

    ẇ = -η1 e x = -(η1/2) ∂e^2/∂w,    (1)

where η1 is the learning rate, w evolves to minimize e^2. Equation (1) is a type of Widrow-Hoff rule [4, p. 320]. With spontaneous inputs only, or in the presence of x and the absence of e, w experiences LTP [6, 9]. We model this effect by adding η2 x to Eq. (1). This term provides subtractive normalization that counteracts the use-dependent LTD [4, p. 290]. However, subtractive normalization cannot prohibit w from running away when the error signal is turned off. Therefore, we additionally assume a multiplicative normalization term -η3 w to limit the magnitude of w [4, p. 290, 314]. In the end, Eq. (1) is modified to

    ẇ = -η1(ru - vu + wAu)Au + η2 Au - η3 w,    (2)

where η2 and η3 are rates of memory decay satisfying η2, η3 ≪ η1.
In the dark, the VOR gain, which might have changed via adaptation, tends back to a value close to unity [5]. Let us represent this reference gain by r = r0. With the synaptic strengths in this null condition denoted by (w, v) = (w0, v0), we obtain r0 u = v0 u - w0 Au. By setting ẇ = 0 in Eq. (2), we derive

    η2 Au = η1(r0 u - v0 u + w0 Au)Au + η3 w0 = η3 w0.    (3)

Substituting Eq. (3) into Eq. (2) results in

    ẇ = -η1(ru - vu + wAu)Au - η3(w - w0).    (4)
Experiments show that v can be potentiated [14]. Enhancement of the excitability of the nucleus output (z) in response to tetanic stimulation, or sustained u, is also in line with the LTP of v [1]. In contrast, LTD of v is biologically unknown. Numerical models suggest that LTP in the nuclei should be driven by y [10, 11]. However, the mechanism and the specificity underlying plasticity of v are not well understood [9]. Therefore, we assume that both LTP and LTD of v occur in an associative manner, and we represent the LTP effect by a general function F. In parallel to the learning rule of w, we assume a subtractive normalization term -η5 u [10]. We also add a multiplicative normalization term -η6 v to constrain v. Finally, we obtain

    v̇ = η4 F(u, y, z, e) - η5 u - η6 v.    (5)

Presumably, v changes much more slowly (on a time scale of 8-12 hr) than w changes (0.5 hr) [10, 13]. Therefore, we assume η1 ≫ η4 ≫ η5, η6.
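To make the interplay of the fast rule (4) and the slow rule (5) concrete, here is a minimal forward-Euler sketch for the scalar case m = n = 1, with the nucleus plasticity F left pluggable. The function `simulate` and its defaults are our own illustration (rates borrowed from the Fig. 3 caption), not code from the paper.

```python
# Minimal sketch of the two-pathway dynamics, Eqs. (4)-(5), for m = n = 1.
# The nucleus learning rule F(u, y, z, e) is pluggable; the rate constants
# default to the Fig. 3 values, which respect eta1 >> eta4 >> eta5, eta6.

def simulate(F, r, T, dt=0.01,
             A=0.4, u=1.0, w0=2.0, r0=1.0,
             eta1=7.0, eta3=0.3, eta4=0.05, eta5=0.0, eta6=0.002):
    """Euler-integrate (w, v) for T hours under target gain r."""
    v0 = r0 + A * w0              # null condition: r0*u = v0*u - w0*A*u
    w, v = w0, v0
    for _ in range(int(T / dt)):
        x = A * u                 # granule-cell activity
        y = w * x                 # Purkinje-cell output
        z = v * u - y             # nucleus output driving the VOR
        e = r * u - z             # climbing-fiber error signal
        w += (-eta1 * e * x - eta3 * (w - w0)) * dt             # Eq. (4)
        v += (eta4 * F(u, y, z, e) - eta5 * u - eta6 * v) * dt  # Eq. (5)
    return w, v
```

For instance, plugging in the supervised rule F = eu and training toward r = 2 shrinks the error well below its initial value (r - r0)u = 1 within two hours, with w undergoing LTD as expected.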
3 Analysis of Memory Transfer
Let us examine a couple of learning rules in the direct pathway to identify robust learning
mechanisms.
3.1 Supervised learning
Although the climbing fibers carrying e send excitatory collaterals to the cerebellar nuclei, supervised learning there has very little experimental support [5]. Here we show that supervised learning in the direct pathway is theoretically unlikely. Let us assume that modification of v decreases |e|. Accordingly, we set F = -(1/2) ∂e^2/∂v = eu. Then, Eq. (5) becomes

    v̇ = η4(ru - vu + wAu)u - η5 u - η6 v.    (6)

In the natural situation, r = r0. Hence,

    η5 u = η4(r0 u - v0 u + w0 Au)u - η6 v0 = -η6 v0.    (7)

Inserting Eq. (7) into Eq. (6) yields

    v̇ = η4(ru - vu + wAu)u - η6(v - v0).    (8)

For further analysis, let us assume m = n = 1 (for which we quit bold notations) and perform the slow-fast analysis based on η1 ≫ η3, η4 ≫ η6. Equations (4) and (8) define the nullclines ẇ = 0 and v̇ = 0, which are represented respectively by

    v = v0 + (r - r0) + [(η1 A^2 u^2 + η3)/(η1 A u^2)] (w - w0),    (9)

and

    v = v0 + [η4 u^2/(η4 u^2 + η6)] (r - r0) + [η4 A u^2/(η4 u^2 + η6)] (w - w0).    (10)

Since ẇ = O(η1) ≫ O(η4) = v̇ in an early stage, a trajectory in the w-v plane initially approaches the fast manifold (Eq. (9)) and moves along it toward the equilibrium given by

    w* = w0 - η1 η6 A u^2 (r - r0)/(η1 η6 A^2 u^2 + η3 η4 u^2 + η3 η6),
    v* = v0 + η3 η4 u^2 (r - r0)/(η1 η6 A^2 u^2 + η3 η4 u^2 + η3 η6).    (11)
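Equation (11) can be checked mechanically: the stationarity conditions of Eqs. (4) and (8) are linear in (w, v), so a direct 2-by-2 solve must reproduce the closed form. The following check is ours, with rates borrowed from the Fig. 3 caption for concreteness:

```python
# Check the closed-form supervised-learning equilibrium, Eq. (11), against a
# direct solve of the stationarity conditions dw/dt = 0 (Eq. 4) and
# dv/dt = 0 (Eq. 8) for m = n = 1. Rates follow the Fig. 3 caption.

A, u = 0.4, 1.0
w0, r0, r = 2.0, 1.0, 2.0
v0 = r0 + A * w0
eta1, eta3, eta4, eta6 = 7.0, 0.3, 0.05, 0.002

# Stationarity of Eqs. (4) and (8) is linear in (w, v):
#   -(eta1*A^2*u^2 + eta3) w + eta1*A*u^2 v = eta1*A*u^2 r - eta3 w0
#    eta4*A*u^2 w - (eta4*u^2 + eta6) v     = -eta4*u^2 r - eta6 v0
a11, a12, b1 = -(eta1 * A**2 * u**2 + eta3), eta1 * A * u**2, eta1 * A * u**2 * r - eta3 * w0
a21, a22, b2 = eta4 * A * u**2, -(eta4 * u**2 + eta6), -eta4 * u**2 * r - eta6 * v0
det = a11 * a22 - a12 * a21
w_num = (b1 * a22 - a12 * b2) / det   # Cramer's rule
v_num = (a11 * b2 - a21 * b1) / det

# Closed form, Eq. (11):
S = eta1 * eta6 * A**2 * u**2 + eta3 * eta4 * u**2 + eta3 * eta6
w_star = w0 - eta1 * eta6 * A * u**2 * (r - r0) / S
v_star = v0 + eta3 * eta4 * u**2 * (r - r0) / S
```

For r > r0 the solution shows LTD of w and LTP of v, as stated in the text.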
LTD of w and LTP of v are expected for adaptation to a larger gain (r > r0 ), and LTP of
w and LTD of v are expected for r < r0 . The results are consistent with both the flocculus
hypothesis and the brainstem hypothesis as far as the direction of learning is concerned [5].
When r > r0 (resp. r < r0 ), LTD (resp. LTP) of w first occurs to decrease the learning
error. Then, the motor memory stored in w is gradually transferred by LTP (resp. LTD) of
v replacing LTD (resp. LTP) of w. In the long run, the memory is stored mainly in v, not
in w.
However, the memory transfer based on supervised learning has fundamental deficiencies. First, since η1 ≫ η3 and η4 ≫ η6, both nullclines Eqs. (9) and (10) have a slope close to A in the w-v plane. This means that the relative position of the equilibrium depends heavily on the parameter values, especially on the learning rates, the choice of which is rather arbitrary. Then, (w*, v*) may be located so that, for example, LTP of w or LTD of v results from r > r0. Also, the degree of transfer, or |w* - w0| / |v* - v0|, is not robust against parameter changes. This may underlie the fact that LTD of w was not followed by partial LTP in the numerical simulations in [10]. Even if the position of (w*, v*) happens to support LTD of w and LTP of v, memory transfer takes a long time. This is because Eqs. (9) and (10) are fairly close, which means that v̇ is small on the fast manifold (ẇ = 0). We can also imagine a type of Hebbian rule with F = (1/2) ∂z^2/∂v = zu. Similar calculations show that this rule also realizes memory transfer only in an unreliable manner.
[Figure 2 panels (A) r > r0 and (B) r < r0: phase-plane sketches in the w-v plane showing the nullclines ẇ = 0 and v̇ = 0, the line e = 0, the initial point (w0, v0), and the equilibrium (w*, v*).]
Figure 2: Dynamics of the synaptic weights in the Purkinje cell-dependent learning. (A) r > r0 and (B) r < r0.
3.2 Purkinje cell-dependent learning
Results of numerical studies support that v should be subject to a type of Hebbian learning depending on two afferents to the vestibular nuclei, namely, u and y [10, 11, 13]. Changes in the VOR gain are signaled by y. Since LTP should logically occur when y is small and u is large, we set F = (ymax - y)u, where ymax is the maximum firing rate of the Purkinje cell. Then, we obtain

    v̇ = η4(ymax - wAu)u - η5 u - η6 v.    (12)

The subtractive normalization is determined from the equilibrium condition:

    η5 u = η4(ymax - w0 Au)u - η6 v0.    (13)

Substituting Eq. (13) into Eq. (12) yields

    v̇ = η4(w0 - w)A u^2 + η6(v0 - v).    (14)

When m = n = 1, the nullclines are given by Eq. (9) and

    v = v0 - (η4 A u^2/η6)(w - w0),    (15)

which are depicted in Fig. 2(A) and (B) for r > r0 and r < r0, respectively. As shown by arrows in Fig. 2, trajectories in the w-v space first approach the fast manifold Eq. (9) and then move along it toward the equilibrium given by

    w* = w0 - η1 η6 A u^2 (r - r0)/(η1 η4 A^2 u^4 + η1 η6 A^2 u^2 + η3 η6),
    v* = v0 + η1 η4 A^2 u^4 (r - r0)/(η1 η4 A^2 u^4 + η1 η6 A^2 u^2 + η3 η6).    (16)

Equation (15) has a large negative slope because η4 ≫ η6. Consequently, setting r > r0 (resp. r < r0) duly results in LTD (resp. LTP) of w and LTP (resp. LTD) of v. At the same time, LTD (resp. LTP) of w in an early stage of learning is partially compensated by subsequent LTP (resp. LTD) of w, which agrees with previously reported numerical results [10]. In contrast to the supervised and Hebbian learning rules, this learning is robust against parameter changes since the positions and the slopes of the two nullclines are apart from each other. Owing to this property, in the long term, the memory is transferred more rapidly along the w-nullcline than for the other two learning rules. Another benefit of the large negative slope of Eq. (15) is that |v* - v0| ≫ |w* - w0| holds, which means efficient memory transfer from w to v.

The error at the equilibrium state is

    e* = η3 η6 (r - r0) u/(η1 η4 A^2 u^4 + η1 η6 A^2 u^2 + η3 η6).    (17)

Equation (17) guarantees that the e = 0 line is located as shown in Fig. 2, and the learning proceeds so as to decrease |e|. The performance overshoot, which is unrealistic, does not occur.
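Like Eq. (11), the equilibrium (16) is the solution of a linear 2-by-2 system, and the nullcline slope in Eq. (15) pins the transfer ratio down exactly: |v* - v0| / |w* - w0| = η4 A u^2/η6, which equals 10 for the Fig. 3 rates. A quick check of both claims (our own, with the Fig. 3 parameters):

```python
# Check the Purkinje cell-dependent equilibrium, Eq. (16), by solving
# dw/dt = 0 (Eq. 4) together with dv/dt = 0 (Eq. 14) for m = n = 1, and
# verify the transfer ratio implied by the nullcline Eq. (15).

A, u = 0.4, 1.0
w0, r0, r = 2.0, 1.0, 2.0
v0 = r0 + A * w0
eta1, eta3, eta4, eta6 = 7.0, 0.3, 0.05, 0.002

# Linear stationarity conditions in (w, v):
#   -(eta1*A^2*u^2 + eta3) w + eta1*A*u^2 v = eta1*A*u^2 r - eta3 w0  (Eq. 4)
#   -eta4*A*u^2 w - eta6 v = -eta4*A*u^2 w0 - eta6 v0                 (Eq. 14)
a11, a12, b1 = -(eta1 * A**2 * u**2 + eta3), eta1 * A * u**2, eta1 * A * u**2 * r - eta3 * w0
a21, a22, b2 = -eta4 * A * u**2, -eta6, -eta4 * A * u**2 * w0 - eta6 * v0
det = a11 * a22 - a12 * a21
w_num = (b1 * a22 - a12 * b2) / det   # Cramer's rule
v_num = (a11 * b2 - a21 * b1) / det

# Closed form, Eq. (16):
S = eta1 * eta4 * A**2 * u**4 + eta1 * eta6 * A**2 * u**2 + eta3 * eta6
w_star = w0 - eta1 * eta6 * A * u**2 * (r - r0) / S
v_star = v0 + eta1 * eta4 * A**2 * u**4 * (r - r0) / S
```

The resulting transfer ratio (v* - v0)/(w0 - w*) comes out to η4 A u^2/η6 exactly, confirming that most of the acquired memory ends up in v.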
4 Numerical Simulations of Savings
The learning rule proposed in Sec. 3.2 explains savings as well. To show this, we mimic a
situation of savings by periodically alternating the task period and the rest period. Specifically, we start with r = r0 = 1, w = w0 , v = v0 , and the learning condition (r = 2 or
r = 0.5) is applied for 4 hours a day. During the rest of the day (20 hours), the dark condition is simulated by giving no teaching signal to the model. Changes in the VOR gains
for 8 consecutive days are shown in Fig. 3(A) and (C) for r = 2 and r = 0.5, respectively.
The numerical results are consistent with the savings found in other reported experiments
[7] and models [11]; the animal forgets much of the acquired gain in the dark, while a
small fraction is transferred each day to the cerebellar nuclei. The time-dependent synaptic
weights are shown in Fig. 3(B) (r = 2) and (D) (r = 0.5) and suggest that v is really
responsible for savings and that its plasticity needs guidance under the short-term learning
of w. The memory transfer occurs even in the dark condition, as indicated by the increase
(resp. decrease) of v in the dark shown in Fig. 3(B) (resp. (D)). This happens because ruin
of the short-term memory of w drives the learning of v for some time even after the daily
training has finished. For the indirect pathway, a dark condition defines an off-task period
during which w gradually loses its associations.
For comparison, let us deal with the case in which v is fixed. Then, the learning rule Eq. (4)
is reduced to
    ẇ = -η1[(r - r0)u + (w - w0)Au]Au - η3(w - w0).    (18)
The VOR adaptation with this rule is shown in Fig. 4(A) (r = 2) and (B) (r = 0.5). Long-term retention of the acquired gain is now impossible, whereas the short-term learning, or
learning error is larger than when v is plastic.
However, if w is fixed and v is plastic, the VOR gain is not adaptive, since y does not
carry teaching signals any longer. In this case, we must implement supervised learning of
v for learning to occur. Then, r adapts only gradually on the slow time scale of ? 4 , and the
short-term learning is lost.
5 Discussion
Our model explains how the flocculus and the brainstem cooperate in motor learning. Presumably, the indirect pathway involving the flocculus is computationally powerful because
of a huge number of intermediate granule cells, but its memory is of short-term nature.
The direct pathway bypassing the mossy fibers to the cerebellar nuclei is likely to have
less computational power but stores motor memory for a long period. A part of the motor
memory is expected to be passed from the flocculus to the nuclei. This happens in a robust
manner if the direct pathway is equipped with the learning rule dependent on correlation
between the Purkinje-cell firing and the mossy-fiber firing. To explore whether associative
LTP/LTD in the cerebellar nuclei really exists will be a subject of future experimental work.
Our model is also applicable to savings.
[Figure 3 panels: (A), (C) VOR gain r versus time (hr); (B), (D) trajectories in the w-v plane.]
Figure 3: Numerical simulations of savings with the Purkinje cell-dependent learning rule. We set A = 0.4, u = 1, w0 = 2, r0 = 1, v0 = r0 + Aw0, η1 = 7, η3 = 0.3, η4 = 0.05, η6 = 0.002. The target gains are (A, B) r = 2 and (C, D) r = 0.5. (A) and (C) show VOR gains. (B) and (D) show trajectories in the w-v space (thin solid lines) together with the nullclines (thick solid lines) and e = 0 (thick dotted lines).
[Figure 4 panels (A) and (B): VOR gain r versus time (hr).]
Figure 4: Numerical simulations of savings with fixed v. The parameter values are the
same as those used in Fig. 3. The target gains are (A) r = 2 and (B) r = 0.5.
In the earlier models [10, 11], quantitative meanings were given to the equilibrium synaptic weights. Actually, they are solely determined from non-experimentally determined parameters, namely, the balance between the learning rates (in our terminology, η1, η2, η4 and η5). Also, the balance seems to play a role in preventing runaway of synaptic weights. In
contrast, our model uses the ratio of learning rates (and values of other parameters) just for
qualitative purposes and is capable of explaining and predicting experimental settings without parameter tuning. For example, the earlier arguments negating the flocculus hypothesis
are based on the fact that the plasticity of the flocculus (w) responding to vestibular inputs
occurs but in the direction opposite to the expectation of the flocculus hypothesis [5, 12].
However, this experimental observation is not necessarily contradictory to either the flocculus hypothesis or the two-site hypothesis. As shown in Fig. 2(A), when adapting to a
large VOR gain, w experiences LTD in the initial stage [6]. Then, partial LTP ensues as
the motor memory is transferred to the nuclei. Another prediction is about adaptation to a
small gain. Figure 2(B) predicts that, in this case, LTP in the indirect pathway is gradually
transferred to LTD in the direct pathway. Partial LTD following LTP is anticipated in the
flocculus. This implies savings in unlearning.
Acknowledgments
We thank S. Nagao for helpful discussions. This work was supported by the Special Postdoctoral Researchers Program of RIKEN.
References
[1] C. D. Aizenman, D. J. Linden. Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons. Nat. Neurosci., 3, 109–111 (2000).
[2] J. S. Albus. A theory of cerebellar function. Math. Biosci., 10, 25–61 (1971).
[3] E. S. Boyden, A. Katoh, J. L. Raymond. Cerebellum-dependent learning: the role of multiple plasticity mechanisms. Annu. Rev. Neurosci., 27, 581–609 (2004).
[4] P. Dayan, L. F. Abbott. Theoretical Neuroscience – Computational and Mathematical Modeling of Neural Systems. MIT (2001).
[5] S. du Lac, J. L. Raymond, T. J. Sejnowski, S. G. Lisberger. Learning and memory in the vestibulo-ocular reflex. Annu. Rev. Neurosci., 18, 409–441 (1995).
[6] M. Ito. Long-term depression. Ann. Rev. Neurosci., 12, 85–102 (1989).
[7] A. E. Luebke, D. A. Robinson. Gain changes of the cat's vestibulo-ocular reflex after flocculus deactivation. Exp. Brain Res., 98, 379–390 (1994).
[8] D. Marr. A theory of cerebellar cortex. J. Physiol., 202, 437–470 (1969).
[9] M. D. Mauk. Roles of cerebellar cortex and nuclei in motor learning: contradictions or clues? Neuron, 18, 343–346 (1997).
[10] J. F. Medina, M. D. Mauk. Simulations of cerebellar motor learning: computational analysis of plasticity at the mossy fiber to deep nucleus synapse. J. Neurosci., 19, 7140–7151 (1999).
[11] J. F. Medina, K. S. Garcia, M. D. Mauk. A mechanism for savings in the cerebellum. J. Neurosci., 21, 4081–4089 (2001).
[12] F. A. Miles, D. J. Braitman, B. M. Dow. Long-term adaptive changes in primate vestibuloocular reflex. IV. Electrophysiological observations in flocculus of adapted monkeys. J. Neurophysiol., 43, 1477–1493 (1980).
[13] B. W. Peterson, J. F. Baker, J. C. Houk. A model of adaptive control of vestibuloocular reflex based on properties of cross-axis adaptation. Ann. New York Acad. Sci., 627, 319–337 (1991).
[14] R. J. Racine, D. A. Wilson, R. Gingell, D. Sunderland. Long-term potentiation in the interpositus and vestibular nuclei in the rat. Exp. Brain Res., 63, 158–162 (1986).
2,063 | 2,874 | Noise and the two-thirds power law
Uri Maoz (1,2,3), Elon Portugaly (3), Tamar Flash (2) and Yair Weiss (3,1)
(1) Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Edmond Safra Campus, Givat Ram, Jerusalem 91904, Israel; (2) Department of Computer Science and Applied Mathematics, The Weizmann Institute of Science, PO Box 26, Rehovot 76100, Israel; (3) School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond Safra Campus, Givat Ram, Jerusalem 91904, Israel
Abstract
The two-thirds power law, an empirical law stating an inverse non-linear
relationship between the tangential hand speed and the curvature of its
trajectory during curved motion, is widely acknowledged to be an invariant of upper-limb movement. It has also been shown to exist in eye-motion, locomotion and was even demonstrated in motion perception
and prediction. This ubiquity has fostered various attempts to uncover
the origins of this empirical relationship. In these it was generally attributed either to smoothness in hand- or joint-space or to the result of
mechanisms that damp noise inherent in the motor system to produce the
smooth trajectories evident in healthy human motion.
We show here that white Gaussian noise also obeys this power-law. Analysis of signal and noise combinations shows that trajectories that were
synthetically created not to comply with the power-law are transformed
to power-law compliant ones after combination with low levels of noise.
Furthermore, there exist colored noise types that drive non-power-law
trajectories to power-law compliance and are not affected by smoothing.
These results suggest caution when running experiments aimed at verifying the power-law or assuming its underlying existence without proper
analysis of the noise. Our results could also suggest that the power-law
might be derived not from smoothness or smoothness-inducing mechanisms operating on the noise inherent in our motor system but rather from
the correlated noise which is inherent in this motor system.
1 Introduction
A number of regularities have been empirically observed for the motion of the end-point
of the human upper-limb during curved and drawing movements. One of these has been
termed "the two-thirds power law" ([1]). It can be formulated as:

$$v(t) = \mathrm{const} \cdot \kappa(t)^{\beta} \quad (1)$$

or, in log-space,

$$\log(v(t)) = \mathrm{const} + \beta \log(\kappa(t)) \quad (2)$$

where $v$ is the tangential end-point speed, $\kappa$ is the instantaneous curvature of the path, and $\beta$ is approximately $-\frac{1}{3}$. The various studies that lend support to this power-law go beyond its
simple verification. There are those that suggest it as a tool to extract natural segmentation
into primitives of complex movements ([2], [3]). Others show the development of the
power-law with age for children ([4]). There is also research that suggests it appears for
three-dimensional (3D) drawings under isometric force conditions ([5]). It was even found
in neural population coding in the monkey motor brain area controlling the hand ([6]).
Other studies have located the power-law elsewhere than the hand. It was found to apply in
eye-motion ([7]) and even in motion perception ([8],[9]) and movement prediction based
on biological motion ([10]). Recent studies have also found it in locomotion ([11]). This
power-law has thus been widely accepted as an important invariant in biological movement
trajectories, so much so that it has become an evaluation criterion for the quality of models
(e.g. [12]).
This has motivated various attempts to find some deeper explanation that supposedly underlies this regularity. The power-law was shown to possibly be a result of minimization
of jerk ([13],[14]), jerk along a predefined path ([15]), or endpoint variability due to noise
inherent in the motor system ([12]). Others have claimed that it stems from forward kinematics of sinusoidal movements at the joints ([16]). Another explanation has to do with the
mathematically interesting fact that motion according to the power-law maintains constant
affine velocity ([17],[18]).
We were thus very much surprised by the following:

Observation: Given a time series $(x_i, y_i)_{i=1}^{n}$ in which $x_i, y_i \sim N(0,1)$, i.i.d. ($x_i, x_j, y_i, y_j$ mutually independent for $i \neq j$), and assuming that the series is of equal time intervals, calculate $\kappa$ and $v$ in order to obtain $\beta$ from the linear regression of $\log(v)$ versus $\log(\kappa)$. The linear regression plot of $\log(v)$ versus $\log(\kappa)$ is within range both in its regression coefficient and $R^2$ value to what experimentalists consider as compliance with the power-law (see figure 1b). Therefore this white Gaussian noise trajectory seems to fit the two-thirds power law model in equation (1) above.
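This observation is easy to reproduce numerically. The sketch below is our own reconstruction (not the authors' Matlab script referenced later in the text): it draws i.i.d. N(0,1) planar positions at unit time steps, estimates the tangential speed and curvature by central finite differences, and regresses log speed on log curvature; the slope lands near the reported value of about -0.29 rather than at 0.

```python
import numpy as np

# White Gaussian noise "trajectory": i.i.d. N(0,1) positions, unit time steps.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Central finite differences for velocity, second differences for acceleration.
vx, vy = (x[2:] - x[:-2]) / 2.0, (y[2:] - y[:-2]) / 2.0
ax, ay = x[2:] - 2.0 * x[1:-1] + x[:-2], y[2:] - 2.0 * y[1:-1] + y[:-2]

speed = np.hypot(vx, vy)                      # v = sqrt(x'^2 + y'^2)
kappa = np.abs(ax * vy - ay * vx) / speed**3  # planar curvature, eq. (3)

# Regression slope of log(v) on log(kappa); the text reports about -0.286.
beta = np.polyfit(np.log(kappa), np.log(speed), 1)[0]
print(f"beta = {beta:.3f}")
```

With one million samples per run, the text reports a slope of -0.2857 with very small variance across runs; a shorter series such as the one above already gives a stable estimate in that neighborhood.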
1.1 Problem formulation
For any regular planar curve parameterized with $t$, we get from the Frenet-Serret formulas (see [19])¹:

$$\kappa(t) = \frac{|\ddot{x}\dot{y} - \dot{x}\ddot{y}|}{v^3(t)} = \frac{|\ddot{x}\dot{y} - \dot{x}\ddot{y}|}{(\dot{x}^2 + \dot{y}^2)^{3/2}} \quad (3)$$

where $v(t) = \sqrt{\dot{x}^2 + \dot{y}^2}$. Denoting $\alpha(t) = |\ddot{x}\dot{y} - \dot{x}\ddot{y}|$ we obtain:

$$v(t) = \alpha(t)^{1/3} \cdot \kappa(t)^{-1/3} \quad (4)$$

or, in log-space²:

$$\log(v(t)) = \frac{1}{3}\log(\alpha(t)) - \frac{1}{3}\log(\kappa(t)) \quad (5)$$

Given a trajectory for which $\alpha$ is constant, the power-law in equation (1) above is obtained exactly (the term $\alpha$ is in fact the affine velocity of [17],[18], and thus a trajectory that yields a constant $\alpha$ would mean movement at constant affine velocity).

¹ Though there exists a definition for signed curvature for planar curves (i.e. without the absolute value in the numerator of equation 3), we refer to the absolute value of the curvature, as done in the power-law. Therefore, in our case, $\kappa(t)$ is the absolute value of the instantaneous curvature.

² Non-linear regression in (4) should naturally be performed instead of log-space linear regression in (5). However, this linear regression in log-space is the method of choice in the motor-control literature, despite the criticism of [16]. We therefore opted for it here as well.
[Figure 1: two scatter plots with regression lines; slopes 0.14 in (a) and -0.29 in (b)]
Figure 1: Given a trajectory composed of normally distributed position data with constant time intervals, we calculate and plot: (a) $\log(\alpha)$ versus $\log(\kappa)$ and (b) $\log(v)$ versus $\log(\kappa)$ with their linear regression lines. The regression coefficient in (a) is 0.14, entailing the one in (b) to be -0.29 (see text). Moreover, the $R^2$ value in (a) is 0.04, much smaller than the 0.57 value in (b).
Denoting the linear regression coefficient of $\log(v)$ versus $\log(\kappa)$ by $\beta$ and the linear regression coefficient of $\log(\alpha)$ versus $\log(\kappa)$ by $\gamma$, it can be easily shown that (5) entails:

$$\beta = -\frac{1}{3} + \frac{\gamma}{3} \quad (6)$$

Hence, if $\log(\alpha)$ and $\log(\kappa)$ are statistically uncorrelated, the linear regression coefficient between them, which we termed $\gamma$, would be 0, and thus from (6) the linear regression coefficient of $\log(v)$ versus $\log(\kappa)$, which we named $\beta$, would be exactly $-\frac{1}{3}$. Therefore, any trajectory that produces $\log(\alpha)$ and $\log(\kappa)$ that are statistically uncorrelated would precisely conform to the power-law in (1)³.

If $\log(\alpha)$ and $\log(\kappa)$ are weakly correlated, such that $\gamma$ (the linear regression coefficient of $\log(\alpha)$ versus $\log(\kappa)$) is small, the effect on $\beta$ (the linear regression coefficient of $\log(v)$ versus $\log(\kappa)$) would result in a positive offset of $\frac{\gamma}{3}$ from the $-\frac{1}{3}$ value of the power-law.

Below, we analyze $\gamma$ for random position data, and show that it is indeed small and that $\beta$ takes values close to $-\frac{1}{3}$. Figure 1 portrays a typical $\log(v)$ versus $\log(\kappa)$ linear regression plot for the case of a trajectory composed of random data sampled from an i.i.d. normal distribution.
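For completeness, the entailment in (6) can be written out in one line from (5), using only the definition of a regression slope (our addition; it follows directly from the definitions above):

$$\beta = \frac{\operatorname{Cov}[\log v, \log\kappa]}{\operatorname{Var}[\log\kappa]}
= \frac{\tfrac{1}{3}\operatorname{Cov}[\log\alpha, \log\kappa] - \tfrac{1}{3}\operatorname{Var}[\log\kappa]}{\operatorname{Var}[\log\kappa]}
= \frac{\gamma}{3} - \frac{1}{3}.$$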
2 Power-law analysis for trajectories composed of normally distributed samples
Let us take the time-series $(x_i, y_i)_{i=1}^{n}$ where $x_i, y_i \sim N(0,1)$, i.i.d. Let $t_i$ denote the time at sample $i$ and for all $i$ let $t_{i+1} - t_i = 1$. From this time series we calculate $\kappa$, $\alpha$ and $v$ by central finite differences⁴. Again, we denote the linear regression coefficient of $\log(\alpha)$ versus $\log(\kappa)$ by $\gamma$, and thus $\widehat{\log(\alpha)} = \mathrm{const} + \gamma \log(\kappa)$, where $\gamma = \frac{\mathrm{Covariance}[\log(\alpha), \log(\kappa)]}{\mathrm{Variance}[\log(\kappa)]}$. And from (6) we know that a linear regression of $\log(v)$ versus $\log(\kappa)$ would result in $\beta = -\frac{1}{3} + \frac{\gamma}{3}$.
The fact that $\gamma$ is scaled down three-fold to give the offset of $\beta$ from $-\frac{1}{3}$ is significant. It means that for $\beta$ to achieve values far from $-\frac{1}{3}$, $\gamma$ would need to be very big. For example, in order for $\beta$ to be 0 (i.e. motion at constant tangential speed), $\gamma$ would need to be 1, which requires perfect correlation between $\log(\alpha)$, which is a time-dependent variable, and $\log(\kappa)$, which is a geometric one. This could be taken to suggest that for a control system to maintain movement at $\beta$ values that are remote from $-\frac{1}{3}$ would require some non-trivial control of the correlation between $\log(\alpha)$ and $\log(\kappa)$.

³ $\alpha$ being constant is naturally a special case of uncorrelated $\log(\alpha)$ and $\log(\kappa)$.

⁴ We used the central finite differencing technique here mainly for ease of use and analysis. Other differentiation techniques, either utilizing more samples (e.g. the Lagrange 5-point method) or analytic differentiation of smoothing functions (e.g. smoothing splines), yielded similar results. In a more general sense, smoothing techniques introduce local correlations (between neighboring samples in time) into the trajectory. Yet globally, for large time series, the correlation remains weak.
Running 100 Monte-Carlo simulations⁵, each drawing a time series of 1,000,000 normally distributed points, we estimated $\gamma = 0.1428 \pm 0.0013$ ($R^2 = 0.0357 \pm 0.0006$). $\gamma$'s magnitude and its corresponding $R^2$ value suggest that $\log(\alpha)$ and $\log(\kappa)$ are only weakly correlated (hence the ball-like shape in Figure 1a). The same type of simulations gave $\beta = -0.2857 \pm 0.0004$ ($R^2 = 0.5715 \pm 0.0011$), as expected. Both $\beta$ and its $R^2$ magnitudes are within what is considered by experimentalists to be the range of applicable values for the power-law. Moreover, standard outlier detection and removal techniques as well as robust linear regression make $\beta$ approach closer to $-\frac{1}{3}$ and increase the $R^2$ value.
Measurements of human drawing movements in 3D also exhibit the power-law ([20],[16]). We therefore decided to repeat the same analysis procedure for 3D data (i.e. drawing time-series $(x_i, y_i, z_i)_{i=1}^{n}$ i.i.d. from $N(0,1)$, and extracting $v$, $\kappa$ and $\alpha$ according to their 3D definitions). This time we obtained $\gamma = -0.0417 \pm 0.0009$ ($R^2 = 0.0036 \pm 0.0002$) and as expected $\beta = -0.3472 \pm 0.0003$ ($R^2 = 0.6944 \pm 0.0006$). This is even closer to the power-law values, as defined in (1).

This phenomenon also occurs when we repeat the procedure for trajectories composed of uniformly distributed samples with constant time intervals. The linear regression of $\log(v)$ versus $\log(\kappa)$ for planar trajectories gives $\beta = -0.2859 \pm 0.0004$ ($R^2 = 0.5724 \pm 0.0009$). 3D trajectories of uniformly distributed samples give us $\beta = -0.3475 \pm 0.0003$ ($R^2 = 0.6956 \pm 0.0007$) under the same simulation procedure. In both cases the parameters obtained for the uniform distribution are very close to those of the normal distribution.
3 Analysis of signal and noise combinations

3.1 Original (non-filtered) signal and noise combinations
Another interesting question has to do with the combination of signal and noise. Every experimentally measured signal has some noise incorporated in it, be it measurement-device noise or noise internal to the human motor system. But how much noise must be present to transform a signal that does not conform to the power-law to one that does? We took a planar ellipse with a major axis of 0.35 m and minor axis of 0.13 m (well within the standard range of dimensions used as templates for measuring the power-law for humans, see figure 2b), and spread 120 equispaced samples over its perimeter (a typical number of samples for the sampling rate given below). The time intervals were constant at 0.01 s (100 Hz is of the order of magnitude of contemporary measurement equipment). This elliptic trajectory is thus traversed at constant speed, despite not having constant curvature. It therefore does not obey the power-law (a "sanity check" of our simulations gave $\beta = -0.0003$, $R^2 = 0.0028$).

At this stage, normally distributed noise with various standard deviations was added to this ellipse. We ran 100 simulations for every noise magnitude and averaged the power-law parameters $\beta$ and $R^2$ obtained from $\log(v)$ versus $\log(\kappa)$ linear regressions for each noise magnitude (see figure 2a). The level of noise required to drive the non-power-law-compliant trajectory to obey the power-law is rather small; a standard deviation of about 0.005 m is sufficient⁶.

⁵ The Matlab code for this simple simulation can be found at: http://www.cs.huji.ac.il/~urim/NIPS_2005/Monte_Carlo.m

[Figure 2: four panels plotting $\beta$ and $R^2$ against noise magnitude]
Figure 2: (a) $\beta$ and $R^2$ values of the power-law fit for trajectories composed of the non-power-law planar ellipse given in (b) combined with various magnitudes of noise (portrayed as the standard deviations of the normally distributed noise that was added). (c) $\beta$ and $R^2$ values for the non-power-law 3D bent ellipse given in (d). All distances are measured in meters.
The same procedure was performed for a 3D bent-ellipse of similar proportions and perimeter in order to test the effects of noise on spatial trajectories (see figure 2c and d). We placed the samples on this bent ellipse in an equispaced manner, so that it would be traversed at constant speed (and indeed $\beta = -0.0042$, $R^2 = 0.1411$). This time the standard deviation of the noise which was required for power-law-like behavior in 3D was 0.003 m or so, a bit smaller than in the planar case. Naturally, had we chosen smaller ellipses, less noise would have been required to make them obey the power-law (for instance, if we take a 0.1 by 0.05 m ellipse, the same effect would be obtained with noise of about 0.002 m standard deviation for the planar case and 0.0015 m for a 3D bent ellipse of the same magnitude). Note that both for the planar and spatial shapes, the noise level that drives the non-power-law signal to conform to the power-law is in the order of magnitude of the average displacement between consecutive samples.
3.2 Does filtering solve the problem?

All the analysis above was for raw data, whereas it is common practice to low-pass filter experimentally obtained trajectories before extracting $\kappa$ and $v$. If we take a non-power-law signal, contaminate it with enough noise for it to comply with the power-law and then filter it, would the resulting signal obey the power-law or not? We attempted to answer this question by contaminating the constant-speed bent-ellipse of the previous subsection (reminder: $\beta = -0.0003$, $R^2 = 0.0028$) with Gaussian noise of standard deviation 0.005 m. This resulted in a trajectory with $\beta = -0.3154 \pm 0.0323$ ($R^2 = 0.6303 \pm 0.0751$) for 100 simulation runs (actually a bit closer to the power-law than the noise alone). We then low-pass filtered each trajectory with a zero-lag second-order Butterworth filter with 10 Hz cutoff frequency. This returned a signal essentially without power-law compliance, i.e. $\beta = -0.0472 \pm 0.0209$ ($R^2 = 0.1190 \pm 0.0801$). Let us name this process Procedure 1.

⁶ 0.005 m is about half the distance between consecutive samples in this trajectory and sampling rate.

[Figure 3: schematic outline of Procedures 1 and 2]
Figure 3: a graphical outline of Procedure 1 (from the top to the bottom left) and Procedure 2 (from the top to the bottom right). (a) The original non-power-law signal. (b) Signal in (a) plus white noise. (c) Signal in (b) after smoothing. (d) Signal in (a) with added correlated noise. (e) Signal in (d) after smoothing. All signals but (a) and (c) obey the power-law.
But what if the noise at hand is more resistant to smoothing? Taking 3D Gaussian noise and smoothing it (using the same type of Butterworth filtering as above) does not make the resulting signal any less compliant to the power-law. Monte-Carlo simulations of smoothed random trajectories (100 repetitions of 1,000,000 samples each) resulted in $\beta = -0.3473 \pm 0.0003$ ($R^2 = 0.6945 \pm 0.0007$), which is the same as the original noise (which had $\beta = -0.3472 \pm 0.0003$, $R^2 = 0.6944 \pm 0.0006$)⁷. We therefore ran the signal plus noise simulations again, this time adding smoothed noise (increasing its magnitude five-fold to compensate for the loss of energy of this noise due to the filtering) to the constant-speed bent-ellipse. This time the combined signal yielded a power-law fit of $\beta = -0.3175 \pm 0.0414$ ($R^2 = 0.6260 \pm 0.0798$), leaving it power-law compliant. However, this time the same filtering procedure as above left us with a signal that could still be considered to obey the power-law, with $\beta = -0.2747 \pm 0.0481$ ($R^2 = 0.5498 \pm 0.0698$). We name this process Procedure 2. Procedures 1 and 2 are portrayed graphically in figure 3. If we continue and increase the noise magnitude, the effect of the smoothing at the end of Procedure 2 becomes less apparent, with the smoothed trajectories sometimes conforming to the power-law (mainly in terms of $R^2$) better than before the smoothing.
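The claim that smoothed white noise remains power-law compliant can be checked with numpy alone. As a stand-in for the zero-lag second-order Butterworth filter used in the text (our substitution, chosen to avoid extra dependencies), the sketch below smooths a 3D white-noise trajectory with a symmetric moving average run forward and then backward, the same zero-phase idea as filtfilt-style filtering; the regression slope stays in the neighborhood of $-\frac{1}{3}$ before and after smoothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
pts = rng.standard_normal((n, 3))  # 3D white Gaussian "trajectory"

def zero_lag_smooth(p, w=5):
    """Symmetric moving average applied forward and then backward.

    A symmetric kernel introduces no phase lag, mimicking zero-lag
    (forward-backward) Butterworth filtering with numpy only.
    """
    k = np.ones(w) / w
    fwd = np.column_stack([np.convolve(c, k, mode="same") for c in p.T])
    return np.column_stack(
        [np.convolve(c[::-1], k, mode="same")[::-1] for c in fwd.T])

def beta_of(p):
    """Slope of log(speed) vs. log(curvature), central finite differences."""
    d1 = (p[2:] - p[:-2]) / 2.0           # velocity estimate
    d2 = p[2:] - 2.0 * p[1:-1] + p[:-2]   # acceleration estimate
    v = np.linalg.norm(d1, axis=1)
    kappa = np.linalg.norm(np.cross(d1, d2), axis=1) / v**3  # 3D curvature
    return np.polyfit(np.log(kappa), np.log(v), 1)[0]

beta_raw = beta_of(pts)                      # about -0.35 (cf. section 2)
beta_smooth = beta_of(zero_lag_smooth(pts))  # smoothing does not undo it
print(beta_raw, beta_smooth)
```

The exact smoothed value depends on the filter and window, but per the text the power-law compliance of pure noise survives this kind of local smoothing.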
3.3 Levels of noise inherent to upper limb movement in human data

We conducted a preliminary experiment to explore the level of noise intrinsic to the human motor system. Subjects were instructed to repetitively and continuously trace ellipses in 3D while seated with their trunk restrained to the back of a rigid chair and their eyes closed (to avoid spatial cues in the room, see [16]). They were to suppose that there exists a spatial elliptical pattern before them, which they were to traverse with their hand. The 3D hand position was recorded at 100 Hz using NDI's Optotrak 2010. Given their goal, it is reasonable to assume that the variance between the trajectories in the different iterations is composed of measurement noise as well as noise internal to the subject's motor systems (since the inter-repetition drift was removed by PCA alignment after segmenting the continuous motion into underlying iterations). We thus analyzed the recorded time series in order to find that variance (see figure 4a) and compared this to the average variance of the synthetic equispaced bent ellipse combined with correlated noise (see figure 4b). While more careful experiments may be needed to extract the exact SNR of human limb movements, it appears that the level of noise in human limb movement is comparable to the level of noise that can cause power-law behavior even for non-power-law signals.

⁷ This result goes hand in hand with what was said before. Introducing local correlations between samples does not alter the power-law-from-noise phenomenon.
[Figure 4 appears here: panels (a) and (b); panel (b) plots "STD of intersection points" for synthetic and human data]
Figure 4: Noise level in repetitive upper limb movement. The variance of the different iterations was measured at 10 different positions along the ellipse, defined by a plane passing through the origin, perpendicular to the first two principal components of the ellipses. The angles between every two neighboring planes were equal. (a) For each position, the intersection of each iteration of the trajectory with the plane was calculated. (b) The standard deviation of the different intersections of each plane was measured, and is depicted for synthetic and human data.
4 Discussion
We do not suggest that the power-law, which stems from analysis of human data, is a
bogus phenomenon, resulting only from measurement noise. Yet our results do suggest
caution when carrying out experiments that either aim to verify the power-law or assume
its existence. When performing such experimentation, one should always be sure to verify
that the signal-to-noise ratio in the system is well within the bounds where it does not
drive the results toward the power-law. It might further be wise to conduct Monte-Carlo
simulations with the specific parameters of the problem to ascertain this. If we focus on the
measurement device noise alone, it should be noted that whereas many modern devices for
planar motion measurement tend to have a measurement accuracy superior to the 0.002m
or so (which we have shown to be enough to produce power-law from noise), the same
cannot be said for contemporary 3D measurement devices. There, errors of magnitudes in
the order of about 0.002m can certainly occur. In addition, one should keep in mind that
even for smaller noise magnitudes some drift toward the power-law does occur. This must
be taken into consideration when analyzing the results. Last, muscle-tremor must also be
borne in mind as another source of noise, especially when dealing with pathologies.
Moreover, following the results above, it is clear that when a significant amount of noise is
incorporated into the system, simply applying an off-the-shelf smoothing procedure would
not necessarily satisfactorily remove it, especially if it is correlated (i.e. not white). Moreover, the smoothing procedure will most likely distort the signal to some degree, even if the
noise is white. Therefore smoothing is not an easy ?magic cure? for the power-law-fromnoise phenomenon.
Another interesting aspect of our results has to do with the light they shed on the origins of
the power-law. Previous works showed that the power-law can be derived from smoothness
criteria for human trajectories, be it the assumption that these minimize the end-point's jerk
([14]), jerk along a predefined path ([15]) or variability due to noise inherent in the motor
system itself ([12]), or that the power law is due to smoothing inherent in the human motor
system (especially the muscles, [21]) or to smooth joint oscillations ([16]). The results
presented here suggest the opposite might be true as well. The power-law can be derived
from the noise itself, which is inherent in our motor system (and which is likely to be
correlated noise), rather than from any smoothing mechanisms which damp it.
Acknowledgements
This research was supported in part by the HFSPO grant to T.F.; E.P. is supported by an Eshkol
fellowship of the Israeli Ministry of Science.
References
[1] F. Lacquaniti, C. Terzuolo, and P. Viviani. The law relating kinematic and figural aspects of drawing movements. Acta Psychologica, 54:115-130, 1983.
[2] P. Viviani. Do units of motor action really exist? Experimental Brain Research, 15:201-216, 1986.
[3] P. Viviani and M. Cenzato. Segmentation and coupling in complex movements. Journal of Experimental Psychology: Human Perception and Performance, 11(6):828-845, 1985.
[4] P. Viviani and R. Schneider. A developmental study of the relationship between geometry and kinematics in drawing movements. Journal of Experimental Psychology, 17:198-218, 1991.
[5] J. T. Massey, J. T. Lurito, G. Pellizzer, and A. P. Georgopoulos. Three-dimensional drawings in isometric conditions: relation between geometry and kinematics. Experimental Brain Research, 88(3):685-690, 1992.
[6] A. B. Schwartz. Direct cortical representation of drawing. Science, 265(5171):540-542, 1994.
[7] C. deSperati and P. Viviani. The relationship between curvature and velocity in two-dimensional smooth pursuit eye movement. The Journal of Neuroscience, 17(10):3932-3945, 1997.
[8] P. Viviani and N. Stucchi. Biological movements look uniform: evidence of motor-perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 18(3):603-626, 1992.
[9] P. Viviani, G. Baud Bovoy, and M. Redolfi. Perceiving and tracking kinesthetic stimuli: further evidence of motor-perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 23(4):1232-1252, 1997.
[10] S. Kandel, J. Orliaguet, and P. Viviani. Perceptual anticipation in handwriting: The role of implicit motor competence. Perception and Psychophysics, 62(4):706-716, 2000.
[11] S. Vieilledent, Y. Kerlirzin, S. Dalbera, and A. Berthoz. Relationship between velocity and curvature of a human locomotor trajectory. Neuroscience Letters, 305(1):65-69, 2001.
[12] C. M. Harris and D. M. Wolpert. Signal-dependent noise determines motor planning. Nature, 394(6695):780-784, 1998.
[13] P. Viviani and T. Flash. Minimum-jerk, two-thirds power law, and isochrony: converging approaches to movement planning. Journal of Experimental Psychology: Human Perception and Performance, 21(1):32-53, 1995.
[14] M. J. Richardson and T. Flash. Comparing smooth arm movements with the two-thirds power law and the related segmented-control hypothesis. Journal of Neuroscience, 22(18):8201-8211, 2002.
[15] E. Todorov and M. Jordan. Smoothness maximization along a predefined path accurately predicts the speed profiles of complex arm movements. Journal of Neurophysiology, 80(2):696-714, 1998.
[16] S. Schaal and D. Sternad. Origins and violations of the 2/3 power law in rhythmic three-dimensional arm movements. Experimental Brain Research, 136(1):60-72, 2001.
[17] A. A. Handzel and T. Flash. Geometric methods in the study of human motor control. Cognitive Studies, 6:1-13, 1999.
[18] F. E. Pollick and G. Sapiro. Constant affine velocity predicts the 1/3 power law of planar motion perception and generation. Vision Research, 37(3):347-353, 1997.
[19] J. Oprea. Differential geometry and its applications. Prentice-Hall, 1997.
[20] U. Maoz and T. Flash. Power-laws of three-dimensional movement. Unpublished manuscript.
[21] P. L. Gribble and D. J. Ostry. Origins of the power law relation between movement velocity and curvature: modeling the effects of muscle mechanics and limb dynamics. Journal of Neurophysiology, 76:2853-2860, 1996.
Top-Down Control of Visual Attention:
A Rational Account
Michael C. Mozer
Dept. of Comp. Science &
Institute of Cog. Science
University of Colorado
Boulder, CO 80309 USA
Michael Shettel
Dept. of Comp. Science &
Institute of Cog. Science
University of Colorado
Boulder, CO 80309 USA
Shaun Vecera
Dept. of Psychology
University of Iowa
Iowa City, IA 52242 USA
Abstract
Theories of visual attention commonly posit that early parallel processes extract conspicuous features such as color contrast and motion from the visual field. These features
are then combined into a saliency map, and attention is directed to the most salient
regions first. Top-down attentional control is achieved by modulating the contribution of
different feature types to the saliency map. A key source of data concerning attentional
control comes from behavioral studies in which the effect of recent experience is examined as individuals repeatedly perform a perceptual discrimination task (e.g., "what
shape is the odd-colored object?"). The robust finding is that repetition of features of
recent trials (e.g., target color) facilitates performance. We view this facilitation as an
adaptation to the statistical structure of the environment. We propose a probabilistic
model of the environment that is updated after each trial. Under the assumption that
attentional control operates so as to make performance more efficient for more likely
environmental states, we obtain parsimonious explanations for data from four different
experiments. Further, our model provides a rational explanation for why the influence of
past experience on attentional control is short-lived.
1 INTRODUCTION
The brain does not have the computational capacity to fully process the massive quantity
of information provided by the eyes. Selective attention operates to filter the spatiotemporal stream to a manageable quantity. Key to understanding the nature of attention is discovering the algorithm governing selection, i.e., understanding what information will be
selected and what will be suppressed. Selection is influenced by attributes of the spatiotemporal stream, often referred to as bottom-up contributions to attention. For example,
attention is drawn to abrupt onsets, motion, and regions of high contrast in brightness and
color. Most theories of attention posit that some visual information processing is performed preattentively and in parallel across the visual field. This processing extracts primitive visual features such as color and motion, which provide the bottom-up cues for
attentional guidance. However, attention is not driven willy-nilly by these cues. The
deployment of attention can be modulated by task instructions, current goals, and domain
knowledge, collectively referred to as top-down contributions to attention.
How do bottom-up and top-down contributions to attention interact? Most psychologically and neurobiologically motivated models propose a very similar architecture in which
information from bottom-up and top-down sources combines in a saliency (or activation)
map (e.g., Itti et al., 1998; Koch & Ullman, 1985; Mozer, 1991; Wolfe, 1994). The
saliency map indicates, for each location in the visual field, the relative importance of that
location. Attention is drawn to the most salient locations first.
Figure 1 sketches the basic architecture that incorporates bottom-up and top-down contributions to the saliency map. The visual image is analyzed to extract maps of primitive features such as color and orientation.

[FIGURE 1. An attentional saliency map constructed from bottom-up and top-down information.]

[FIGURE 2. Sample display from Experiment 1 of Maljkovic and Nakayama (1994).]

Associated with each location in a map is a scalar
response or activation indicating the presence of a particular feature. Most models assume
that responses are stronger at locations with high local feature contrast, consistent with
neurophysiological data, e.g., the response of a red feature detector to a red object is stronger if the object is surrounded by green objects. The saliency map is obtained by taking a
sum of bottom-up activations from the feature maps. The bottom-up activations are modulated by a top-down gain that specifies the contribution of a particular map to saliency in
the current task and environment. Wolfe (1994) describes a heuristic algorithm for determining appropriate gains in a visual search task, where the goal is to detect a target object
among distractor objects. Wolfe proposes that maps encoding features that discriminate
between target and distractors have higher gains, and to be consistent with the data, he
proposes limits on the magnitude of gain modulation and the number of gains that can be
modulated. More recently, Wolfe et al. (2003) have been explicit in proposing optimization as a principle for setting gains given the task definition and stimulus environment.
One aspect of optimizing attentional control involves configuring the attentional system to
perform a given task; for example, in a visual search task for a red vertical target among
green vertical and red horizontal distractors, the task definition should result in a higher
gain for red and vertical feature maps than for other feature maps. However, there is a
more subtle form of gain modulation, which depends on the statistics of display environments. For example, if green vertical distractors predominate, then red is a better discriminative cue than vertical; and if red horizontal distractors predominate, then vertical is a
better discriminative cue than red.
In this paper, we propose a model that encodes statistics of the environment in order to
allow for optimization of attentional control to the structure of the environment. Our
model is designed to address a key set of behavioral data, which we describe next.
1.1 Attentional priming phenomena
Psychological studies involve a sequence of experimental trials that begin with a stimulus
presentation and end with a response from the human participant. Typically, trial order is
randomized, and the context preceding a trial is ignored. However, in sequential studies,
performance is examined on one trial contingent on the past history of trials. These
sequential studies explore how experience influences future performance. Consider a the
sequential attentional task of Maljkovic and Nakayama (1994). On each trial, the stimulus
display (Figure 2) consists of three notched diamonds, one a singleton in color?either
green among red or red among green. The task is to report whether the singleton diamond,
referred to as the target, is notched on the left or the right. The task is easy because the singleton pops out, i.e., the time to locate the singleton does not depend on the number of diamonds in the display. Nonetheless, the response time significantly depends on the
sequence of trials leading up to the current trial: If the target is the same color on the cur-
rent trial as on the previous trial, response time is roughly 100 ms faster than if the target is
a different color on the current trial. Considering that response times are on the order of
700 ms, this effect, which we term attentional priming, is gigantic in the scheme of psychological phenomena.
2 ATTENTIONAL CONTROL AS ADAPTATION TO THE
STATISTICS OF THE ENVIRONMENT
We interpret the phenomenon of attentional priming via a particular perspective on attentional control, which can be summarized in two bullets.
- The perceptual system dynamically constructs a probabilistic model of the environment based on its past experience.
- Control parameters of the attentional system are tuned so as to optimize performance under the current environmental model.
The primary focus of this paper is the environmental model, but we first discuss the nature
of performance optimization.
The role of attention is to make processing of some stimuli more efficient, and consequently, the processing of other stimuli less efficient. For example, if the gain on the red
feature map is turned up, processing will be efficient for red items, but competition from
red items will reduce the efficiency for green items. Thus, optimal control should tune the
system for the most likely states of the world by minimizing an objective function such as:
J(g) = Σ_e P(e) RT_g(e)        (1)
where g is a vector of top-down gains, e is an index over environmental states, P(.) is the
probability of an environmental state, and RT_g(.) is the expected response time (assuming
a constant error rate) to the environmental state under gains g. Determining the optimal
gains is a challenge because every gain setting will result in facilitation of responses to
some environmental states but hindrance of responses to other states.
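To make Equation 1 concrete, here is a small sketch evaluating the objective on a toy two-state environment. The state probabilities, gain vectors, and linear RT functions are invented for illustration (they are not the paper's simulation model); the sketch only shows the tradeoff just described, that any gain setting helps some states and hurts others.

```python
# Toy evaluation of the objective in Equation 1: expected response time
# over environmental states as a function of the top-down gains g. The
# two-state environment and linear RT functions are hypothetical.

def expected_rt(gains, states):
    """J(g) = sum over states e of P(e) * RT_g(e)."""
    return sum(p * rt_fn(gains) for p, rt_fn in states)

# A red target is more likely than a green one; raising the red gain
# speeds red-target trials but slows green-target trials.
states = [
    (0.8, lambda g: 700 - 100 * g["red"] + 100 * g["green"]),   # red target
    (0.2, lambda g: 700 - 100 * g["green"] + 100 * g["red"]),   # green target
]

biased = expected_rt({"red": 1.0, "green": 0.0}, states)    # tuned to red
neutral = expected_rt({"red": 0.5, "green": 0.5}, states)   # no bias
assert biased < neutral  # tuning toward likely states lowers J(g)
```

With these illustrative numbers, tuning fully toward the likely (red) state lowers the expected RT from 700 ms to 640 ms, even though green-target trials become slower.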
The optimal control problem could be solved via direct reinforcement learning, but the
rapidity of human learning makes this possibility unlikely: In a variety of experimental
tasks, evidence suggests that adaptation to a new task or environment can occur in just one
or two trials (e.g., Rogers & Monsell, 1995). Model-based reinforcement learning is an
attractive alternative, because given a model, optimization can occur without further experience in the real world. Although the number of real-world trials necessary to achieve a
given level of performance is comparable for direct and model-based reinforcement learning in stationary environments (Kearns & Singh, 1999), naturalistic environments can be
viewed as highly nonstationary. In such a situation, the framework we suggest is well
motivated: After each experience, the environment model is updated. The updated environmental model is then used to retune the attentional system.
In this paper, we propose a particular model of the environment suitable for visual search
tasks. Rather than explicitly modeling the optimization of attentional control by setting
gains, we assume that the optimization process will serve to minimize Equation 1.
Because any gain adjustment will facilitate performance in some environmental states and
hinder performance in others, an optimized control system should obtain faster reaction
times for more probable environmental states. This assumption allows us to explain experimental results in a minimal, parsimonious framework.
3 MODELING THE ENVIRONMENT
Focusing on the domain of visual search, we characterize the environment in terms of a
probability distribution over configurations of target and distractor features. We distinguish three classes of features: defining, reported, and irrelevant. To explain these terms,
consider the task of searching a display of size-varying, colored, notched diamonds (Figure 2), with the task of detecting the singleton in color and judging the notch location.
Color is the defining feature, notch location is the reported feature, and size is an irrelevant feature. To simplify the exposition, we treat all features as having discrete values, an
assumption which is true of the experimental tasks we model. We begin by considering
displays containing a single target and a single distractor, and shortly generalize to multidistractor displays.
We use the framework of Bayesian networks to characterize the environment. Each feature of the target and distractor is a discrete random variable, e.g., Tcolor for target color
and Dnotch for the location of the notch on the distractor. The Bayes net encodes the probability distribution over environmental states; in our working example, this distribution is
P(Tcolor, Tsize, Tnotch, Dcolor, Dsize, Dnotch).
The structure of the Bayes net specifies the relationships among the features. The simplest
model one could consider would be to treat the features as independent, illustrated in Figure 3a for the singleton-color search task. The opposite extreme would be the full joint distribution, which could be represented by a look-up table indexed by the six features, or by
the cascading Bayes net architecture in Figure 3b. The architecture we propose, which
we'll refer to as the dominance model (Figure 3c), has an intermediate dependency structure, and expresses the joint distribution as:

P(Tcolor) P(Dcolor | Tcolor) P(Tsize | Tcolor) P(Tnotch | Tcolor) P(Dsize | Dcolor) P(Dnotch | Dcolor).
The structured model is constructed based on three rules.
1. The defining feature of the target is at the root of the tree.
2. The defining feature of the distractor is conditionally dependent on the defining feature of the target. We refer to this rule as dominance of the target over the distractor.
3. The reported and irrelevant features of target (distractor) are conditionally dependent
on the defining feature of the target (distractor). We refer to this rule as dominance of
the defining feature over nondefining features.
As we will demonstrate, the dominance model produces a parsimonious account of a wide
range of experimental data.
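The dominance factorization can be sketched in a few lines of code. The conditional probability tables below are hypothetical placeholders (in the model proper they are learned from trial history, as described in the next section); the sketch just verifies that the factored product, following the three rules above, defines a valid joint distribution.

```python
import itertools

def joint(t_color, t_size, t_notch, d_color, d_size, d_notch, cpt):
    """P(Tcolor, Tsize, Tnotch, Dcolor, Dsize, Dnotch) under the dominance
    structure: the defining feature (color) at the root, the target's
    defining feature dominating the distractor's, and each defining
    feature dominating the reported and irrelevant features."""
    return (cpt["Tcolor"][t_color]
            * cpt["Dcolor|Tcolor"][(d_color, t_color)]
            * cpt["Tsize|Tcolor"][(t_size, t_color)]
            * cpt["Tnotch|Tcolor"][(t_notch, t_color)]
            * cpt["Dsize|Dcolor"][(d_size, d_color)]
            * cpt["Dnotch|Dcolor"][(d_notch, d_color)])

vals = ["a", "b"]  # generic binary feature values
cpt = {
    "Tcolor": {"a": 0.7, "b": 0.3},
    # Distractor color usually differs from target color in these tasks.
    "Dcolor|Tcolor": {(d, t): (0.9 if d != t else 0.1)
                      for d in vals for t in vals},
    "Tsize|Tcolor": {(s, t): 0.5 for s in vals for t in vals},
    "Tnotch|Tcolor": {(n, t): 0.5 for n in vals for t in vals},
    "Dsize|Dcolor": {(s, d): 0.5 for s in vals for d in vals},
    "Dnotch|Dcolor": {(n, d): 0.5 for n in vals for d in vals},
}

total = sum(joint(*cfg, cpt)
            for cfg in itertools.product(vals, repeat=6))
assert abs(total - 1.0) < 1e-9  # a proper distribution over all 64 states
```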
3.1 Updating the environment model
The model's parameters are the conditional distributions embodied in the links. In the
example of Figure 3c with binary random variables, the model has 11 parameters. However, these parameters are determined by the environment: To be adaptive in nonstationary
environments, the model must be updated following each experienced state. We propose a
simple exponentially weighted averaging approach. For two variables V and W with
observed values v and w on trial t, a conditional distribution, P_t(V = u | W = w) = δ_uv, is defined, where δ is the Kronecker delta.

[FIGURE 3. Three models of a visual-search environment with colored, notched, size-varying diamonds. (a) feature-independence model; (b) full-joint model; (c) dominance model.]

The distribution representing the environment following trial t, denoted P^E_t, is then updated as follows:

P^E_t(V = u | W = w) = λ P^E_{t-1}(V = u | W = w) + (1 - λ) P_t(V = u | W = w)        (2)

for all u, where λ is a memory constant. Note that no update is performed for values of W
other than w. An analogous update is performed for unconditional distributions.
How the model is initialized (i.e., specifying P^E_0) is irrelevant because, in all experimental
tasks that we model, participants begin the experiment with many dozens of practice trials.
Data is not collected during practice trials. Consequently, any transient effects of P^E_0 do
not impact the results. In our simulations, we begin with a uniform distribution for P^E_0,
and include practice trials as in the human studies.
Thus far, we've assumed a single target and a single distractor. The experiments that we
model involve multiple distractors. The simple extension we require to handle multiple
distractors is to define a frequentist probability for each distractor feature V,
P_t(V = v | W = w) = C_vw / C_w, where C_vw is the count of co-occurrences of feature values v and w among the distractors, and C_w is the count of w.
Our model is extremely simple. Given a description of the visual search task and environment, the model has only a single degree of freedom, λ. In all simulations, we fix
λ = 0.75; however, the choice of λ does not qualitatively impact any result.
4 SIMULATIONS
In this section, we show that the model can explain a range of data from four different
experiments examining attentional priming. All experiments measure response times of
participants. On each trial, the model can be used to obtain a probability of the display
configuration (the environmental state) on that trial, given the history of trials to that
point. Our critical assumption, as motivated earlier, is that response times monotonically decrease with increasing probability, indicating that visual information processing is
better configured for more likely environmental states. The particular relationship we
assume is that response times are linear in log probability. This assumption yields long
response time tails, as are observed in all human studies.
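The assumed linking function can be sketched directly; the intercept and slope below are arbitrary illustrative values, not parameters fit to the human data.

```python
import math

# Sketch of the assumed linking function: simulated response time is
# linear in the negative log probability of the trial's display
# configuration. Intercept and slope are illustrative placeholders.

def response_time(p_trial, intercept=400.0, slope=100.0):
    """RT in ms as a linear function of -log P(trial)."""
    return intercept + slope * (-math.log(p_trial))

# More probable environmental states yield faster responses; rare,
# low-probability states produce the long RT tail noted in the text.
assert response_time(0.8) < response_time(0.2)
```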
4.1 Maljkovic and Nakayama (1994, Experiment 5)
In this experiment, participants were asked to search for a singleton in color in a display of
three red or green diamonds. Each diamond was notched on either the left or right side,
and the task was to report the side of the notch on the color singleton. The well-practiced
participants made very few errors. Reaction time (RT) was examined as a function of
whether the target on a given trial is the same or different color as the target on trial n steps
back or ahead. Figure 4 shows the results, with the human RTs in the left panel and the
simulation log probabilities in the right panel. The horizontal axis represents n. Both
graphs show the same outcome: repetition of target color facilitates performance. This
influence lasts only for a half dozen trials, with an exponentially decreasing influence further into the past. In the model, this decreasing influence is due to the exponential decay of
recent history (Equation 2). Figure 4 also shows that, as expected, the
influence on the current trial.
4.2 Maljkovic and Nakayama (1994, Experiment 8)
In the previous experiment, it is impossible to determine whether facilitation is due to repetition of the target's color or the distractor's color, because the display contains only two
colors, and therefore repetition of target color implies repetition of distractor color. To
unconfound these two potential factors, an experiment like the previous one was con-
ducted using four distinct colors, allowing one to examine the effect of repeating the target
color while varying the distractor color, and vice versa. The sequence of trials was composed of subsequences of up to six consecutive trials with either the target or distractor
color held constant while the other color was varied trial to trial. Following each subsequence, both target and distractors were changed. Figure 5 shows that for both humans and
the simulation, performance improves toward an asymptote as the number of target and
distractor repetitions increases; in the model, the asymptote is due to the probability of the
repeated color in the environment model approaching 1.0. The performance improvement
is greater for target than distractor repetition; in the model, this difference is due to the
dominance of the defining feature of the target over the defining feature of the distractor.
4.3 Huang, Holcombe, and Pashler (2004, Experiment 1)
Huang et al. (2004) and Hillstrom (2000) conducted studies to determine whether repetitions of one feature facilitate performance independently of repetitions of another feature.
In the Huang et al. study, participants searched for a singleton in size in a display consisting of lines that were short and long, slanted left or right, and colored white or black. The
reported feature was target slant. Slant, size, and color were uncorrelated. Huang et al. discovered that repeating an irrelevant feature (color or orientation) facilitated performance,
but only when the defining feature (size) was repeated. As shown in Figure 6, the model
replicates human performance, due to the dominance of the defining feature over the
reported and irrelevant features.
4.4 Wolfe, Butcher, Lee, and Hyde (2003, Experiment 1)
In an empirical tour-de-force, Wolfe et al. (2003) explored singleton search over a range of
environments.

[FIGURE 4. Experiment 5 of Maljkovic and Nakayama (1994): performance on a given trial conditional on the color of the target on a previous or subsequent trial. Human data is from subject KN.]

[FIGURE 5. Experiment 8 of Maljkovic and Nakayama (1994). (left panel) human data, average of subjects KN and SS; (right panel) simulation.]

[FIGURE 6. Experiment 1 of Huang, Holcombe, & Pashler (2004). (left panel) human data; (right panel) simulation.]

The task is to detect the presence or absence of a singleton in displays consisting of colored (red or green), oriented (horizontal or vertical) lines. Target-absent trials
were used primarily to ensure participants were searching the display. The experiment
examined seven experimental conditions, which varied in the amount of uncertainty as to
the target identity. The essential conditions, from least to most uncertainty, are: blocked
(e.g., target always red vertical among green horizontals), mixed feature (e.g., target
always a color singleton), mixed dimension (e.g., target either red or vertical), and fully
mixed (target could be red, green, vertical, or horizontal). With this design, one can ascertain how uncertainty in the environment and in the target definition influence task difficulty. Because the defining feature in this experiment could be either color or orientation,
we modeled the environment with two Bayes nets (one color-dominant and one orientation-dominant) and performed model averaging. A comparison of Figures 7a and 7b
shows a correspondence between human RTs and model predictions. Less uncertainty in
the environment leads to more efficient performance. One interesting result from the
model is its prediction that the mixed-feature condition is easier than the fully-mixed condition; that is, search is more efficient when the dimension (i.e., color vs. orientation) of
the singleton is known, even though the model has no abstract representation of feature
dimensions, only feature values.
4.5 Optimal adaptation constant
In all simulations so far, we fixed the memory constant. From the human data, it is clear
that memory for recent experience is relatively short lived, on the order of a half dozen trials (e.g., left panel of Figure 4). In this section we provide a rational argument for the short
duration of memory in attentional control.
Figure 7c shows mean negative log probability in each condition of the Wolfe et al. (2003)
experiment, as a function of λ. To assess these probabilities, for each experimental condition, the model was initialized so that all of the conditional distributions were uniform,
and then a block of trials was run. Log probability for all trials in the block was averaged.
The negative log probability (y axis of the Figure) is a measure of the model's misprediction of the next trial in the sequence.
For complex environments, such as the fully-mixed condition, a small memory constant is
detrimental: With rapid memory decay, the effective history of trials is a high-variance
sample of the distribution of environmental states. For simple environments, a large memory constant is detrimental: With slow memory decay, the model does not transition
quickly from the initial environmental model to one that reflects the statistics of a new
environment. Thus, the memory constant is constrained by being large enough that the
environment model can hold on to sufficient history to represent complex environments,
and by being small enough that the model adapts quickly to novel environments. If the
conditions in Wolfe et al. give some indication of the range of naturalistic environments an
agent encounters, we have a rational account of why attentional priming is so short-lived.
Whether priming lasts 2 trials or 20, the surprising empirical result is that it does not last
200 or 2000 trials. Our rational argument provides a rough insight into this finding.
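This tradeoff can be illustrated with a toy sweep: track a single binary feature with the Equation 2 update and measure mean misprediction (negative log probability of the upcoming trial) under a small versus large memory constant. The two environments below are invented stand-ins for the simplest (blocked) and most complex (fully mixed) conditions, not a reconstruction of the actual Wolfe et al. displays.

```python
import math
import random

# Toy sweep over the memory constant lambda: one binary feature is
# tracked with the Equation 2 update, and mean -log P of the next trial
# measures misprediction, as in Figure 7c.

def mean_surprise(trials, lam):
    p = {0: 0.5, 1: 0.5}
    total = 0.0
    for v in trials:
        total -= math.log(p[v])  # misprediction of the upcoming trial
        p = {u: lam * q + (1 - lam) * (u == v) for u, q in p.items()}
    return total / len(trials)

random.seed(0)
blocked = [0] * 500                                  # predictable environment
mixed = [random.randrange(2) for _ in range(500)]    # unpredictable environment

# Fast decay (small lambda) wins in the simple environment...
assert mean_surprise(blocked, lam=0.5) < mean_surprise(blocked, lam=0.98)
# ...while slow decay (large lambda) wins in the complex one.
assert mean_surprise(mixed, lam=0.5) > mean_surprise(mixed, lam=0.98)
```

An intermediate λ, like the 0.75 used in the simulations, balances these two failure modes, which is the rough insight behind the short lifetime of priming.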
[FIGURE 7. (a) Human data for Wolfe et al. (2003), Experiment 1; (b) simulation; (c) misprediction of the model (i.e., lower y value = better) as a function of λ for the five experimental conditions.]
5 DISCUSSION
The psychological literature contains two opposing accounts of attentional priming and its
relation to attentional control. Huang et al. (2004) and Hillstrom (2000) propose an episodic account in which a distinct memory trace (representing the complete configuration
of features in the display) is laid down for each trial, and priming depends on configural
similarity of the current trial to previous trials. Alternatively, Maljkovic and Nakayama
(1994) and Wolfe et al. (2003) propose a feature-strengthening account in which detection
of a feature on one trial increases its ability to attract attention on subsequent trials, and
priming is proportional to the number of overlapping features from one trial to the next.
The episodic account corresponds roughly to the full joint model (Figure 3b), and the feature-strengthening account corresponds roughly to the independence model (Figure 3a).
Neither account is adequate to explain the range of data we presented. However, an intermediate account, the dominance model (Figure 3c), is not only sufficient, but it offers a
parsimonious, rational explanation. Beyond the model's basic assumptions, it has only one
free parameter, and can explain results from diverse experimental paradigms.
The model makes a further theoretical contribution. Wolfe et al. distinguish the environments in their experiment in terms of the amount of top-down control available, implying
that different mechanisms might be operating in different environments. However, in our
account, top-down control is not some substance distributed in different amounts depending on the nature of the environment. Our account treats all environments uniformly, relying on attentional control to adapt to the environment at hand.
We conclude with two limitations of the present work. First, our account presumes a particular network architecture, instead of a more elegant Bayesian approach that specifies
priors over architectures, and performs automatic model selection via the sequence of trials. We did explore such a Bayesian approach, but it was unable to explain the data. Second, at least one finding in the literature is problematic for the model. Hillstrom (2000)
occasionally finds that RTs slow when an irrelevant target feature is repeated but the defining target feature is not. However, because this effect is observed only in some experiments, it is likely that any model would require elaboration to explain the variability.
ACKNOWLEDGEMENTS
We thank Jeremy Wolfe for providing the raw data from his experiment for reanalysis. This research was funded
by NSF BCS Award 0339103.
REFERENCES
Huang, L., Holcombe, A. O., & Pashler, H. (2004). Repetition priming in visual search: Episodic retrieval, not feature priming. Memory & Cognition, 32, 12-20.
Hillstrom, A. P. (2000). Repetition effects in visual search. Perception & Psychophysics, 62, 800-817.
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Analysis & Machine Intelligence, 20, 1254-1259.
Kearns, M., & Singh, S. (1999). Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 11 (pp. 996-1002). Cambridge, MA: MIT Press.
Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4, 219-227.
Maljkovic, V., & Nakayama, K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657-672.
Mozer, M. C. (1991). The perception of multiple objects: A connectionist approach. Cambridge, MA: MIT Press.
Rogers, R. D., & Monsell, S. (1995). The cost of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207-231.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.
Wolfe, J. M., Butcher, S. J., Lee, C., & Hyde, M. (2003). Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. Journal of Experimental Psychology: Human Perception & Performance, 29, 483-502.
Measuring Shared Information and Coordinated Activity in Neuronal Networks
Kristina Lisa Klinkner
Cosma Rohilla Shalizi
Marcelo F. Camperi
Statistics Department
University of Michigan
Ann Arbor, MI 48109
[email protected]
Statistics Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Physics Department
University of San Francisco
San Francisco, CA 94118
[email protected]
Abstract
Most nervous systems encode information about stimuli in the responding activity of large neuronal networks. This activity often manifests
itself as dynamically coordinated sequences of action potentials. Since
multiple electrode recordings are now a standard tool in neuroscience
research, it is important to have a measure of such network-wide behavioral coordination and information sharing, applicable to multiple neural
spike train data. We propose a new statistic, informational coherence,
which measures how much better one unit can be predicted by knowing
the dynamical state of another. We argue informational coherence is a
measure of association and shared information which is superior to traditional pairwise measures of synchronization and correlation. To find the
dynamical states, we use a recently-introduced algorithm which reconstructs effective state spaces from stochastic time series. We then extend
the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate our method by testing it
on a detailed model of the transition from gamma to beta rhythms.
Much of the most important information in neural systems is shared over multiple neurons or cortical areas, in such forms as population codes and distributed representations
[1]. On behavioral time scales, neural information is stored in temporal patterns of activity as opposed to static markers; therefore, as information is shared between neurons
or brain regions, it is physically instantiated as coordination between entire sequences of
neural spikes. Furthermore, neural systems and regions of the brain often require coordinated neural activity to perform important functions; acting in concert requires multiple
neurons or cortical areas to share information [2]. Thus, if we want to measure the dynamic
network-wide behavior of neurons and test hypotheses about them, we need reliable, practical methods to detect and quantify behavioral coordination and the associated information
sharing across multiple neural units. These would be especially useful in testing ideas
about how particular forms of coordination relate to distributed coding (e.g., that of [3]).
Current techniques to analyze relations among spike trains handle only pairs of neurons, so
we further need a method which is extendible to analyze the coordination in the network,
system, or region as a whole. Here we propose a new measure of behavioral coordination
and information sharing, informational coherence, based on the notion of dynamical state.
Section 1 argues that coordinated behavior in neural systems is often not captured by existing measures of synchronization or correlation, and that something sensitive to nonlinear,
stochastic, predictive relationships is needed. Section 2 defines informational coherence
as the (normalized) mutual information between the dynamical states of two systems and
explains how looking at the states, rather than just observables, fulfills the needs laid out in
Section 1. Since we rarely know the right states a priori, Section 2.1 briefly describes how
we reconstruct effective state spaces from data. Section 2.2 gives some details about how
we calculate the informational coherence and approximate the global information stored
in the network. Section 3 applies our method to a model system (a biophysically detailed
conductance-based model) comparing our results to those of more familiar second-order
statistics. In the interest of space, we omit proofs and a full discussion of the existing literature, giving only minimal references here; proofs and references will appear in a longer
paper now in preparation.
1  Synchrony or Coherence?
Most hypotheses which involve the idea that information sharing is reflected in coordinated
activity across neural units invoke a very specific notion of coordinated activity, namely
strict synchrony: the units should be doing exactly the same thing (e.g., spiking) at exactly
the same time. Investigators then measure coordination by measuring how close the units
come to being strictly synchronized (e.g., variance in spike times).
From an informational point of view, there is no reason to favor strict synchrony over
other kinds of coordination. One neuron consistently spiking 50 ms after another is just as
informative a relationship as two simultaneously spiking, but such stable phase relations
are missed by strict-synchrony approaches. Indeed, whatever the exact nature of the neural
code, it uses temporally extended patterns of activity, and so information sharing should be
reflected in coordination of those patterns, rather than just the instantaneous activity.
There are three common ways of going beyond strict synchrony: cross-correlation and
related second-order statistics, mutual information, and topological generalized synchrony.
The cross-correlation function (the normalized covariance function; this includes, for
present purposes, the joint peristimulus time histogram [2]), is one of the most widespread
measures of synchronization. It can be efficiently calculated from observable series; it
handles statistical as well as deterministic relationships between processes; by incorporating variable lags, it reduces the problem of phase locking. Fourier transformation of the
covariance function γXY(h) yields the cross-spectrum FXY(ω), which in turn gives the spectral coherence cXY(ω) = |FXY(ω)|^2 / (FX(ω) FY(ω)), a normalized correlation between
the Fourier components of X and Y. Integrated over frequencies, the spectral coherence
measures, essentially, the degree of linear cross-predictability of the two series. ([4] applies
spectral coherence to coordinated neural activity.) However, such second-order statistics
only handle linear relationships. Since neural processes are known to be strongly nonlinear, there is little reason to think these statistics adequately measure coordination and
synchrony in neural systems.
Mutual information is attractive because it handles both nonlinear and stochastic relationships and has a very natural and appealing interpretation. Unfortunately, it often seems
to fail in practice, being disappointingly small even between signals which are known to
be tightly coupled [5]. The major reason is that the neural codes use distinct patterns of
activity over time, rather than many different instantaneous actions, and the usual approach
misses these extended patterns. Consider two neurons, one of which drives the other to
spike 50 ms after it does, the driving neuron spiking once every 500 ms. These are very
tightly coordinated, but whether the first neuron spiked at time t conveys little information
about what the second neuron is doing at t: it's not spiking, but it's not spiking most of
the time anyway. Mutual information calculated from the direct observations conflates the
"no spike" of the second neuron preparing to fire with its just-sitting-around "no spike".
Here, mutual information could find the coordination if we used a 50 ms lag, but that won't
work in general. Take two rate-coding neurons with base-line firing rates of 1 Hz, and suppose that a stimulus excites one to 10 Hz and suppresses the other to 0.1 Hz. The spiking
rates thus share a lot of information, but whether the one neuron spiked at t is uninformative
about what the other neuron did then, and lagging won't help.
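This failure mode is easy to reproduce numerically. The following toy sketch (illustrative Python, not part of the original analysis; all names are ours) builds the driver/driven pair from the earlier example, in which one neuron fires every 500 ms and drives the other to spike 50 ms later, and compares plug-in mutual information at zero lag with the value at a 50 ms lag:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I[X; Y] in bits from two paired sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

# Driver fires every 500 ms (1 ms bins); the driven unit echoes it 50 ms later.
T = 50000
driver = [1 if t % 500 == 0 else 0 for t in range(T)]
driven = [1 if (t - 50) % 500 == 0 else 0 for t in range(T)]

mi_zero_lag = mutual_information(driver, driven)           # essentially zero
mi_lag_50 = mutual_information(driver[:-50], driven[50:])  # recovers H[driver]
```

As the text notes, the right lag rescues this particular example but not the rate-coding one, since no lag aligns a 10 Hz unit with a 0.1 Hz unit spike-for-spike.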
Generalized synchrony is based on the idea of establishing relationships between the states
of the various units. "State" here is taken in the sense of physics, dynamics and control
theory: the state at time t is a variable which fixes the distribution of observables at all
times ≥ t, rendering the past of the system irrelevant [6]. Knowing the state allows us to
predict, as well as possible, how the system will evolve, and how it will respond to external
forces [7]. Two coupled systems are said to exhibit generalized synchrony if the state of one
system is given by a mapping from the state of the other. Applications to data employ state-space reconstruction [8]: if the state x ∈ X evolves according to smooth, d-dimensional
deterministic dynamics, and we observe a generic function y = f(x), then the space Y
of time-delay vectors [y(t), y(t − τ), ..., y(t − (k − 1)τ)] is diffeomorphic to X if k > 2d,
for generic choices of lag τ. The various versions of generalized synchrony differ on how,
precisely, to quantify the mappings between reconstructed state spaces, but they all appear
to be empirically equivalent to one another and to notions of phase synchronization based
on Hilbert transforms [5]. Thus all of these measures accommodate nonlinear relationships,
and are potentially very flexible. Unfortunately, there is essentially no reason to believe that
neural systems have deterministic dynamics at experimentally-accessible levels of detail,
much less that there are deterministic relationships among such states for different units.
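For concreteness, the delay-vector construction underlying classical state-space reconstruction can be sketched in a few lines (an illustrative helper, not the authors' code; the function name and toy series are ours):

```python
def delay_vectors(y, k, tau):
    """Build time-delay embedding vectors [y(t), y(t - tau), ..., y(t - (k - 1) tau)]
    for every t at which the full window is available."""
    start = (k - 1) * tau
    return [tuple(y[t - j * tau] for j in range(k)) for t in range(start, len(y))]

# Toy observable series; with k = 3, tau = 2 the first vector is (y[4], y[2], y[0]).
y = [0.0, 0.1, 0.4, 0.9, 1.6, 2.5, 3.6]
vecs = delay_vectors(y, k=3, tau=2)
print(vecs[0])  # -> (1.6, 0.4, 0.0)
```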
What we want, then, but none of these alternatives provides, is a quantity which measures
predictive relationships among states, but allows those relationships to be nonlinear and
stochastic. The next section introduces just such a measure, which we call "informational coherence".
2  States and Informational Coherence
There are alternatives to calculating the ?surface? mutual information between the sequences of observations themselves (which, as described, fails to capture coordination).
If we know that the units are phase oscillators, or rate coders, we can estimate their instantaneous phase or rate and, by calculating the mutual information between those variables,
see how coordinated the units? patterns of activity are. However, phases and rates do not
exhaust the repertoire of neural patterns and a more general, common scheme is desirable.
The most general notion of ?pattern of activity? is simply that of the dynamical state of the
system, in the sense mentioned above. We now formalize this.
Assuming the usual notation for Shannon information [9], the information content of a
state variable X is H[X] and the mutual information between X and Y is I[X; Y ]. As
is well known, I[X; Y] ≤ min(H[X], H[Y]). We use this to normalize the mutual state
information to a 0 ? 1 scale, and this is the informational coherence (IC).
ψ(X, Y) = I[X; Y] / min(H[X], H[Y]), with 0/0 = 0.    (1)
ψ can be interpreted as follows. I[X; Y] is the Kullback-Leibler divergence between the
joint distribution of X and Y , and the product of their marginal distributions [9], indicating
the error involved in ignoring the dependence between X and Y . The mutual information
between predictive, dynamical states thus gauges the error involved in assuming the two
systems are independent, i.e., how much predictions could improve by taking into account
the dependence. Hence it measures the amount of dynamically-relevant information shared
between the two systems. ψ simply normalizes this value, and indicates the degree to
which two systems have coordinated patterns of behavior (cf. [10], although this only uses
directly observable quantities).
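Given two sequences of (already reconstructed) dynamical states, Eq. 1 reduces to plug-in entropy estimates. The sketch below is an illustrative implementation under that assumption; the state labels are invented:

```python
import math
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy (bits) of a sequence of discrete states."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def informational_coherence(states_x, states_y):
    """Eq. 1: psi(X, Y) = I[X; Y] / min(H[X], H[Y]), with 0/0 = 0."""
    hx, hy = entropy(states_x), entropy(states_y)
    if min(hx, hy) == 0.0:
        return 0.0
    mi = hx + hy - entropy(list(zip(states_x, states_y)))
    return mi / min(hx, hy)

# Hypothetical causal-state labels for two units: the state *patterns* are
# perfectly coordinated even though the labels themselves differ.
a = ['s0', 's1', 's0', 's1', 's0', 's1']
b = ['u0', 'u1', 'u0', 'u1', 'u0', 'u1']
print(informational_coherence(a, b))  # -> 1.0
```

Because the measure works on states rather than raw observables, relabeling either unit's states leaves ψ unchanged.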
2.1  Reconstruction and Estimation of Effective State Spaces
As mentioned, the state space of a deterministic dynamical system can be reconstructed
from a sequence of observations. This is the main tool of experimental nonlinear dynamics
[8]; but the assumption of determinism is crucial and false, for almost any interesting neural
system. While classical state-space reconstruction won?t work on stochastic processes,
such processes do have state-space representations [11], and, in the special case of discretevalued, discrete-time series, there are ways to reconstruct the state space.
Here we use the CSSR algorithm, introduced in [12] (code available at
http://bactra.org/CSSR). This produces causal state models, which are stochastic
automata capable of statistically-optimal nonlinear prediction; the state of the machine
is a minimal sufficient statistic for the future of the observable process[13].1 The basic
idea is to form a set of states which should be (1) Markovian, (2) sufficient statistics for
the next observable, and (3) have deterministic transitions (in the automata-theory sense).
The algorithm begins with a minimal, one-state, IID model, and checks whether these
properties hold, by means of hypothesis tests. If they fail, the model is modified, generally
but not always by adding more states, and the new model is checked again. Each state
of the model corresponds to a distinct distribution over future events, i.e., to a statistical
pattern of behavior. Under mild conditions, which do not involve prior knowledge of
the state space, CSSR converges in probability to the unique causal state model of the
data-generating process [12]. In practice, CSSR is quite fast (linear in the data size), and
generalizes at least as well as training hidden Markov models with the EM algorithm and
using cross-validation for selection, the standard heuristic [12].
One advantage of the causal state approach (which it shares with classical state-space reconstruction) is that state estimation is greatly simplified. In the general case of nonlinear
state estimation, it is necessary to know not just the form of the stochastic dynamics in
the state space and the observation function, but also their precise parametric values and
the distribution of observation and driving noises. Estimating the state from the observable
time series then becomes a computationally-intensive application of Bayes?s Rule [17].
Due to the way causal states are built as statistics of the data, with probability 1 there is a
finite time, t, at which the causal state at time t is certain. This is not just with some degree
of belief or confidence: because of the way the states are constructed, it is impossible for the
process to be in any other state at that time. Once the causal state has been established, it can
be updated recursively, i.e., the causal state at time t + 1 is an explicit function of the causal
state at time t and the observation at t + 1. The causal state model can be automatically
converted, therefore, into a finite-state transducer which reads in an observation time series
and outputs the corresponding series of states [18, 13]. (Our implementation of CSSR
filters its training data automatically.) The result is a new time series of states, from which
all non-predictive components have been filtered out.
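The filtering step itself is simple once the transducer exists. The sketch below assumes the causal state model has already been reconstructed and is given as a transition table; the two-state toy machine and its labels are invented for illustration:

```python
def filter_states(transitions, start_state, observations):
    """Run the (already reconstructed) finite-state transducer over an
    observation stream, emitting the causal state after each observation."""
    state, states = start_state, []
    for obs in observations:
        state = transitions[(state, obs)]
        states.append(state)
    return states

# Invented two-state toy machine: 'A' = "just spiked", 'B' = "quiescent".
transitions = {('A', 0): 'B', ('A', 1): 'A', ('B', 0): 'B', ('B', 1): 'A'}
states = filter_states(transitions, 'B', [0, 0, 1, 0, 1, 1])
print(states)  # -> ['B', 'B', 'A', 'B', 'A', 'A']
```

This mirrors the recursive update described above: the state at t + 1 is a deterministic function of the state at t and the observation at t + 1.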
2.2  Estimating the Coherence
Our algorithm for estimating the matrix of informational coherences is as follows. For each
unit, we reconstruct the causal state model, and filter the observable time series to produce
a series of causal states. Then, for each pair of neurons, we construct a joint histogram of
1. Causal state models have the same expressive power as observable operator models [14] or predictive state representations [7], and greater power than variable-length Markov models [15, 16].
Figure 1: Rastergrams of neuronal spike-times in the network. Excitatory, pyramidal neurons (numbers 1 to 1000) are shown in green, inhibitory interneurons (numbers 1001 to 1300) in red. During the
first 10 seconds (a), the recurrent connections among the pyramidal cells are suppressed and a gamma
rhythm emerges (left). At t = 10s, those connections become active, leading to a beta rhythm (b,
right).
the state distribution, estimate the mutual information between the states, and normalize by
the single-unit state informations. This gives a symmetric matrix of ψ values.
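A self-contained sketch of this pairwise step (it re-derives the plug-in estimators so it runs on its own; the toy "state" series are invented) might look like:

```python
import math
from collections import Counter

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def psi(xs, ys):
    """Informational coherence (Eq. 1) between two state sequences."""
    d = min(entropy(xs), entropy(ys))
    return 0.0 if d == 0.0 else (entropy(xs) + entropy(ys)
                                 - entropy(list(zip(xs, ys)))) / d

def coherence_matrix(state_series):
    """Symmetric matrix of pairwise informational coherences,
    given one filtered causal-state series per unit."""
    n = len(state_series)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = 1.0  # by convention, a unit is fully coherent with itself
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = psi(state_series[i], state_series[j])
    return m

# Toy state series: units 0 and 1 are locked in antiphase; unit 2 is unrelated.
units = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 0, 1, 1]]
M = coherence_matrix(units)
```

Note that the antiphase pair gets ψ = 1, which a strict-synchrony criterion would miss entirely.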
Even if two systems are independent, their estimated IC will, on average, be positive, because, while they should have zero mutual information, the empirical estimate of mutual
information is non-negative. Thus, the significance of IC values must be assessed against
the null hypothesis of system independence. The easiest way to do so is to take the reconstructed state models for the two systems and run them forward, independently of one
another, to generate a large number of simulated state sequences; from these calculate values of the IC. This procedure will approximate the sampling distribution of the IC under a
null model which preserves the dynamics of each system, but not their interaction. We can
then find p-values as usual. We omit them here to save space.
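The logic of this null test can be sketched as follows. As a simplified stand-in for running the full reconstructed causal state models forward, this illustrative code fits first-order Markov surrogates to each series and simulates them independently; all names are ours:

```python
import random

def markov_fit(seq):
    """First-order Markov transition lists fitted to a sequence
    (a simplified surrogate for the reconstructed state model)."""
    trans = {}
    for a, b in zip(seq, seq[1:]):
        trans.setdefault(a, []).append(b)
    return trans

def markov_sample(trans, start, length, rng):
    # assumes every visited state has at least one observed successor
    out, s = [start], start
    for _ in range(length - 1):
        s = rng.choice(trans[s])
        out.append(s)
    return out

def p_value(stat, x, y, n_surrogates=200, seed=0):
    """Fraction of surrogate pairs, simulated independently of one another,
    whose statistic is at least the observed value (add-one smoothed)."""
    rng = random.Random(seed)
    observed = stat(x, y)
    tx, ty = markov_fit(x), markov_fit(y)
    exceed = 0
    for _ in range(n_surrogates):
        sx = markov_sample(tx, x[0], len(x), rng)
        sy = markov_sample(ty, y[0], len(y), rng)
        if stat(sx, sy) >= observed:
            exceed += 1
    return (exceed + 1) / (n_surrogates + 1)

def match_frac(a, b):
    return sum(u == v for u, v in zip(a, b)) / len(a)

# Two strictly alternating units agree perfectly, but the null model
# reproduces that agreement from each unit's own dynamics, so p stays high.
x = [0, 1] * 50
y = [0, 1] * 50
p = p_value(match_frac, x, y)
```

This is exactly the point of the null: it preserves each system's dynamics while destroying any interaction, so apparent coordination that the individual dynamics can explain is not counted as significant.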
2.3  Approximating the Network Multi-Information
There is broad agreement [2] that analyses of networks should not just be an analysis of
pairs of neurons, averaged over pairs. Ideally, an analysis of information sharing in a network would look at the overall structure of statistical dependence between the various
units, reflected in the complete joint probability distribution P of the states. This would
then allow us, for instance, to calculate the n-fold multi-information, I[X1; X2; ...; Xn] ≡
D(P ||Q), the Kullback-Leibler divergence between the joint distribution P and the product of marginal distributions Q, analogous to the pairwise mutual information [19]. Calculated over the predictive states, the multi-information would give the total amount of
shared dynamical information in the system. Just as we normalized the mutual information
I[X1; X2] by its maximum possible value, min(H[X1], H[X2]), we normalize the multi-information by its maximum, which is the smallest sum of n − 1 marginal entropies:

I[X1; X2; ...; Xn] ≤ min_k Σ_{i≠k} H[Xi]
Unfortunately, P is a distribution over a very high-dimensional space and so is hard to estimate well without strong parametric constraints. We thus consider approximations.
The lowest-order approximation treats all the units as independent; this is the distribution
Q. One step up are tree distributions, where the global distribution is a function of the joint
distributions of pairs of units. Not every pair of units needs to enter into such a distribution,
though every unit must be part of some pair. Graphically, a tree distribution corresponds to a
spanning tree, with edges linking units whose interactions enter into the global probability,
and conversely spanning trees determine tree distributions. Writing ET for the set of pairs
(i, j) and abbreviating X1 = x1 , X2 = x2 , . . . Xn = xn by X = x, one has
T(X = x) = ∏_{(i,j)∈ET} [ T(Xi = xi, Xj = xj) / ( T(Xi = xi) T(Xj = xj) ) ] · ∏_{i=1}^{n} T(Xi = xi)    (2)
where the marginal distributions T (Xi ) and the pair distributions T (Xi , Xj ) are estimated
by the empirical marginal and pair distributions.
We must now pick edges ET so that T best approximates the true global distribution P .
A natural approach is to minimize D(P ||T ), the divergence between P and its tree approximation. Chow and Liu [20] showed that the maximum-weight spanning tree gives the
divergence-minimizing distribution, taking an edge?s weight to be the mutual information
between the variables it links.
There are three advantages to using the Chow-Liu approximation. (1) Estimating T from
empirical probabilities gives a consistent maximum likelihood estimator of the ideal ChowLiu tree [20], with reasonable rates of convergence, so T can be reliably known even if
P cannot. (2) There are efficient algorithms for constructing maximum-weight spanning
trees, such as Prim's algorithm [21, sec. 23.2], which runs in time O(n^2 + n log n). Thus,
the approximation is computationally tractable. (3) The KL divergence of the Chow-Liu
distribution from Q gives a lower bound on the network multi-information; that bound is
just the sum of the mutual informations along the edges in the tree:
I[X1; X2; ...; Xn] ≥ D(T||Q) = Σ_{(i,j)∈ET} I[Xi; Xj]    (3)
Even if we knew P exactly, Eq. 3 would be useful as an alternative to calculating D(P ||Q)
directly, evaluating log P (x)/Q(x) for all the exponentially-many configurations x.
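The Chow-Liu step can be sketched directly from Eq. 3: run Prim's algorithm on the complete graph weighted by pairwise mutual informations, then sum the tree's edge weights. The matrix below is hypothetical, purely for illustration:

```python
def max_spanning_tree(weights, n):
    """Prim's algorithm on a complete graph; weights[i][j] is the mutual
    information between units i and j. Returns the tree's edge set E_T."""
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or weights[i][j] > best[0]):
                    best = (weights[i][j], i, j)
        _, i, j = best
        edges.append((i, j))
        in_tree.add(j)
    return edges

def multi_information_lower_bound(mi):
    """Eq. 3: the sum of mutual informations along the Chow-Liu tree edges
    lower-bounds the network multi-information."""
    return sum(mi[i][j] for i, j in max_spanning_tree(mi, len(mi)))

# Hypothetical pairwise mutual-information matrix (bits) for four units.
mi = [[0.0, 0.9, 0.1, 0.2],
      [0.9, 0.0, 0.3, 0.1],
      [0.1, 0.3, 0.0, 0.8],
      [0.2, 0.1, 0.8, 0.0]]
bound = multi_information_lower_bound(mi)  # edges (0,1), (1,2), (2,3): about 2.0 bits
```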
It is natural to seek higher-order approximations to P , e.g., using three-way interactions
not decomposable into pairwise interactions [22, 19]. But it is hard to do so effectively,
because finding the optimal approximation to P when such interactions are allowed is NP
[23], and analytical formulas like Eq. 3 generally do not exist [19]. We therefore confine
ourselves to the Chow-Liu approximation here.
3  Example: A Model of Gamma and Beta Rhythms
We use simulated data as a test case, instead of empirical multiple electrode recordings,
which allows us to try the method on a system of over 1000 neurons and compare the
measure against expected results. The model, taken from [24], was originally designed to
study episodes of gamma (30–80 Hz) and beta (12–30 Hz) oscillations in the mammalian
nervous system, which often occur successively with a spontaneous transition between
them. More concretely, the rhythms studied were those displayed by in vitro hippocampal
(CA1) slice preparations and by in vivo neocortical EEGs.
The model contains two neuron populations: excitatory (AMPA) pyramidal neurons and
inhibitory (GABAA ) interneurons, defined by conductance-based Hodgkin-Huxley-style
equations. Simulations were carried out in a network of 1000 pyramidal cells and 300
interneurons. Each cell was modeled as a one-compartment neuron with all-to-all coupling,
endowed with the basic sodium and potassium spiking currents, an external applied current,
and some Gaussian input noise.
The first 10 seconds of the simulation correspond to the gamma rhythm, in which only a
group of neurons is made to spike via a linearly increasing applied current. The beta rhythm
Figure 2: Heat-maps of coordination for the network, as measured by zero-lag cross-correlation
(top row) and informational coherence (bottom), contrasting the gamma rhythm (left column) with
the beta (right). Colors run from red (no coordination) through yellow to pale cream (maximum).
(subsequent 10 seconds) is obtained by activating pyramidal-pyramidal recurrent connections (potentiated by Hebbian preprocessing as a result of synchrony during the gamma
rhythm) and a slow outward after-hyperpolarization (AHP) current (the M-current), suppressed during gamma due to the metabotropic activation used in the generation of the
rhythm. During the beta rhythm, pyramidal cells, silent during gamma rhythm, fire on a
subset of interneurons cycles (Fig. 1).
Fig. 2 compares zero-lag cross-correlation, a second-order method of quantifying coordination, with the informational coherence calculated from the reconstructed states. (In
this simulation, we could have calculated the actual states of the model neurons directly,
rather than reconstructing them, but for purposes of testing our method we did not.) Crosscorrelation finds some of the relationships visible in Fig. 1, but is confused by, for instance,
the phase shifts between pyramidal cells. (Surface mutual information, not shown, gives
similar results.) Informational coherence, however, has no trouble recognizing the two populations as effectively coordinated blocks. The presence of dynamical noise, problematic
for ordinary state reconstruction, is not an issue. The average IC is 0.411 (or 0.797 if the
inactive, low-numbered neurons are excluded). The tree estimate of the global informational multi-information is 3243.7 bits, with a global coherence of 0.777. The right half of
Fig. 2 repeats this analysis for the beta rhythm; in this stage, the average IC is 0.614, and
the tree estimate of the global multi-information is 7377.7 bits, though the estimated global
coherence falls very slightly to 0.742. This is because low-numbered neurons which were
quiescent before are now active, contributing to the global information, but the overall
pattern is somewhat weaker and noisier (as can be seen from Fig. 1b). So, as expected,
the total information content is higher, but the overall coordination across the network is
lower.
4  Conclusion
Informational coherence provides a measure of neural information sharing and coordinated
activity which accommodates nonlinear, stochastic relationships between extended patterns
of spiking. It is robust to dynamical noise and leads to a genuinely multivariate measure
of global coordination across networks or regions. Applied to data from multi-electrode
recordings, it should be a valuable tool in evaluating hypotheses about distributed neural
representation and function.
Acknowledgments
Thanks to R. Haslinger, E. Ionides and S. Page; and for support to the Santa Fe Institute (under grants
from Intel, the NSF and the MacArthur Foundation, and DARPA agreement F30602-00-2-0583), the
Clare Booth Luce Foundation (KLK) and the James S. McDonnell Foundation (CRS).
References
[1] L. F. Abbott and T. J. Sejnowski, eds. Neural Codes and Distributed Representations. MIT Press, 1998.
[2] E. N. Brown, R. E. Kass, and P. P. Mitra. Nature Neuroscience, 7:456–461, 2004.
[3] D. H. Ballard, Z. Zhang, and R. P. N. Rao. In R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, eds., Probabilistic Models of the Brain, pp. 273–284. MIT Press, 2002.
[4] D. R. Brillinger and A. E. P. Villa. In D. R. Brillinger, L. T. Fernholz, and S. Morgenthaler, eds., The Practice of Data Analysis, pp. 77–92. Princeton U.P., 1997.
[5] R. Quian Quiroga et al. Physical Review E, 65:041903, 2002.
[6] R. F. Streater. Statistical Dynamics. Imperial College Press, London.
[7] M. L. Littman, R. S. Sutton, and S. Singh. In T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Information Processing Systems 14, pp. 1555–1561. MIT Press, 2002.
[8] H. Kantz and T. Schreiber. Nonlinear Time Series Analysis. Cambridge U.P., 1997.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[10] M. Palus et al. Physical Review E, 63:046211, 2001.
[11] F. B. Knight. Annals of Probability, 3:573–596, 1975.
[12] C. R. Shalizi and K. L. Shalizi. In M. Chickering and J. Halpern, eds., Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference, pp. 504–511. AUAI Press, 2004.
[13] C. R. Shalizi and J. P. Crutchfield. Journal of Statistical Physics, 104:817–819, 2001.
[14] H. Jaeger. Neural Computation, 12:1371–1398, 2000.
[15] D. Ron, Y. Singer, and N. Tishby. Machine Learning, 25:117–149, 1996.
[16] P. Bühlmann and A. J. Wyner. Annals of Statistics, 27:480–513, 1999.
[17] N. U. Ahmed. Linear and Nonlinear Filtering for Scientists and Engineers. World Scientific, 1998.
[18] D. R. Upper. PhD thesis, University of California, Berkeley, 1997.
[19] E. Schneidman, S. Still, M. J. Berry, and W. Bialek. Physical Review Letters, 91:238701, 2003.
[20] C. K. Chow and C. N. Liu. IEEE Transactions on Information Theory, IT-14:462–467, 1968.
[21] T. H. Cormen et al. Introduction to Algorithms. 2nd ed. MIT Press, 2001.
[22] S. Amari. IEEE Transactions on Information Theory, 47:1701–1711, 2001.
[23] S. Kirshner, P. Smyth, and A. Robertson. Tech. Rep. 04-04, UC Irvine, Information and Computer Science, 2004.
[24] M. S. Olufsen et al. Journal of Computational Neuroscience, 14:33–54, 2003.
umich:1 available:1 generalizes:1 endowed:1 observe:1 spectral:3 generic:2 save:1 alternative:3 thomas:1 responding:1 top:1 cf:1 trouble:1 calculating:3 f30602:1 giving:1 ghahramani:1 especially:1 approximating:1 classical:2 quantity:2 spike:9 parametric:2 dependence:3 usual:3 traditional:1 villa:1 said:1 exhibit:1 bialek:1 link:1 simulated:2 accommodates:1 argue:1 fy:1 reason:4 spanning:4 assuming:2 code:5 length:1 modeled:1 relationship:12 minimizing:1 unfortunately:3 fe:1 potentially:1 relate:1 negative:1 implementation:1 reliably:1 perform:1 potentiated:1 upper:1 neuron:25 observation:7 markov:2 finite:2 displayed:1 extended:3 looking:1 precise:1 conflates:1 introduced:2 pair:10 namely:1 kl:1 connection:3 extendible:1 california:1 established:1 beyond:1 dynamical:10 pattern:12 built:1 reliable:1 green:1 belief:1 power:2 event:1 natural:3 force:1 sodium:1 scheme:1 improve:1 usfca:1 wyner:1 temporally:1 carried:1 coupled:2 prior:1 literature:1 discretevalued:1 review:3 berry:1 evolve:1 contributing:1 synchronization:4 interesting:1 generation:1 filtering:1 validation:1 foundation:3 degree:3 sufficient:2 consistent:1 share:3 normalizes:1 row:1 excitatory:2 repeat:1 lisa:1 allow:1 weaker:1 institute:1 wide:2 fall:1 taking:2 determinism:1 distributed:4 slice:1 calculated:5 cortical:2 transition:3 xn:6 evaluating:2 world:1 forward:1 concretely:1 made:1 san:2 simplified:1 preprocessing:1 transaction:1 reconstructed:4 approximate:2 observable:7 kullback:2 global:10 active:2 pittsburgh:1 francisco:2 knew:1 xi:9 quiescent:1 spectrum:1 morgenthaler:1 nature:2 ballard:1 robust:1 ca:1 ignoring:1 eeg:1 ampa:1 constructing:1 did:2 significance:1 main:1 linearly:1 whole:1 noise:4 n2:1 allowed:1 x1:7 neuronal:3 fig:5 intel:1 slow:1 predictability:1 wiley:1 fails:1 explicit:1 chickering:1 formula:1 specific:1 peristimulus:1 prim:1 incorporating:1 false:1 adding:1 effectively:2 phd:1 booth:1 entropy:1 michigan:1 simply:2 twentieth:1 lewicki:1 applies:2 corresponds:2 ann:1 
quantifying:1 oscillator:1 shared:6 content:2 experimentally:1 hard:2 acting:1 miss:1 engineer:1 total:2 arbor:1 experimental:1 shannon:1 rarely:1 indicating:1 college:1 support:1 fulfills:1 assessed:1 preparation:2 investigator:1 statespace:1 princeton:1 |
TD(0) Leads to Better Policies than
Approximate Value Iteration
Benjamin Van Roy
Management Science and Engineering and Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
We consider approximate value iteration with a parameterized approximator in which the state space is partitioned and the optimal cost-to-go
function over each partition is approximated by a constant. We establish performance loss bounds for policies derived from approximations
associated with fixed points. These bounds identify benefits to having
projection weights equal to the invariant distribution of the resulting policy. Such projection weighting leads to the same fixed points as TD(0).
Our analysis also leads to the first performance loss bound for approximate value iteration with an average cost objective.
1
Preliminaries
Consider a discrete-time communicating Markov decision process (MDP) with a finite state
space S = {1, . . . , |S|}. At each state x ∈ S, there is a finite set U_x of admissible actions.
If the current state is x and an action u ∈ U_x is selected, a cost of g_u(x) is incurred, and
the system transitions to a state y ∈ S with probability p_xy(u). For any x ∈ S and u ∈ U_x,
Σ_{y∈S} p_xy(u) = 1. Costs are discounted at a rate of α ∈ (0, 1) per period. Each instance
of such an MDP is defined by a quintuple (S, U, g, p, α).
A (stationary deterministic) policy is a mapping μ that assigns an action u ∈ U_x to each
state x ∈ S. If actions are selected based on a policy μ, the state follows a Markov process
with transition matrix P_μ, where each (x, y)th entry is equal to p_xy(μ(x)). The restriction
to communicating MDPs ensures that it is possible to reach any state from any other state.

Each policy μ is associated with a cost-to-go function J_μ ∈ ℜ^{|S|}, defined by

J_μ = Σ_{t=0}^∞ α^t P_μ^t g_μ = (I − αP_μ)^{−1} g_μ,

where, with some abuse of notation, g_μ(x) = g_{μ(x)}(x) for each x ∈ S. A policy μ is said
to be greedy with respect to a function J if

μ(x) ∈ argmin_{u∈U_x} (g_u(x) + α Σ_{y∈S} p_xy(u) J(y)) for all x ∈ S.
The optimal cost-to-go function J* ∈ ℜ^{|S|} is defined by J*(x) = min_μ J_μ(x), for all
x ∈ S. A policy μ* is said to be optimal if J_{μ*} = J*. It is well-known that an optimal
policy exists. Further, a policy μ* is optimal if and only if it is greedy with respect to J*.
Hence, given the optimal cost-to-go function, optimal actions can be computed by minimizing
the right-hand side of the above inclusion.
Value iteration generates a sequence J_ℓ converging to J* according to J_{ℓ+1} = T J_ℓ,
where T is the dynamic programming operator, defined by

(T J)(x) = min_{u∈U_x} (g_u(x) + α Σ_{y∈S} p_xy(u) J(y)),

for all x ∈ S and J ∈ ℜ^{|S|}. This sequence converges to J* for any initialization of J_0.
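As a concrete illustration of the operator T and of value iteration, here is a minimal Python sketch on a made-up two-state, two-action MDP (the costs g and transition probabilities P below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Value iteration J_{l+1} = T J_l on a toy MDP, where
# (T J)(x) = min_u [ g_u(x) + alpha * sum_y p_xy(u) J(y) ].
alpha = 0.9
g = np.array([[1.0, 2.0],                 # g[x, u]: cost of action u in state x
              [0.5, 0.0]])
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # P[x, u, y]: transition probabilities
              [[0.5, 0.5], [0.9, 0.1]]])

def bellman(J):
    Q = g + alpha * P.dot(J)              # Q[x, u] = g[x, u] + alpha * E[J(next)]
    return Q.min(axis=1)

J = np.zeros(2)
for _ in range(1000):
    J = bellman(J)                        # converges to J* (T is an alpha-contraction)
greedy = (g + alpha * P.dot(J)).argmin(axis=1)  # a greedy policy w.r.t. J
```

Since T is an α-contraction in the maximum norm, the loop converges geometrically from any initialization, matching the convergence claim above.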
2
Approximate Value Iteration
The state spaces of relevant MDPs are typically so large that computation and storage of
a cost-to-go function is infeasible. One approach to dealing with this obstacle involves
partitioning the state space S into a manageable number K of disjoint subsets S1 , . . . , SK
and approximating the optimal cost-to-go function with a function that is constant over
each partition. This can be thought of as a form of state aggregation: all states within a
given partition are assumed to share a common optimal cost-to-go.
To represent an approximation, we define a matrix Φ ∈ ℜ^{|S|×K} such that each kth column
is an indicator function for the kth partition S_k. Hence, for any r ∈ ℜ^K, k, and x ∈ S_k,
(Φr)(x) = r_k. In this paper, we study variations of value iteration, each of which computes
a vector r so that Φr approximates J*. The use of such a policy μ_r which is greedy with
respect to Φr is justified by the following result (see [10] for a proof):
Theorem 1 If μ is a greedy policy with respect to a function J̃ ∈ ℜ^{|S|} then

‖J_μ − J*‖_∞ ≤ (2α/(1 − α)) ‖J* − J̃‖_∞.
One common way of approximating a function J ∈ ℜ^{|S|} with a function of the form Φr involves
projection with respect to a weighted Euclidean norm ‖·‖_{2,π}. The weighted Euclidean norm:

‖J‖_{2,π} = (Σ_{x∈S} π(x) J²(x))^{1/2}.

Here, π ∈ ℜ_+^{|S|} is a vector of weights that assign relative emphasis among states. The
projection Π_π J is the function Φr that attains the minimum of ‖J − Φr‖_{2,π}; if there are
multiple functions Φr that attain the minimum, they must form an affine space, and the
projection is taken to be the one with minimal norm ‖Φr‖_{2,π}. Note that in our context,
where each kth column of Φ represents an indicator function for the kth partition, for any
π, J, and x ∈ S_k,

(Π_π J)(x) = Σ_{y∈S_k} π(y) J(y) / Σ_{y∈S_k} π(y).
Approximate value iteration begins with a function Φr^{(0)} and generates a sequence
according to Φr^{(ℓ+1)} = Π_π T Φr^{(ℓ)}. It is well-known that the dynamic programming
operator T is a contraction mapping with respect to the maximum norm. Further, Π_π is
maximum-norm nonexpansive [16, 7, 8]. (This is not true for general Φ, but is true in our
context in which columns of Φ are indicator functions for partitions.) It follows that the
composition Π_π T is a contraction mapping. By the contraction mapping theorem, Π_π T has a
unique fixed point Φr̃, which is the limit of the sequence Φr^{(ℓ)}. Further, the following
result holds:
Theorem 2 For any MDP, partition, and weights π with support intersecting every partition,
if Φr̃ = Π_π T Φr̃ then

‖Φr̃ − J*‖_∞ ≤ (2/(1 − α)) min_{r∈ℜ^K} ‖J* − Φr‖_∞,

and

(1 − α) ‖J_{μ_{r̃}} − J*‖_∞ ≤ (4α/(1 − α)) min_{r∈ℜ^K} ‖J* − Φr‖_∞.
The first inequality of the theorem is an approximation error bound, established in [16, 7, 8]
for broader classes of approximators that include state aggregation as a special case. The
second is a performance loss bound, derived by simply combining the approximation error
bound and Theorem 1.
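A small numerical sketch of the fixed point Φr̃ = Π_π T Φr̃ and of the first bound in Theorem 2, on a made-up four-state MDP with a two-block partition and uniform weights (all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9
n, K = 4, 2
g = rng.uniform(size=(n, 2))                        # g[x, u]
P = rng.uniform(size=(n, 2, n))
P /= P.sum(axis=2, keepdims=True)                   # P[x, u, :] sums to 1
block_of = np.array([0, 0, 1, 1])                   # partition S_1 = {0,1}, S_2 = {2,3}
pi = np.ones(n) / n                                 # uniform projection weights

def T(J):
    return (g + alpha * P.dot(J)).min(axis=1)

# Iterate Pi_pi T: project T(Phi r) onto piecewise-constant functions by
# taking the pi-weighted average over each partition.
r = np.zeros(K)
for _ in range(500):
    TJ = T(r[block_of])                             # Phi r evaluated at every state
    r = np.array([pi[block_of == k].dot(TJ[block_of == k])
                  / pi[block_of == k].sum() for k in range(K)])

# Compare with J* and with the best piecewise-constant approximation error,
# which for the max norm is half the range of J* within each block.
J_star = np.zeros(n)
for _ in range(1000):
    J_star = T(J_star)
best_err = max((J_star[block_of == k].max() - J_star[block_of == k].min()) / 2
               for k in range(K))
```

On this example the fixed-point error ‖Φr̃ − J*‖_∞ indeed stays below (2/(1 − α)) · best_err, as Theorem 2 guarantees.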
Note that J_{μ_{r̃}}(x) ≥ J*(x) for all x, so the left-hand side of the performance loss bound
is the maximal increase in cost-to-go, normalized by 1 − α. This normalization is natural,
since a cost-to-go function is a linear combination of expected future costs, with coefficients
1, α, α², . . ., which sum to 1/(1 − α).

Our motivation of the normalizing constant begs the question of whether, for fixed MDP
parameters (S, U, g, p) and fixed Φ, min_r ‖J* − Φr‖_∞ also grows with 1/(1 − α). It turns
out that min_r ‖J* − Φr‖_∞ = O(1). To see why, note that for any μ,
J_μ = (I − αP_μ)^{−1} g_μ = (1/(1 − α)) λ_μ + h_μ,

where λ_μ(x) is the expected average cost if the process starts in state x and is controlled
by policy μ,

λ_μ = lim_{τ→∞} (1/τ) Σ_{t=0}^{τ−1} P_μ^t g_μ,

and h_μ is the discounted differential cost function

h_μ = (I − αP_μ)^{−1} (g_μ − λ_μ).
Both λ_μ and h_μ converge to finite vectors as α approaches 1 [3]. For an optimal policy
μ*, lim_{α→1} λ_{μ*}(x) does not depend on x (in our context of a communicating MDP). Since
constant functions lie in the range of Φ,

lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞ ≤ lim_{α→1} ‖h_{μ*}‖_∞ < ∞.
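The decomposition J_μ = λ_μ/(1 − α) + h_μ can be checked numerically for a fixed policy. A sketch on a made-up irreducible three-state chain (P and g are invented; P plays the role of P_μ and g of g_μ):

```python
import numpy as np

alpha = 0.95
P = np.array([[0.1, 0.6, 0.3],      # transition matrix under the fixed policy
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
g = np.array([1.0, 0.0, 2.0])       # per-period costs

J = np.linalg.solve(np.eye(3) - alpha * P, g)        # J = (I - alpha P)^{-1} g

# Average cost: for this irreducible chain, lambda(x) is the same for every x
# and equals the stationary distribution dotted with g.
p = np.ones(3) / 3
for _ in range(5000):
    p = p.dot(P)                    # power iteration toward the stationary distribution
lam = np.full(3, p.dot(g))

h = np.linalg.solve(np.eye(3) - alpha * P, g - lam)  # discounted differential cost
```

Since P1 = 1, (I − αP)^{−1} maps a constant vector λ to λ/(1 − α), which is exactly why the decomposition holds.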
The performance loss bound still exhibits an undesirable dependence on α through the
coefficient 4α/(1 − α). In most relevant contexts, α is close to 1; a representative value
might be 0.99. Consequently, 4α/(1 − α) can be very large. Unfortunately, the bound is
sharp, as expressed by the following theorem. We will denote by 1 the vector with every
component equal to 1.
Theorem 3 For any ε > 0, α ∈ (0, 1), and δ ≥ 0, there exists MDP parameters
(S, U, g, p) and a partition such that min_{r∈ℜ^K} ‖J* − Φr‖_∞ = δ and, if Φr̃ = Π_π T Φr̃
with π = 1,

(1 − α) ‖J_{μ_{r̃}} − J*‖_∞ ≥ (4α/(1 − α)) min_{r∈ℜ^K} ‖J* − Φr‖_∞ − ε.
This theorem is established through an example in [22]. The choice of uniform weights
(π = 1) is meant to point out that even for such a simple, perhaps natural, choice of
weights, the performance loss bound is sharp.
Based on Theorems 2 and 3, one might expect that there exists MDP parameters (S, U, g, p)
and a partition such that, with π = 1,

(1 − α) ‖J_{μ_{r̃}} − J*‖_∞ = Θ((1/(1 − α)) min_{r∈ℜ^K} ‖J* − Φr‖_∞).

In other words, that the performance loss is both lower and upper bounded by 1/(1 − α)
times the smallest possible approximation error. It turns out that this is not true, at least
if we restrict to a finite state space. However, as the following theorem establishes, the
coefficient multiplying min_{r∈ℜ^K} ‖J* − Φr‖_∞ can grow arbitrarily large as α increases,
keeping all else fixed.
Theorem 4 For any L and δ ≥ 0, there exists MDP parameters (S, U, g, p) and a partition
such that lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞ = δ and, if Φr̃ = Π_π T Φr̃ with π = 1,

lim inf_{α→1} (1 − α)(J_{μ_{r̃}}(x) − J*(x)) ≥ L lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞,

for all x ∈ S.
This Theorem is also established through an example [22].
For any μ and x,

lim_{α→1} ((1 − α)J_μ(x) − λ_μ(x)) = lim_{α→1} (1 − α)h_μ(x) = 0.
Combined with Theorem 4, this yields the following corollary.
Corollary 1 For any L and δ ≥ 0, there exists MDP parameters (S, U, g, p) and a partition
such that lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞ = δ and, if Φr̃ = Π_π T Φr̃ with π = 1,

lim inf_{α→1} (λ_{μ_{r̃}}(x) − λ_{μ*}(x)) ≥ L lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞,

for all x ∈ S.
3
Using the Invariant Distribution
In the previous section, we considered an approximation Φr̃ that solves Π_π T Φr̃ = Φr̃ for
some arbitrary pre-selected weights π. We now turn to consider use of an invariant state
distribution π_{r̃} of P_{μ_{r̃}} as the weight vector.¹ This leads to a circular definition: the weights
are used in defining r̃ and now we are defining the weights in terms of r̃. What we are
really after here is a vector r̃ that satisfies Π_{π_{r̃}} T Φr̃ = Φr̃. The following theorem captures
the associated benefits. (Due to space limitations, we omit the proof, which is provided in
the full length version of this paper [22].)

Theorem 5 For any MDP and partition, if Φr̃ = Π_{π_{r̃}} T Φr̃ and π_{r̃} has support intersecting
every partition, (1 − α) π_{r̃}^T (J_{μ_{r̃}} − J*) ≤ 2α min_{r∈ℜ^K} ‖J* − Φr‖_∞.
When α is close to 1, which is typical, the right-hand side of our new performance loss
bound is far less than that of Theorem 2. The primary improvement is in the omission of a
factor of 1 − α from the denominator. But for the bounds to be compared in a meaningful
way, we must also relate the left-hand-side expressions. A relation can be based on the fact
that for all μ, lim_{α→1} ‖(1 − α)J_μ − λ_μ‖_∞ = 0, as explained in Section 2. In particular,
based on this, we have

lim_{α→1} (1 − α)‖J_μ − J*‖_∞ = ‖λ_μ − λ_{μ*}‖_∞ = λ_μ − λ_{μ*} = lim_{α→1} (1 − α) ν^T (J_μ − J*),

for all policies μ and probability distributions ν. Hence, the left-hand-side expressions
from the two performance bounds become directly comparable as α approaches 1.
Another interesting comparison can be made by contrasting Corollary 1 against the following
immediate consequence of Theorem 5.

Corollary 2 For all MDP parameters (S, U, g, p) and partitions, if Φr̃ = Π_{π_{r̃}} T Φr̃ and
lim inf_{α→1} Σ_{x∈S_k} π_{r̃}(x) > 0 for all k,

lim sup_{α→1} ‖λ_{μ_{r̃}} − λ_{μ*}‖_∞ ≤ 2 lim_{α→1} min_{r∈ℜ^K} ‖J* − Φr‖_∞.
The comparison suggests that solving Φr̃ = Π_{π_{r̃}} T Φr̃ is strongly preferable to solving
Φr̃ = Π_π T Φr̃ with π = 1.

¹ By an invariant state distribution of a transition matrix P, we mean any probability distribution
π such that π^T P = π^T. In the event that P_{μ_{r̃}} has multiple invariant distributions, π_{r̃} denotes an
arbitrary choice.
4
Exploration
If a vector r̃ solves Φr̃ = Π_{π_{r̃}} T Φr̃ and the support of π_{r̃} intersects every partition, Theorem
5 promises a desirable bound. However, there are two significant shortcomings to this
solution concept, which we will address in this section. First, in some cases, the equation
Π_{π_{r̃}} T Φr̃ = Φr̃ does not have a solution. It is easy to produce examples of this; though
no example has been documented for the particular class of approximators we are using
here, [2] offers an example involving a different linearly parameterized approximator that
captures the spirit of what can happen. Second, it would be nice to relax the requirement
that the support of π_{r̃} intersect every partition.

To address these shortcomings, we introduce stochastic policies. A stochastic policy μ
maps state-action pairs to probabilities. For each x ∈ S and u ∈ U_x, μ(x, u) is the
probability of taking action u when in state x. Hence, μ(x, u) ≥ 0 for all x ∈ S and
u ∈ U_x, and Σ_{u∈U_x} μ(x, u) = 1 for all x ∈ S.

Given a scalar ε > 0 and a function J, the ε-greedy Boltzmann exploration policy with
respect to J is defined by

μ(x, u) = e^{−(T_u J)(x)(|U_x|−1)/ε} / Σ_{u∈U_x} e^{−(T_u J)(x)(|U_x|−1)/ε}.
For any ε > 0 and r, let μ_r^ε denote the ε-greedy Boltzmann exploration policy with respect
to Φr. Further, we define a modified dynamic programming operator that incorporates
Boltzmann exploration:

(T^ε J)(x) = Σ_{u∈U_x} e^{−(T_u J)(x)(|U_x|−1)/ε} (T_u J)(x) / Σ_{u∈U_x} e^{−(T_u J)(x)(|U_x|−1)/ε}.

As ε approaches 0, ε-greedy Boltzmann exploration policies become greedy and the modified
dynamic programming operators become the dynamic programming operator. More
precisely, for all r, x, and J, lim_{ε→0} μ_r^ε(x, μ_r(x)) = 1 and lim_{ε→0} T^ε J = T J. These are
immediate consequences of the following result (see [4] for a proof).
Lemma 1 For any n and v ∈ ℜ^n,

min_i v_i + ε ≥ Σ_i e^{−v_i(n−1)/ε} v_i / Σ_i e^{−v_i(n−1)/ε} ≥ min_i v_i.
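Both the ε-greedy Boltzmann policy and the bound of Lemma 1 are easy to check numerically. A sketch where the vector v stands in for the values (T_u J)(x) over the actions u (v is made up for illustration):

```python
import numpy as np

def boltzmann(v, eps):
    # Probabilities proportional to exp(-v_i (n - 1) / eps); subtracting
    # min(v) leaves the distribution unchanged and avoids overflow.
    z = -(v - v.min()) * (len(v) - 1) / eps
    w = np.exp(z)
    return w / w.sum()

v = np.array([1.0, 2.0, 0.5])
p_small = boltzmann(v, 1e-3)     # nearly greedy: mass on the minimizing index
p_large = boltzmann(v, 1e3)      # nearly uniform
avg = boltzmann(v, 0.25).dot(v)  # Lemma 1: min v <= avg <= min v + eps
```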
Because we are only concerned with communicating MDPs, there is a unique invariant
state distribution associated with each ε-greedy Boltzmann exploration policy μ_r^ε and the
support of this distribution is S. Let π_r^ε denote this distribution. We consider a vector r̃ that
solves Φr̃ = Π_{π_{r̃}^ε} T^ε Φr̃. For any ε > 0, there exists a solution to this equation (this is an
immediate extension of Theorem 5.1 from [4]).

We have the following performance loss bound, which parallels Theorem 5 but with an
equation for which a solution is guaranteed to exist and without any requirement on the
resulting invariant distribution. (Again, we omit the proof, which is available in [22].)
Theorem 6 For any MDP, partition, and ε > 0, if Φr̃ = Π_{π_{r̃}^ε} T^ε Φr̃ then

(1 − α)(π_{r̃}^ε)^T (J_{μ_{r̃}^ε} − J*) ≤ 2α min_{r∈ℜ^K} ‖J* − Φr‖_∞ + ε.
5
Computation: TD(0)
Though computation is not a focus of this paper, we offer a brief discussion here. First,
we describe a simple algorithm from [16], which draws on ideas from temporal-difference
learning [11, 12] and Q-learning [23, 24] to solve Φr̃ = Π_π T Φr̃. It requires an ability to
sample a sequence of states x^{(0)}, x^{(1)}, x^{(2)}, . . ., each independent and identically
distributed according to π. Also required is a way to efficiently compute (T Φr)(x) =
min_{u∈U_x}(g_u(x) + α Σ_{y∈S} p_xy(u)(Φr)(y)), for any given x and r. This is typically possible
when the action set U_x and the support of p_x·(u) (i.e., the set of states that can follow
x if action u is selected) are not too large. The algorithm generates a sequence of vectors
r^{(ℓ)} according to

r^{(ℓ+1)} = r^{(ℓ)} + γ_ℓ φ(x^{(ℓ)}) ((T Φr^{(ℓ)})(x^{(ℓ)}) − (Φr^{(ℓ)})(x^{(ℓ)})),

where γ_ℓ is a step size and φ(x) denotes the column vector made up of components from
the xth row of Φ. In [16], using results from [15, 9], it is shown that under appropriate
assumptions on the step size sequence, r^{(ℓ)} converges to a vector r̃ that solves Φr̃ = Π_π T Φr̃.
The equation Φr̃ = Π_π T Φr̃ may have no solution. Further, the requirement that states
are sampled independently from the invariant distribution may be impractical. However, a
natural extension of the above algorithm leads to an easily implementable version of TD(0)
that aims at solving Φr̃ = Π_{π_{r̃}^ε} T^ε Φr̃. The algorithm requires simulation of a trajectory
x_0, x_1, x_2, . . . of the MDP, with each action u_t ∈ U_{x_t} generated by the ε-greedy Boltzmann
exploration policy with respect to Φr^{(t)}. The sequence of vectors r^{(t)} is generated
according to

r^{(t+1)} = r^{(t)} + γ_t φ(x_t) ((T^ε Φr^{(t)})(x_t) − (Φr^{(t)})(x_t)).

Under suitable conditions on the step size sequence, if this algorithm converges, the limit
satisfies Φr̃ = Π_{π_{r̃}^ε} T^ε Φr̃. Whether such an algorithm converges and whether there are
other algorithms that can effectively solve Φr̃ = Π_{π_{r̃}^ε} T^ε Φr̃ for broad classes of relevant
problems remain open issues.
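The first algorithm of this section (i.i.d. state sampling, aiming at Φr̃ = Π_π T Φr̃) can be sketched as follows; the MDP, partition, and step-size schedule below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.9
n, K = 4, 2
g = rng.uniform(size=(n, 2))
P = rng.uniform(size=(n, 2, n))
P /= P.sum(axis=2, keepdims=True)
block_of = np.array([0, 0, 1, 1])      # phi(x) is the indicator of x's block
pi = np.ones(n) / n                    # sampling distribution over states

def T_at(J, x):
    # (T Phi r)(x) = min_u [ g_u(x) + alpha * sum_y p_xy(u) (Phi r)(y) ]
    return (g[x] + alpha * P[x].dot(J)).min()

r = np.zeros(K)
for t in range(20000):
    x = rng.choice(n, p=pi)            # x^(t) drawn i.i.d. from pi
    k = block_of[x]
    step = 10.0 / (10.0 + t)           # diminishing step sizes
    r[k] += step * (T_at(r[block_of], x) - r[k])
```

With diminishing step sizes this stochastic iteration tracks the deterministic fixed point of Π_π T; the TD(0) variant replaces i.i.d. sampling with a simulated trajectory and Boltzmann exploration.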
6
Extensions and Open Issues
Our results demonstrate that weighting a Euclidean norm projection by the invariant distribution of a greedy (or approximately greedy) policy can lead to a dramatic performance
gain. It is intriguing that temporal-difference learning implicitly carries out such a projection, and consequently, any limit of convergence obeys the stronger performance loss
bound.
This is not the first time that the invariant distribution has been shown to play a critical
role in approximate value iteration and temporal-difference learning. In prior work involving approximation of a cost-to-go function for a fixed policy (no control) and a general
linearly parameterized approximator (arbitrary matrix Φ), it was shown that weighting by
the invariant distribution is key to ensuring convergence and an approximation error bound
[17, 18]. Earlier empirical work anticipated this [13, 14].
The temporal-difference learning algorithm presented in Section 5 is a version of TD(0).
This is a special case of TD(λ), which is parameterized by λ ∈ [0, 1]. It is not known
whether the results of this paper can be extended to the general case of λ ∈ [0, 1]. Prior
research has suggested that larger values of λ lead to superior results. In particular, an
example of [1] and the approximation error bounds of [17, 18], both of which are restricted
to the case of a fixed policy, suggest that approximation error is amplified by a factor of
1/(1 − α) as λ is changed from 1 to 0. The results of Sections 3 and 4 suggest that
this factor vanishes if one considers a controlled process and performance loss rather than
approximation error.
Whether the results of this paper can be extended to accommodate approximate value iteration with general linearly parameterized approximators remains an open issue. In this
broader context, error and performance loss bounds of the kind offered by Theorem 2 are
unavailable, even when the invariant distribution is used to weight the projection. Such
error and performance bounds are available, on the other hand, for the solution to a certain
linear program [5, 6]. Whether a factor of 1/(1 − α) can similarly be eliminated from these
bounds is an open issue.
Our results can be extended to accommodate an average cost objective, assuming that the
MDP is communicating. With Boltzmann exploration, the equation of interest becomes

Φr̃ = Π_{π_{r̃}^ε}(T^ε Φr̃ − λ̃1).

The variables include an estimate λ̃ ∈ ℜ of the minimal average cost λ* ∈ ℜ and an
approximation Φr̃ of the optimal differential cost function h_{μ*}. The discount factor α is set
to 1 in computing an ε-greedy Boltzmann exploration policy as well as T^ε. There is an
average-cost version of temporal-difference learning for which any limit of convergence
(λ̃, r̃) satisfies this equation [19, 20, 21]. Generalization of Theorem 2 does not lead to a
useful result because the right-hand side of the bound becomes infinite as α approaches 1.
On the other hand, generalization of Theorem 6 yields the first performance loss bound for
approximate value iteration with an average-cost objective:
Theorem 7 For any communicating MDP with an average-cost objective, partition, and
ε > 0, if Φr̃ = Π_{π_{r̃}^ε}(T^ε Φr̃ − λ̃1) then

λ_{μ_{r̃}^ε} − λ* ≤ 2 min_{r∈ℜ^K} ‖h_{μ*} − Φr‖_∞ + ε.
Here, λ_{μ_{r̃}^ε} ∈ ℜ denotes the average cost under policy μ_{r̃}^ε, which is well-defined because the
process is irreducible under an ε-greedy Boltzmann exploration policy. This theorem can be
proved by taking limits on the left and right-hand sides of the bound of Theorem 6. It is easy
to see that the limit of the left-hand side is λ_{μ_{r̃}^ε} − λ*. The limit of min_{r∈ℜ^K} ‖J* − Φr‖_∞
on the right-hand side is min_{r∈ℜ^K} ‖h_{μ*} − Φr‖_∞. (This follows from the analysis of [3].)
Acknowledgments
This material is based upon work supported by the National Science Foundation under
Grant ECS-9985229 and by the Office of Naval Research under Grant MURI N00014-001-0637. The author's understanding of the topic benefited from collaborations with Dimitri
Bertsekas, Daniela de Farias, and John Tsitsiklis. A full length version of this paper has
been submitted to Mathematics of Operations Research and has benefited from a number
of useful comments and suggestions made by reviewers.
References
[1] D. P. Bertsekas. A counterexample to temporal-difference learning. Neural Computation, 7:270–279, 1994.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
Belmont, MA, 1996.
[3] D. Blackwell. Discrete dynamic programming. Annals of Mathematical Statistics,
33:719–726, 1962.
[4] D. P. de Farias and B. Van Roy. On the existence of fixed points for approximate
value iteration and temporal-difference learning. Journal of Optimization Theory and
Applications, 105(3), 2000.
[5] D. P. de Farias and B. Van Roy. Approximate dynamic programming via linear programming. In Advances in Neural Information Processing Systems 14. MIT Press,
2002.
[6] D. P. de Farias and B. Van Roy. The linear programming approach to approximate
dynamic programming. Operations Research, 51(6):850–865, 2003.
[7] G. J. Gordon. Stable function approximation in dynamic programming. Technical
Report CMU-CS-95-103, Carnegie Mellon University, 1995.
[8] G. J. Gordon. Stable function approximation in dynamic programming. In Machine
Learning: Proceedings of the Twelfth International Conference (ICML), San Francisco, CA, 1995.
[9] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the Convergence of Stochastic Iterative
Dynamic Programming Algorithms. Neural Computation, 6:1185–1201, 1994.
[10] S. P. Singh and R. C. Yee. An upper-bound on the loss from approximate optimal-value functions. Machine Learning, 1994.
[11] R. S. Sutton. Temporal Credit Assignment in Reinforcement Learning. PhD thesis,
University of Massachusetts, Amherst, Amherst, MA, 1984.
[12] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine
Learning, 3:9–44, 1988.
[13] R. S. Sutton. On the virtues of linear learning and trajectory distributions. In Proceedings of the Workshop on Value Function Approximation, Machine Learning Conference, 1995.
[14] R. S. Sutton. Generalization in reinforcement learning: Successful examples using
sparse coarse coding. In Advances in Neural Information Processing Systems 8, Cambridge, MA, 1996. MIT Press.
[15] J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine
Learning, 16:185–202, 1994.
[16] J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic
programming. Machine Learning, 22:59–94, 1996.
[17] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with
function approximation. IEEE Transactions on Automatic Control, 42(5):674–690,
1997.
[18] J. N. Tsitsiklis and B. Van Roy. Analysis of temporal-difference learning with function approximation. In Advances in Neural Information Processing Systems 9, Cambridge, MA, 1997. MIT Press.
[19] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. In Proceedings of the IEEE Conference on Decision and Control, 1997.
[20] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799–1808, 1999.
[21] J. N. Tsitsiklis and B. Van Roy. On average versus discounted reward temporal-difference learning. Machine Learning, 49(2-3):179–191, 2002.
[22] B. Van Roy. Performance loss bounds for approximate value iteration with state
aggregation. Under review with Mathematics of Operations Research, available at
www.stanford.edu/~bvr/psfiles/aggregation.pdf, 2005.
[23] C. J. C. H. Watkins. Learning From Delayed Rewards. PhD thesis, Cambridge University, Cambridge, UK, 1989.
[24] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
An Approximate Inference Approach for the PCA Reconstruction Error
Manfred Opper
Electronics and Computer Science
University of Southampton
Southampton, SO17 1BJ
[email protected]
Abstract
The problem of computing a resample estimate for the reconstruction
error in PCA is reformulated as an inference problem with the help of
the replica method. Using the expectation consistent (EC) approximation, the intractable inference problem can be solved efficiently using
only two variational parameters. A perturbative correction to the result
is computed and an alternative simplified derivation is also presented.
1 Introduction
This paper was motivated by recent joint work with Ole Winther on approximate inference
techniques (the expectation consistent (EC) approximation [1], related to Tom Minka's EP
[2] approach) which allows us to tackle high-dimensional sums and integrals required for
Bayesian probabilistic inference.
I was looking for a nice model on which I could test this approximation. It had to be simple
enough so that I would not be bogged down by large numerical simulations. But it had
to be nontrivial enough to be of at least modest interest to Machine Learning. With the
somewhat unorthodox application of approximate inference to resampling in PCA I hope
to be able to stress the following points:
• Approximate efficient inference techniques can be useful in areas of Machine Learning where one would not necessarily assume that they are applicable. This can happen when the underlying probabilistic model is not immediately visible but only shows up as the result of a mathematical transformation.
• Approximate inference methods can be highly robust, allowing for analytic continuations of model parameters to the complex plane or even noninteger dimensions.
• It is not always necessary to use a large number of variational parameters in order to get reasonable accuracy.
• Inference methods could be systematically improved using perturbative corrections.
The work was also stimulated by previous joint work with Dörthe Malzahn [3] on resampling estimates for generalization errors of Gaussian process models and Support Vector Machines.
2 Resampling estimators for PCA
Principal Component Analysis (PCA) is a well known and widely applied tool for data
analysis. The goal is to project data vectors y from a typically high (d-)dimensional space into an optimally chosen lower (q-)dimensional linear space with q ≪ d, thereby minimizing the expected projection error E = E‖y − P_q[y]‖², where P_q[y] denotes the projection and E stands for an expectation over the distribution of the data. In practice, where
the distribution is not available, one has to work with a data sample D_0 consisting of N vectors y_k = (y_k(1), y_k(2), …, y_k(d))^T, k = 1, …, N. We arrange these vectors into a (d × N) data matrix Y = (y_1, y_2, …, y_N). Assuming centered data, the optimal subspace is spanned by the eigenvectors u_l of the d × d data covariance matrix C = (1/N) YY^T corresponding to the q largest eigenvalues λ_k. We will assume that these correspond to all eigenvectors with eigenvalues λ_k above some threshold value θ.
After computing the PCA projection, one would be interested in finding out if the computed
subspace represents the data well by estimating the average projection error on novel data
y (i.e. not contained in D_0) which are drawn from the same distribution.
Fixing the projection P_q, the error can be rewritten as

$$E \;=\; \sum_{\lambda_l < \theta} E\,\mathrm{Tr}\!\left[\, y\, y^{T} u_l u_l^{T} \right] \qquad (1)$$

where the expectation is only over y and the training data are fixed. The training error E_t = Σ_{λ_l < θ} λ_l can be obtained without knowledge of the distribution but will usually only give an optimistically biased estimate for E.
2.1 A resampling estimate for the error
New artificial data samples D of arbitrary size can be created by resampling a number of data points from D_0, with or without replacement. A simple choice would be to choose all data independently with the same probability 1/N, but other possibilities can also be implemented within our formalism. Thus, some y_i in D_0 may appear multiple times in D and others not at all. The idea of performing PCA on resampled data sets D and testing on the remaining data D_0\D motivates the following definition of a resample averaged reconstruction error

$$E_r \;=\; \frac{1}{N_0}\, E_D\!\left[ \sum_{y_i \notin D;\; \lambda_l < \theta} \mathrm{Tr}\!\left(\, y_i y_i^{T}\, u_l u_l^{T} \right) \right] \qquad (2)$$

as a proxy for E. E_D is the expectation over the resampling process. This is an estimator of the bootstrap type [3,4]. N_0 is the expected number of data in D_0 which are not contained in the random set D. The rest of the paper will discuss a method for efficiently approximating (2).
2.2 Basic formalism
We introduce "occupation numbers" s_i which count how many times y_i is contained in D. We also introduce two matrices D and C. D is a diagonal random matrix,

$$D_{ii} \;=\; D_i \;=\; \frac{1}{\lambda\gamma}\left( s_i + \mu\, \delta_{s_i,0} \right), \qquad C(\mu) \;=\; \frac{\lambda}{N}\, Y D Y^{T}. \qquad (3)$$

C(0) is proportional to the covariance matrix of the resampled data. γ is the sampling rate, i.e. γN = E_D[Σ_i s_i] is the expected number of data in D (counting multiplicities). The role of μ will be explained later. Using μ, we can generate expressions that can be used in (2) to sum over the data which are not contained in the set D:

$$C'(0) \;=\; \frac{1}{\gamma N} \sum_j \delta_{s_j,0}\; y_j y_j^{T}. \qquad (4)$$

In the following λ_k and u_k will always denote eigenvalues and eigenvectors of the data dependent (i.e. random) covariance matrix C(0).
The desired averages can be constructed from the d × d matrix Green's function

$$G(\lambda) \;=\; \left( C(0) + \lambda I \right)^{-1} \;=\; \sum_k \frac{u_k u_k^{T}}{\lambda_k + \lambda}. \qquad (5)$$
Using the well known representation of the Dirac δ distribution, δ(x) = (1/π) lim_{ε→0+} Im[1/(x − iε)], where i = √−1 and Im denotes the imaginary part, we get

$$\lim_{\epsilon\to 0^{+}} \frac{1}{\pi}\, \mathrm{Im}\; G(\lambda - i\epsilon) \;=\; \sum_k u_k u_k^{T}\; \delta(\lambda_k + \lambda). \qquad (6)$$
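The δ-representation used here can be checked numerically: (1/π) Im[1/(x − iε)] is a Lorentzian of width ε that integrates to one and concentrates its mass near x = 0. A small self-contained toy check (my own illustration, not part of the paper's computation):

```python
import math

def lorentzian(x, eps):
    # (1/pi) * Im[1/(x - i*eps)] = (1/pi) * eps / (x^2 + eps^2)
    return eps / (math.pi * (x * x + eps * eps))

def midpoint_integral(f, a, b, n):
    """Midpoint-rule Riemann sum of f over [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

eps = 1e-3
total = midpoint_integral(lambda x: lorentzian(x, eps), -5.0, 5.0, 400_000)
near_zero = midpoint_integral(lambda x: lorentzian(x, eps), -0.1, 0.1, 100_000)
```

As ε shrinks, `total` stays close to 1 while essentially all of the mass moves into any fixed neighbourhood of the origin, which is what makes (6) pick out the eigenvalues.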
Hence, we have

$$E_r \;=\; E_r^{0} \;+\; \int_{0^{+}}^{\infty} d\lambda'\; \rho_r(\lambda') \qquad (7)$$
where

$$\rho_r(\lambda) \;=\; \lim_{\epsilon\to 0^{+}} \frac{1}{\pi N_0}\, \mathrm{Im}\; E_D\!\left[ \sum_j \delta_{s_j,0}\, \mathrm{Tr}\!\left( y_j y_j^{T}\, G(-\lambda - i\epsilon) \right) \right] \qquad (8)$$

defines the error density from all eigenvalues > 0 and E_r^0 is the contribution from the eigenspace with λ_k = 0. The latter can also be easily expressed from G as
$$E_r^{0} \;=\; \lim_{\lambda\to 0} \frac{1}{N_0}\, E_D\!\left[ \sum_j \delta_{s_j,0}\, \mathrm{Tr}\!\left( y_j y_j^{T}\; \lambda\, G(\lambda) \right) \right] \qquad (9)$$
We can also compute the resample averaged density of eigenvalues using

$$\rho(\lambda) \;=\; \frac{1}{\pi \gamma N}\, \lim_{\epsilon\to 0^{+}}\, \mathrm{Im}\; E_D\!\left[ \mathrm{Tr}\, G(-\lambda - i\epsilon) \right]. \qquad (10)$$

3 A Gaussian probabilistic model
The matrix Green's function for λ > 0 can be generated from a Gaussian partition function Z. This is a well known construction in statistical physics, and has also been used within the NIPS community to study the distribution of eigenvalues for an average case analysis of PCA [5]. Its use for computing the expected reconstruction error is to my knowledge new.
With the (N × N) kernel matrix K = (1/N) Y^T Y we define the Gaussian partition function

$$Z \;=\; \int dx\; \exp\!\left( -\tfrac{1}{2}\, x^{T} \left( K^{-1} + D \right) x \right) \qquad (11)$$

$$\phantom{Z} \;=\; |K|^{1/2}\; \lambda^{d/2}\, (2\pi)^{(N-d)/2} \int d^{d}z\; \exp\!\left( -\tfrac{1}{2}\, z^{T} \left( C(\mu) + \lambda I \right) z \right). \qquad (12)$$
x is an N dimensional integration variable. The equality can be easily shown by expressing the integrals as determinants.¹ The first representation (11) is useful for computing the resampling average, and the second one connects directly to the definition of the matrix Green's function G. Note that, by its dependence on the kernel matrix K, a generalization to d = ∞ dimensional feature spaces and kernel PCA is straightforward. The partition function can then be understood as a certain Gaussian process expectation. We will not discuss this point further. The free energy F = −ln Z enables us to generate the following quantities:
$$-2\, \frac{\partial \ln Z}{\partial \mu}\Big|_{\mu=0} \;=\; \frac{1}{\gamma N} \sum_{j=1}^{N} \delta_{s_j,0}\; \mathrm{Tr}\!\left( y_j y_j^{T}\, G(\lambda) \right) \qquad (13)$$

$$-2\, \frac{\partial \ln Z}{\partial \lambda} \;=\; -\frac{d}{\lambda} \;+\; \mathrm{Tr}\, G(\lambda) \qquad (14)$$

where we have used (4) for (13). (13) will be used for the computation of (8), and (14) applies to the density of eigenvalues. Note that the definition of the partition function Z requires λ > 0, whereas the application to the reconstruction error (7) needs negative values λ = −λ′ < 0. Hence, an analytic continuation of the end results must be performed.
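The equality of (11) and (12) rests, once the Gaussian integrals are expressed as determinants, on the identity det(I_N + KD) = det(I_d + (1/N) Y D Y^T) — a form of the Sylvester/Weinstein–Aronszajn determinant identity. A quick numerical check with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 7
Y = rng.standard_normal((d, N))
K = Y.T @ Y / N                                 # N x N kernel matrix
D = np.diag(rng.uniform(0.5, 2.0, N))           # diagonal "resampling" matrix

# Sylvester identity: det(I_N + K D) == det(I_d + (1/N) Y D Y^T)
lhs = np.linalg.det(np.eye(N) + K @ D)
rhs = np.linalg.det(np.eye(d) + (Y @ D @ Y.T) / N)
```

The identity is what lets an N-dimensional integral over x be traded for a d-dimensional one over z, which is the whole point of representation (12).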
4 Resampling average and replicas
(13) and (14) show that we can compute the desired resampling averages from the expected free energy −E_D[ln Z]. This can be expressed using the "replica trick" of statistical physics (see e.g. [6]) via

$$E_D[\ln Z] \;=\; \lim_{n\to 0} \frac{1}{n}\, \ln E_D[Z^n]\,, \qquad (15)$$
where one attempts an approximate computation of E_D[Z^n] for integer n and uses a continuation to real numbers at the end. The n times replicated and averaged partition function (11) can be written in the form

$$Z^{(n)} \;\doteq\; E_D[Z^n] \;=\; \int d\mathbf{x}\; \Phi_1(\mathbf{x})\, \Phi_2(\mathbf{x}) \qquad (16)$$

where we set x ≐ (x_1, …, x_n) and

$$\Phi_1(\mathbf{x}) = E_D\!\left[ \exp\!\left\{ -\frac{1}{2} \sum_{a=1}^{n} x_a^{T} D\, x_a \right\} \right], \qquad \Phi_2(\mathbf{x}) = \exp\!\left[ -\frac{1}{2} \sum_{a=1}^{n} x_a^{T} K^{-1} x_a \right]. \qquad (17)$$
The unaveraged partition function Z (11) is Gaussian, but the averaged Z (n) is not and
usually intractable.
5 Approximate inference
To approximate Z^{(n)}, we will use the EC approximation recently introduced by Opper & Winther [1]. For this method we need two auxiliary distributions

$$p_1(\mathbf{x}) \;=\; \frac{1}{Z_1}\, \Phi_1(\mathbf{x})\, e^{-\frac{1}{2}\Lambda_1 \mathbf{x}^{T}\mathbf{x}}, \qquad p_0(\mathbf{x}) \;=\; \frac{1}{Z_0}\, e^{-\frac{1}{2}\Lambda_0 \mathbf{x}^{T}\mathbf{x}}, \qquad (18)$$

where Λ_1 and Λ_0 are "variational" parameters to be optimized. p_1 tries to mimic the intractable p(x) ∝ Φ_1(x) Φ_2(x), replacing the multivariate Gaussian Φ_2 by a simpler, i.e. tractable, diagonal one. One may think of using a general diagonal matrix Λ_1, but we will restrict ourselves in the present case to the simplest case of a spherical Gaussian with a single parameter Λ_1.

¹ If K has zero eigenvalues, a division of Z by |K|^{1/2} is necessary. This additive renormalization of the free energy −ln Z will not influence the subsequent computations.
The strategy is to split Z^{(n)} into a product of Z_1 and a term that has to be further approximated:

$$Z^{(n)} \;=\; Z_1 \int d\mathbf{x}\; p_1(\mathbf{x})\, \Phi_2(\mathbf{x})\, e^{\frac{1}{2}\Lambda_1 \mathbf{x}^{T}\mathbf{x}} \;\approx\; Z_1 \int d\mathbf{x}\; p_0(\mathbf{x})\, \Phi_2(\mathbf{x})\, e^{\frac{1}{2}\Lambda_1 \mathbf{x}^{T}\mathbf{x}} \;\doteq\; Z^{(n)}_{EC}(\Lambda_1, \Lambda_0). \qquad (19)$$

The approximation replaces the intractable average over p_1 by a tractable one over p_0. To optimize Λ_1 and Λ_0 we argue as follows. First, we try to make p_0 as close as possible to p_1 by matching the moments ⟨x^T x⟩_1 = ⟨x^T x⟩_0, where the index denotes the distribution used for averaging; by this step, Λ_0 becomes a function of Λ_1. Second, since the true partition function Z^{(n)} is independent of Λ_1, we expect that a good approximation to Z^{(n)} should be stationary with respect to variations of Λ_1. Both conditions can be expressed by the requirement that ln Z^{(n)}_{EC}(Λ_1, Λ_0) must be stationary with respect to variations of Λ_1 and Λ_0.
Within this EC approximation we can carry out the replica limit, E_D[ln Z] ≈ ln Z_{EC} = lim_{n→0} (1/n) ln Z^{(n)}_{EC}, and get after some calculations

$$-\ln Z_{EC} \;=\; -E_D\!\left[ \ln \int dx\; e^{-\frac{1}{2} x^{T}\left( D + (\Lambda_0 - \Lambda) I \right) x} \right] \;-\; \ln \int dx\; e^{-\frac{1}{2} x^{T}\left( K^{-1} + \Lambda I \right) x} \;+\; \ln \int dx\; e^{-\frac{1}{2}\Lambda_0\, x^{T} x} \qquad (20)$$

where we have set Λ = Λ_0 − Λ_1. Since the first Gaussian integral factorises, we can now perform the resampling average in (20) relatively easily for the case when all s_j's in (3) are independent. Assuming e.g. Poisson probabilities p(s) = e^{−γ} γ^s / s! gives a good approximation for the case of resampling γN points with replacement.
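The Poisson assumption can be motivated directly: when γN points are drawn with replacement from N, each occupation number s_i is Binomial(γN, 1/N), which approaches the Poisson pmf e^{−γ} γ^s / s! for large N. A small self-contained check, with arbitrarily chosen values of N and γ:

```python
import math

def binom_pmf(s, n, p):
    """Binomial probability of s successes in n trials."""
    return math.comb(n, s) * p**s * (1 - p) ** (n - s)

def poisson_pmf(s, gam):
    """Poisson probability with mean gam."""
    return math.exp(-gam) * gam**s / math.factorial(s)

N, gam = 500, 3.0
draws = int(gam * N)  # gamma*N draws with replacement
max_abs_diff = max(
    abs(binom_pmf(s, draws, 1.0 / N) - poisson_pmf(s, gam)) for s in range(15)
)
```

In particular P(s_i = 0) = (1 − 1/N)^{γN} ≈ e^{−γ}, which is the quantity that controls the expected number N_0 of left-out points.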
The variational equations which make (20) stationary are

$$E_D\!\left[ \frac{1}{\Lambda_0 - \Lambda + D_i} \right] \;=\; \frac{1}{\Lambda_0} \;=\; \frac{1}{N} \sum_k \frac{\rho_k}{1 + \rho_k \Lambda} \qquad (21)$$

where ρ_k are the eigenvalues of the matrix K. The variational equations have to be solved in the region λ = −λ′ < 0 where the original partition function does not exist. The resulting parameters Λ_0 and Λ will usually come out as complex numbers.
6 Experiments
By eliminating the parameter Λ_0 from (21) it is possible to reduce the numerical computations to solving a nonlinear equation for a single complex parameter Λ, which can be solved easily and quickly by a Newton method. While the analytical results are based on Poisson statistics, the simulations of random resampling were performed by choosing a fixed number (equal to the expected number under the Poisson distribution) of data at random with replacement.
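To give the flavour of the root-finding involved, the following sketch runs Newton's method on a single complex unknown, applied to a one-term toy equation of the same rational form as (21). The value of ρ and the complex right-hand side are made up, and this is not the paper's actual reduced equation:

```python
def newton_complex(f, df, z0, tol=1e-12, max_iter=100):
    """Newton's method for a single complex unknown."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            break
    return z

rho = 1.0
c = 0.8 + 0.4j                         # made-up complex right-hand side
f = lambda L: rho / (1 + rho * L) - c  # toy analogue of one term of (21)
df = lambda L: -(rho**2) / (1 + rho * L) ** 2
root = newton_complex(f, df, 0.0 + 0.0j)
# exact solution of the toy equation: L = 1/c - 1/rho = -0.5j
```

For this toy equation the real starting point converges to a purely imaginary root, illustrating how complex parameter values arise once λ is continued to negative values.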
The first experiment was for a set of data generated at random from a spherical Gaussian. To show that resampling may be useful, we give on the left hand side of Figure 1 the reconstruction error as a function of the value of θ below which eigenvalues are discarded.
Figure 1: Left: Errors for PCA on N = 32 spherical Gaussian data with d = 25 and γ = 3. Smooth curve: approximate resampled error estimate; upper step function: true error; lower step function: training error. Right: Comparison of EC approximation (line) and simulation (histogram) of the resampled density of eigenvalues for N = 50 spherical Gaussian data of dimensionality d = 25. The sampling rate was γ = 3.
The smooth function is the approximate resampling error (3× oversampled, to leave not too many data out of the samples) from our method. The upper step function gives the true reconstruction error (easy to calculate for spherical data) from (1). The lower step function is the training error. The right panel demonstrates the accuracy of the approximation on a similar set of data. We compare the analytically approximated density of states with the results of a true resampling experiment, where eigenvalues from many samples are counted into small bins. The theoretical curve follows the experiment closely.
Since the good accuracy might be attributed to the high symmetry of the toy data, we have
also performed experiments on a set of N = 100 handwritten digits with d = 784. The
results in Figure 2 are promising. Although the density of eigenvalues is more accurate
than the resampling error, the latter comes still out reasonable.
7 Corrections
I will show next that the EC approximation can be augmented by a perturbation expansion. Going back to (19), we can write

$$\frac{Z^{(n)}}{Z_1} \;=\; \int d\mathbf{x}\; p_1(\mathbf{x})\, \Phi_2(\mathbf{x})\, e^{\frac{1}{2}\Lambda_1 \mathbf{x}^{T}\mathbf{x}} \;=\; \int d\mathbf{x}\; \Phi_2(\mathbf{x})\, e^{\frac{1}{2}\Lambda_1 \mathbf{x}^{T}\mathbf{x}} \int \frac{d\mathbf{k}}{(2\pi)^{Nn}}\; e^{-i\mathbf{k}^{T}\mathbf{x}}\; \varphi(\mathbf{k})$$

where φ(k) ≐ ∫ dx p_1(x) e^{ik^T x} is the characteristic function of the density p_1 (18), and ln φ(k) is the cumulant generating function. Using the symmetries of the density p_1, we can perform a power series expansion of ln φ(k), which starts with a quadratic term (second cumulant),

$$\ln \varphi(\mathbf{k}) \;=\; -\frac{M_2}{2}\, \mathbf{k}^{T}\mathbf{k} \;+\; R(\mathbf{k})\,, \qquad (22)$$
where M_2 = ⟨x_a^T x_a⟩_1. It can be shown that if we neglect R(k) (containing the higher order cumulants) and carry out the integral over k, we end up replacing p_1 by a simpler Gaussian p_0 with matching moments M_2, i.e. the EC approximation. Higher order corrections to the free energy −E_D[ln Z] = −ln Z_{EC} + ΔF_1 + … can be obtained perturbatively by writing φ(k) = e^{−(M_2/2) k^T k} (1 + R(k) + …). This expansion is similar in spirit to Edgeworth
Figure 2: Left: Resampling error (γ = 1) for PCA on a set of 100 handwritten digits ("5") with d = 784. The approximation (line) for γ = 1 is compared with simulations of the random resampling. Right: Resampled density of eigenvalues for the same data set. Only the nonzero eigenvalues are shown.
expansions in statistics. The present case is more complicated by the extra dimensions introduced by the replication of variables and the limit n → 0. After a lengthy calculation one finds for the lowest order correction (containing the monomials in k of order 4) to the free energy:

$$\Delta F_1 \;=\; -\frac{1}{4} \sum_i E_D\!\left[ \left( 1 - \frac{\Lambda_0}{\Lambda_0 - \Lambda + D_i} \right)^{2} \right] \left( \Lambda_0 \left( K^{-1} + \Lambda I \right)^{-1}_{ii} - 1 \right)^{2}. \qquad (23)$$
I illustrate the effect of ΔF_1 on a correction to the reconstruction error in the "zero subspace" using (9) and (13) for the digit data as a function of γ. Resampling used the Poisson approximation. The left panel of Figure 3 demonstrates that the true correction is fairly small. The right panel shows that the lowest order term ΔF_1 accounts for a major part of the true correction when γ < 3. The strong underestimation for larger γ needs further investigation.
8 The calculation without replicas
Knowing with hindsight how the final EC result (20) looks, we can rederive it using another method which does not rely on the "replica trick". We first write down an exact expression for −ln Z before averaging. Expressing Gaussian integrals by determinants yields

$$-\ln Z \;=\; -\ln \int dx\; e^{-\frac{1}{2} x^{T}\left( D + (\Lambda_0 - \Lambda) I \right) x} \;-\; \ln \int dx\; e^{-\frac{1}{2} x^{T}\left( K^{-1} + \Lambda I \right) x} \;+\; \ln \int dx\; e^{-\frac{1}{2}\Lambda_0\, x^{T} x} \;+\; \frac{1}{2} \ln \det(I + r) \qquad (24)$$

where the matrix r has elements

$$r_{ij} \;=\; \left( 1 - \frac{\Lambda_0}{\Lambda_0 - \Lambda + D_i} \right) \left( \Lambda_0 \left( K^{-1} + \Lambda I \right)^{-1} - I \right)_{ij}.$$

The EC approximation is obtained by simply neglecting r. Corrections to this are found by expanding

$$\ln \det(I + r) \;=\; \mathrm{Tr} \ln(I + r) \;=\; \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\; \mathrm{Tr}\, r^{k}. \qquad (25)$$
Figure 3: Left: Resampling error E_r^0 from the λ = 0 subspace as a function of resampling rate for the digits data. The approximation (lower line) is compared with simulations of the random resampling (upper line). Right: The difference between approximation and simulations (upper curve) and its estimate (lower curve) from the perturbative correction (23).
The first order term in the expansion (25) vanishes after averaging (see (21)) and the second
order term gives exactly the correction of the cumulant method (23).
9 Outlook
It will be interesting to extend the perturbative framework for the computation of corrections to inference approximations to other, more complex models. However, our results indicate that the use and convergence of such perturbation expansions need to be critically investigated, and that the lowest order may not always give a clear indication of the accuracy of the approximation. The alternative derivation for our simple model could present an interesting ground for testing these ideas.
Acknowledgments
I would like to thank Ole Winther for the great collaboration on the EC approximation.
References
[1] Manfred Opper and Ole Winther. Expectation consistent free energies for approximate inference. In NIPS 17, 2005.
[2] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI 2001, pages 362–369, 2001.
[3] D. Malzahn and M. Opper. An approximate analytical approach to resampling averages. Journal of Machine Learning Research, pages 1151–1173, 2003.
[4] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability 57, Chapman & Hall, 1993.
[5] D. C. Hoyle and M. Rattray. Limiting form of the sample covariance matrix eigenspectrum in PCA and kernel PCA. In NIPS 16, 2003.
[6] A. Engel and C. Van den Broeck. Statistical Mechanics of Learning. Cambridge University Press, 2001.
Dynamic Social Network Analysis using Latent Space Models
Purnamrita Sarkar, Andrew W. Moore
Center for Automated Learning and Discovery
Carnegie Mellon University
Pittsburgh, PA 15213
(psarkar,awm)@cs.cmu.edu
Abstract
This paper explores two aspects of social network modeling. First,
we generalize a successful static model of relationships into a dynamic
model that accounts for friendships drifting over time. Second, we show
how to make it tractable to learn such models from data, even as the
number of entities n gets large. The generalized model associates each
entity with a point in p-dimensional Euclidean latent space. The points
can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are
close in latent space. We show how to make such a model tractable (subquadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional kd-trees; a
new efficient dynamic adaptation of multidimensional scaling for a first
pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in
which amortized time per entity during an update is O(log n). We use
both synthetic and real-world data on up to 11,000 entities which indicate
linear scaling in computation time and improved performance over four
alternative approaches. We also illustrate the system operating on twelve
years of NIPS co-publication data. We present a detailed version of this
work in [1].
1 Introduction
Social network analysis is becoming increasingly important in many fields besides sociology including intelligence analysis [2], marketing [3] and recommender systems [4]. Here
we consider learning in systems in which relationships drift over time.
Consider a friendship graph in which the nodes are entities and two entities are linked if
and only if they have been observed to collaborate in some way. In 2002, Raftery et al. [5] introduced a model similar to Multidimensional Scaling in which entities are associated
with locations in p-dimensional space, and links are more likely if the entities are close in
latent space. In this paper we suppose that each observed link is associated with a discrete
timestep, so each timestep produces its own graph of observed links, and information is
preserved between timesteps by two assumptions. First we assume entities can move in
latent space between timesteps, but large moves are improbable. Second, we make a standard Markov assumption that latent locations at time t + 1 are conditionally independent
of all previous locations given the latent locations at time t and that the observed graph at
time t is conditionally independent of all other positions and graphs, given the locations at
time t (see Figure 1).
Let Gt be the graph of observed pairwise links at time t. Assuming n entities, and a
p-dimensional latent space, let X_t be an n × p matrix in which the ith row, called x_i, corresponds to the latent position of entity i at time t. Our conditional independence structure,
familiar in HMMs and Kalman filters, is shown in Figure 1. For most of this paper we treat
the problem as a tracking problem in which we estimate Xt at each timestep as a function
of the current observed graph G_t and the previously estimated positions X_{t−1}. We want

$$X_t \;=\; \arg\max_X\, P(X \mid G_t, X_{t-1}) \;=\; \arg\max_X\, P(G_t \mid X)\, P(X \mid X_{t-1}) \qquad (1)$$
In Section 2 we design models of P(G_t | X_t) and P(X_t | X_{t−1}) that meet our modeling
needs and which have learning times that are tractable as n gets large. In Sections 3 and
4 we introduce a two-stage procedure for locally optimizing equation (1). The first stage
generalizes linear multidimensional scaling algorithms to the dynamic case while carefully
maintaining the ability to computationally exploit sparsity in the graph. This gives an
approximate estimate of Xt . The second stage refines this estimate using an augmented
conjugate gradient approach in which gradient updates can use kd-trees over latent space
to allow O(n log n) computation per step.
Figure 1: Model through time. (Latent positions X_0 → X_1 → ⋯ → X_T form a Markov chain; each X_t generates the observed graph G_t.)

2 The DSNL (Dynamic Social Network in Latent space) Model
Let d_ij = |x_i − x_j| be the Euclidean distance between entities i and j in latent space at time t. For clarity we will not use a t subscript on these variables except where it is needed. We denote linkage at time t by i ∼ j, and absence of a link by i ≁ j. p(i ∼ j) denotes the probability of observing the link. We use p(i ∼ j) and p_ij interchangeably.
2.1 Observation Model
The likelihood score function P(G_t | X_t) intuitively measures how well the model explains pairs of entities which are actually connected in the training graph, as well as those that are not. Thus it is simply

$$P(G_t \mid X_t) \;=\; \prod_{i \sim j} p_{ij} \prod_{i \not\sim j} \left( 1 - p_{ij} \right) \qquad (2)$$
Following [5], the link probability is a logistic function of d_ij, denoted p^L_ij:

$$p^{L}_{ij} \;=\; \frac{1}{1 + e^{\,d_{ij} - \alpha}} \qquad (3)$$

where α is a constant whose significance is explained shortly. So far this model is similar to [5]. To extend this model to the dynamic case, we now make two important alterations.
First, we allow entities to vary their sociability. Some entities participate in many links while others are in few. We give each entity a radius, which will be used as a sphere of interaction within latent space. We denote entity i's radius as r_i. We introduce the term r_ij to replace α in equation (3); r_ij is the maximum of the radii of i and j. Intuitively, an entity with higher degree will have a larger radius. Thus we define the radius of entity i with degree δ_i as c(δ_i + 1), so that r_ij is c · (max(δ_i, δ_j) + 1), and c will be estimated from the data. In practice, we estimate the constant c by a simple line-search on the score function. The constant 1 ensures a nonzero radius.
Figure 2: A. The actual logistic function, and our kernelized version with ρ = 0.1. B. The actual (flat, with one minimum) and the modified (steep, with two minima) constraint functions, for two dimensions, with X_t varying over a 2-d grid from (−2, −2) to (2, 2), and X_{t−1} = (1, 1).
The second alteration is to weight the link probabilities by a kernel function. We alter the simple logistic link probability p^L_ij such that two entities have high probability of linkage only if their latent coordinates are within distance r_ij of one another. Beyond this range there is a constant noise probability ρ of linkage. Later we will need the kernelized function to be continuous and differentiable at r_ij. Thus we pick the biquadratic kernel

$$K(d_{ij}) \;=\; \begin{cases} \left( 1 - (d_{ij}/r_{ij})^{2} \right)^{2} & \text{when } d_{ij} \le r_{ij} \\ 0 & \text{otherwise.} \end{cases}$$

Using this function we redefine our link probability p_ij as

$$p_{ij} \;=\; p^{L}_{ij}\, K(d_{ij}) \;+\; \rho \left( 1 - K(d_{ij}) \right). \qquad (4)$$

This is equivalent to having

$$p_{ij} \;=\; \begin{cases} \dfrac{1}{1 + e^{\,d_{ij} - r_{ij}}}\, K(d_{ij}) + \rho \left( 1 - K(d_{ij}) \right) & \text{when } d_{ij} \le r_{ij} \\ \rho & \text{otherwise.} \end{cases} \qquad (5)$$
We plot this function in Figure 2A.
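A direct transcription of (3)–(5) in code (a sketch; the function names are my own) makes the construction concrete and shows the two pieces joining continuously at d_ij = r_ij:

```python
import math

def biquadratic_kernel(d, r):
    """K(d) = (1 - (d/r)^2)^2 inside the radius, 0 outside."""
    return (1.0 - (d / r) ** 2) ** 2 if d <= r else 0.0

def link_probability(d, r, noise):
    """Kernelized link probability p_ij of equations (4)-(5):
    logistic term weighted by the kernel, plus constant noise outside."""
    logistic = 1.0 / (1.0 + math.exp(d - r))
    k = biquadratic_kernel(d, r)
    return logistic * k + noise * (1.0 - k)
```

At d_ij = 0 the probability is 1/(1 + e^{−r_ij}); at and beyond d_ij = r_ij the kernel vanishes and the probability is exactly the noise level ρ, so the function is continuous at the radius.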
2.2 Transition Model
The second part of the score penalizes large displacements from the previous time step. We use the most obvious Gaussian model: each coordinate of each latent position is independently subjected to a Gaussian perturbation with mean 0 and variance σ². Thus

$$\log P(X_t \mid X_{t-1}) \;=\; -\sum_{i=1}^{n} \frac{\left| X_{i,t} - X_{i,t-1} \right|^{2}}{2\sigma^{2}} \;+\; \mathrm{const} \qquad (6)$$
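Combining (2) and (6) gives the objective maximized in (1); a minimal sketch of that score follows, assuming the plain logistic link model (3) with a fixed α and ignoring the radii, kernel weighting and kd-tree machinery of the full system:

```python
import math

def logistic_link_prob(xi, xj, alpha=1.0):
    """Plain logistic link probability (3) between two latent points."""
    d = math.dist(xi, xj)
    return 1.0 / (1.0 + math.exp(d - alpha))

def log_posterior(X, X_prev, links, sigma=1.0, alpha=1.0):
    """log P(G_t | X_t) + log P(X_t | X_{t-1}) up to a constant, per (1).
    X, X_prev: lists of coordinate tuples; links: set of (i, j) pairs, i < j."""
    n = len(X)
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = logistic_link_prob(X[i], X[j], alpha)
            score += math.log(p) if (i, j) in links else math.log(1.0 - p)
    for xi, xi_prev in zip(X, X_prev):  # Gaussian transition penalty (6)
        score -= sum((a - b) ** 2 for a, b in zip(xi, xi_prev)) / (2.0 * sigma**2)
    return score
```

This naive double loop is O(n²) per evaluation; the point of the kernelized model and kd-trees described in the paper is to avoid exactly this cost.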
3 Learning Stage One: Linear Approximation
We generalize classical multidimensional scaling (MDS) [6] to get an initial estimate of the positions in the latent space. We begin by recapping what MDS does. It takes as input an n × n matrix of non-negative distances D where D_{i,j} denotes the target distance between entity i and entity j. It produces an n × p matrix X where the ith row is the position of entity i in p-dimensional latent space. MDS finds arg min_X |D̃ − X X^T|_F, where |·|_F denotes the Frobenius norm [7]. D̃ is the similarity matrix obtained from D using standard linear algebra operations. Let U be the matrix of the eigenvectors of D̃, and Λ be a diagonal matrix with the corresponding eigenvalues. Denote the matrix of the p positive eigenvalues by Λ_p and the corresponding columns of U by U_p. From this follows the expression of classical MDS, i.e. X = U_p Λ_p^{1/2}.
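A minimal sketch of the classical MDS recipe just described (our code; we use double centering, one standard choice for the "standard linear algebra operations" that turn D into the similarity matrix):

```python
import numpy as np

def similarity_from_distances(D):
    """Double centering: D_tilde = -0.5 * J @ D^2 @ J with J = I - (1/n) 11^T."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D ** 2) @ J

def classical_mds(D_tilde, p):
    """Embed using the top-p positive eigenpairs: X = U_p Lambda_p^(1/2)."""
    vals, vecs = np.linalg.eigh(D_tilde)    # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:p]        # pick the p largest
    lam = np.clip(vals[top], 0.0, None)     # keep only the positive part
    return vecs[:, top] * np.sqrt(lam)
```

For distances that are exactly realizable in p dimensions (e.g. points on a line with p = 1), this recovers a configuration whose pairwise distances match D up to rotation and reflection.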
Two questions remain. Firstly, what should be our target distance matrix D? Secondly,
how should this be extended to account for time? The first answer follows from [5] and defines D_ij as the length of the shortest path from i to j in graph G. We restrict this length to a maximum of three hops in order to avoid the full n² computation of all-shortest paths. D
thus has a dense mostly constant structure.
When accounting for time, we do not want the positions of entities to change drastically
from one time step to another. Hence we try to minimize |X_t − X_{t−1}|_F along with the main objective of MDS. Let D̃_t denote the D̃ matrix derived from G_t. We formulate the above problem as minimization of |D̃_t − X_t X_t^T|_F + λ|X_t − X_{t−1}|_F, where λ is a parameter
which controls the importance of the two parts of the objective function. The above does
not have a closed form solution. However, by constraining the objective function further,
we can obtain a closed form solution for a closely related problem. The idea is to work
with the distances and not the positions themselves. Since we are learning the positions
from distances, we change our constraint (during this linear stage of learning) to encourage
the pairwise distance between all pairs of entities to change little between each time step,
instead of encouraging the individual coordinates to change little. Hence we try to minimize
|D̃_t − X_t X_t^T|_F + λ |X_t X_t^T − X_{t−1} X_{t−1}^T|_F    (7)

which is equivalent to minimizing the trace of (D̃_t − X_t X_t^T)^T (D̃_t − X_t X_t^T) + λ (X_t X_t^T − X_{t−1} X_{t−1}^T)^T (X_t X_t^T − X_{t−1} X_{t−1}^T). The above expression has an analytical solution: an affine combination of the current information from the graph and the coordinates at the last timestep. Namely, the new solution satisfies

X_t X_t^T = (1/(1+λ)) D̃_t + (λ/(1+λ)) X_{t−1} X_{t−1}^T.    (8)
We plot the two constraint functions in Figure 2B. When λ is zero, X_t X_t^T equals D̃_t, and when λ → ∞, it is equal to X_{t−1} X_{t−1}^T. As in MDS, eigendecomposition of the right
hand side of equation 8 yields the solution Xt which minimizes the objective function in
equation 7.
We now have a method which finds latent coordinates for time t that are consistent with G_t and have pairwise distances similar to those of X_{t−1}. But although all pairwise distances may be similar, the coordinates may be very different. Indeed, even if λ is very large and we only care about preserving distances, the resulting X may be any reflection, rotation or
translation of the original X_{t−1}. We solve this by applying the Procrustes transform to the solution X_t of equation 8. This transform finds the linear area-preserving transformation of X_t that brings it closest to the previous configuration X_{t−1}. The solution is unique if X_t^T X_{t−1} is nonsingular [8], and for zero-centered X_t and X_{t−1} is given by X_t* = X_t U V^T, where X_t^T X_{t−1} = U S V^T using Singular Value Decomposition (SVD).
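Putting equation 8 and the Procrustes alignment together, one stage-one update might look like this (our sketch, not the authors' code; `lam` is λ and X_prev is assumed zero-centered):

```python
import numpy as np

def temporal_mds_step(D_tilde_t, X_prev, lam, p):
    """Blend the new similarities with the previous Gram matrix (eq. 8),
    re-embed by eigendecomposition, then Procrustes-align to X_prev."""
    M = D_tilde_t / (1.0 + lam) + (lam / (1.0 + lam)) * (X_prev @ X_prev.T)
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(vals)[::-1][:p]
    X = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
    # Orthogonal transform (rotation/reflection) bringing X closest to X_prev.
    U, _, Vt = np.linalg.svd(X.T @ X_prev)
    return X @ U @ Vt
```

With λ = 0 and exact similarities the step reproduces the previous configuration up to numerical error, since the eigendecomposition only determines X up to an orthogonal transform and the Procrustes step undoes that ambiguity.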
Before moving on to stage two's nonlinear optimization we must address the scalability of stage one. The naive implementation (SVD of the matrix from equation 8) has a cost of O(n³) for n nodes, since both D̃_t and X_t X_t^T are dense n × n matrices. However, in [1] we show how we use the power method [9] to exploit the dense mostly-constant structure of D_t and the fact that X_t X_t^T is just an outer product of two thin n × p matrices. The power method is an iterative eigendecomposition technique which only involves multiplying a matrix by a vector. Its net cost can be shown to be O(n²f + n + pn) per iteration, where f is the fraction of non-constant entries in D_t.
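The power method needs only matrix–vector products, so the mostly-constant part of D̃_t and the outer product X_t X_t^T never have to be formed densely. A generic sketch (ours):

```python
import numpy as np

def power_method_top(matvec, n, iters=200, seed=0):
    """Return the dominant eigenvector and its Rayleigh quotient using
    only matrix-vector products supplied by `matvec`."""
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v, float(v @ matvec(v))
```

For the right-hand side of equation 8, `matvec` would add a constant-background term c·(1^T v)·1, the sparse non-constant corrections, and the low-rank part computed as X_prev @ (X_prev.T @ v), which is what yields the per-iteration cost quoted above.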
4 Stage Two: Nonlinear Search
Stage One places entities in reasonably consistent locations which fit our intuition, but it is
not tied to the probabilistic model from Section 2. Stage two uses these locations as initializations for applying nonlinear optimization directly to the model in equation 1. We use
conjugate gradient (CG) which was the most effective of several alternatives attempted. The
most important practical question is how to make these gradient computations tractable,
especially when the model likelihood involves a double sum over all entities. We must
compute the partial derivatives of log P(G_t|X_t) + log P(X_t|X_{t−1}) with respect to all values X_{i,k,t} for i ∈ 1...n and k ∈ 1...p. First consider the P(G_t|X_t) term:

∂ log P(G_t|X_t) / ∂X_{i,k,t} = Σ_{j: i~j} ∂ log p_ij / ∂X_{i,k,t} + Σ_{j: i≁j} ∂ log(1 − p_ij) / ∂X_{i,k,t}
                              = Σ_{j: i~j} (∂p_ij/∂X_{i,k,t}) / p_ij − Σ_{j: i≁j} (∂p_ij/∂X_{i,k,t}) / (1 − p_ij)    (9)

∂p_ij/∂X_{i,k,t} = ∂(p^L_ij K + α(1 − K)) / ∂X_{i,k,t} = K ∂p^L_ij/∂X_{i,k,t} + p^L_ij ∂K/∂X_{i,k,t} − α ∂K/∂X_{i,k,t} = δ_{i,j,k,t}    (10)

However K, the biquadratic kernel introduced in equation 4, evaluates to zero and has a zero derivative when d_ij > r_ij. Plugging this information into (10), we have

∂p_ij/∂X_{i,k,t} = δ_{i,j,k,t}   when d_ij ≤ r_ij,
                 = 0             otherwise.    (11)

Equation (9) now becomes

∂ log P(G_t|X_t) / ∂X_{i,k,t} = Σ_{j: i~j, d_ij ≤ r_ij} δ_{i,j,k,t} / p_ij − Σ_{j: i≁j, d_ij ≤ r_ij} δ_{i,j,k,t} / (1 − p_ij).    (12)

This simplification is very important because we
can now use a spatial data structure such as a kd-tree in the low dimensional latent space to
retrieve all pairs of entities that lie within each other's radius in time O(rn + n log n), where r is the average number of in-radius neighbors of an entity [10, 11]. The computation of the gradient involves only those pairs. A slightly more sophisticated trick, omitted for space reasons, lets us compute log P(G_t|X_t) in O(rn + n log n) time. From equation (6), we have

∂ log P(X_t|X_{t−1}) / ∂X_{i,k,t} = −(X_{i,k,t} − X_{i,k,t−1}) / σ²    (13)
In the early stages of Conjugate Gradient, there is a danger of a plateau in our score
function in which our first derivative is insensitive to two entities that are connected, but
are not within each other's radius. To aid the early steps of CG, we add an additional term to the score function, which penalizes all pairs of connected entities according to the square of their separation in latent space, i.e. Σ_{i~j} d_ij². Weighting this by a constant pConst, our final CG gradient becomes

∂Score_t / ∂X_{i,k,t} = ∂ log P(G_t|X_t) / ∂X_{i,k,t} + ∂ log P(X_t|X_{t−1}) / ∂X_{i,k,t} − pConst · 2 Σ_{j: i~j} (X_{i,k,t} − X_{j,k,t}).
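The three pieces of the final gradient can be assembled as below (our sketch; the data term ∂log P(G_t|X_t)/∂X is passed in precomputed, A is the 0/1 adjacency matrix of G_t, and `sigma2` is σ²):

```python
import numpy as np

def cg_gradient(grad_loglik, X, X_prev, A, sigma2, p_const):
    """Final CG gradient: data term + Gaussian transition term (eq. 13)
    minus pConst times the gradient of sum_{i~j} d_ij^2."""
    g = grad_loglik - (X - X_prev) / sigma2
    deg = A.sum(axis=1, keepdims=True)
    # d/dX_i of sum over edges i~j of |X_i - X_j|^2 is 2 * sum_j (X_i - X_j).
    g = g - p_const * 2.0 * (deg * X - A @ X)
    return g
```

Writing the connectivity penalty as deg·X − A@X keeps the whole term as two matrix operations, so it adds nothing to the asymptotic cost of a gradient evaluation on a sparse graph.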
5 Results
We report experiments on synthetic data generated by a model described below and the NIPS co-publication data¹. We investigate three things: the ability of the algorithm to reconstruct the latent space based only on link observations, anecdotal evaluation of what
happens to the NIPS data, and scalability results on large datasets from Citeseer.
5.1 Comparing with ground truth
We generate synthetic data for six consecutive timesteps. At each timestep the next set of
two-dimensional latent coordinates are generated with the former positions as mean, and a
Gaussian noise of standard deviation σ = 0.01. Each entity is assigned a random radius. At each step, each entity is linked with a relatively higher probability to the ones falling
within its radius, or containing it within their radii. There is a noise probability of 0.1, by
¹See http://www.cs.toronto.edu/~roweis/data.html
which any two entities i and j outside the maximum pairwise radii rij are connected. We
generate graphs of sizes 20 to 1280, doubling the size every time. Accuracy is measured
by drawing a test set from the same model, and determining the ROC curve for predicting
whether a pair of entities will be linked in the test set. We experiment with six approaches:
A. The True model that was used to generate the data (this is an upper bound on the performance of any learning algorithm).
B. The DSNL model learned using the above algorithms.
C. A random model, guessing link probabilities randomly (this should have an AUC of 0.5).
D. The Simple Counting model (Control Experiment). This ranks the likelihood of being
linked in the testset according to the frequency of linkage in the training set. It can be
considered as the equivalent of the 1-nearest-neighbor method in classification: it does not
generalize, but merely duplicates the training set.
E. Time-varying MDS: The model that results from running stage one only.
F. MDS with no time: The model that results from ignoring time information and running
independent MDS on each timestep.
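The AUC used to compare these six approaches can be computed directly from ranks (a Mann–Whitney sketch of our own; ties are ignored for brevity):

```python
import numpy as np

def link_auc(scores, test_adj):
    """AUC for link prediction: the probability that a randomly chosen
    linked pair in the test graph outscores a randomly chosen unlinked pair."""
    iu = np.triu_indices_from(test_adj, k=1)      # each pair counted once
    s, y = scores[iu], test_adj[iu]
    pos, neg = s[y == 1], s[y == 0]
    alls = np.concatenate([pos, neg])
    ranks = np.empty(alls.size)
    ranks[np.argsort(alls)] = np.arange(1, alls.size + 1)
    u = ranks[: pos.size].sum() - pos.size * (pos.size + 1) / 2.0
    return u / (pos.size * neg.size)
```

A perfect ranker scores 1.0 and a random one about 0.5, which is why model (C) is expected to sit at an AUC of 0.5.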
Figure 3 shows the ROC curves for the third timestep on a test set of size 160. Table 1
shows the AUC scores of our approach and the five alternatives for 3 different sizes of the
dataset over the first, third, and last time steps.
Table 1: AUC scores on graphs of size n for six different models: (A) True, (B) model learned by DSNL, (C) random model, (D) simple counting model (control), (E) MDS with time, and (F) MDS without time.

n=80
Time   A     B     C     D     E     F
1      0.94  0.85  0.48  0.76  0.77  0.67
3      0.93  0.88  0.48  0.81  0.77  0.65
6      0.93  0.82  0.50  0.76  0.77  0.67

n=320
Time   A     B     C     D     E     F
1      0.86  0.83  0.50  0.70  0.72  0.65
3      0.86  0.79  0.51  0.70  0.72  0.62
6      0.86  0.81  0.50  0.71  0.74  0.64

n=1280
Time   A     B     C     D     E     F
1      0.81  0.79  0.50  0.68  0.61  0.70
3      0.80  0.79  0.50  0.69  0.74  0.71
6      0.81  0.78  0.50  0.68  0.70  0.70

Figure 3: ROC curves (true positives versus false positives) of the six different models described earlier, for a test set of size 160 at timestep 3, in simulated data.
In all the cases we see that the true model has the highest AUC score, followed by the
model learned by DSNL. The simple counting model rightly guesses some of the links in
the test graph from the training graph. However it also predicts the noise as links, and ends
up being beaten by the model we learn. The results show that it is not sufficient to only
perform Stage One. When the number of links is small, MDS without time does poorly
compared to our temporal version. However as the number of links grows quadratically
with the number of entities, regular MDS does almost as well as the temporal version:
this is not a surprise because the generalization benefit from the previous timestep becomes
unnecessary with sufficient data on the current timestep. Further experiments we conducted [1] show that runs initialized with time-varying MDS converge almost twice as fast as those with random initialization, and also converge to a better log-likelihood.
5.2 Visualizing the NIPS coauthorship data over time
For clarity we present a subset of the NIPS dataset, obtained by choosing a well-connected
author, and including all authors and links within a few hops. We dropped authors who
appeared only once and we merged the timesteps into three groups: 1987–1990 (Figure 4A), 1991–1994 (Figure 4B), and 1995–1998 (Figure 4C). In each picture we have the links for
that timestep, a few well connected people highlighted, with their radii. These radii are
learnt from the model. Remember that the distance between two people is related to the
radii. Two people with very small radii are considered far apart in the model even if they
are physically close. To give some intuition of the movement of the rest of the points, we
divided the area in the first timestep in 4 parts, and colored and shaped the points in each
differently. This coloring and shaping is preserved throughout all the timesteps.
In this paper we limit ourselves to anecdotal examination of the latent positions. For example, with BurgesC and VapnikV we see that they had very small radii in the first four
years, and were further apart from one another, since there was no co-publication. However
in the second timestep they move closer, though there are no direct links. This is because
of the fact that they both had co-published with neighbors of one another. On the third time
step they make a connection, and are assigned almost identical coordinates, since they have
a very overlapping set of neighbors.
We end the discussion with entities HintonG, GhahramaniZ, and JordanM. In the first timestep they did not coauthor with one another, and were placed outside one another's radii. In the second timestep GhahramaniZ and HintonG coauthor with JordanM.
However since HintonG had a large radius and more links than the former, it is harder
for him to meet all the constraints, and he doesn?t move very close to JordanM . In the
next timestep however GhahramaniZ has a link with both of the others, and they move
substantially closer to one another.
5.3 Performance Issues
Figure 4D shows the performance against the number of entities. When kd-trees are used
and the graphs are sparse scaling is clearly sub-quadratic and nearly linear in the number
of entities, meeting our expectation of O(n log n) performance. We successfully applied
our algorithms to networks of sizes up to 11,000 [1]. The results show subquadratic time complexity along with satisfactory link prediction on test sets.
6 Conclusions and Future Work
This paper has described a method for modeling relationships that change over time. We
believe it is useful both for understanding relationships in a mass of historical data and also
as a tool for predicting future interactions, and we plan to explore both directions further.
In [1] we develop a forward-backward algorithm, optimizing the global likelihood instead
of treating the model as a tracking model. We also plan to extend this to find the posterior
distributions of the coordinates following the approach used by [5].
Acknowledgments
We are very grateful to Anna Goldenberg for her valuable insights. We also thank Paul
Komarek and Sajid Siddiqi for some very helpful discussions and useful comments. This
work was partially funded by DARPA EELD grant F30602-01-2-0569.
References
[1] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. SIGKDD
Explorations: Special Issue on Link Mining, 2005.
[2] J. Schroeder, J. J. Xu, and H. Chen. Crimelink explorer: Using domain knowledge to facilitate
automated crime association analysis. In ISI, pages 168–180, 2003.
[3] J. J. Carrasco, D. C. Fain, K. J. Lang, and L. Zhukov. Clustering of bipartite advertiser-keyword
graph. In ICDM, 2003.
[4] J. Palau, M. Montaner, and B. López. Collaboration analysis in recommender systems using
social networks. In Eighth Intl. Workshop on Cooperative Info. Agents (CIA?04), 2004.
[Figure 4 plots: panels A–C show author positions (KochC, ManwaniA, BurgesC, VapnikV, ViolaP, SejnowskiT, HintonG, JordanM, GhahramaniZ) with their radii at the three timesteps; panel D plots time in seconds against the number of entities for the quadratic score and the kd-tree score.]
Figure 4: NIPS coauthorship data at A. Timestep 1: green stars in upper-left corner, magenta pluses in top right, cyan spots in lower right, and blue crosses in the bottom-left. B.
Timestep 2. C. Timestep 3. D. Time taken for score calculation vs number of entities.
[5] A. E. Raftery, M. S. Handcock, and P. D. Hoff. Latent space approaches to social network
analysis. J. Amer. Stat. Assoc., 15:460, 2002.
[6] R. L. Breiger, S. A. Boorman, and P. Arabie. An algorithm for clustering relational data with
applications to social network analysis and comparison with multidimensional scaling. J. of
Math. Psych., 12:328–383, 1975.
[7] I. Borg and P. Groenen. Modern Multidimensional Scaling. Springer-Verlag, 1997.
[8] R. Sibson. Studies in the robustness of multidimensional scaling : Perturbational analysis of
classical scaling. J. Royal Stat. Soc. B, Methodological, 41:217–229, 1979.
[9] David S. Watkins. Fundamentals of Matrix Computations. John Wiley & Sons, 1991.
[10] F. Preparata and M. Shamos. Computational Geometry: An Introduction. Springer, 1985.
[11] A. G. Gray and A. W. Moore. N-body problems in statistical learning. In NIPS, 2001.
Analytic Solutions to the Formation of Feature-Analysing Cells of a Three-Layer Feedforward Visual Information Processing Neural Net
D.S. Tang
Microelectronics and Computer Technology Corporation
3500 West Balcones Center Drive
Austin, TX 78759-6509
email: [email protected]
ABSTRACT
Analytic solutions to the information-theoretic evolution equation of the connection strength of a three-layer feedforward neural
net for visual information processing are presented. The results
are (1) the receptive fields of the feature-analysing cells correspond to the eigenvector of the maximum eigenvalue of the Fredholm integral equation of the first kind derived from the evolution
equation of the connection strength; (2) a symmetry-breaking
mechanism (parity-violation) has been identified to be responsible for the changes of the morphology of the receptive field;
(3) the conditions for the formation of different morphologies are
explicitly identified.
1 INTRODUCTION
The use of Shannon's information theory ( Shannon and Weaver,1949) to the study
of neural nets has been shown to be very instructive in explaining the formation
of different receptive fields in early visual information processing, as evidenced by
the works of Linsker (1986,1988). It has been demonstrated that the connection
strengths which maximize the information rate from one layer of neurons to the
next exhibit center-surround, all-excitatory/all-inhibitory and orientation-selective properties. This could lead to a better understanding of the mechanisms by which the cells are self-organized to achieve adaptive responses to the changing environment. However, results from these studies are mainly numerical in nature and therefore do not provide deeper insights as to how and under what conditions the morphologies of the feature-analyzing cells are formed. We present in this paper
accurate analytic solutions to the problems posed by Linsker. Namely, we solve
analytically the evolution equation of the connection strength, obtain close expressions for the receptive fields and derive the formation conditions for different classes
of morphologies. These results are crucial to the understanding of the architecture
of neural net as an information processing system. Below, we briefly summarize the
analytic techniques involved and the main results we obtained.
2 THREE-LAYER FEEDFORWARD NEURAL NET
The neural net configuration (Fig. 1) is identical to that reported in references 2
and 3 in which a feedforword three-layer neural net is considered. The layers are
labelled consecutively as layer-A, layer-B and layer-C.
Figure 1: The neural net configuration
The input-output relation for the signals to propagate from one layer to the consecutive layer is assumed to be linear,
M_j = Σ_{i=1}^{N_j} C_ji (L_i + ξ).    (1)

ξ is assumed to be additive Gaussian white noise with constant standard deviation Q and zero mean. L_i and M_j are the ith stochastic input signal and the jth stochastic output signal, respectively. C_ji is the connection strength which defines
the morphology of the receptive field and is to be determined by maximizing the
information rate. The spatial summation in equation (1) is to sum over all N j
inputs located according to a Gaussian distribution within the same layer, with the center of the distribution lying directly above the location of the M_j output signal.
If the statistical behavior of the input signal is assumed to be Gaussian,
(2)
then the information rate can be derived and is given by
R(M) = !Zn[l + ECiQijCj]
Q2
2
Ect'
(3)
The matrix Q is the correlation of the L's, Q_ij = E[(L_i − L̄)(L_j − L̄)], with mean L̄. The set of connection strengths which optimize the information rate subject to a normalization condition, Σ_i C_i² = A, and to their overall absolute mean, (Σ_i C_i)² = B, constitute physically plausible receptive fields. Below are the solutions to the problem.
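As a sketch, equation (3) can be evaluated directly (our code, under our reading of the formula: C is the connection-strength vector, Q the input correlation matrix, and `noise_sd` the noise standard deviation):

```python
import numpy as np

def information_rate(C, Q, noise_sd):
    """R(M) = 0.5 * ln(1 + C^T Q C / (noise_sd^2 * sum_i C_i^2))."""
    C = np.asarray(C, dtype=float)
    return 0.5 * np.log(1.0 + (C @ Q @ C) / (noise_sd ** 2 * (C @ C)))
```

The ratio inside the logarithm is a signal-to-noise ratio: the numerator is the variance of the summed input signal under correlation Q, and the denominator is the noise variance accumulated over the same connections.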
3 FREDHOLM INTEGRAL EQUATION
The evolution equation for the connection strength C_n which maximizes the information rate subject to the constraints is

Ċ_n = (1/N) Σ_{i=1}^{N} (Q_ni + k²) C_i.    (4)
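Equation (4) can be integrated numerically with a simple Euler step (our sketch; `k2` stands for k² and `eta` is a step size we introduce). With Q a Kronecker delta, the update preserves a constant C, consistent with the all-excitatory/all-inhibitory conclusion drawn in the text:

```python
import numpy as np

def connection_update(C, Q, k2, eta=0.1):
    """One Euler step of dC_n/dt = (1/N) * sum_i (Q_ni + k^2) * C_i."""
    N = C.size
    # Adding the scalar k2 to Q shifts every entry Q_ni by k^2, as in eq. (4).
    return C + eta * ((Q + k2) @ C) / N
```

In practice such dynamics would be combined with the normalization and mean constraints (e.g. by rescaling after each step), which we omit here for brevity.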
k2 is the Lagrange multiplier. First, we assume that the statistical ensemble of
the visual images has the highest information content under the condition of fixed
variance. Then, from the maximum entropy principle, it can be shown that the
Gaussian distribution with a correlation Qij being a constant multiple of the kronecker delta function describes the statistics of this ensemble of visual images. It
can be shown that the solution to the above equation with Q_ni being a Kronecker delta function is a constant. Therefore, the connection strengths which define the linear input-output relation from layer A to layer B are either all-excitatory or all-inhibitory. Hence, without loss of generality, we take the values of the layer A to
layer B connection strengths to be all-excitatory. Making use of this result, the
correlation function of the output signals at layer B (i.e. the input signals to layer
C) is derived
(5)
where r is the distance between the nth and the ith output signals. CQ = 1fNj
50. To study the connection strengths of the input-output relation from layer B to
layer C, it is more convenient to work with continuous spatial variables. Then the
solutions to the discrete evolution equation which maximizes the information rate
are solutions to the following Fredholm integral equation of the first kind with the
maximum eigenvalue λ,

C(r) = (1/λ) ∫ K(R|r) C(R) dR.    (6)
Analytic Solutions to the Formation of Feature-Analysing Cells
where the kernel is K(R|r) = (Q(R − r) + k²) p(R), and the Gaussian input population distribution density is p(r) = C_p exp(−r²/(2 r_C²)), where r_C sets the width of the distribution and C_p is the normalization constant. In continuous variables,
the connection strength is denoted by C(r). A complete set of solutions to this
Fredholm integral equation can be analytically derived. We are interested only in
the solutions with the maximum eigenvalues. Below we present the results.
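Numerically, the receptive field can be obtained by discretizing the kernel on a grid and taking the dominant eigenvector (our 1-d sketch; `Q_func` and `p_func` are illustrative stand-ins for the correlation function and population density):

```python
import numpy as np

def receptive_field(grid, Q_func, p_func, k2):
    """Discretize K(R|r) = (Q(R - r) + k^2) * p(R) on `grid` and return the
    eigenvector of the eigenvalue with the largest real part as C(r)."""
    K = (Q_func(grid[None, :] - grid[:, None]) + k2) * p_func(grid)[None, :]
    vals, vecs = np.linalg.eig(K)        # K is generally non-symmetric
    return vecs[:, np.argmax(vals.real)].real
```

For k² > 0 the discretized kernel is strictly positive, so by the Perron–Frobenius theorem the leading eigenvector has a single sign, matching the all-excitatory/all-inhibitory regime of case (i) below.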
4 ANALYTIC SOLUTIONS

The solution with the maximum eigenvalue has a small number of nodes. It can be constructed as a linear superposition of an infinite number of Gaussian functions with different variances and means, which are treated as independent variables to be solved with the Fredholm integral equation. Full details are contained in reference 3.

(a) Symmetric solution, C(−r) = C(r): for k² ≠ 0, the connection strength is

C(r) = b[1 + G exp(−r²/(2σ₀²)) + H + (1 − H) G exp(−r²/(2σ₀₀²))],    (7)

where G and H are constants fixed by a, σ₀, σ₀₀ and σ₁. Here a² = 0.5 r_B², σ₀/r_B = 0.66667, σ₀₀/r_B = 0.73205 and a = C_Q C_p/(Nλ). The eigenvalue is given by

λ = (k² C_p π / N) [2a² + G / (1/(2σ₀²) + 1/(2σ₁²)) + H / (1/(2σ₁²)) + (1 − H) G / (1/(2σ₀₀²) + 1/(2σ₁²))].    (8)

For k² = 0, the connection strength is

(9)

and the eigenvalue is

λ = C_Q C_p π / (N [1/(2r_B²) + 1/(2σ₁²) + 1/(2σ₀²)]).    (10)

These can be shown to be identical to the case of k² ≠ 0 when the limit k² → 0 is appropriately taken.

(b) Antisymmetric solution, C(−r) = −C(r): the connection strength is

C(r) = (f x + g y) exp(−r² / (2 r_B² [1 + r_B²/σ₁² + r_B²/σ₀²])).    (11)

The eigenvalue is

λ = π C_Q C_p / (N · 2 r_B² [1/(2r_B²) + 1/(2σ₁²) + 1/(2σ₀²)]²).    (12)
In the above equations, b, f and g are normalization constants.
Below are the conditions under which the different morphologies (Fig. 2) are formed.
(i) k² > 0: the symmetric solution has the largest eigenvalue. The receptive field is either all-excitatory or all-inhibitory, Fig. 2a.
(ii) −0.891C_Q < k² < 0: the symmetric solution has the largest eigenvalue. The receptive field has a Mexican-hat appearance, Fig. 2b.
(iii) k² < −0.891C_Q: the anti-symmetric solution has the largest eigenvalue. The receptive field has two regions divided by a straight line of arbitrary direction (degeneracy). The two regions are mirror images of each other; one is totally inhibitory and the other is totally excitatory, Fig. 2c.
[Figure 2 plot: maximum eigenvalue versus k²/C_Q for the symmetric and antisymmetric solutions, with inserts showing example connection strengths.]
Figure 2: Relations between the receptive field and the maximum eigenvalues.
Inserts are examples of the connection strength C(r) versus the spatial
dimension in the x-direction.
Note that the information rate as given by Eq. (3) is invariant under the spatial reflection r → −r. The solutions to the optimization problem violate parity conservation as the overall mean of the connection strength (equivalently, k²) changes to different values.
Results from numerical simulations agree very well with the analytic results. Numerical simulations are performed from 80 to 600 synapses. The agreement is good
even for the case in which the number of synapses are 200.
In summary, we have shown precisely how the Mexican-hat morphology emerges, as identified by (ii) above. Furthermore, a symmetry-breaking (parity-violation) mechanism has been identified to explain the change of the morphology from spatially symmetric to anti-symmetric appearance as k² passes through −0.891C_Q. It is very likely that similar symmetry-breaking mechanisms are present in neural nets with lateral connections.
References
1. C.E. Shannon and W. Weaver, The Mathematical Theory of Communication
(Univ. of Illinois Press, Urbana, 1949).
2. R. Linsker, Proc. Natl. Acad. Sci. USA 83, 7508 (1986); Computer 21(3),
105 (1988).
3. D.S. Tang, Phys. Rev. A 40, 6626 (1989).
PART II:
SPEECH AND SIGNAL PROCESSING
Fusion of Similarity Data in Clustering
Tilman Lange and Joachim M. Buhmann
(langet,jbuhmann)@inf.ethz.ch
Institute of Computational Science, Dept. of Computer Sience,
ETH Zurich, Switzerland
Abstract
Fusing multiple information sources can yield significant benefits to successfully accomplish learning tasks. Many studies have focussed on fusing information in supervised learning contexts. We present an approach
to utilize multiple information sources in the form of similarity data for
unsupervised learning. Based on similarity information, the clustering
task is phrased as a non-negative matrix factorization problem of a mixture of similarity measurements. The tradeoff between the informativeness of data sources and the sparseness of their mixture is controlled by
an entropy-based weighting mechanism. For the purpose of model selection, a stability-based approach is employed to ensure the selection
of the most self-consistent hypothesis. The experiments demonstrate the
performance of the method on toy as well as real world data sets.
1 Introduction
Clustering has found increasing attention in the past few years due to the enormous information flood in many areas of information processing and data analysis. The ability of an
algorithm to determine an interesting partition of the set of objects under consideration,
however, heavily depends on the available information. It is, therefore, reasonable to equip
an algorithm with as much information as possible and to endow it with the capability to
distinguish between relevant and irrelevant information sources. How to reasonably identify a weighting of the different information sources such that an interesting group structure
can be successfully uncovered, remains, however, a largely unresolved issue.
Different sources of information about the same objects naturally arise in many application
scenarios. In computer vision, for example, information sources can consist of plain intensity measurements, edge maps, the similarity to other images or even human similarity
assessments. Similarly in bio-informatics: the similarity of proteins, e.g., can be assessed in
different ways, ranging from the comparison of gene profiles to direct comparisons at the
sequence level using alignment methods.
In this work, we use a non-negative matrix factorization approach (nmf) to pairwise clustering of similarity data that is extended in a second step in order to incorporate a suitable
weighting of multiple information sources, leading to a mixture of similarities. The latter
represents the main contribution of this work. Algorithms for nmf have recently found a
lot of attention. Our proposal is inspired by the work in [11] and [5]. Only recently, [18]
have also employed a nmf to perform clustering. For the purpose of model selection, we
employ a stability-based approach that has already been successfully applied to model
selection problems in clustering (e.g. in [9]). Instead of following the strategy to first embed
the similarities into a space with Euclidean geometry and then to perform clustering and,
where required, feature selection/weighting on the stacked feature vector, we advocate an
approach that is closer to the original similarity data by performing nmf.
Some work has been devoted to feature selection and weighting in clustering problems. In
[13] a variant of the k-means algorithm has been studied that employs the Fisher criterion
to assess the importance of individual features. In [14, 10], Gaussian mixture model-based
approaches to feature selection are introduced. The more general problem of learning a
suitable metric has also been investigated, e.g. in [17]. Similarity measurements represent
a particularly generic form of providing input to a clustering algorithm. Fusing such representations has only recently been studied in the context of kernel-based supervised learning,
e.g. in [7] using semi-definite programming and in [3] using a boosting procedure. In [1],
an approach to learning the bandwidth parameter of an rbf-kernel for spectral clustering is
studied.
The paper is organized as follows: section 2 introduces the nmf-based clustering method
combined with a data-source weighting (section 3). Section 4 discusses an out-of-sample
extension allowing us to predict assignments and to employ the stability principle for model
selection. Experimental evidence in favor of our approach is given in section 5.
2 Clustering by Non-Negative Matrix Factorization
Suppose we want to group a finite set of objects O_n := {o_1, . . . , o_n}. Usually, there are
multiple ways of measuring the similarity between different objects. Such relations give
rise to similarities s_ij := s(o_i, o_j),¹ where we assume non-negativity s_ij ≥ 0, symmetry
s_ji = s_ij, and boundedness s_ij < ∞. For n objects, we summarize the similarity data in an
n × n matrix S = (s_ij), which is re-normalized to P = S / (1_n^t S 1_n), where 1_n := (1, . . . , 1)^t.
The re-normalized similarities can be interpreted as the probability of the joint occurrence
of objects i, j.

We aim now at finding a non-negative matrix factorization of P ∈ [0, 1]^{n×n} into a product
W H^t of the n × k matrices W and H with non-negative entries, for which additionally
1_n^t W 1_k = 1 and H^t 1_n = 1_k hold, where k denotes the number of clusters. That is,
one aims at explaining the overall probability for a co-occurrence by a latent cause, the
unobserved classes. The constraints ensure that the entries of both W and H can be
considered as probabilities: the entry w_iν of W is the joint probability q(i, ν) of object
i and class ν, whereas h_jν in H is the probability q(j | ν). This model implicitly assumes
independence of objects i and j conditioned on ν. Given a factorization of P into W and H,
we can use the maximum a posteriori estimate, arg max_ν h_iν Σ_j w_jν, to arrive at a hard
assignment of objects to classes.

In order to obtain a factorization, we minimize the cross-entropy

    C(P ‖ W H^t) := − Σ_{i,j} p_ij log Σ_ν w_iν h_jν,    (1)

which becomes minimal iff P = W H^t,² and is not convex in W and H together. Note that
the factorization is not necessarily unique. We resort to a local optimization scheme which
is inspired by the Expectation-Maximization (EM) algorithm: let q_ij^ν ≥ 0 with Σ_ν q_ij^ν = 1.
Then, by the convexity of − log x, we obtain − log Σ_ν w_iν h_jν ≤ − Σ_ν q_ij^ν log (w_iν h_jν / q_ij^ν),
which yields the relaxed objective function:

    C̃(P ‖ W H^t) := − Σ_{i,j,ν} p_ij ( q_ij^ν log w_iν h_jν − q_ij^ν log q_ij^ν ) ≥ C(P ‖ W H^t).    (2)

With this relaxation, we can employ an alternating minimization scheme for minimizing
the bound on C. As in EM, one iterates

    1. Given W and H, minimize C̃ w.r.t. the q_ij^ν.
    2. Given the values q_ij^ν, find estimates for W and H by minimizing C̃.

until convergence, which produces a sequence of estimates

    q_ij^{ν,(t)} = w_iν^{(t)} h_jν^{(t)} / Σ_ν w_iν^{(t)} h_jν^{(t)},
    w_iν^{(t+1)} = Σ_j p_ij q_ij^{ν,(t)},
    h_jν^{(t+1)} = Σ_i p_ij q_ij^{ν,(t)} / Σ_{a,b} p_ab q_ab^{ν,(t)},    (3)

that converges to a local minimum of C̃. This is an instance of an MM algorithm [8]. We
use the convention h_jν = 0 whenever Σ_{i,j} p_ij q_ij^ν = 0. The per-iteration complexity is
O(n²).

¹ In the following, we represent objects by their indices.
² The Kullback-Leibler divergence is D(P ‖ W H^t) = −H(P) + C(P ‖ W H^t) ≥ 0, with equality
iff P = W H^t.
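For concreteness, the updates of Eq. (3) can be sketched in NumPy as follows (a minimal illustration, not the authors' code; the random initialization and the per-class loop, which avoids forming an n × n × k tensor, are our own choices):

```python
import numpy as np

def nmf_pairwise_cluster(P, k, n_iter=200, seed=0):
    """EM-style updates of Eq. (3) for the factorization P ~ W H^t.

    P : (n, n) re-normalized similarity matrix (non-negative, sums to 1).
    Returns W (total sum 1), H (columns sum to 1), and MAP labels.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    W = rng.random((n, k)); W /= W.sum()
    H = rng.random((n, k)); H /= H.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        denom = W @ H.T                        # sum_nu w_{i nu} h_{j nu}
        R = P / np.maximum(denom, 1e-300)      # shared factor p_ij / denom_ij
        W_new = np.empty_like(W); H_new = np.empty_like(H)
        for nu in range(k):                    # class-by-class E/M step
            Q = R * np.outer(W[:, nu], H[:, nu])   # = p_ij * q_ij^nu
            W_new[:, nu] = Q.sum(axis=1)           # w-update of Eq. (3)
            H_new[:, nu] = Q.sum(axis=0)           # numerator of h-update
        Z = H_new.sum(axis=0, keepdims=True)       # sum_{a,b} p_ab q_ab^nu
        H = np.where(Z > 0, H_new / np.maximum(Z, 1e-300), 0.0)  # h_jnu = 0 convention
        W = W_new
    # MAP assignment: arg max_nu h_{i nu} * sum_j w_{j nu}
    labels = np.argmax(H * W.sum(axis=0, keepdims=True), axis=1)
    return W, H, labels
```

The simplex constraints of the model (total mass of W equal to one, columns of H summing to one) are preserved by the updates themselves, which makes them convenient sanity checks.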
3 Fusing Multiple Data Sources
Measuring the similarity of objects in, say, L different ways results in L normalized
similarity matrices P_1, . . . , P_L. We introduce now weights β_l, 1 ≤ l ≤ L, with Σ_l β_l = 1. For
fixed β = (β_l) ∈ [0, 1]^L, the aggregated and normalized similarity becomes the convex
combination P̄ = Σ_l β_l P_l. Hence p̄_ij is a mixture of individual similarities p_ij^(l), i.e. a
mixture of different explanations. Again, we seek a good factorization of P̄ by minimizing
the cross-entropy, which then becomes

    min_{β, W, H}  E_β[ C(P_l ‖ W H^t) ],    (4)

where E_β[f_l] = Σ_l β_l f_l denotes the expectation w.r.t. the discrete distribution β. The
same relaxation as in the last section can be used, i.e. for all β, W and H, we have
E_β[C(P_l ‖ W H^t)] ≤ E_β[C̃(P_l ‖ W H^t)]. Hence, we can employ a slightly modified,
nested alternating minimization approach: given fixed β, obtain estimates W and H using
the relaxation of the last section. The update equations change to

    w_iν^{(t+1)} = Σ_l β_l Σ_j p_ij^(l) q_ij^{ν,(t)},
    h_jν^{(t+1)} = Σ_l β_l Σ_i p_ij^(l) q_ij^{ν,(t)} / Σ_l β_l Σ_{i,j} p_ij^(l) q_ij^{ν,(t)}.    (5)

Given the current estimates of W and H, we could minimize the objective in equation (4)
w.r.t. β subject to the constraint ‖β‖_1 = 1. To this end, set c_l := C(P_l ‖ W H^t) and let
c = (c_l)_l. Minimizing the expression in equation (4) subject to the constraints Σ_l β_l = 1
and β ⪰ 0 therefore becomes a linear program (LP): min_β c^t β such that 1_L^t β = 1, β ⪰ 0,
where ⪰ denotes the element-wise ≥-relation. The LP solution is very sparse, since the optimal solutions for the linear program lie on the corners of the simplex in the positive orthant
spanned by the constraints. In particular, it lacks a means to control the sparseness of the
coefficients β. We therefore use a maximum entropy approach ([6]) for sparseness control:
the entropy is upper bounded by log L and measures the sparseness of the vector β, since
the lower the entropy, the more peaked the distribution β can be. Hence, by lower bounding
the entropy, we specify the maximal admissible sparseness. This approach is reasonable,
as we actually want to combine multiple (not only identify one) information sources, but
the best fit in an unsupervised problem will usually be obtained by choosing only a single
source. Thus, we modify the objective originally given in eq. (4) to the entropy-regularized
problem E_β[C̃(P_l ‖ W H^t)] − λ H(β), so that the mathematical program given above becomes

    min_β  c^t β − λ H(β)    s.t.  1_L^t β = 1,  β ⪰ 0,    (6)

where H denotes the (discrete) entropy and λ ∈ R_+ is a positive Lagrange parameter. The
optimization problem in eq. (6) has an analytical solution, namely the Gibbs distribution

    β_l ∝ exp(−c_l / λ).    (7)

For λ → ∞ one obtains β_l = 1/L, while for λ → 0, the LP solution is recovered, and the
estimates become the sparser the more the individual c_l differ. Put differently, the parameter
λ enables us to explore the space of different similarity combinations. The issue of selecting
a reasonable value for the parameter λ will be discussed in the next section.

Iterating this nested procedure will yield a locally optimal solution to the problem of minimizing the entropy-constrained objective, since (i) we obtain a local minimum of the modified objective function and (ii) solving the outer optimization problem can only further
decrease the entropy-constrained objective function.
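The closed-form weight update of Eqs. (6)-(7) is straightforward to implement (a sketch with our own function names; subtracting the minimal cost before exponentiating is a standard numerical-stability trick and does not change the Gibbs weights):

```python
import numpy as np

def cross_entropy(P, W, H):
    """C(P || W H^t) as in Eq. (1)."""
    M = np.maximum(W @ H.T, 1e-300)   # guard against log(0)
    return float(-(P * np.log(M)).sum())

def mixture_weights(P_list, W, H, lam):
    """Gibbs weights of Eq. (7), the closed-form solution of the
    entropy-regularized program (6): beta_l proportional to exp(-c_l / lam)."""
    c = np.array([cross_entropy(P, W, H) for P in P_list])
    a = -(c - c.min()) / lam          # shift by c.min() for numerical stability
    b = np.exp(a)
    return b / b.sum()
```

As the text notes, a large lam drives the weights toward the uniform distribution 1/L, while a small lam concentrates the mass on the best-fitting source.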
4 Generalization and Model Selection
In this section, we introduce an out-of-sample extension that allows us to classify objects
that have not been used for learning the parameters β, W and H. The extension mechanism can be seen as in the spirit of the Nyström extension (c.f. [16]). Introducing such a
generalization mechanism is worthwhile for two reasons: (i) to speed up the computation
if the number n of objects under consideration is very large: by selecting a small subset
of m ≪ n objects for the initial fit, followed by the application of a computationally less
expensive prediction step, one can realize such a speed-up. (ii) The free parameters of the
approach, the number of clusters k as well as the sparseness control parameter λ, can be
estimated using a re-sampling-based stability assessment that relies on the ability of an
algorithm to generalize to previously unseen objects.
Out-of-Sample Extension: Suppose we have to predict class memberships for r (= n − m
in the hold-out case) additional objects given in the r × m matrices S̃^(l). Given the decomposition into W and H, let z_iν be the "posterior" estimated for the i-th object in the data set
used for the original fit, i.e. z_iν ∝ h_iν Σ_j w_jν. We can express the weighted, normalized
similarity between a new object o and object i as p̄_io := Σ_l β_l s̃_oi^(l) / Σ_{l,j} β_l s̃_oj^(l). We
approximate now z_oν for a new object o by

    ẑ_oν = Σ_i z_iν p̄_io,    (8)

which amounts to an interpolation of the z_oν. These values can be obtained using the
originally computed z_iν, which are weighted according to the similarity between object i
and o. In the analogy to the Nyström approximation, the z_iν play the role of basis elements
while the p̄_io amount to coefficients in the basis approximation. The prediction procedure
requires O(mr(l + r + k)) steps.
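The prediction rule of Eq. (8) amounts to one weighted similarity aggregation and one matrix product (a sketch with hypothetical argument names; Z holds the z_iν of the fitted model):

```python
import numpy as np

def predict_out_of_sample(S_new, beta, Z):
    """Out-of-sample rule of Eq. (8) for r new objects.

    S_new : list of L arrays, each (r, m) -- similarities of the new
            objects to the m fitted objects, one array per data source.
    beta  : length-L source weights.
    Z     : (m, k) array of 'posteriors' z_{i nu} from the fitted model.
    Returns the interpolated (r, k) values and hard labels.
    """
    Sw = sum(b * S for b, S in zip(beta, S_new))   # weighted similarities
    Pbar = Sw / Sw.sum(axis=1, keepdims=True)      # normalized p_io per new object
    Z_new = Pbar @ Z                               # Eq. (8): interpolation of z
    return Z_new, Z_new.argmax(axis=1)
```

Since each row of Pbar sums to one, rows of Z_new remain convex combinations of the fitted posteriors.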
Model Selection: The approach presented so far has two free parameters, the number
of classes k and the sparseness penalty λ. In [9], a method for determining the number of
classes has been introduced that assesses the variability of clustering solutions. Thus, we
focus on selecting λ using stability. The assessment can be regarded as a generalization
of cross-validation, as it relies on the dissimilarity of solutions generated from multiple
sub-samples. In a second step, the solutions obtained from these samples are extended to
the complete data set by an appropriate predictor. Multiple classifications of the same data
[Figure 1 here: (a) toy data set of two nested rings; (b) average disagreement versus the sparsity parameter λ; (c) source weights β_l versus the data source index.]
Figure 1: Results on the toy data set (1(a)): The stability assessment (1(b)) suggests the
range λ ∈ {10^1, 10^2, 5·10^2}, which yields solutions matching the ground-truth. In 1(c), the
β_l are depicted for a sub-sample and λ in this range.
set are obtained, whose similarity can be measured. For two clustering solutions Y, Y′ ∈
{1, . . . , k}^n, we define their disagreement as

    d(Y, Y′) = min_{π ∈ S_k} (1/n) Σ_{i=1}^n I{y_i ≠ π(y′_i)},    (9)

where S_k denotes the set of all permutations on sets of size k and I_A is the indicator function of the expression A. The measure quantifies the 0-1 loss after the labels have been
permuted, so that the two clustering solutions are in the best possible agreement. Perfect
agreement up to a permutation of the labels implies d(Y, Y′) = 0. The optimal permutation can be determined in O(k³) by phrasing the problem as a weighted bipartite matching
problem. Following the approach in [9], we select λ, given a pre-specified range of
admissible values, such that the average disagreement observed on B sub-samples is minimal. In this sense, the entropy regularization mechanism guides the search for similarity
combinations leading to stable grouping solutions. Note that multiple minima can occur
and may yield solutions emphasizing different aspects of the data.
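The disagreement measure of Eq. (9) can be computed exactly; the sketch below brute-forces the permutation, which is fine for small k (the O(k³) weighted bipartite matching mentioned above is the scalable route):

```python
from itertools import permutations

def disagreement(y, y_prime):
    """Disagreement d(Y, Y') of Eq. (9): the empirical 0-1 loss
    minimized over all permutations of the labels of Y'."""
    labels = sorted(set(y) | set(y_prime))
    n = len(y)
    best = n
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))
        errors = sum(1 for a, b in zip(y, y_prime) if a != mapping[b])
        best = min(best, errors)
    return best / n
```

Two labelings that differ only by a renaming of the clusters obtain a disagreement of exactly zero, as required.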
5 Experimental Results and Discussion
The performance of our proposal is explored by analyzing toy and real-world data. For
the model selection (sec. 4), we have used B = 20 sub-samples with the proposed out-of-sample extension for prediction. For the stability assessment, different λ have been chosen
from λ ∈ {10^-3, 10^-2, 10^-1, 0.5, 1, 10^1, 10^2, 5·10^2, 10^3, 10^4}. We compared our results with
NCut [15] and Lee and Seung's two NMF algorithms [11] (which measure the approximation error of the factorization with (i) the KL divergence and (ii) the squared Frobenius
norm) applied to the uniform combination of similarities.

Toy Experiment: Figure 1(a) depicts a data set consisting of two nested rings, where
the clustering task consists of identifying each ring as a class. We used rbf-kernels
k(x, y) = exp(−‖x − y‖² / (2σ²)) for σ varying in {10^-4, 10^-3, 10^-2, 10^0, 10^1} as well
as the path kernel introduced in [4]. All methods fail when used with the individual kernels, except for the path kernel. The non-trivial problem is to detect the correct structure
despite the disturbing influence of 5 un-informative kernels. Data sets of size ⌈n/5⌉ have
been generated by sub-sampling. Figure 1(b) depicts the stability assessment, where we see
very small disagreements for λ ∈ {10^1, 10^2, 5·10^2}. At the minimum, the solution almost
perfectly matches the ground-truth (1 error). A plot of the resulting β-coefficients is given
in figure 1(c). NCut as well as the other NMF methods lead to an error rate of ≈ 0.5 when
applied to the uniformly combined similarities.
Figure 2: Images (a) and (b) for the segmentation experiments.
Image segmentation example:³ The next task consists of finding a reasonable segmentation of the images depicted in figures 2(b) and 2(a). For both images, we measured localized intensity histograms and additionally computed Gabor filter responses (e.g. [12]) on 3
scales for 4 different orientations. For each response image, the same histogramming procedure has been used. For all the histograms, we computed the pairwise Jensen-Shannon
divergence (e.g. [2]) for all pairs (i, j) of image sites and took the element-wise exponential of the negative Jensen-Shannon divergences. The resulting similarity matrices have
been used as input for the nmf-based data fusion. For the sub-sampling, m = 500 objects
have been employed. Figures 3(a) (for the shell image) and 3(b) (for the bird image) show
the stability curves for these examples, which exhibit minima for non-trivial λ, resulting in
non-uniform β. Figure 3(c) depicts the resulting segmentation generated using the λ indicated
by the stability assessment, while 3(d) shows a segmentation result where β is closer to the
uniform distribution but the stability score for the corresponding λ is low. Again, we can see
that weighting the different similarity measurements has a beneficial effect, since it leads
to improved results. The comparison with the NCut result on the uniformly weighted data
(fig. 3(e)) confirms that a non-trivial weighting is desirable here. Note that we have used the
full data set with NCut. For, the image in fig. 2(b), we observe similar behavior: the stability
selected solution (fig. 3(f)) is more meaningful than the NCut solution (fig. 3(g)) obtained
on the uniformly weighted data. In this example, the intensity information dominates the
solution obtained on the uniformly combined similarities. However, the texture information alone does not yield a sensible segmentation. Only the non-trivial combination, where
the influence of intensity information is decreased and that of the texture information is
increased, gives rise to the desired result. It is additionally noteworthy, that the prediction
mechanism employed works rather well: In both examples, it has been able to generalize
the segmentation from m = 500 to more than 3500 objects. However, artifacts resulting
from the subsampling-and-prediction procedure cannot always be avoided, as can be seen
in 3(f). They vanish, however, once the algorithm is re-applied to the full data (fig. 3(h)).
Clustering of Protein Sequences: Our final application is about the functional categorization of yeast proteins. We partially adopted the data used in [7].⁴ Since several of the
3588 proteins belong to more than one category, we extracted a subset of 1579 proteins exclusively belonging to one of the three categories cell cycle + DNA processing,transcription
and protein fate. This step ensures a clear ground-truth for comparison. Of the matrices used
in [7], we employed a Gauss Kernel derived from gene expression profiles, one derived
from Swiss-Waterman alignments, one obtained from comparisons of protein domains as
well as two diffusion kernels derived from protein-protein interaction data. Although the
data is not very discriminative for the 3-class problem, the solutions generated on the data
combined using the β for the most stable λ lead to more than 10% improvement w.r.t. the
³ Only comparisons with NCut reported. The NMF results are slightly worse than those of NCut.
⁴ The data is available at http://noble.gs.washington.edu/proj/yeast/.
[Figure 3 here: (a), (b) average disagreement versus the sparsity parameter λ for the two images; (c)-(h) segmentation results.]
Figure 3: Stability plots and segmentation results for the images in 2(a) and 2(b) (see text).
ground-truth (the disagreement measure of section 4 is used) in comparison with the solution obtained using the least stable λ-parameter. The latter, however, was hardly better
than random guessing, having an overall disagreement of more than 0.60 (more precisely, 0.6392 ± 0.0455) on this data. For the most stable λ, we observed a disagreement
around 0.52 depending on the sub-sample (best 0.5267 ± 0.0403). In this case, the largest
weight was assigned to the protein-protein interaction data. NCut and the two nmf methods proposed in [11] lead to rates 0.5953, 0.6080 and 0.6035, respectively, when applied
to the naive combination. Note, that the clustering results are comparable with some of
those obtained in [7], where the protein-protein interaction data has been used to construct
a (supervised) classifier.
6 Conclusion
This work introduced an approach to combining similarity data originating from multiple
sources for grouping a set of objects. Adopting a pairwise clustering perspective enables
a smooth integration of multiple similarity measurements. To be able to distinguish between desired and distractive information, a weighting mechanism is introduced leading
to a potentially sparse convex combination of the measurements. Here, an entropy constraint is employed to control the amount of sparseness actually allowed. A stability-based
model selection mechanism is used to select this free parameter. We emphasize, that this
procedure represents a completely unsupervised model selection strategy. The experimental evaluation on toy and real world data demonstrates that our proposal yields meaningful
partitions and is able to distinguish between desired and spurious structure in data.
Future work will focus on (i) improving the optimization of the proposed model, (ii) the
integration of additional constraints and (iii) the introduction of a cluster-specific weighting
mechanism. The proposed method as well as its relation to other approaches discussed in
the literature is currently under further investigation.
References
[1] F. R. Bach and M. I. Jordan. Learning spectral clustering. In NIPS, volume 16. MIT Press, 2004.
[2] J. Burbea and C. R. Rao. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inform. Theory, 28(3), 1982.
[3] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In NIPS, volume 15. MIT Press, 2003.
[4] B. Fischer, V. Roth, and J. M. Buhmann. Clustering with the connectivity kernel. In NIPS, volume 16. MIT Press, 2004.
[5] Thomas Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Mach. Learn., 42(1-2):177-196, 2001.
[6] E. T. Jaynes. Information theory and statistical mechanics, I and II. Physical Review, 106 and 108:620-630 and 171-190, 1957.
[7] G. R. G. Lanckriet, M. Deng, N. Cristianini, M. I. Jordan, and W. S. Noble. Kernel-based data fusion and its application to protein function prediction in yeast. In Pacific Symposium on Biocomputing, pages 300-311, 2004.
[8] Kenneth Lange. Optimization. Springer Texts in Statistics. Springer, 2004.
[9] T. Lange, M. Braun, V. Roth, and J. M. Buhmann. Stability-based model selection. In NIPS, volume 15. MIT Press, 2003.
[10] M. H. C. Law, M. A. T. Figueiredo, and A. K. Jain. Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell., 26(9):1154-1166, 2004.
[11] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In NIPS, volume 13, pages 556-562, 2000.
[12] B. S. Manjunath and W. Y. Ma. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell., 18(8):837-842, 1996.
[13] D. S. Modha and W. S. Spangler. Feature weighting in k-means clustering. Mach. Learn., 52(3):217-237, 2003.
[14] V. Roth and T. Lange. Feature selection in clustering problems. In NIPS, volume 16. MIT Press, 2004.
[15] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22(8):888-905, 2000.
[16] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, volume 13. MIT Press, 2001.
[17] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS, volume 15, 2003.
[18] W. Xu, X. Liu, and Y. Gong. Document clustering based on non-negative matrix factorization. In SIGIR '03, pages 267-273. ACM Press, 2003.
Large-Scale Multiclass Transduction
Thomas Gärtner
Fraunhofer AIS.KD, 53754 Sankt Augustin, [email protected]
Quoc V. Le, Simon Burton, Alex J. Smola, Vishy Vishwanathan
Statistical Machine Learning Program, NICTA and ANU, Canberra, ACT
{Quoc.Le, Simon.Burton, Alex.Smola, SVN.Vishwanathan}@nicta.com.au
Abstract
We present a method for performing transductive inference on very large
datasets. Our algorithm is based on multiclass Gaussian processes and is
effective whenever the multiplication of the kernel matrix or its inverse
with a vector can be computed sufficiently fast. This holds, for instance,
for certain graph and string kernels. Transduction is achieved by variational inference over the unlabeled data subject to a balancing constraint.
1
Introduction
While obtaining labeled data remains a time and labor consuming task, acquisition and
storage of unlabelled data is becoming increasingly cheap and easy. This development
has driven machine learning research into exploring algorithms that make extensive use of
unlabelled data at training time in order to obtain better generalization performance.
A common problem of many transductive approaches is that they scale badly with the
amount of unlabeled data, which prohibits the use of massive sets of unlabeled data. Our
algorithm shows improved scaling behavior, both for standard Gaussian Process classification and transduction. We perform classification on a dataset consisting of a digraph with 75,888 vertices and 508,960 edges. To the best of our knowledge it has so far not been
possible to perform transduction on graphs of this size in reasonable time (with standard
hardware). On standard data our method shows competitive or better performance.
Existing Transductive Approaches for SVMs use nonlinear programming [2] or EM-style
iterations for binary classification [4]. Moreover, on graphs various methods for unsupervised learning have been proposed [12, 11], all of which are mainly concerned with computing the kernel matrix on training and test set jointly. Other formulations impose that the
label assignment on the test set be consistent with the assumption of confident classification
[8]. Yet others impose that training and test set have similar marginal distributions [4].
The present paper uses all three properties. It is particularly efficient whenever Kα or K⁻¹α can be computed in linear time, where K ∈ ℝ^{m×m} is the kernel matrix and α ∈ ℝ^m.
• We require consistency of training and test marginals. This avoids problems with overly large majority classes and small training sets.
• Kernels (or their inverses) are computed on training and test set simultaneously. On graphs this can lead to considerable computational savings.
• Self-consistency of the estimates is achieved by a variational approach. This allows us to make use of Gaussian Process multiclass formulations.
2 Multiclass Classification
We begin with a brief overview of Gaussian Process multiclass classification [10], recast in terms of exponential families. Denote by 𝒳 × 𝒴 with 𝒴 = {1..n} the domain of observations and labels. Moreover, let X := {x₁, …, x_m} and Y := {y₁, …, y_m} be the set of observations. Our goal is to estimate y|x via

  p(y|x, θ) = exp(⟨φ(x, y), θ⟩ − g(θ|x))  where  g(θ|x) = log Σ_{y∈𝒴} exp(⟨φ(x, y), θ⟩).   (1)
φ(x, y) are the joint sufficient statistics of x and y, and g(θ|x) is the log-partition function which takes care of the normalization. We impose a normal prior on θ, leading to the following negative joint likelihood in θ and Y:

  P := − log p(θ, Y|X) = Σ_{i=1}^m [g(θ|x_i) − ⟨φ(x_i, y_i), θ⟩] + (1/2σ²) ‖θ‖² + const.   (2)
For transduction purposes p(θ, Y|X) will prove more useful than p(θ|Y, X). Note that a normal prior on θ with variance σ²𝟙 implies a Gaussian process on the random variable t(x, y) := ⟨φ(x, y), θ⟩ with covariance kernel

  Cov[t(x, y), t(x′, y′)] = σ² ⟨φ(x, y), φ(x′, y′)⟩ =: σ² k((x, y), (x′, y′)).   (3)
Parametric Optimization Problem In the following we assume isotropy among the class labels, that is ⟨φ(x, y), φ(x′, y′)⟩ = δ_{y,y′} ⟨φ(x), φ(x′)⟩ (this is not a necessary requirement for the efficiency of our algorithm, but it greatly simplifies the presentation). This allows us to decompose θ into θ₁, …, θ_n such that

  ⟨φ(x, y), θ⟩ = ⟨φ(x), θ_y⟩  and  ‖θ‖² = Σ_{y=1}^n ‖θ_y‖².   (4)
Applying the representer theorem allows us to expand θ in terms of φ(x_i, y) as θ = Σ_{i=1}^m Σ_{y=1}^n α_{iy} φ(x_i, y). In conjunction with (4) we have

  θ_y = Σ_{i=1}^m α_{iy} φ(x_i)  where  α ∈ ℝ^{m×n}.   (5)
Let ỹ ∈ ℝ^{m×n} with ỹ_{ij} = 1 if y_i = j and ỹ_{ij} = 0 otherwise, and K ∈ ℝ^{m×m} with K_{ij} = ⟨φ(x_i), φ(x_j)⟩. The joint log-likelihood (2) in terms of α and K then yields

  Σ_{i=1}^m log Σ_{y=1}^n exp([Kα]_{iy}) − tr ỹᵀKα + (1/2σ²) tr αᵀKα + const.   (6)
Equivalently, we could expand (2) in terms of t := Kα. This is commonly done in the Gaussian process literature, and we will use both formulations depending on the problem we need to solve: if Kα can be computed efficiently, as is the case with string kernels [9], we use the α-parameterization. Conversely, if K⁻¹t is cheap, as for example with graph kernels [7], we use the t-parameterization.
Derivatives Second-order methods such as Conjugate Gradient require the computation of derivatives of − log p(θ, Y|X) with respect to θ in terms of α or t. Using the shorthand π ∈ ℝ^{m×n} with π_{ij} := p(y = j|x_i, θ) we have

  ∂_α P = K(π − ỹ + σ⁻²α)  and  ∂_t P = π − ỹ + σ⁻²K⁻¹t.   (7)
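To make (7) concrete, here is a small NumPy sketch (our illustration, not the authors' code; all variable names are ours) that computes the class probabilities π and the gradient ∂_α P for a small dense K. The same expressions apply unchanged when Kα is evaluated by a fast matrix-vector routine.

```python
import numpy as np

def softmax_rows(T):
    # Row-wise softmax: pi[i, j] = p(y = j | x_i, theta) for T = K @ alpha.
    T = T - T.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    E = np.exp(T)
    return E / E.sum(axis=1, keepdims=True)

def grad_alpha(K, alpha, Y_ind, sigma2):
    # Gradient of the negative joint log-likelihood, eq. (7):
    #   dP/dalpha = K (pi - y_tilde + alpha / sigma^2)
    pi = softmax_rows(K @ alpha)
    return K @ (pi - Y_ind + alpha / sigma2)

# Tiny example: m = 3 points, n = 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))
K = X @ X.T + np.eye(3)                      # a valid (PSD) kernel matrix
Y_ind = np.array([[1., 0.], [0., 1.], [1., 0.]])  # indicator matrix y_tilde
alpha = np.zeros((3, 2))
g = grad_alpha(K, alpha, Y_ind, sigma2=1.0)
```

At α = 0 the softmax gives π = 1/2 everywhere (for n = 2), so the gradient reduces to K(1/2 − ỹ), which is easy to check by hand.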
To avoid spelling out fourth-order tensors for the second derivatives (since π ∈ ℝ^{m×n}), we state their action as bilinear forms on vectors β, γ, u, v ∈ ℝ^{m×n}. For convenience we use the "Matlab" notation ".∗" to denote element-wise multiplication of matrices:

  ∂²_α P[β, γ] = tr (Kβ)ᵀ(π .∗ (Kγ)) − tr (π .∗ Kβ)ᵀ(π .∗ (Kγ)) + σ⁻² tr βᵀKγ   (8a)
  ∂²_t P[u, v] = tr uᵀ(π .∗ v) − tr (π .∗ u)ᵀ(π .∗ v) + σ⁻² tr uᵀK⁻¹v.   (8b)
Let L be the computational time required to compute Kα and K⁻¹t respectively. One may check that L = O(m) implies that each conjugate gradient (CG) descent step can be performed in O(m) time. Combining this with rates of convergence for Newton-type or nonlinear CG solver strategies yields overall time costs on the order of O(m log m) to O(m²) in the worst case, a significant improvement over conventional O(m³) methods.
3 Transductive Inference by Variational Methods
As we are interested in transduction, the labels Y (and analogously the data X) decompose as Y = Ytrain ∪ Ytest. To directly estimate p(Ytest|X, Ytrain) we would need to integrate out θ, which is usually intractable. Instead, we aim at estimating the mode of p(θ|X, Ytrain) by variational means. With the KL-divergence D and an arbitrary distribution q, the well-known bound (see e.g. [5])

  − log p(θ|X, Ytrain) ≤ − log p(θ|X, Ytrain) + D(q(Ytest) ‖ p(Ytest|X, Ytrain, θ))   (9)
      = − Σ_{Ytest} q(Ytest) (log p(Ytest, θ|X, Ytrain) − log q(Ytest))   (10)
holds. This bound (10) can be minimized with respect to ? and q in an iterative fashion. The
key trick is that while using a factorizing approximation for q we restrict the latter to distributions which satisfy balancing constraints. That is, we require them to yield marginals
on the unlabeled data which are comparable with the labeled observations.
Decomposing the Variational Bound To simplify (10), observe that

  p(Ytest, θ|X, Ytrain) = p(Ytrain, Ytest, θ|X) / p(Ytrain|X).   (11)
In other words, the first term in (10) equals (6) up to a constant independent of θ or Ytest. With q_ij := q(y_i = j) we define ỹ_ij(q) = q_ij for all i > mtrain, and ỹ_ij(q) = 1 if y_i = j and 0 otherwise for all i ≤ mtrain. In other words, we are taking the expectation in ỹ over all unobserved labels Ytest with respect to the distribution q(Ytest). We have

  − Σ_{Ytest} q(Ytest) log p(Ytest, θ|X, Ytrain)
    = Σ_{i=1}^m log Σ_{j=1}^n exp([Kα]_{ij}) − tr ỹ(q)ᵀKα + (1/2σ²) tr αᵀKα + const.   (12)
For fixed q the optimization over θ proceeds as in Section 2. Next we discuss q.

Optimization over q The second term in (10) is the negative entropy of q. Since q factorizes we have

  Σ_{Ytest} q(Ytest) log q(Ytest) = Σ_{i=mtrain+1}^m Σ_{j=1}^n q_ij log q_ij.   (13)
It is unreasonable to assume that q may be chosen freely from all factorizing distributions
(the latter would lead to a straightforward EM algorithm for transductive inference): if we
observe a certain distribution of labels on the training set, e.g., for binary classification we
see 45% positive and 55% negative labels, then it is very unlikely that the label distribution
on the test set deviates significantly. Hence we should make use of this information.
If m ? mtrain , however, a naive application of the variational bound can lead to cases
where q is concentrated on one class ? the increase in likelihood for a resulting very simple classifier completely outweighs any balancing constraints implicit in the data. This is
confirmed by experimental results. It is, incidentally, also the reason why SVM transduction optimization codes [4] impose a balancing constraint on the assignment of test labels.
We impose the following conditions:

  r_j⁻ ≤ Σ_{i=mtrain+1}^m q_ij ≤ r_j⁺  for all j ∈ 𝒴,  and  Σ_{j=1}^n q_ij = 1  for all i ∈ {mtrain+1 .. m}.

Here the constraints r_j⁻ = p_emp(y = j) − ε and r_j⁺ = p_emp(y = j) + ε are chosen so as to correspond to confidence intervals given by finite-sample-size tail bounds. In other words, we set p_emp(y = j) = m_train⁻¹ Σ_{i=1}^{mtrain} 1{y_i = j} and choose ε such as to satisfy

  Pr{ | m_train⁻¹ Σ_{i=1}^{mtrain} ξ_i − m_test⁻¹ Σ_{i=1}^{mtest} ξ_i | > ε } ≤ δ   (14)

for iid {0, 1} random variables ξ_i with mean p. This is a standard ghost-sample inequality. It follows directly from [3, Eq. (2.7)], after application of a union bound over the class labels, that ε ≤ √( log(2n/δ) · m / (2 · mtrain · mtest) ).
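As a concrete sketch of this balancing constraint (our illustration; function and variable names are not from the paper), the following code computes the empirical marginals p_emp, the bound ε = √(log(2n/δ)·m / (2·mtrain·mtest)), and the resulting intervals [r_j⁻, r_j⁺]:

```python
import numpy as np

def marginal_intervals(y_train, n_classes, m_test, delta=0.05):
    """Confidence intervals for the test-set class marginals.

    Implements the ghost-sample bound from the text:
      eps = sqrt(log(2 n / delta) * m / (2 m_train m_test)).
    """
    m_train = len(y_train)
    m = m_train + m_test
    p_emp = np.bincount(y_train, minlength=n_classes) / m_train
    eps = np.sqrt(np.log(2 * n_classes / delta) * m / (2.0 * m_train * m_test))
    # Clip to [0, 1] since the bound can overshoot the probability simplex.
    return np.clip(p_emp - eps, 0.0, 1.0), np.clip(p_emp + eps, 0.0, 1.0), eps

# Toy example: 10 labeled points, 3 classes, 90 unlabeled test points.
y_train = np.array([0, 0, 1, 0, 1, 2, 0, 1, 2, 0])
r_minus, r_plus, eps = marginal_intervals(y_train, n_classes=3, m_test=90)
```

Note how ε shrinks as both mtrain and mtest grow, so the constraint tightens exactly when there is enough data to trust the empirical marginals.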
4 Graphs, Strings and Vectors
We now discuss the two main applications where computational savings can be achieved: graphs and strings. In the case of graphs, the advantage arises from the fact that K⁻¹ is sparse, whereas for texts we can use fast string kernels [9] to compute Kα in linear time.
Graphs Denote by G(V, E) the graph given by vertices V and edges E, where each edge is a set of two vertices. Then W ∈ ℝ^{|V|×|V|} denotes the adjacency matrix of the graph, where W_ij > 0 only if edge {i, j} ∈ E. We assume that the graph G, and thus also the adjacency matrix W, is sparse. Now denote by 𝟙 the identity matrix and by D the diagonal matrix of vertex degrees, i.e., D_ii = Σ_j W_ij. Then the graph Laplacian and the normalized graph Laplacian of G are given by

  L := D − W  and  L̃ := 𝟙 − D^{−1/2} W D^{−1/2},   (15)
respectively. Many kernels K (or their inverses) on G are given by low-degree polynomials of the Laplacian or the adjacency matrix of G, such as the following:

  K = Σ_{i=1}^l c_i W^{2i},   K = Π_{i=1}^l (𝟙 − c_i L̃),   or   K⁻¹ = L̃ + λ𝟙.   (16)
In all three cases we assume c_i, λ ≥ 0 and l ∈ ℕ. The first kernel arises from an l-step random walk; the third case is typically referred to as the regularized graph Laplacian. In these cases Kα or K⁻¹t can be computed using L = l(|V| + |E|) operations. This means that if the average degree of the graph does not increase with the number of observations, L = O(m), as m = |V| for inference on graphs.
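As an illustration of the regularized-Laplacian case K⁻¹ = L̃ + λ𝟙, the following SciPy sketch (ours, with made-up graph and parameter values) builds the sparse normalized Laplacian of a small graph, applies K⁻¹ as a single sparse matrix-vector product, and applies K via a few conjugate-gradient iterations:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def normalized_laplacian(edges, n):
    # Build W and L_tilde = I - D^{-1/2} W D^{-1/2} for an undirected graph.
    rows, cols = zip(*edges)
    W = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
    W = (W + W.T).tocsr()                     # symmetrize (edges given once each)
    d = np.asarray(W.sum(axis=1)).ravel()     # vertex degrees
    Dinv_sqrt = sp.diags(1.0 / np.sqrt(d))
    return sp.identity(n) - Dinv_sqrt @ W @ Dinv_sqrt

# 4-cycle graph; K^{-1} = L_tilde + lam * I stays sparse.
Kinv = normalized_laplacian([(0, 1), (1, 2), (2, 3), (3, 0)], n=4) + 0.1 * sp.identity(4)
t = np.array([1.0, -1.0, 1.0, -1.0])
v = Kinv @ t                 # K^{-1} t: one sparse mat-vec, O(|V| + |E|)
t_back, info = cg(Kinv, v)   # applying K = (K^{-1})^{-1} via a few CG steps
```

The round trip t → K⁻¹t → K(K⁻¹t) recovers t, and neither step ever forms the dense K.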
From Graphs to Graphical Models Graphs are one of the examples where transduction actually improves computational cost. Assume that we are given the inverse kernel matrix K⁻¹ on training and test set, and we wish to perform induction only. In this case we need to compute the kernel matrix (or its inverse) restricted to the training set. Let

  K⁻¹ = [ A  B ; Bᵀ  C ];

then the upper-left corner of K (representing the training-set part only) is given by the inverse Schur complement (A − B C⁻¹ Bᵀ)⁻¹. Computing the latter is costly. Moreover, neither the Schur complement nor its inverse is typically sparse.
Here we have a nice connection between graphical models and graph kernels. Assume that t is a normal random variable with conditional independence properties. In this case the inverse covariance matrix has nonzero entries only for variables with a direct dependency structure. This follows directly from an application of the Clifford-Hammersley theorem to Gaussian random variables [6]. In other words, if we are given a graphical model of normal random variables, their conditional independence structure is reflected by K⁻¹.

Just as marginalization may induce dependencies in graphical models, computing the kernel matrix on the training set only may lead to dense matrices, even when the inverse kernel on the combined training and test data is sparse. The bottom line is that there are cases where it is computationally cheaper to take both training and test set into account and optimize over a larger set of variables, rather than dealing with a smaller dense matrix.
Strings Efficient computation of string kernels using suffix trees was described in [9]. In particular, it was observed that expansions of the form Σ_{i=1}^m α_i k(x_i, x) can be evaluated in time linear in the length of x, provided some preprocessing for the coefficients α and observations x_i is performed. This preprocessing is independent of x and can be computed in O(Σ_i |x_i|) time. The efficient computation scheme covers all kernels of the type

  k(x, x′) = Σ_s w_s #_s(x) #_s(x′)   (17)

for arbitrary w_s ≥ 0. Here, #_s(x) denotes the number of occurrences of s in x, and the sum is carried out over all substrings of x. This means that the computation time for evaluating Kα is again O(Σ_i |x_i|), as we need to evaluate the kernel expansion for all x ∈ X. Since the average string length is independent of m, this yields an O(m) algorithm for Kα.
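Kernels of type (17) can be illustrated with a deliberately naive implementation that enumerates all substrings explicitly. This is O(|x|²) per string, whereas the suffix-tree scheme of [9] is linear, so the sketch below (ours, not from [9]) is for exposition only:

```python
from collections import Counter

def substring_counts(x):
    # #_s(x) for every substring s of x (naive O(|x|^2) enumeration).
    return Counter(x[i:j] for i in range(len(x)) for j in range(i + 1, len(x) + 1))

def string_kernel(x, xp, w=lambda s: 1.0):
    # k(x, x') = sum_s w_s #_s(x) #_s(x'), eq. (17), with weights w_s >= 0.
    cx, cxp = substring_counts(x), substring_counts(xp)
    return sum(w(s) * c * cxp[s] for s, c in cx.items() if s in cxp)

k_ab = string_kernel("ab", "ab")  # substrings "a", "b", "ab" each match once
```

Any non-negative weighting w_s (e.g. by substring length) keeps the kernel positive semidefinite, since (17) is an explicit inner product in substring-count feature space.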
Vectors: If k(x, x? ) = ?(x)? ?(x? ) and ?(x) ? Rd for d ? m, it is possible to carry
out matrix vector multiplications in O(md) time. This is useful for cases where we have a
sparse matrix with a small number of low-rank updates (e.g. from low rank dense fill-ins).
5 Optimization
Optimization in α and t: P is convex in α (and in t, since t = Kα). This means that a combination of Conjugate Gradient and Newton-Raphson (NR) steps can be used for optimization:

• Compute updates α ← α − η (∂²_α P)⁻¹ ∂_α P, solving the linear system approximately by Conjugate Gradient iterations.
• Find the optimal step size η by line search.
• Repeat until the norm of the gradient is sufficiently small.

The key point is that the arising linear system is only solved approximately, which can be done using very few CG iterations. Since each of them is O(m) for fast kernel-vector computations, the overall cost is a sub-quadratic function of m.
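The loop above can be sketched generically as follows (a hypothetical implementation of an approximate Newton step with an inner CG solve and backtracking line search; this is our sketch, not the authors' code). It is verified here on a simple convex quadratic, where a single step reaches the optimum:

```python
import numpy as np

def newton_cg_step(grad, hess_vec, x, cg_iters=10):
    """One approximate Newton step: solve H d = -g with a few CG iterations,
    then take x + eta * d, with a backtracking line search on ||grad||."""
    g = grad(x)
    d = np.zeros_like(x); r = -g.copy(); p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(cg_iters):            # approximate inner solve, few CG steps
        Hp = hess_vec(x, p)
        a = rs / np.vdot(p, Hp)
        d += a * p; r -= a * Hp
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < 1e-10:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    eta = 1.0
    while np.linalg.norm(grad(x + eta * d)) > np.linalg.norm(g) and eta > 1e-8:
        eta *= 0.5                       # backtracking line search
    return x + eta * d

# Sanity check on a convex quadratic P(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess_vec = lambda x, v: A @ v
x = newton_cg_step(grad, hess_vec, np.zeros(2))
```

In the transduction setting, `hess_vec` would apply the bilinear forms (8a)/(8b) to a direction, which only requires fast products with K or K⁻¹, never the full Hessian.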
Optimization in q is somewhat less straightforward: we need to find the optimal q in terms of KL-divergence subject to the marginal constraint. Denote by ψ the part of Kα pertaining to the test data, or more formally ψ ∈ ℝ^{mtest×n} with ψ_ij = [Kα]_{i+mtrain, j}. We have:

  minimize_q  tr qᵀψ + Σ_{i,j} q_ij log q_ij   (18)
  subject to  q_j⁻ ≤ Σ_i q_ij ≤ q_j⁺,  q_ij ≥ 0,  and  Σ_i q_li = 1  for all j ∈ 𝒴, l ∈ {1..mtest}
Table 1: Error rates on some benchmark datasets (mostly from UCI). The last column gives the error rates reported in [1].

DATASET          #INST  #ATTR  IND. GP       TRANSD. GP    S3VM MIP
cancer             699      9  3.4%±4.1%     2.1%±4.7%     3.4%
cancer (progn.)    569     30  6.1%±3.7%     6.0%±3.7%     3.3%
heart (cleave.)    297     13  15.0%±5.6%    13.0%±6.3%    16.0%
housing            506     13  7.0%±1.0%     6.8%±0.9%     15.1%
ionosphere         351     34  8.6%±6.3%     6.1%±3.4%     10.6%
pima               769      8  19.6%±8.1%    17.6%±8.0%    22.2%
sonar              208     60  10.5%±5.1%    8.6%±3.4%     21.9%
glass              214     10  20.5%±1.6%    17.3%±4.5%    —
wine               178     13  19.4%±5.7%    15.6%±4.2%    —
tictactoe          958      9  3.9%±0.7%     3.3%±0.6%     —
cmc               1473     10  32.5%±7.1%    28.9%±7.5%    —
USPS              9298    256  5.9%          4.8%          —¹
This is a convex optimization problem. Using Lagrange multipliers one can show that q needs to satisfy q_ij = exp(−ψ_ij) b_i c_j where b_i, c_j ≥ 0. Solving for Σ_j q_ij = 1 yields

  q_ij = exp(−ψ_ij) c_j / Σ_{l=1}^n exp(−ψ_il) c_l .

This means that instead of an optimization problem in mtest × n variables we now only need to optimize over n variables subject to 2n constraints.

Note that the exact matching constraint, where q_j⁺ = q_j⁻, amounts to a maximum likelihood problem for a shifted exponential family model with q_ij = exp(−ψ_ij) exp(λ_i − g_j(λ_i)). It can be shown that the approximate matching problem is equivalent to a maximum a posteriori optimization problem using the norm dual to the expectation constraints on q_ij. We are currently working on extending this setting.

In summary, the optimization now depends only on n variables. It can be solved by standard second-order methods. As initialization we choose λ_i such that the per-class averages match the marginal constraint while ignoring the per-sample balance. After that, a small number of Newton steps suffices for optimization.
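The reduced problem can thus be solved by adjusting only the n class weights c_j; given ψ and c, the corresponding q follows in closed form. A small sketch of that closed form (our code; computed in log-space for numerical stability):

```python
import numpy as np

def q_from_weights(psi, c):
    # q_ij = exp(-psi_ij) c_j / sum_l exp(-psi_il) c_l  (closed form from the text).
    logits = -psi + np.log(c)[None, :]
    logits -= logits.max(axis=1, keepdims=True)  # stabilize before exponentiating
    E = np.exp(logits)
    return E / E.sum(axis=1, keepdims=True)

psi = np.array([[0.0, 1.0],
                [2.0, 0.0],
                [0.5, 0.5]])
q = q_from_weights(psi, c=np.array([1.0, 1.0]))
col = q.sum(axis=0)  # class marginals to compare against [r_j^-, r_j^+] when tuning c
```

An outer loop would then adjust c (e.g. by Newton steps in log c) until each column sum `col[j]` lies inside its interval [r_j⁻, r_j⁺].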
6 Experiments
Unfortunately, we are not aware of other multiclass transductive learning algorithms. To
still be able to compare our approach to other transductive learning algorithms we performed experiments on some benchmark datasets. To investigate the performance of our
algorithm in classifying vertices of a graph, we chose the WebKB dataset.
Benchmark datasets Table 1 reports results on some benchmark datasets. To be able to
compare the error rates of the transductive multiclass Gaussian Process classifier proposed
in this paper, we also report error rates from [2] and an inductive multiclass Gaussian
Process classifier. The reported error rates are for 10-fold crossvalidations. Parameters
were chosen by crossvalidation inside the training folds.
Graph Mining To illustrate the effectiveness of our approach on graphs we performed
experiments on the well known WebKB dataset. This dataset consists of 8275 webpages
classified into 7 classes. Each webpage contains textual content and/or links to other webpages. As we are using this dataset to evaluate our graph mining algorithm, we ignore the
text on each webpage and consider the dataset as a labelled directed graph. To have the data
¹ In [2] only subsets of USPS were considered due to the size of this problem.
Table 2: Results on WebKB for "inverse" 10-fold crossvalidation.

DATASET        |V|    |E|   ERROR
Cornell        867   1793    10%
Texas          827   1683     8%
Washington    1205   2368    10%
Wisconsin     1263   3678    15%
Misc          4113   4462    66%
all           8275  14370    53%
Universities  4162   9591    12%
set as large as possible, we did not remove any webpages, opposed to most other work.
Table 2 reports the results of our algorithm on different subsets of the WebKB data as well as on the full data. We use the co-linkage graph and report results for "inverse" 10-fold stratified crossvalidations, i.e., we use 1 fold as training data and 9 folds as test data.
Parameters are the same for all reported experiments and were found by experimenting with
a few parameter sets on the "Cornell" subset only. It turned out that the class membership
probabilities are not well-calibrated on this dataset. To overcome this, we predict on the
test set as follows: For each class the instances that are most likely to be in this class are
picked (if they haven?t been picked for a class with lower index) such that the fraction of
instances assigned to this class is the same on the training and test set. We will investigate
the reason for this in future work.
The setting most similar to ours is probably the one described in [11]. Although a directed-graph approach outperforms an undirected approach there, we resorted to kernels for undirected graphs, as those are computationally more attractive. We will investigate computationally attractive digraph kernels in future work and expect benefits similar to those reported by [11]. Though we are using more training data than [11], we are also considering
a more difficult learning problem (multiclass without removing various instances). To investigate the behaviour of our algorithm with less training data, we performed a 20-fold
inverse crossvalidation on the ?wisconsin? subset and observed an error rate of 17% there.
To further strengthen our results and show that the runtime performance of our algorithm
is sufficient for classifying the vertices of massive graphs, we also performed initial experiments on the Epinions dataset collected by Matthew Richardson and Pedro Domingos. The dataset is a social network consisting of 75,888 people connected by 508,960 "trust" edges. Additionally, the dataset comes with a list of 185 "top reviewers" for 25 topic areas. We tried to predict these but only got 12% of the top reviewers correct. As we are not aware of any predictive results on this task, we suppose this low accuracy is inherent to the task.
However, the experiments show that the algorithm can be run on very large graph datasets.
7 Discussion and Extensions
We presented an efficient method for performing transduction on multiclass estimation
problems with Gaussian Processes. It performs particularly well whenever the kernel matrix has special numerical properties which allow fast matrix-vector multiplication. That said, even on standard dense problems we observed very good improvements over standard induction (typically a 10% reduction of the training error).
Structured Labels and Conditional Random Fields are a clear direction in which to extend the transductive setting. The key obstacle to overcome in this context is to find a suitable
marginal distribution: with increasing structure of the labels the confidence bounds per
subclass decrease dramatically. A promising strategy is to use only partial marginals on
maximal cliques and enforce them directly similarly to an unconditional Markov network.
Applications to Document Analysis require efficient small-memory-footprint suffix tree
implementations. We are currently working on this, which will allow GP classification to
perform estimation on large document collections. We believe it will be possible to use
out-of-core storage in conjunction with annotation to work on sequences of 10⁸ characters.
Other Marginal Constraints than matching marginals are worth exploring. In particular,
constraints derived from exchangeable distributions such as those used by Latent Dirichlet
Allocation are a promising area to consider. This may also lead to connections between GP
classification and clustering.
Sparse O(m^1.3) Solvers for Graphs have recently been proposed by the theoretical computer science community. It is worthwhile exploring their use for inference on graphs.
Acknowledgements The authors thank Matthew Richardson and Pedro Domingos for collecting the Epinions data, and Deepayan Chakrabarti and Christos Faloutsos for providing a preprocessed version. Parts of this work were carried out while TG was visiting NICTA.
National ICT Australia is funded through the Australian Government?s Backing Australia?s
Ability initiative, in part through the Australian Research Council. This work was supported
by grants of the ARC and by the Pascal Network of Excellence.
References

[1] K. Bennett. Combining support vector and mathematical programming methods for classification. In Advances in Kernel Methods – Support Vector Learning, pages 307–326. MIT Press, 1998.
[2] K. Bennett. Combining support vector and mathematical programming methods for induction. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods – SV Learning, pages 307–326, Cambridge, MA, 1999. MIT Press.
[3] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[4] T. Joachims. Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms. The Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Boston, May 2002. ISBN 0-7923-7679-X.
[5] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[6] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[7] A. J. Smola and I. R. Kondor. Kernels and regularization on graphs. In B. Schölkopf and M. K. Warmuth, editors, Proceedings of the Annual Conference on Computational Learning Theory, Lecture Notes in Computer Science. Springer, 2003.
[8] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[9] S. V. N. Vishwanathan and A. J. Smola. Fast kernels for string and tree matching. In K. Tsuda, B. Schölkopf, and J.-P. Vert, editors, Kernels and Bioinformatics, Cambridge, MA, 2004. MIT Press.
[10] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[11] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In International Conference on Machine Learning, 2005.
[12] X. Zhu, J. Lafferty, and Z. Ghahramani. Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML '03), 2003.
Infinite Latent Feature Models
and the Indian Buffet Process
Thomas L. Griffiths
Cognitive and Linguistic Sciences
Brown University, Providence RI
tom [email protected]
Zoubin Ghahramani
Gatsby Computational Neuroscience Unit
University College London, London
[email protected]
Abstract
We define a probability distribution over equivalence classes of binary
matrices with a finite number of rows and an unbounded number of
columns. This distribution is suitable for use as a prior in probabilistic
models that represent objects using a potentially infinite array of features.
We identify a simple generative process that results in the same distribution over equivalence classes, which we call the Indian buffet process.
We illustrate the use of this distribution as a prior in an infinite latent feature model, deriving a Markov chain Monte Carlo algorithm for inference
in this model and applying the algorithm to an image dataset.
1
Introduction
The statistical models typically used in unsupervised learning draw upon a relatively small
repertoire of representations. The simplest representation, used in mixture models, associates each object with a single latent class. This approach is appropriate when objects
can be partitioned into relatively homogeneous subsets. However, the properties of many
objects are better captured by representing each object using multiple latent features. For
instance, we could choose to represent each object as a binary vector, with entries indicating the presence or absence of each feature [1], allow each feature to take on a continuous
value, representing objects with points in a latent space [2], or define a factorial model, in
which each feature takes on one of a discrete set of values [3, 4].
A critical question in all of these approaches is the dimensionality of the representation:
how many classes or features are needed to express the latent structure expressed by a
set of objects. Often, determining the dimensionality of the representation is treated as a
model selection problem, with a particular dimensionality being chosen based upon some
measure of simplicity or generalization performance. This assumes that there is a single,
finite-dimensional representation that correctly characterizes the properties of the observed
objects. An alternative is to assume that the true dimensionality is unbounded, and that the
observed objects manifest only a finite subset of classes or features [5]. This alternative
is pursued in nonparametric Bayesian models, such as Dirichlet process mixture models
[6, 7, 8, 9]. In a Dirichlet process mixture model, each object is assigned to a latent class,
and each class is associated with a distribution over observable properties. The prior distribution over assignments of objects to classes is defined in such a way that the number
of classes used by the model is bounded only by the number of objects, making Dirichlet
process mixture models "infinite" mixture models [10].
The prior distribution assumed in a Dirichlet process mixture model can be specified in
terms of a sequential process called the Chinese restaurant process (CRP) [11, 12]. In the
CRP, N customers enter a restaurant with infinitely many tables, each with infinite seating
capacity. The ith customer chooses an already-occupied table k with probability $\frac{m_k}{i-1+\alpha}$, where $m_k$ is the number of current occupants, and chooses a new table with probability $\frac{\alpha}{i-1+\alpha}$. Customers are exchangeable under this process: the probability of a particular
seating arrangement depends only on the number of people at each table, and not the order
in which they enter the restaurant.
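As a concrete illustration of the seating scheme just described, the CRP can be simulated directly. The sketch below is not from the paper; the table labels simply record order of creation.

```python
import random

def sample_crp(num_customers, alpha, rng=None):
    """Sample table assignments from the Chinese restaurant process.

    Customer i joins occupied table k with probability m_k / (i - 1 + alpha)
    and starts a new table with probability alpha / (i - 1 + alpha), where
    m_k is the number of current occupants of table k.
    """
    rng = rng or random.Random(0)
    assignments, counts = [], []
    for i in range(1, num_customers + 1):
        r = rng.uniform(0, i - 1 + alpha)
        acc = 0.0
        for k, m in enumerate(counts):
            acc += m
            if r <= acc:
                counts[k] += 1
                assignments.append(k)
                break
        else:  # fell past all occupied tables: start a new one
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments
```

Because customers are exchangeable, only the induced partition matters, not the particular table labels.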
If we replace customers with objects and tables with classes, the CRP specifies a distribution over partitions of objects into classes. A partition is a division of the set of N objects
into subsets, where each object belongs to a single subset and the ordering of the subsets
does not matter. Two assignments of objects to classes that result in the same division of
objects correspond to the same partition. For example, if we had three objects, the class
assignments {c1 , c2 , c3 } = {1, 1, 2} would correspond to the same partition as {2, 2, 1},
since all that differs between these two cases is the labels of the classes. A partition thus
defines an equivalence class of assignment vectors.
The distribution over partitions implied by the CRP can be derived by taking the limit of
the probability of the corresponding equivalence class of assignment vectors in a model
where class assignments are generated from a multinomial distribution with a Dirichlet
prior [9, 10]. In this paper, we derive an infinitely exchangeable distribution over infinite
binary matrices by pursuing this strategy of taking the limit of a finite model. We also describe a stochastic process (the Indian buffet process, akin to the CRP) which generates this
distribution. Finally, we demonstrate how this distribution can be used as a prior in statistical models in which each object is represented by a sparse subset of an unbounded number
of features. Further discussion of the properties of this distribution, some generalizations,
and additional experiments, are available in the longer version of this paper [13].
2
A distribution on infinite binary matrices
In a latent feature model, each object is represented by a vector of latent feature values $f_i$,
and the observable properties of that object xi are generated from a distribution determined
by its latent features. Latent feature values can be continuous, as in principal component
analysis (PCA) [2], or discrete, as in cooperative vector quantization (CVQ) [3, 4]. In the
remainder of this section, we will assume that feature values are continuous. Using the matrix $F = [f_1^T\, f_2^T\, \cdots\, f_N^T]^T$ to indicate the latent feature values for all N objects, the model is specified by a prior over features, p(F), and a distribution over observed property matrices conditioned on those features, p(X|F), where $p(\cdot)$ is a probability density function.
These distributions can be dealt with separately: p(F) specifies the number of features and
the distribution over values associated with each feature, while p(X|F) determines how
these features relate to the properties of objects. Our focus will be on p(F), showing how
such a prior can be defined without limiting the number of features.
We can break F into two components: a binary matrix Z indicating which features are possessed by each object, with zik = 1 if object i has feature k and 0 otherwise, and a matrix
V indicating the value of each feature for each object. F is the elementwise product of Z
and V, $F = Z \otimes V$, as illustrated in Figure 1. In many latent feature models (e.g., PCA)
objects have non-zero values on every feature, and every entry of Z is 1. In sparse latent
feature models (e.g., sparse PCA [14, 15]) only a subset of features take on non-zero values
for each object, and Z picks out these subsets. A prior on F can be defined by specifying
priors for Z and V, with p(F) = P(Z)p(V), where $P(\cdot)$ is a probability mass function.
We will focus on defining a prior on Z, since the effective dimensionality of a latent feature
model is determined by Z. Assuming that Z is sparse, we can define a prior for infinite latent feature models by defining a distribution over infinite binary matrices. Our discussion
of the Chinese restaurant process provides two desiderata for such a distribution: objects
Figure 1: A binary matrix Z, as shown in (a), indicates which features take non-zero values.
Elementwise multiplication of Z by a matrix V of continuous values produces a representation like (b). If V contains discrete values, we obtain a representation like (c).
should be exchangeable, and posterior inference should be tractable. It also suggests a
method by which these desiderata can be satisfied: start with a model that assumes a finite
number of features, and consider the limit as the number of features approaches infinity.
2.1
A finite feature model
We have N objects and K features, and the possession of feature k by object i is indicated
by a binary variable $z_{ik}$. The $z_{ik}$ form a binary $N \times K$ feature matrix, Z. Assume that each object possesses feature k with probability $\pi_k$, and that the features are generated independently. Under this model, the probability of Z given $\pi = \{\pi_1, \pi_2, \ldots, \pi_K\}$, is
$$P(Z \mid \pi) = \prod_{k=1}^{K} \prod_{i=1}^{N} P(z_{ik} \mid \pi_k) = \prod_{k=1}^{K} \pi_k^{m_k}(1-\pi_k)^{N-m_k}, \qquad (1)$$
where $m_k = \sum_{i=1}^{N} z_{ik}$ is the number of objects possessing feature k. We can define a prior
where mk = i=1 zik is the number of objects possessing feature k. We can define a prior
on ? by assuming that each ?k follows a beta distribution, to give
?
?k | ? ? Beta( K
, 1)
zik | ?k ? Bernoulli(?k )
Each zik is independent of all other assignments, conditioned on ?k , and the ?k are generated independently. We can integrate out ? to obtain the probability of Z, which is
P (Z)
=
K
Y
k=1
?
K ?(mk
?
+K
)?(N ? mk + 1)
.
?
?(N + 1 + K
)
(2)
This distribution is exchangeable, since mk is not affected by the ordering of the objects.
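Equation 2 can be checked numerically. The sketch below (not from the paper) evaluates its logarithm with the log-gamma function, given Z as a list of binary rows.

```python
import math

def log_prob_Z_finite(Z, alpha):
    """Log of Equation 2: P(Z) for an N x K binary matrix under the finite
    model with pi_k ~ Beta(alpha/K, 1) integrated out."""
    N, K = len(Z), len(Z[0])
    a = alpha / K
    total = 0.0
    for k in range(K):
        m_k = sum(row[k] for row in Z)
        # log of (alpha/K) Gamma(m_k + alpha/K) Gamma(N - m_k + 1) / Gamma(N + 1 + alpha/K)
        total += (math.log(a) + math.lgamma(m_k + a)
                  + math.lgamma(N - m_k + 1) - math.lgamma(N + 1 + a))
    return total
```

As a sanity check, for N = K = 1 and alpha = 1 the prior on pi is uniform, so P(z = 1) = P(z = 0) = 1/2, and the probabilities of all matrices of a fixed size sum to one.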
2.2
Equivalence classes
In order to find the limit of the distribution specified by Equation 2 as $K \to \infty$, we need to define equivalence classes of binary matrices, the analogue of partitions for class assignments. Our equivalence classes will be defined with respect to a function on binary matrices, $lof(\cdot)$. This function maps binary matrices to left-ordered binary matrices. lof(Z) is
obtained by ordering the columns of the binary matrix Z from left to right by the magnitude
of the binary number expressed by that column, taking the first row as the most significant
bit. The left-ordering of a binary matrix is shown in Figure 2. In the first row of the leftordered matrix, the columns for which z1k = 1 are grouped at the left. In the second row,
the columns for which z2k = 1 are grouped at the left of the sets for which z1k = 1. This
grouping structure persists throughout the matrix.
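The left-ordering map can be implemented by reading each column as a binary number (first row most significant) and sorting columns by that value in decreasing order. A minimal sketch, not from the paper:

```python
def lof(Z):
    """Left-order the columns of a binary matrix Z (a list of rows).

    Each column is interpreted as a binary number with the first row as the
    most significant bit; columns are sorted from left to right by decreasing
    magnitude, which produces the grouping structure described in the text.
    """
    N = len(Z)
    K = len(Z[0]) if N else 0
    cols = [[Z[i][k] for i in range(N)] for k in range(K)]
    cols.sort(key=lambda c: int("".join(map(str, c)), 2), reverse=True)
    return [[cols[k][i] for k in range(K)] for i in range(N)]

Z = [[0, 1, 1],
     [1, 0, 1]]
print(lof(Z))  # columns ordered as 11, 10, 01 -> [[1, 1, 0], [1, 0, 1]]
```

Note that lof is idempotent: applying it to an already left-ordered matrix changes nothing.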
The history of feature k at object i is defined to be $(z_{1k}, \ldots, z_{(i-1)k})$. Where no object is specified, we will use history to refer to the full history of feature k, $(z_{1k}, \ldots, z_{Nk})$. We
Figure 2: Left-ordered form. A binary matrix is transformed into a left-ordered binary matrix by the function $lof(\cdot)$. The entries in the left-ordered matrix were generated from the Indian buffet process with $\alpha = 10$. Empty columns are omitted from both matrices.
will individuate the histories of features using the decimal equivalent of the binary numbers
corresponding to the column entries. For example, at object 3, features can have one of four
histories: 0, corresponding to a feature with no previous assignments, 1, being a feature for
which z2k = 1 but z1k = 0, 2, being a feature for which z1k = 1 but z2k = 0, and 3, being
a feature possessed by both previous objects. $K_h$ will denote the number of
features possessing the history h, with $K_0$ being the number of features for which $m_k = 0$ and $K_+ = \sum_{h=1}^{2^N-1} K_h$ being the number of features for which $m_k > 0$, so $K = K_0 + K_+$.
Two binary matrices Y and Z are lof-equivalent if lof(Y) = lof(Z). The lof-equivalence class of a binary matrix Z, denoted [Z], is the set of binary matrices that are lof-equivalent to Z. lof-equivalence classes play the role for binary matrices that partitions play for assignment vectors: they collapse together all binary matrices (assignment vectors) that differ only in column ordering (class labels). lof-equivalence classes are preserved through permutation of the rows or the columns of a matrix, provided the same permutations are applied to the other members of the equivalence class. Performing inference at the level of lof-equivalence classes is appropriate in models where feature order
is not identifiable, with p(X|F) being unaffected by the order of the columns of F. Any model in which the probability of X is specified in terms of a linear function of F, such as PCA or CVQ, has this property. The cardinality of the lof-equivalence class [Z] is
$$\binom{K}{K_0 \cdots K_{2^N-1}} = \frac{K!}{\prod_{h=0}^{2^N-1} K_h!},$$
where $K_h$ is the number of columns with full history h.
2.3
Taking the infinite limit
Under the distribution defined by Equation 2, the probability of a particular lof-equivalence class of binary matrices, [Z], is
$$P([Z]) = \sum_{Z \in [Z]} P(Z) = \frac{K!}{\prod_{h=0}^{2^N-1} K_h!} \prod_{k=1}^{K} \frac{\frac{\alpha}{K}\,\Gamma(m_k + \frac{\alpha}{K})\,\Gamma(N - m_k + 1)}{\Gamma(N + 1 + \frac{\alpha}{K})}. \qquad (3)$$
Rearranging terms, and using the fact that $\Gamma(x) = (x-1)\Gamma(x-1)$ for $x > 1$, we can compute the limit of $P([Z])$ as K approaches infinity:
$$\lim_{K \to \infty} \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N-1} K_h!} \cdot \frac{K!}{K_0!\,K^{K_+}} \cdot \left( \frac{N!}{\prod_{j=1}^{N}(j + \frac{\alpha}{K})} \right)^{K} \cdot \prod_{k=1}^{K_+} \frac{(N-m_k)!\,\prod_{j=1}^{m_k-1}(j + \frac{\alpha}{K})}{N!}$$
$$= \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N-1} K_h!} \cdot 1 \cdot \exp\{-\alpha H_N\} \cdot \prod_{k=1}^{K_+} \frac{(N-m_k)!\,(m_k-1)!}{N!}, \qquad (4)$$
where $H_N$ is the Nth harmonic number, $H_N = \sum_{j=1}^{N} \frac{1}{j}$. This distribution is infinitely
exchangeable, since neither Kh nor mk are affected by the ordering on objects. Technical
details of this limit are provided in [13].
2.4
The Indian buffet process
The probability distribution defined in Equation 4 can be derived from a simple stochastic
process. Due to the similarity to the Chinese restaurant process, we will also use a culinary
metaphor, appropriately adjusted for geography. Indian restaurants in London offer buffets
with an apparently infinite number of dishes. We will define a distribution over infinite
binary matrices by specifying how customers (objects) choose dishes (features).
In our Indian buffet process (IBP), N customers enter a restaurant one after another. Each
customer encounters a buffet consisting of infinitely many dishes arranged in a line. The
first customer starts at the left of the buffet and takes a serving from each dish, stopping
after a Poisson($\alpha$) number of dishes. The ith customer moves along the buffet, sampling dishes in proportion to their popularity, taking dish k with probability $\frac{m_k}{i}$, where $m_k$ is the number of previous customers who have sampled that dish. Having reached the end of all previous sampled dishes, the ith customer then tries a Poisson($\frac{\alpha}{i}$) number of new dishes.
We can indicate which customers chose which dishes using a binary matrix Z with N rows
and infinitely many columns, where zik = 1 if the ith customer sampled the kth dish.
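The generative process above is easy to simulate. The following sketch (illustrative, not the paper's code) builds Z one customer at a time, using Knuth's method for the small-rate Poisson draws.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method: exact Poisson samples for small rates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_ibp(N, alpha, rng=None):
    """Sample a binary feature matrix from the Indian buffet process.

    Customer i takes existing dish k with probability m_k / i, then tries a
    Poisson(alpha / i) number of new dishes. Earlier rows are padded with
    zeros so all rows cover every dish sampled by at least one customer.
    """
    rng = rng or random.Random(0)
    dish_counts, rows = [], []
    for i in range(1, N + 1):
        row = [1 if rng.random() < m / i else 0 for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += taken
        new_dishes = poisson(alpha / i, rng)
        row.extend([1] * new_dishes)
        dish_counts.extend([1] * new_dishes)
        rows.append(row)
    K = len(dish_counts)
    return [r + [0] * (K - len(r)) for r in rows]
```

Since the new-dish counts are independent Poisson draws, the total number of sampled dishes is Poisson distributed with mean $\alpha H_N$.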
Using $K_1^{(i)}$ to indicate the number of new dishes sampled by the ith customer, the probability of any particular matrix being produced by the IBP is
$$P(Z) = \frac{\alpha^{K_+}}{\prod_{i=1}^{N} K_1^{(i)}!}\,\exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N-m_k)!\,(m_k-1)!}{N!}. \qquad (5)$$
The matrices produced by this process are generally not in left-ordered form. These matrices are also not ordered arbitrarily, because the Poisson draws always result in choices
of new dishes that are to the right of the previously sampled dishes. Customers are not exchangeable under this distribution, as the number of dishes counted as $K_1^{(i)}$ depends upon the order in which the customers make their choices. However, if we only pay attention to the lof-equivalence classes of the matrices generated by this process, we obtain the infinitely exchangeable distribution $P([Z])$ given by Equation 4: $\frac{\prod_{i=1}^{N} K_1^{(i)}!}{\prod_{h=1}^{2^N-1} K_h!}$ matrices generated via this process map to the same left-ordered form, and $P([Z])$ is obtained by multiplying $P(Z)$ from Equation 5 by this quantity. A similar but slightly more complicated process can be defined to produce left-ordered matrices directly [13].
2.5
Conditional distributions
To define a Gibbs sampler for models using the IBP, we need to know the conditional
distribution on feature assignments, $P(z_{ik} = 1 \mid Z_{-(ik)})$. In the finite model, where P(Z) is given by Equation 2, it is straightforward to compute this conditional distribution for any $z_{ik}$. Integrating over $\pi_k$ gives
$$P(z_{ik} = 1 \mid z_{-i,k}) = \frac{m_{-i,k} + \frac{\alpha}{K}}{N + \frac{\alpha}{K}}, \qquad (6)$$
where $z_{-i,k}$ is the set of assignments of other objects, not including i, for feature k, and $m_{-i,k}$ is the number of objects possessing feature k, not including i. We need only condition on $z_{-i,k}$ rather than $Z_{-(ik)}$ because the columns of the matrix are independent.
In the infinite case, we can derive the conditional distribution from the (exchangeable) IBP.
Choosing an ordering on objects such that the ith object corresponds to the last customer
to visit the buffet, we obtain
$$P(z_{ik} = 1 \mid z_{-i,k}) = \frac{m_{-i,k}}{N}, \qquad (7)$$
for any k such that $m_{-i,k} > 0$. The same result can be obtained by taking the limit of Equation 6 as $K \to \infty$. The number of new features associated with object i should be drawn from a Poisson($\frac{\alpha}{N}$) distribution. This can also be derived from Equation 6, using the
same kind of limiting argument as that presented above.
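Equation 7 is the quantity a Gibbs sweep needs for each existing feature. A minimal helper, illustrative only (a full sampler would multiply this prior term by a likelihood term, as in Section 3):

```python
def prior_prob_zik(Z, i, k):
    """P(z_ik = 1 | z_-i,k) = m_-i,k / N under the IBP prior (Equation 7).

    Z is an N-row binary matrix (list of rows); m_-i,k counts the objects
    other than i that possess feature k.
    """
    N = len(Z)
    m_minus = sum(Z[j][k] for j in range(N) if j != i)
    return m_minus / N
```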
3
A linear-Gaussian binary latent feature model
To illustrate how the IBP can be used as a prior in models for unsupervised learning, we
derived and tested a linear-Gaussian latent feature model in which the features are binary.
In this case the feature matrix F reduces to the binary matrix Z. As above, we will start
with a finite model and then consider the infinite limit.
In our finite model, the D-dimensional vector of properties of an object i, $x_i$, is generated from a Gaussian distribution with mean $z_i A$ and covariance matrix $\Sigma_X = \sigma_X^2 I$, where $z_i$ is a K-dimensional binary vector, and A is a $K \times D$ matrix of weights. In matrix notation, $E[X] = ZA$. If Z is a feature matrix, this is a form of binary factor analysis. The distribution of X given Z, A, and $\sigma_X$ is matrix Gaussian with mean ZA and covariance matrix $\sigma_X^2 I$, where I is the identity matrix. The prior on A is also matrix Gaussian, with mean 0 and covariance matrix $\sigma_A^2 I$. Integrating out A, we have
$$p(X \mid Z, \sigma_X, \sigma_A) = \frac{1}{(2\pi)^{ND/2}\,\sigma_X^{(N-K)D}\,\sigma_A^{KD}\,\left|Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right|^{D/2}} \exp\left\{ -\frac{1}{2\sigma_X^2}\,\mathrm{tr}\left( X^T \left( I - Z\left(Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1} Z^T \right) X \right) \right\}. \qquad (8)$$
This result is intuitive: the exponentiated term is the difference between the inner product
of X and its projection onto the space spanned by Z, regularized to an extent determined
by the ratio of the variance of the noise in X to the variance of the prior on A. It follows
that $p(X \mid Z, \sigma_X, \sigma_A)$ depends only on the non-zero columns of Z, and thus remains well-defined when we take the limit as $K \to \infty$ (for more details see [13]).
We can define a Gibbs sampler for this model by computing the full conditional distribution
$$P(z_{ik} \mid X, Z_{-(i,k)}, \sigma_X, \sigma_A) \propto p(X \mid Z, \sigma_X, \sigma_A)\,P(z_{ik} \mid z_{-i,k}). \qquad (9)$$
The two terms on the right hand side can be evaluated using Equations 8 and 7 respectively.
The Gibbs sampler is then straightforward. Assignments for features for which $m_{-i,k} > 0$
are drawn from the distribution specified by Equation 9. The distribution over the number
of new features for each object can be approximated by truncation, computing probabilities
for a range of values of $K_1^{(i)}$ up to an upper bound. For each value, $p(X \mid Z, \sigma_X, \sigma_A)$ can be computed from Equation 8, and the prior on the number of new features is Poisson($\frac{\alpha}{N}$).
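The collapsed likelihood of Equation 8 is straightforward to evaluate. This NumPy sketch (illustrative, not the authors' code) computes its logarithm for numerical stability:

```python
import numpy as np

def log_collapsed_likelihood(X, Z, sigma_x, sigma_a):
    """Log of Equation 8: p(X | Z, sigma_X, sigma_A) with the weights A
    integrated out. X is N x D; Z is N x K and binary."""
    N, D = X.shape
    K = Z.shape[1]
    M = Z.T @ Z + (sigma_x ** 2 / sigma_a ** 2) * np.eye(K)
    _, logdet = np.linalg.slogdet(M)
    middle = np.eye(N) - Z @ np.linalg.inv(M) @ Z.T
    quad = np.trace(X.T @ middle @ X)
    return (-0.5 * N * D * np.log(2 * np.pi)
            - (N - K) * D * np.log(sigma_x)
            - K * D * np.log(sigma_a)
            - 0.5 * D * logdet
            - quad / (2 * sigma_x ** 2))
```

As a check, when Z is all zeros the expression collapses to the density of N x D independent $N(0, \sigma_X^2)$ entries.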
We will demonstrate this Gibbs sampler for the infinite binary linear-Gaussian model on a
dataset consisting of 100 $240 \times 320$ pixel images. We represented each image, $x_i$, using a 100-dimensional vector corresponding to the weights of the mean image and the first 99 principal components. Each image contained up to four everyday objects: a $20 bill, a
Klein bottle, a prehistoric handaxe, and a cellular phone. Each object constituted a single
latent feature responsible for the observed pixel values. The images were generated by
sampling a feature vector, zi , from a distribution under which each feature was present
with probability 0.5, and then taking a photograph containing the appropriate objects using
a LogiTech digital webcam. Sample images are shown in Figure 3 (a).
The Gibbs sampler was initialized with K+ = 1, choosing the feature assignments for
the first column by setting $z_{i1} = 1$ with probability 0.5. $\sigma_A$, $\sigma_X$, and $\alpha$ were initially set to 0.5, 1.7, and 1 respectively, and then sampled by adding Metropolis steps to the MCMC algorithm. Figure 3 shows trace plots for the first 1000 iterations of MCMC for the number of features used by at least one object, $K_+$, and the model parameters $\sigma_A$, $\sigma_X$, and $\alpha$. All of these quantities stabilized after approximately 100 iterations, with the algorithm
Figure 3: Data and results for the demonstration of the infinite linear-Gaussian binary
latent feature model. (a) Four sample images from the 100 in the dataset. Each image
had $320 \times 240$ pixels, and contained from zero to four everyday objects. (b) The posterior
mean of the weights (A) for the four most frequent binary features from the 1000th sample.
Each image corresponds to a single feature. These features perfectly indicate the presence
or absence of the four objects. The first feature indicates the presence of the $20 bill,
the other three indicate the absence of the Klein bottle, the handaxe, and the cellphone.
(c) Reconstructions of the images in (a) using the binary codes inferred for those images.
These reconstructions are based upon the posterior mean of A for the 1000th sample. For
example, the code for the first image indicates that the $20 bill is absent, while the other
three objects are not. The lower panels show trace plots for the dimensionality of the
representation (K+ ) and the parameters ?, ?X , and ?A over 1000 iterations of sampling.
The values of all parameters stabilize after approximately 100 iterations.
finding solutions with approximately seven latent features. The four most common features
perfectly indicated the presence and absence of the four objects (shown in Figure 3 (b)), and
three less common features coded for slight differences in the locations of those objects.
4
Conclusion
We have shown that the methods that have been used to define infinite latent class models
[6, 7, 8, 9, 10, 11, 12] can be extended to models in which objects are represented in
terms of a set of latent features, deriving a distribution on infinite binary matrices that can
be used as a prior for such models. While we derived this prior as the infinite limit of
a simple distribution on finite binary matrices, we have shown that the same distribution
can be specified in terms of a simple stochastic process ? the Indian buffet process. This
distribution satisfies our two desiderata for a prior for infinite latent feature models: objects
are exchangeable, and inference remains tractable. Our success in transferring the strategy
of taking the limit of a finite model from latent classes to latent features suggests that a
similar approach could be applied with other representations, expanding the forms of latent
structure that can be recovered through unsupervised learning.
References
[1] N. Ueda and K. Saito. Parametric mixture models for multi-labeled text. In Advances in Neural
Information Processing Systems 15, Cambridge, 2003. MIT Press.
[2] I. T. Jolliffe. Principal component analysis. Springer, New York, 1986.
[3] R. S. Zemel and G. E. Hinton. Developing population codes by minimizing description length.
In Advances in Neural Information Processing Systems 6. Morgan Kaufmann, San Francisco,
CA, 1994.
[4] Z. Ghahramani. Factorial learning and the EM algorithm. In Advances in Neural Information
Processing Systems 7. Morgan Kaufmann, San Francisco, CA, 1995.
[5] C. E. Rasmussen and Z. Ghahramani. Occam?s razor. In Advances in Neural Information
Processing Systems 13. MIT Press, Cambridge, MA, 2001.
[6] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric
problems. The Annals of Statistics, 2:1152–1174, 1974.
[7] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal
of the American Statistical Association, 90:577–588, 1995.
[8] T. S. Ferguson. Bayesian density estimation by mixtures of normal distributions. In M. Rizvi,
J. Rustagi, and D. Siegmund, editors, Recent advances in statistics, pages 287–302. Academic
Press, New York, 1983.
[9] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of
Computational and Graphical Statistics, 9:249–265, 2000.
[10] C. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000.
[11] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII–1983, pages 1–198. Springer, Berlin, 1985.
[12] J. Pitman. Combinatorial stochastic processes, 2002. Notes for Saint Flour Summer School.
[13] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process.
Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005.
[14] A. d?Aspremont, L. El Ghaoui, I. Jordan, and G. R. G. Lanckriet. A direct formulation for
sparse PCA using semidefinite programming. In Advances in Neural Information Processing
Systems 17. MIT Press, Cambridge, MA, 2005.
[15] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, in press.
2,073 | 2,883 | Online Discovery and Learning
of Predictive State Representations
Peter McCracken
Department of Computing Science
University of Alberta
Edmonton, Alberta
Canada, T6G 2E8
[email protected]
Michael Bowling
Department of Computing Science
University of Alberta
Edmonton, Alberta
Canada, T6G 2E8
[email protected]
Abstract
Predictive state representations (PSRs) are a method of modeling dynamical systems using only observable data, such as actions and observations,
to describe their model. PSRs use predictions about the outcome of future tests to summarize the system state. The best existing techniques
for discovery and learning of PSRs use a Monte Carlo approach to explicitly estimate these outcome probabilities. In this paper, we present
a new algorithm for discovery and learning of PSRs that uses a gradient descent approach to compute the predictions for the current state.
The algorithm takes advantage of the large amount of structure inherent
in a valid prediction matrix to constrain its predictions. Furthermore,
the algorithm can be used online by an agent to constantly improve its
prediction quality; something that current state of the art discovery and
learning algorithms are unable to do. We give empirical results to show
that our constrained gradient algorithm is able to discover core tests using
very small amounts of data, and with larger amounts of data can compute
accurate predictions of the system dynamics.
1
Introduction
Representations of state in dynamical systems fall into three main categories. Methods
like k-order Markov models attempt to identify state by remembering what has happened
in the past. Methods such as partially observable Markov decision processes (POMDPs)
identify state as a distribution over postulated base states. A more recently developed group
of algorithms, known as predictive representations, identify state in dynamical systems by
predicting what will happen in the future. Algorithms following this paradigm include
observable operator models [1], predictive state representations [2, 3], TD-Nets [4] and
TPSRs [5]. In this research we focus on predictive state representations (PSRs). PSRs are
completely grounded in data obtained from the system, and they have been shown to be at
least as general and as compact as other methods, like POMDPs [3].
Until recently, algorithms for discovery and learning of PSRs could be used only in special cases. They have required explicit control of the system using a reset action [6, 5],
or have required the incoming data stream to be generated using an open-loop policy [7].
The algorithm presented in this paper does not require a reset action, nor does it make any
assumptions about the policy used to generate the data stream. Furthermore, we focus on
the online learning problem, i.e., how can an estimate of the current state vector and parameters be maintained and improved during a single pass over a string of data. Like the
myopic gradient descent algorithm [8], the algorithm we propose uses a gradient approach
to move its predictions closer to its empirical observations; however, our algorithm also
takes advantage of known constraints on valid test predictions. We show that this constrained gradient approach is capable of discovering a set of core tests quickly, and also of
making online predictions that improve as more data is available.
2
Predictive State Representations
Predictive state representations (PSRs) were introduced by Littman et al. [2] as a method
of modeling discrete-time, controlled dynamical systems. They possess several advantages
over other popular models such as POMDPs and k-order Markov models, foremost being
their ability to be learned entirely from sensorimotor data, requiring only a prior knowledge
of the set of actions, A, and observations, O.
Notation. An agent in a dynamical system experiences a sequence of action-observation
pairs, or ao pairs. The sequence of ao pairs the agent has already experienced, beginning at
the first time step, is known as a history. For instance, the history h^n = a^1 o^1 a^2 o^2 ... a^n o^n
of length n means that the agent chose action a^1 and perceived observation o^1 at the first
time step, after which the agent chose a^2 and perceived o^2, and so on¹. A test is a sequence of ao pairs that begins immediately after a history. A test is said to succeed if the
observations in the sequence are observed in order, given that the actions in the sequence
are chosen in order. For instance, the test t = a^1 o^1 a^2 o^2 succeeds if the agent observes o^1
followed by o^2, given that it performs actions a^1 followed by a^2. A test fails if the action
sequence is taken but the observation sequence is not observed. A prediction about the
outcome of a test t depends on the history h that preceded it, so we write predictions as
p(t|h), to represent the probability of t succeeding after history h. For test t of length n, we
define a prediction p(t|h) as ∏_{i=1}^{n} Pr(o^i | a^1 o^1 ... a^i). This definition is equivalent to the
usual definition in the PSR literature, but makes it explicit that predictions are independent
of the policy used to select actions. The special length zero test is called ε. If T is a set of
tests and H is a set of histories, p(t|h) is a single value, p(T|h) is a row vector containing
p(t_i|h) for all tests t_i ∈ T, p(t|H) is a column vector containing p(t|h_j) for all histories
h_j ∈ H, and p(T|H) is a matrix containing p(t_i|h_j) for all t_i ∈ T and h_j ∈ H.
PSRS. The fundamental principle underlying PSRs is that in most systems there exists a
set of tests, Q, that at any history are a sufficient statistic for determining the probability of
success for all possible tests. This means that for any test t there exists a function f_t such
that p(t|h) = f_t(p(Q|h)). In this paper, we restrict our discussion of PSRs to linear PSRs,
in which the function f_t is a linear function of the tests in Q. Thus, p(t|h) = p(Q|h) m_t,
where m_t is a column vector of weights. The tests in Q are known as core tests, and
determining which tests are core tests is known as the discovery problem. In addition to Q,
it will be convenient to discuss the set of one-step extensions of Q. A one-step extension
of a test t is a test aot, that prefixes the original test with a single ao pair. The set of all
one-step extensions of Q ∪ {ε} will be called X.
The state vector of a PSR at time i is the set of predictions p(Q|h^i). At each time step, the
¹ Much of the notation used in this paper is adopted from Wolfe et al. [7]. Here we use the notation
that a superscript, as in a^i or o^i, indicates the time step of an action or observation, and a subscript,
as in a_i or o_i, indicates that the action or observation is a particular element of the set A or O.
state vector is updated by computing, for each q_j ∈ Q:

p(q_j|h^i) = p(a^i o^i q_j | h^{i-1}) / p(a^i o^i | h^{i-1}) = [ p(Q|h^{i-1}) m_{a^i o^i q_j} ] / [ p(Q|h^{i-1}) m_{a^i o^i} ]

Thus, in order to update the PSR at each time step, the vector m_t must be known for each
test t ∈ X. This set of update vectors, that we will call m_X, are the parameters of the PSR,
and estimation of these parameters is known as the learning problem.
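The update above can be sketched in code. This is a minimal numpy illustration, not from the paper: all numbers are made up, and the per-core-test weight vectors m_{a^i o^i q_j} are stacked into one matrix for convenience.

```python
import numpy as np

def psr_update(p_Q, M_ext, m_onestep):
    """One PSR state update after taking action a^i and observing o^i.

    p_Q       : (k,) current state vector p(Q | h^{i-1})
    M_ext     : (k, k) matrix whose j-th column is the weight vector
                m_{a^i o^i q_j} for the extended test a^i o^i q_j
    m_onestep : (k,) weight vector m_{a^i o^i} for the one-step test
    """
    denom = p_Q @ m_onestep   # p(a^i o^i | h^{i-1})
    numer = p_Q @ M_ext       # p(a^i o^i q_j | h^{i-1}) for each q_j
    return numer / denom      # the new state vector p(Q | h^i)

# Toy two-core-test example; all quantities are hypothetical:
p_Q = np.array([0.6, 0.3])
M_ext = np.array([[0.5, 0.1],
                  [0.2, 0.4]])
m_onestep = np.array([0.7, 0.5])
new_state = psr_update(p_Q, M_ext, m_onestep)
```

Note that the denominator is the same for every q_j, so the whole state vector is renormalized by a single one-step prediction.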
3
Constrained Gradient Learning of PSRs
The goal of this paper is to develop an online algorithm for discovering and learning a
PSR without the necessity of a reset action. To be online, the algorithm must always have
an estimate of the current state vector, p(Q|h^i), and estimates of the parameters m_X. In
this section, we introduce our constrained gradient approach to solving this problem. A
more complete explanation of this algorithm can be found in an expanded version of this
work [9]. To begin, in Section 3.1, we will assume that the set of core tests Q is given to
the algorithm; we describe how Q can be estimated online in Section 3.2.
3.1
Learning the PSR Parameters
The approach to learning taken by the constrained gradient algorithm is to approximate the
matrix p(T |H), for a selected set of tests T and histories H. We first discuss the proper
selection of T and H, and then describe how this matrix can be constructed online. Finally,
we show how the current PSR is extracted from the matrix.
Tests and Histories. At a minimum, T must contain the union of Q and X, since Q is
required to create the state vector and X is required to compute mX . However, as will
be explained in the next section, these tests are not sufficient to take full advantage of the
structure in a prediction matrix. The constrained gradient algorithm requires the tests in T
to satisfy two properties:
1. If tao ∈ T then t ∈ T
2. If t a o_i ∈ T then t a o_j ∈ T, ∀ o_j ∈ O
To build a valid set of tests, T is initialized to Q ∪ X. Tests are iteratively added to T until
it satisfies both of the above properties.
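This closure procedure can be sketched as follows; representing a test as a tuple of (action, observation) pairs is an assumption of this example, not a data structure prescribed by the paper.

```python
def close_test_set(tests, observations):
    """Close a set of tests under the two properties required of T:
      1. if t followed by (a, o) is in T, then t is in T
      2. if t + ((a, o_i),) is in T, then t + ((a, o_j),) is in T for all o_j
    Tests are tuples of (action, observation) pairs (an assumption of
    this sketch); the empty tuple () plays the role of the null test."""
    T = set(tests)
    changed = True
    while changed:
        changed = False
        for t in list(T):
            if not t:
                continue
            prefix, (a, _) = t[:-1], t[-1]
            if prefix not in T:                 # property 1: prefix-closed
                T.add(prefix)
                changed = True
            for o in observations:              # property 2: sibling-complete
                sibling = prefix + ((a, o),)
                if sibling not in T:
                    T.add(sibling)
                    changed = True
    return T

# One action, two observations; start from a single length-2 test:
T = close_test_set({(("a1", "o1"), ("a1", "o2"))}, ["o1", "o2"])
```

Starting from the single test a1 o1 a1 o2, the closure adds its prefix, the prefix's sibling, and the length-2 sibling, plus the null test.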
All histories in H are histories that have been experienced by the agent. The current history,
h^i, must always be in H in order to make online predictions, and also to compute h^{i+1}.
The only other requirement of H is that it contain sufficient histories to compute the linear
functions m_t for the tests in T (see Section 3.1). Our strategy is to impose a bound N on the
size of H, and to restrict H to the N most recent histories encountered by the agent. When
a new data point is seen and a new row is added to the matrix, the oldest row in the matrix is
?forgotten.? In addition to restricting the size of H, forgetting old rows has the side-effect
that the rows estimated using the least amount of data are removed from the matrix, and no
longer affect the computation of mX .
Constructing the Prediction Matrix. The approach used to build the matrix p(T |H) is
to estimate and append a new row, p(T|h^i), after each new a^i o^i pair is encountered. Once
a row has been added, it is never changed. To initialize the algorithm, the first row of the
matrix p(T|h^0), is set to uniform probabilities². The creation of the new row is performed
in two stages: a row estimation stage, and a gradient descent stage.
² Each p(t|h^0) is set to 1/|O|^k, where k is the length of test t.
Both stages take advantage of four constraints on the predictions p(T|h) in order to be a
valid row in the prediction matrix:
1. Range: 0 ≤ p(t|h) ≤ 1
2. Null Test: p(ε|h) = 1
3. Internal Consistency: p(t|h) = Σ_{o_j ∈ O} p(t a o_j | h), ∀ a ∈ A
4. Conditional Probability: p(t|hao) = p(aot|h) / p(ao|h), ∀ a ∈ A, o ∈ O
The range constraint restricts the entries in the matrix to be valid probabilities. The null test
constraint defines the value of the null test. The internal consistency constraint ensures that
the probabilities within a single row form valid probability distributions. The conditional
probability constraint is required to maintain consistency between consecutive rows of the
matrix.
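A small checker for the first three constraints might look like the sketch below (the fourth constraint relates consecutive rows, so it cannot be verified from a single row). The dictionary representation of a row, mapping test tuples to probabilities, is an assumption of this example.

```python
def check_row(p, actions, observations, tol=1e-8):
    """Check constraints 1-3 on a single prediction row p(.|h), where `p`
    maps a test (tuple of (action, observation) pairs) to a probability.
    The conditional probability constraint spans two rows and is not
    checked here. The dict representation is an assumption of the sketch."""
    ok = all(-tol <= v <= 1 + tol for v in p.values())   # 1. range
    ok = ok and abs(p[()] - 1.0) < tol                   # 2. null test
    for t in p:                                          # 3. internal consistency
        for a in actions:
            ext = [p.get(t + ((a, o),)) for o in observations]
            if all(e is not None for e in ext):          # only if all siblings present
                ok = ok and abs(sum(ext) - p[t]) < tol
    return ok

valid = check_row({(): 1.0, (("a", "o1"),): 0.4, (("a", "o2"),): 0.6},
                  ["a"], ["o1", "o2"])
```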
Consider time i − 1, so that the last row of p(T|H) is for h^{i-1}. After action a^i is taken and
observation o^i is seen, a new row for history h^i = h^{i-1} a^i o^i must be added to the matrix.
First, as much of the new row as possible is computed using the conditional probability
constraint, and the predictions for history h^{i-1}. For all tests t ∈ T for which a^i o^i t ∈ T:

p(t|h^i) ← p(a^i o^i t | h^{i-1}) / p(a^i o^i | h^{i-1})

Because X ⊆ T, it is guaranteed that p(Q|h^i) is estimated in this step.
The second phase of adding a new row is to compute predictions for the tests t ∈ T for
which a^i o^i t ∉ T. An estimate of p(t|h^i) can be found by computing p(Q|h^i) m_t for an
appropriate m_t, using the PSR assumption that any prediction is a linear combination of
core test predictions. Regression is used to find a vector m_t that minimizes ‖p(Q|H) m_t −
p(t|H)‖². At this stage, the entire row for h^i has been estimated. The regression step can
create probabilities that violate the range and normalization properties of a valid prediction.
To enforce the range property, any predictions that are less than 0 are set to a small positive
value³. Then, to ensure internal consistency within the row, the normalization property is
enforced by setting predictions:

p(t a o_j | h^i) ← p(t|h^i) p(t a o_j | h^i) / Σ_{o_i ∈ O} p(t a o_i | h^i),  ∀ o_j ∈ O

This preserves the ratio among sibling predictions and creates a valid probability distribution from them. The normalization is performed by normalizing shorter tests first, which
guarantees that a set of tests are not normalized to a value that will later change. The length
one tests are normalized to sum to 1.
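The shortest-first, ratio-preserving normalization can be sketched as below. The dictionary row representation and the `max_len` argument are assumptions of this example.

```python
def normalize_row(p, actions, observations, max_len):
    """Enforce internal consistency on a prediction row, normalizing the
    shortest tests first and preserving the ratios among siblings.
    `p` maps tests (tuples of (action, observation) pairs) to values;
    the representation and `max_len` argument are assumptions."""
    p[()] = 1.0                                  # the null test
    for length in range(max_len):                # parents, shortest first
        parents = [t for t in p if len(t) == length]
        for t in parents:
            for a in actions:
                sibs = [t + ((a, o),) for o in observations]
                if all(s in p for s in sibs):
                    total = sum(p[s] for s in sibs)
                    if total > 0:
                        for s in sibs:            # scale so the siblings
                            p[s] *= p[t] / total  # sum to the parent value
    return p

# Siblings 0.3 and 0.3 under parent 1.0 become 0.5 and 0.5:
row = {(): 1.0, (("a", "o1"),): 0.3, (("a", "o2"),): 0.3}
normalize_row(row, ["a"], ["o1", "o2"], max_len=1)
```

Normalizing parents before children means each sibling group is scaled against a parent value that will not change afterwards, matching the shortest-first order described above.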
The gradient descent stage of estimating a new row moves the constraint-generated predictions in the direction of the gradient created by the new observation. Any prediction
p(tao|h^i) whose test tao is successfully executed over the next several time steps is updated using p(tao|h^i) ← (1 − α) p(tao|h^i) + α p(t|h^i), for some learning rate 0 ≤ α ≤ 1.
Note that this learning rule is a temporal difference update; prediction values are adjusted
toward the value of their parent⁴. The update is accomplished by adding an appropriate
positive value to p(tao|h^i) and then running the normalization procedure on the row. The
value is computed such that after normalization, p(tao|h^i) contains the desired value. Tests
that are unsuccessfully executed (i.e. their action sequence is executed but their observation
sequence is not observed) will have their probability reduced due to this re-normalization
step. The learning parameter, α, is decayed throughout the learning process.
³ Setting values to zero can cause division by zero errors, if the prediction probability was not
actually supposed to be zero.
⁴ When the algorithm is used online, looking forward into the stream is impossible. In this case,
we maintain a buffer of ao pairs between the current time step and the histories that are added to the
prediction matrix. The length of the buffer is the length of the longest test in T. To compute the
predictions for the current time step, we iteratively update the PSR using the buffered data.
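Taken alone, the temporal-difference part of this update reduces to a single line; a toy sketch (the subsequent renormalization of the row is applied separately, and the symbol alpha for the learning rate is an assumption of the reconstruction):

```python
def td_update(p, t, a, o, alpha):
    """Move the child prediction p(tao|h) toward its parent p(t|h) after
    the test tao succeeds. `p` maps tests (tuples of (action, observation)
    pairs) to probabilities; a sketch of the TD rule in the text."""
    child = t + ((a, o),)
    p[child] = (1 - alpha) * p[child] + alpha * p[t]
    return p

# Child 0.4 moves halfway toward its parent 1.0:
row = {(): 1.0, (("a", "o"),): 0.4}
td_update(row, (), "a", "o", alpha=0.5)
```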
Extracting the PSR. Once a new row for h^i is estimated, the current PSR state vector
is p(Q|h^i). The parameters m_X can be found by using the output of the regression from
the second phase, above. Thus, at every time step, the current best estimated PSR of the
system is available.
3.2
Discovery of Core Tests
In the previous section, we assumed that the set of core tests was given to the algorithm. In
general, though, Q is not known. A rudimentary, but effective, method of finding core tests
is to choose tests whose corresponding columns of the matrix p(T|H) are most linearly
unrelated to the set of core tests already selected. Call the set of selected core tests Q̂. The
condition number of the matrix p({Q̂, t}|H) is an indication of the linear relatedness of
test t; if it is well-conditioned, the test is likely to be linearly independent. To choose core
tests, we find the test t in X whose matrix p({Q̂, t}|H) is most well-conditioned. If the
condition number of that test is below a threshold parameter, it is chosen as a new core
test. The process can be repeated until no test can be added to Q̂ without surpassing the
threshold. Because candidate tests are selected from X, the discovered set Q̂ will be a
regular form PSR [10].
The set Q̂ is initialized to {ε}. The above core test selection procedure runs after every N
data points are seen, where N is the maximum number of histories kept in H. After each
new core test is selected, T is augmented with the one-step extensions of the new test, as
well as any other tests needed to satisfy the rules in Section 3.1.
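The greedy, condition-number-based selection can be sketched with numpy. For simplicity this sketch starts from an empty Q̂ rather than {ε} and indexes tests by column number; the default threshold of 10 matches the value used in the experiments of Section 4.

```python
import numpy as np

def select_core_tests(pTH, X_idx, threshold=10.0):
    """Greedy discovery sketch: repeatedly add the candidate column whose
    augmented matrix p({Q_hat, t}|H) is best conditioned, stopping when no
    candidate keeps the condition number below `threshold`.

    pTH   : |H| x |T| prediction matrix (rows: histories, cols: tests)
    X_idx : column indices of the one-step extension tests"""
    Q_idx = []
    while True:
        best, best_cond = None, np.inf
        for j in X_idx:
            if j in Q_idx:
                continue
            cond = np.linalg.cond(pTH[:, Q_idx + [j]])
            if cond < best_cond:
                best, best_cond = j, cond
        if best is None or best_cond > threshold:
            break
        Q_idx.append(best)
    return Q_idx

# Columns 0 and 1 are nearly parallel; column 2 is independent of them:
pTH = np.array([[1.0, 1.0001, 0.0],
                [1.0, 1.0,    1.0],
                [0.0, 0.0,    1.0],
                [1.0, 1.0,    0.5]])
core = select_core_tests(pTH, X_idx=[0, 1, 2])
```

With the near-duplicate column the augmented matrix is badly conditioned, so only one of the two parallel columns is kept.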
4
Experiments and Results
The goal of the constrained gradient algorithm is to choose a correct set of core tests and
to make accurate, online predictions. In this section, we show empirical results that the
algorithm is capable of these goals. We also show offline results, in order to compare our
results with the suffix-history algorithm [7]. A more thorough suite of experiments can be
found in an expanded version of this work [9].
We tested our algorithm on the same set of problems from Cassandra's POMDP page [11]
used as the test domain in other PSR trials [8, 6, 7]. For each problem, 10 trials were run,
with different training sequences and test sequences used for each trial. The sequences
were generated using a uniform random policy over actions. The error for each history h^i
was computed using the error measure (1/|O|) Σ_{o_j ∈ O} (p(a^{i+1} o_j | h^i) − p̂(a^{i+1} o_j | h^i))² [7].
This measures the mean error in the one-step tests involving the action that was actually
taken at step i + 1.
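This error measure is simply a per-action mean squared error over the possible observations; a trivial numpy sketch with made-up probabilities:

```python
import numpy as np

def one_step_error(p_true, p_est):
    """The error measure for one history: the mean over o_j in O of
    (p(a^{i+1} o_j | h^i) - p_hat(a^{i+1} o_j | h^i))^2, for the action
    actually taken at step i+1. Inputs hold the true and estimated
    probabilities of each observation given that action."""
    p_true, p_est = np.asarray(p_true), np.asarray(p_est)
    return np.mean((p_true - p_est) ** 2)

# Hypothetical two-observation domain:
err = one_step_error([0.7, 0.3], [0.6, 0.4])
```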
The same parameterization of the algorithm was used for all domains. The size bound
on H was set to 1000, and the condition threshold for adding new tests was 10. The
learning parameter ? was initialized to 1 and halved every 100,000 time steps. The core
test discovery procedure was run every 1000 data points.
4.1
Discovery Results
In this section, we examine the success of the constrained gradient algorithm at discovering core tests. Table 1 shows, for each test domain, the true number of core tests for the
Table 1: The number of core tests found by the constrained gradient algorithm. Data for
the suffix-history algorithm [7] is repeated here for comparison. See text for explanation.
Domain          |Q|     Constrained Gradient             Suffix-History
Name                    |Q̂|   Correct   # Data           |Q̂|/Correct   # Data
Float Reset      5       6.1    4.5       4000               --            --
Tiger            2       4.0    2.0       1000                2          4000
Paint            2       2.6    2.0       4000                2          4000
Shuttle          7       8.7    7.0       2000                7       1024000
4x3 Maze        10      10.4    8.6       2000                9       1024000
Cheese Maze     11      12.1    9.6       1000                9         32000
Bridge Repair    5       7.2    5.0       1000                5       1024000
Network          7       4.7    4.5       2000                3       2048000
dynamical system (|Q|), the number of core tests selected by the constrained gradient algorithm (|Q̂|), and how many of the selected core tests were actually core tests (Correct).
The results are averaged over 10 trials. Table 1 also shows the time step at which the last
core test was chosen (# Data). In all domains, the algorithm found a majority of the core
tests after only several thousand data points; in several cases, the core tests were found after
only a single run of the core test selection procedure.
Table 1 also shows discovery results published for the suffix-history algorithm [7]. All
of the core tests found by the suffix-history algorithm were true core tests. In all cases
except the 4x3 Maze, the constrained gradient algorithm was able to find at least as many
core tests as the suffix-history method, and required significantly less data. To be fair, the
suffix-history algorithm uses a conservative approach of selecting core tests, and therefore
requires more data. The constrained gradient algorithm chooses tests that give an early
indication of being linearly independent. Therefore, the constrained gradient finds most, or
all, core tests extremely quickly, but can also choose tests that are not linearly independent.
4.2
Online and Offline Results
Figure 1 shows the performance of the constrained gradient approach, in online and offline
settings. The question answered by the online experiments is: How accurately can the
constrained gradient algorithm predict the outcome of the next time step? At each time i,
we measured the error in the algorithm's predictions of p(a^{i+1} o_j | h^i) for each o_j ∈ O. The
"Online" plot in Figure 1 shows the mean online error from the previous 1000 time steps.
The question posed for the offline experiments was: What is the long-term performance
of the PSRs learned by the constrained gradient algorithm? To test this, we stopped the
learning process at different points in the training sequence and computed the current PSR.
The initial state vector for the offline tests was set to the column means of p(Q̂|H), which
approximates the state vector of the system's stationary distribution. In Figure 1, the "Offline" plot shows the mean error of this PSR on a test sequence of length 10,000. The offline
and online performances of the algorithm are very similar. This indicates that, after a given
amount of data, the immediate error on the next observation and the long-term error of the
generated PSR are approximately the same. This result is encouraging because it implies
that the PSR remains stable in its predictions, even in the long term.
Previously published [7] performance results for the suffix-history algorithm are also shown
in Figure 1. A direct comparison between the performance of the two algorithms is somewhat inappropriate, because the suffix-history algorithm solves the "batch" problem and is
able to make multiple passes over the data stream. However, the comparison does show that
[Figure 1 plots appear here: eight log-scale error panels, one per test domain (Float Reset, Tiger, Paint, Network, Shuttle, 4x3 Maze, Cheese Maze, Bridge), each comparing Online, Offline, and Suffix-History error against training-sequence length from 1K to 1000K.]
Figure 1: The PSR error on the test domains. The x-axis is the length of the sequence used
for training, which ranges from 1,000 to 1,000,000. The y-axis shows the mean error on
the one-step predictions (Online) or on a test sequence (Offline and Suffix-History). The
results for Suffix-History are repeated from previous work [7]. See text for explanations.
the constrained gradient approach is competitive with current PSR learning algorithms.
The performance plateau in the 4x3 Maze and Network domains is unsurprising, because in
these domains only a subset of the correct core tests were found (see Table 1). The plateau
in the Bridge domain is more concerning, because in this domains all of the correct core
tests were found. We suspect this may be due to a local minimum in the error space; more
tests need to be performed to investigate this phenomenon.
5
Future Work and Conclusion
We have demonstrated that the constrained gradient algorithm can do online learning and
discovery of predictive state representations from an arbitrary stream of experience. We
have also shown that it is competitive with the alternative batch methods. There are still a
number of interesting directions for future improvement.
b
In the current method of core test selection, the condition of the core test matrix p(Q̂|H) is
important. If the matrix becomes ill-conditioned, it prevents new core tests from becoming
selected. This can happen if the true core test matrix p(Q|H) is poorly conditioned (because
some core tests are similar), or if incorrect core tests are added to Q̂. To prevent this
problem, there needs to be a mechanism for removing chosen core tests if they turn out to
be linearly dependent. Also, the condition threshold should be gradually increased during
learning, to allow more obscure core tests to be selected.
Another interesting modification to the algorithm is to replace the current multi-step estimation of new rows with a single optimization. We want to simultaneously minimize the
regression error and next observation error subject to the constraints on valid predictions.
This optimization could be solved with quadratic programming.
To date, the constrained gradient algorithm is the only PSR algorithm that takes advantage
of the sequential nature of the data stream experienced by the agent, and the constraints
such a sequence imposes on the system. It handles the lack of a reset action without partitioning histories. Also, at the end of learning the algorithm has an estimate of the current
state, instead of a prediction of the initial distribution or a stationary distribution over states.
Empirical results show that, while there is room for improvement, the constrained gradient
algorithm is competitive in both discovery and learning of PSRs.
References
[1] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12(6):1371-1398, 2000.
[2] Michael Littman, Richard Sutton, and Satinder Singh. Predictive representations of state. In
Advances in Neural Information Processing Systems 14 (NIPS), pages 1555-1561, 2002.
[3] Satinder Singh, Michael R. James, and Matthew R. Rudary. Predictive state representations: A
new theory for modeling dynamical systems. In Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference (UAI), pages 512-519, 2004.
[4] Richard Sutton and Brian Tanner. Temporal-difference networks. In Advances in Neural Information Processing Systems 17, pages 1377-1384, 2005.
[5] Matthew Rosencrantz, Geoff Gordon, and Sebastian Thrun. Learning low dimensional predictive representations. In Twenty-First International Conference on Machine Learning (ICML),
2004.
[6] Michael R. James and Satinder Singh. Learning and discovery of predictive state representations in dynamical systems with reset. In Twenty-First International Conference on Machine
Learning (ICML), 2004.
[7] Britton Wolfe, Michael R. James, and Satinder Singh. Learning predictive state representations
in dynamical systems without reset. In Twenty-Second International Conference on Machine
Learning (ICML), 2005.
[8] Satinder Singh, Michael Littman, Nicholas Jong, David Pardoe, and Peter Stone. Learning
predictive state representations. In Twentieth International Conference on Machine Learning
(ICML), pages 712-719, 2003.
[9] Peter McCracken. An online algorithm for discovery and learning of prediction state representations. Master's thesis, University of Alberta, 2005.
[10] Eric Wiewiora. Learning predictive representations from a history. In Twenty-Second International Conference on Machine Learning (ICML), 2005.
[11] Anthony Cassandra. Tony's POMDP file repository page. http://www.cs.brown.edu/research/ai/pomdp/examples/index.html, 1999.
Selecting Landmark Points for Sparse Manifold
Learning
J. G. Silva
ISEL/ISR
R. Conselheiro Emidio Navarro
1950.062 Lisbon, Portugal
[email protected]
J. S. Marques
IST/ISR
Av. Rovisco Pais
1949-001 Lisbon, Portugal
[email protected]
J. M. Lemos
INESC-ID/IST
R. Alves Redol, 9
1000-029 Lisbon, Portugal
[email protected]
Abstract
There has been a surge of interest in learning non-linear manifold models
to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature
in such models. This usually means dimensionality reduction, which
naturally implies estimating the intrinsic dimension, but it can also mean
selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in
the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm.
As an added benefit, a continuous manifold parameterization, based on
the landmarks, is also found. Experimental results with synthetic and
real data illustrate the algorithm.
1
Introduction
The recent interest in manifold learning algorithms is due, in part, to the multiplication of
very large datasets of high-dimensional data from numerous disciplines of science, from
signal processing to bioinformatics [6].
As an example, consider a video sequence such as the one in Figure 1. In the absence
of features like contour points or wavelet coefficients, each image of size 71 × 71 pixels
is a point in a space of dimension equal to the number of pixels, 71 × 71 = 5041. The
observation space is, therefore, R^5041. More generally, each observation is a vector y ∈
R^m where m may be very large.
A reasonable assumption, when facing an observation space of possibly tens of thousands
of dimensions, is that the data are not dense in such a space, because several of the mea-
Figure 1: Example of a high-dimensional dataset: each image of size 71 × 71 pixels is a
point in R^5041.
sured variables must be dependent. In fact, in many problems of interest, there are only a
few free parameters, which are embedded in the observed variables, frequently in a nonlinear way. Assuming that the number of free parameters remains the same throughout the
observations, and also assuming smooth variation of the parameters, one is in fact dealing
with geometric restrictions which can be well modelled as a manifold.
Therefore, the data must lie on, or near (accounting for noise) a manifold embedded in
observation, or ambient space. Learning this manifold is a natural approach to the problem
of modelling the data, since, besides computational issues, sparse models tend to have
better generalization capability. In order to achieve sparsity, considerable effort has been
devoted to reducing the dimensionality of the data by some form of non-linear projection.
Several algorithms ([10], [8], [3]) have emerged in recent years that follow this approach,
which is closely related to the problem of feature extraction. In contrast, the problem of
finding a relevant subset of the observations has received less attention.
It should be noted that the complexity of most existing algorithms is, in general, dependent
not only on the dimensionality but also on the number of observations. An important
example is the ISOMAP [10], where the computational cost is quadratic in the number
of points, which has motivated the L-ISOMAP variant [3] which uses a randomly chosen
subset of the points as landmarks (L is for Landmark).
The proposed algorithm uses, instead, a principled approach to select the landmarks, based
on the solutions of a regression problem minimizing a regularized cost functional. When
the regularization term is based on the l1 norm, the solution tends to be sparse. This is the
motivation for using the Least Absolute value Subset Selection Operator (LASSO) [5].
Finding the LASSO solutions used to require solving a quadratic programming problem,
until the development of the Least Angle Regression (LARS) procedure [4], which is much
faster (the cost is equivalent to that of ordinary least squares) and not only gives the LASSO
solutions but also provides an estimator of the risk as a function of the regularization tuning
parameter. This means that the correct amount of regularization can be automatically found.
In the specific context of selecting landmarks for manifold learning, with some care in the
LASSO problem formulation, one is able to avoid a difficult problem of sparse regression
with Multiple Measurement Vectors (MMV), which has received considerable interest in
its own right [2].
The idea is to use local information, found by local PCA as usual, and preserve the smooth
variation of the tangent subspace over a larger scale, taking advantage of any known embedding. This is a natural extension of the Tangent Bundle Approximation (TBA) algorithm,
proposed in [9], since the principal angles, which TBA computes anyway, are readily available and appropriate for this purpose. (Footnote: the S in LARS stands for Stagewise and LASSO, an allusion to the relationship between the three algorithms.) Nevertheless, the method proposed here is independent of TBA and could, for instance, be plugged into a global procedure like L-ISOMAP.
The algorithm avoids costly global computations, that is, it doesn't attempt to preserve
geodesic distances between faraway points, and yet, unlike most local algorithms, it is
explicitly designed to be sparse while retaining generalization ability.
The remainder of this introduction formulates the problem and establishes the notation. The
selection procedure itself is covered in section 2, while also providing a quick overview of
the LASSO and LARS methods. Results are presented in section 3 and then discussed in
section 4.
1.1 Problem formulation
The problem can be formulated as follows: given N vectors y ∈ R^m, suppose that the y
can be approximated by a differentiable n-manifold M embedded in R^m. This means that
M can be charted through one or more invertible and differentiable mappings of the type

    g_i(y) = x                                                        (1)

to vectors x ∈ R^n, so that open sets P_i ⊂ M, called patches, whose union covers M, are
diffeomorphically mapped onto other open sets U_i ⊂ R^n, called parametric domains. R^n
is the lower dimensional parameter space and n is the intrinsic dimension of M. The g_i are
called charts, and manifolds with complex topology may require several g_i. Equivalently,
since the charts are invertible, inverse mappings h_i : R^n → R^m, called parameterizations,
can also be found.
Arranging the original data in a matrix Y ∈ R^{m×N}, with the y as column vectors and
assuming, for now, only one mapping g, the charting process produces a matrix X ∈ R^{n×N}:

        ( y_11 ... y_1N )              ( x_11 ... x_1N )
    Y = (  :         :  )          X = (  :         :  )              (2)
        ( y_m1 ... y_mN )              ( x_n1 ... x_nN )
The n rows of X are sometimes called features or latent variables. It is often intended in
manifold learning to estimate the correct intrinsic dimension, n, as well as the chart g or at
least a column-to-column mapping from Y to X. In the present case, this mapping will be
assumed known, and so will n.
What is intended is to select a subset of the columns of X (or of Y, since the mapping
between them is known) to use as landmarks, while retaining enough information about g,
resulting in a reduced n × N' matrix with N' < N. N' is the number of landmarks, and
should also be automatically determined.
Preserving g is equivalent to preserving its inverse mapping, the parameterization h, which
is more practical because it allows the following generative model:
    y = h(x) + ε                                                      (3)

in which ε is zero-mean Gaussian observation noise. How can one find the fewest possible
landmarks such that h can still be well approximated?
2 Landmark selection
2.1 Linear regression model
To solve the problem, it is proposed to start by converting the non-linear regression in (3) to
a linear regression by offloading the non-linearity onto a kernel, as described in numerous
works, such as [7]. Since there are N columns in X to start with, let K be a square, N × N,
symmetric positive semidefinite matrix such that

    K = {k_ij},   k_ij = K(x_i, x_j),   K(x, x_j) = exp( -||x - x_j||^2 / (2 σ_K^2) ).    (4)
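As an illustrative sketch (not code from the paper), the kernel matrix of (4) can be built with a few lines of NumPy; the bandwidth and test data in the demo are made-up choices:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma_K):
    """N x N Gaussian kernel matrix of Eq. (4); X is n x N with the
    feature vectors x_j as columns, matching the layout of X in Eq. (2)."""
    sq_norms = np.sum(X ** 2, axis=0)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X.T @ X
    sq_dists = np.maximum(sq_dists, 0.0)   # guard against tiny negatives
    return np.exp(-sq_dists / (2.0 * sigma_K ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2, 5))            # n = 2 features, N = 5 points
    K = gaussian_kernel_matrix(X, sigma_K=1.0)
    print(K.shape)                         # (5, 5)
```

By construction K is symmetric, has unit diagonal (k_ii = exp(0) = 1), and is positive semidefinite, as required above.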
The function K can be readily recognized as a Gaussian kernel. This allows the reformulation, in matrix form, of (3) as
    Y^T = KB + E,                                                     (5)
where B, E ∈ R^{N×m} and each row of E is a realization of ε above. Still, it is difficult to
proceed directly from (5), because neither the response, YT , nor the regression parameters,
B, are column vectors. This leads to a Multiple Measurement Vectors (MMV) problem,
and while there is nothing to prevent solving it separately for each column, this makes
it harder to impose sparsity in all columns simultaneously. Two alternative approaches
present themselves at this point:
- Solve a sparse regression problem for each column of Y^T (and the corresponding
  column of B), and find a way to force several rows of B to zero.
- Re-formulate (5) in a way that turns it into a single measurement vector problem.
The second approach is better studied, and it will be the one followed here. Since the
parameterization h is known and must be, at the very least, bijective and continuous,
it must preserve the smoothness of quantities like the geodesic distance and the principal
angles. Therefore, it is proposed to re-formulate (5) as
    γ = Kβ + ε                                                        (6)

where the new response, γ ∈ R^N, as well as β ∈ R^N and ε ∈ R^N are now column vectors,
allowing the use of known subset selection procedures.
The elements of γ can be, for example, the geodesic distances to the observation y* = h(x*) corresponding to the mean x* of the columns of X. This would be a possibility
if an algorithm like ISOMAP were used to find the chart from Y to X. However, since
the whole point of using landmarks is to know them beforehand, so as to avoid having to
compute N × N geodesic distances, this is not the most interesting alternative.
A better way is to use a computationally lighter quantity like the maximum principal angle
between the tangent subspace at y*, T_{y*}(M), and the tangent subspaces at all other y.
Given a point y0 and its k nearest neighbors, finding the tangent subspace can be done by
local PCA. The sample covariance matrix S can be decomposed as
    S = (1/k) Σ_{i=1}^{k} (y_i - y_0)(y_i - y_0)^T                    (7)

    S = V D V^T                                                       (8)
where the columns of V are the eigenvectors v_i and D is a diagonal matrix containing the
eigenvalues λ_i, in descending order. The eigenvectors form an orthonormal basis aligned
with the principal directions of the data. They can be divided in two groups: tangent and
normal vectors, spanning the tangent and normal subspaces, with dimensions n and m - n,
respectively. Note that m - n is the codimension of the manifold. The tangent subspaces
are spanned by the n most important eigenvectors. The principal angles between two
different tangent subspaces at different points y_0 can be determined from the column spaces
of the corresponding matrices V.
An in-depth description of the principal angles, as well as efficient algorithms to compute
them, can be found, for instance, in [1]. Note that, should the T_y(M) be already available
from the eigenvectors found during some local PCA analysis, e. g., during estimation of
the intrinsic dimension, there would be little extra computational burden. An example is
[9], where the principal angles already are an integral part of the procedure - namely for
partitioning the manifold into patches.
Thus, it is proposed to use γ_j equal to the maximum principal angle between T_{y*}(M) and
T_{y_j}(M), where y_j is the j-th column of Y. It remains to be explained how to achieve a
sparse solution to (6).
2.2 Sparsity with LASSO and LARS
The idea is to find an estimate β̂ that minimizes the functional

    E = ||γ - Kβ̂||^2 + λ ||β̂||_q^q.                                   (9)

Here, ||β̂||_q denotes the l_q norm of β̂, i.e. ||β̂||_q = (Σ_{i=1}^{m} |β̂_i|^q)^{1/q}, and λ is a tuning parameter that
controls the amount of regularization. For the most sparseness, the ideal value of q would be
zero. However, minimizing E with the l0 norm is, in general, prohibitive in computational
terms. A sub-optimal strategy is to use q = 1 instead. This is the usual formulation of
a LASSO regression problem. While minimization of (9) can be done using quadratic
programming, the recent development of the LARS method has made this unnecessary.
For a detailed description of LARS and its relationship with the LASSO, vide [4].
Very briefly, LARS starts with β̂ = 0 and adds covariates (the columns of K) to the
model according to their correlation with the prediction error vector, γ - Kβ̂, setting the
corresponding β̂_j to a value such that another covariate becomes equally correlated with
the error and is, itself, added to the model - it becomes active. LARS then proceeds in a
direction equiangular to all the active β̂_j and the process is repeated until all covariates have
been added. There are a total of m steps, each of which adds a new β̂_j, making it non-zero.
With slight modifications, these steps correspond to a sampling of the tuning parameter λ
in (9) under LASSO. Moreover, [4] shows that the risk, as a function of the number, p, of
non-zero β̂_j, can be estimated (under mild assumptions) as

    R(β̂_p) = ||γ - Kβ̂_p||^2 / σ̂^2 - m + 2p                           (10)

where σ̂^2 can be found from the unconstrained least squares solution of (6). Computing
R(β̂_p) requires no more than the β̂_p themselves, which are already provided by LARS
anyway.
2.3 Landmarks and parameterization of the manifold
The landmarks are the columns x_j of X (or of Y) with the same indexes j as the non-zero
elements of β̂_p, where

    p = arg min_p R(β̂_p).                                             (11)

There are N' = p landmarks, because there are p non-zero elements in β̂_p. This criterion
ensures that the landmarks are the kernel centers that minimize the risk of the regression in
(6).
As an interesting byproduct, regardless of whether h was a continuous or point-to-point
mapping to begin with, it is now also possible to obtain a new, continuous parameterization
h_{B,X'} by solving a reduced version of (5):

    Y^T = K'B + E                                                     (12)

where K' only has N' columns, with the same indexes as X'. In fact, K' ∈ R^{N×N'} is no
longer square. Also, now B ∈ R^{N'×m}. The new, smaller regression (12) can be solved
separately for each column of Y^T and B by unconstrained least squares. For a new feature
vector, x, in the parametric domain, a new vector y ∈ M in observation space can be
synthesized by

    y = h_{B,X'}(x) = [y_1(x) ... y_m(x)]^T,   y_j(x) = Σ_{x_i ∈ X'} b_ij K(x_i, x)    (13)

where the {b_ij} are the elements of B.
3 Results
The algorithm has been tested on two synthetic datasets: the traditional synthetic "swiss
roll" and a sphere, both with 1000 points embedded in R^10, with a small amount of isotropic
Gaussian noise (σ_y = 0.01) added in all dimensions, as shown in Figure 2. These manifolds have intrinsic dimension n = 2. A global embedding for the swiss roll was found
by ISOMAP, using k = 8. On the other hand, TBA was used for the sphere, resulting in
multiple patches and charts - a necessity, because otherwise the sphere's topology would
make ISOMAP fail. Therefore, in the sphere, each patch has its own landmark points, and
the manifold requires the union of all such points. All are shown in Figure 2, as selected by
our procedure.
Additionally, a real dataset was used: images from the video sequence shown above in
Figure 1. This example is known [9] to be reasonably well modelled by as few as 2 free
parameters.
The sequence contains N = 194 frames with m = 5041 pixels. A first step was to perform
global PCA in order to discard irrelevant dimensions. Since it obviously isn't possible
Figure 2: Above: landmarks; Middle: interpolated points using h_{B,X'}; Below: risk estimates. For the sphere, the risk plot is for the largest patch. Total landmarks, N' = 27 for
the swiss roll, 42 for the sphere.
to compute a covariance matrix of size 5000 × 5000 from 194 samples, the problem was
transposed, leading to the computation of the eigenvectors of an N × N covariance, from
which the first N - 1 eigenvectors of the non-transposed problem can easily be found [11].
This resulted in an estimated 15 globally significant principal directions, on which the data
were projected.
After this pre-processing, the effective values of m and N were, respectively, 15 and 194.
An embedding was found using TBA with 2 features (ISOMAP would have worked as
well). The results obtained for this case are shown in Figure 3. Only 4 landmarks were
needed, and they correspond to very distinct face expressions.
4 Discussion
A new approach for selecting landmarks in manifold learning, based on LASSO and LARS
regression, has been presented. The proposed algorithm finds geometrically meaningful
landmarks and successfully circumvents a difficult MMV problem, by using the intuition
that, since the variation of the maximum principal angle is a measure of curvature, the
points that are important in preserving it should also be important in preserving the overall
manifold geometry. Also, a continuous manifold parameterization is given with very little
Figure 3: Landmarks for the video sequence: N' = 4, marked over a scatter plot of the
first 3 eigen-coordinates. The corresponding pictures are also shown.
additional computational cost.
The entire procedure avoids expensive, quadratic programming computations - its complexity is dominated by the LARS step, which has the same cost as a least squares fit [4].
The proposed approach has been validated with experiments on synthetic and real datasets.
Acknowledgments
This work was partially supported by FCT POCTI, under project 37844.
References
[1] Å. Björck and G. H. Golub. Numerical methods for computing angles between linear subspaces.
Mathematics of Computation, 27, 1973.
[2] J. Chen and X. Huo. Sparse representation for multiple measurement vectors (mmv) in an
over-complete dictionary. ICASSP, 2005.
[3] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality
reduction. NIPS, 15, 2002.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics,
2003.
[5] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning. Springer,
2001.
[6] H. Lähdesmäki, O. Yli-Harja, W. Zhang, and I. Shmulevich. Intrinsic dimensionality in gene
expression analysis. GENSIPS, 2005.
[7] T. Poggio and S. Smale. The mathematics of learning: Dealing with data. Notices of the
American Mathematical Society, 2003.
[8] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding.
Science, 290:2323-2326, 2000.
[9] J. Silva, J. Marques, and J. M. Lemos. Non-linear dimension reduction with tangent bundle
approximation. ICASSP, 2005.
[10] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear
dimensionality reduction. Science, 290:2319?2323, 2000.
[11] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience,
3:71-86, 1991.
receptive field size in V1
Jim Wielaard and Paul Sajda
Department of Biomedical Engineering
Columbia University
New York, NY 10027
(djw21, ps629)@columbia.edu
Abstract
Based on a large scale spiking neuron model of the input layers 4C? and ? of
macaque, we identify neural mechanisms for the observed contrast dependent
receptive field size of V1 cells. We observe a rich variety of mechanisms for
the phenomenon and analyze them based on the relative gain of excitatory and
inhibitory synaptic inputs. We observe an average growth in the spatial extent of
excitation and inhibition for low contrast, as predicted from phenomenological
models. However, contrary to phenomenological models, our simulation results
suggest this is neither sufficient nor necessary to explain the phenomenon.
1 Introduction
Neurons in the primary visual cortex (V1) display what is often referred to as "size tuning", i.e. the response of a cell is maximal around a cell-specific stimulus size and generally decreases substantially (30-40% on average) or vanishes altogether for larger stimulus
sizes [1-9]. The cell-specific stimulus size eliciting a maximum response, also known as the
"receptive field size" of the cell [4], has a remarkable property in that it is not contrast invariant, unlike for instance orientation tuning in V1. Quite the contrary, the contrast-dependent
change in receptive field size of V1 cells is profound. Typical is a doubling in receptive
field size for stimulus contrasts decreasing by a factor of 2-3 on the linear part of the contrast response function [4]. This behavior is seen throughout V1, including all cell types in
all layers and at all eccentricities. A functional interpretation of the phenomenon is that
neurons in V1 sacrifice spatial resolution in return for a gain in contrast sensitivity at low
contrasts [4]. However, its neural mechanisms are at present very poorly understood. Understanding these mechanisms is potentially important for developing a theoretical model of
early signal integration and neural encoding of visual features in V1.
We have recently developed a large-scale spiking neuron model that accounts for the phenomenon and suggests neural mechanisms from which it may originate. This paper provides a technical description of these mechanisms.
2 The model
Our model consists of 8 ocular dominance columns and 64 orientation hypercolumns (i.e.
pinwheels), representing a 16 mm^2 area of a macaque V1 input layer 4Cα or 4Cβ. The
model consists of approximately 65,000 cortical cells in each of the four configurations
(see below), and the corresponding appropriate number of LGN cells. Our cortical cells are
modeled as conductance based integrate-and-fire point neurons, 75% are excitatory cells
and 25% are inhibitory cells. Our LGN cells are rectified spatio-temporal linear filters. The
model is constructed with isotropic short-range cortical connections (< 500 μm), realistic
LGN receptive field sizes and densities, realistic sizes of LGN axons in V1, and cortical
magnification factors and receptive field scatter that are in agreement with experimental
observations.
Dynamic variables of a cortical model-cell i are its membrane potential v_i(t) and its spike
train S_i(t) = Σ_k δ(t - t_{i,k}), where t is time and t_{i,k} is its kth spike time. Membrane
potential and spike train of each cell obey a set of N equations of the form

    C_i dv_i/dt = -g_{L,i}(v_i - v_L) - g_{E,i}(t, [S]_E, η_E)(v_i - v_E)
                  - g_{I,i}(t, [S]_I, η_I)(v_i - v_I),   i = 1, ..., N.            (1)
These equations are integrated numerically using a second order Runge-Kutta method with
time step 0.1 ms. Whenever the membrane potential reaches a fixed threshold level v_T it is
reset to a fixed reset level v_R and a spike is registered. The equation can be rescaled so that
v_i(t) is dimensionless and C_i = 1, v_L = 0, v_E = 14/3, v_I = -2/3, v_T = 1, v_R = 0, and
conductances (and currents) have dimension of inverse time.
The quantities g_{E,i}(t, [S], η_E) and g_{I,i}(t, [S], η_I) are the excitatory and inhibitory conductances of neuron i. They are defined by interactions with the other cells in the network,
external noise η_{E(I)}, and, in the case of g_{E,i}, possibly by LGN input. The notation [S]_{E(I)}
stands for the spike trains of all excitatory (inhibitory) cells connected to cell i. Both
the excitatory and inhibitory populations consist of two subpopulations P_k(E) and P_k(I),
k = 0, 1, a population that receives LGN input (k = 1) and one that does not (k = 0).
In the model presented here 30% of both the excitatory and inhibitory cell populations receive LGN input. We assume noise, cortical interactions and LGN input act additively in
contributing to the total conductance of a cell,
    g_{E,i}(t, [S]_E, η_E) = η_{E,i}(t) + g^{cor}_{E,i}(t, [S]_E) + ξ_i g^{LGN}_i(t)
    g_{I,i}(t, [S]_I, η_I) = η_{I,i}(t) + g^{cor}_{I,i}(t, [S]_I),                 (2)

where ξ_i = κ for i ∈ {P_κ(E), P_κ(I)}, κ = 0, 1. The terms g^{cor}_{λ,i}(t, [S]_λ) are the contributions from the cortical excitatory (λ = E) and inhibitory (λ = I) neurons and include only
isotropic connections,

    g^{cor}_{λ,i}(t, [S]_λ) = Σ_{k=0}^{1} Σ_{j ∈ P_k(λ)} C^{k',k}_{λ',λ}(||x_i - x_j||) ∫_{-∞}^{+∞} G_{λ,j}(t - s) S_j(s) ds,    (3)

where i ∈ P_{k'}(λ'). Here x_i is the spatial position (in cortex) of neuron i, the functions
G_{λ,j}(τ) describe the synaptic dynamics of cortical synapses and the functions C^{k',k}_{λ',λ}(r)
describe the cortical spatial couplings (cortical connections). The length scale of excitatory
and inhibitory connections is about 200 μm and 100 μm respectively.
In agreement with experimental findings (see references in [10]), the LGN neurons are modeled as rectified center-surround linear spatiotemporal filters. The LGN temporal kernels
are modeled in agreement with [11], and the LGN spatial kernels are of center-surround type.
An important class of parameters are those that define and relate the model's geometry
in visual space and cortical space. Geometric properties are different for the two input
layers 4Cα, β and depend also on the eccentricity. As said, contrast dependent receptive
field size is observed to be insensitive to those differences [4-6,8]. In order to verify that our
explanations are consistent with this observation, we have performed numerical simulations
for four different sets of parameters, corresponding to the 4Cα, β layers at para-foveal
eccentricities (< 5°) and at eccentricities around 10°. These different model configurations
are referred to as M0, M10, and P0, P10. Reported results are qualitatively similar for all
four configurations unless otherwise noted. The above is only a very brief description of
the model; the details can be found in [12].
3 Visual stimuli and data collection
The stimulus used to analyze the phenomenon is a drifting grating confined to a circular
aperture, surrounded by a blank (mean luminance) background. The luminance of the
stimulus is given by I(y, t) = I_0 (1 + ε cos(ωt - k·y + φ)) for ||y|| ≤ r_A and I(y, t) = I_0
for ||y|| > r_A, with average luminance I_0, contrast ε, temporal frequency ω, spatial wave
vector k, phase φ, and aperture radius r_A. The aperture is centered on the receptive field
of the cell and varied in size, while the other parameters are kept fixed and set to preferred
values. All stimuli are presented monocularly. Samples consisting of approximately 200
cells were collected for each configuration, containing about an equal number of simple
and complex cells. The experiments were performed at "high" contrast, ε = 1, and "low"
contrast, ε = 0.3.
4 Approximate model equations
We find that, to good approximation, the membrane potential and instantaneous firing rate
of our model cells are respectively [12,13]

    ⟨v_k(t, r_A)⟩ ≈ V_k(r_A, t) ≡ ⟨I_{D,k}(t, r_A)⟩ / ⟨g_{T,k}(t, r_A)⟩,           (4)

    ⟨S_k(t, r_A)⟩ ≈ f_k(t, r_A) ≡ α_k [⟨I_{D,k}(t, r_A)⟩ - ⟨g_{T,k}(t, r_A)⟩ θ_k]_+,    (5)

where [x]_+ = x if x ≥ 0 and [x]_+ = 0 if x < 0, and where the gain α_k and threshold θ_k
do not depend on the aperture radius r_A for most cells. The total conductance g_{T,k}(t, r_A)
and difference current I_{D,k}(t, r_A) are given by

    g_{T,k}(t, r_A) = g_L + g_{E,k}(t, [S]_E, r_A) + g_{I,k}(t, [S]_I, r_A)         (6)

    I_{D,k}(r_A, t) = g_{E,k}(t, [S]_E, r_A) V_E - g_{I,k}(t, [S]_I, r_A) |V_I|.    (7)
5 Mechanisms of contrast dependent receptive field size
From Eq. (4) and (5) it follows that a change in receptive field size in general results from
a change in behavior of the relative gain,
    G(r_A) = (∂g_E/∂r_A) / (∂g_I/∂r_A).                               (8)
Figure 1: Two example cells, an M0 simple cell which receives LGN input (top) and an M10
complex cell which does not (bottom). (column 1) Responses as function of aperture size. Mean
responses are plotted for the complex cell, first harmonic for the simple cell. Apertures of maximum
of responses (i.e. receptive field sizes) are indicated with asterisks (dark=high contrast, light=low
contrast). (column 2) Conductances for high contrast at apertures near the maximum responses. Conductances are displayed as nine (top) and eleven (bottom) sub-panels giving the cycle-trial averaged
conductances as a function of time (relative to cycle) and aperture size. (column 3) Conductances for
low contrast at apertures near the maximum responses. Asterisks in the conductance figures (columns
2 and 3) indicate corresponding apertures of maximum response (column 1)
Note that this is a rather different parameter than the "surround gain" parameter (k_s) used in the ratio-of-Gaussians (ROG) model [8]; e.g., unlike for k_s, there is no one-to-one relationship between G(r_A) and the degree of surround suppression. Qualitatively, the conductances show a similar dependence on aperture size as the membrane potential responses and spike responses, i.e. they display surround suppression as well [12]. Receptive field sizes based
on these conductances are a measure of the spatial summation extent of excitation and
inhibition.
An obvious way to change the behavior of G, and consequently the receptive field size, is to
change the spatial summation extent of gE and/or gI . However this is not strictly necessary.
For example, other possibilities are illustrated by the two cells in Fig. 1. These cells show,
both in spike and membrane potential responses, a receptive field growth of a factor of 2
(top) and 3 (bottom) at low contrast. However, for both cells the spatial summation extent
of excitation at low contrast is one aperture less than at high contrast.
In a similar way as for spike train responses, we also obtained receptive field sizes for
the conductances. As do spike responses (Fig. 2A), both excitation and inhibition (Fig.
2B&C) also show, on the average, an increase in their spatial summation extent as contrast
is decreased, but the increase is in general smaller than what is seen for spike responses,
particularly for cells that show significant receptive field growth. For instance, we see from
Figure 2B and C that for cells in the sample with receptive field growths of a factor of ~2 or greater, the growth for the conductances is always considerably less than the growth based on spike responses. Expressed more rigorously, a Wilcoxon test on ratios of growth ratios larger than unity gives p < 0.05 (all cells, excitation, Fig. 2B), p < 0.15 (all cells, inhibition, Fig. 2C), and p < 0.001 (cells with receptive field growth ratio r−/r+ > 1.5, both excitation and inhibition). Although some increase in the spatial summation extent of excitation and
inhibition is in general the rule, this increase is rather arbitrary and bears not much relation
with the receptive field growth based on spike responses. The same conclusions follow
from membrane potential responses (not shown).
Figure 2: (A) Joint distribution of high and low contrast receptive field sizes, r+ and r−, based on spike responses. All scales are logarithmic, base 10. All distributions are normalized to a peak value of one. Receptive field growth at low contrast is clear. The average growth ratio is 1.9 and is significantly greater than unity (Wilcoxon test, p < 0.001). (B & C) Joint distributions of receptive field growth and growth of spatial summation extent of excitation (B) and inhibition (C) (computed as ratios). There is no simple relation between receptive field growth and the growth of the spatial summation extent of excitatory or inhibitory inputs. For cells in the sample with larger receptive field growths (factor of ~2 or greater) this growth is always considerably larger than the growths of their excitatory and inhibitory inputs.
Fig. 2 thus demonstrates that, contrary to what is predicted by the difference-of-Gaussians (DOG) [4] and ROG [8] models (see Discussion), a growth of spatial summation extent of excitation (and/or inhibition) at low contrast is neither sufficient nor necessary to explain the receptive field growth seen in spike responses. Membrane potential responses give the same conclusion. The fact that a change in receptive field size can take place without a change in the spatial summation extent of g_E or g_I can be illustrated by a simple example. Consider a situation where both g_E and g_I have their maximum at the same aperture size r_E = r_I = r* and are monotonically increasing for r_A < r* and monotonically decreasing for r_A > r*, as depicted in Fig. 3. We can distinguish three classes with respect to the relative location of the maxima in spike responses r_S and the conductances r*, namely {X: r_S < r*}, {Y: r_S = r*} and {Z: r_S > r*}. It follows from (5) that if we define the
Figure 3: Schematic illustration of mechanisms for receptive field growth under equal and constant spatial summation extent of the conductances (r_E = r_I = r*).
Figure 4: (A) Distributions of the relative positions of the maxima (receptive field sizes) of spike responses r_S and conductances r_E and r_I, for the M0 configuration. A division is made with respect to the maxima in the conductances; this corresponds to the left (r_E = r_I), central (r_E > r_I), and right (r_E < r_I) parts of the figure. Each panel is further subdivided with respect to the maximum in the spike response r_S. Upper histograms are for all cells in the sample, lower histograms are for cells that have receptive field growth r−/r+ > 1.5. Unfilled histograms are for high contrast, shaded histograms are for low contrast. (B) Prevalence of transitions between positions of maxima in spike responses and excitatory conductances (left) and in spike responses and inhibitory conductances (right) for a high → low contrast change. See text for definitions of the X, Y, Z classes. Data are evaluated for all cells (unfilled histograms) and for cells with a receptive field growth r−/r+ > 1.5 (shaded histograms).
parameter G_0(v) = (|V_I| + v)/(V_E − v), then we can characterize the difference between classes X and Z by the way that G crosses G_0(1) around r_S, as depicted in Fig. 3. For class Y the parameter G is not of any particular interest, as it can assume arbitrary behavior around r_S. It follows from (4) that similar observations hold for the maximum in the membrane potential r_v; we need simply to replace G_0(1) with G_0(v(r_v)). A growth of receptive field size can occur without any change in the spatial summation extent (r*) of the conductances. Suppose we wish to remain within the same class X or Z; then receptive field growth can be induced, for instance, by an overall increase (X) or an overall decrease (Z) in relative gain G(r_A), as shown in Fig. 3 (dashed line). Receptive field growth can also be caused by more drastic changes in G, so that the transitions X → Y, X → Z or Y → Z occur for a high → low contrast change. The situation is somewhat more involved when we allow for non-suppressed responses and conductances, and for different positions of the maxima of g_E and g_I; however, the essence of our conclusions remains the same.
Analysis of our data in the light of the above example is given in Fig. 4. Cells were classified (Fig. 4A) according to the relative positions of their maxima in spike response (r_S) and excitatory (r_E) and inhibitory (r_I) conductances, using F0+F1 (i.e. mean response + first Fourier component of the response). Membrane potential responses yield similar results. Comparing this classification at high and low contrast, we observe a striking difference for cells with significant receptive field growths, i.e. with growth ratios > 1.5 (Fig. 4A, bottom), indicative of X → Y, X → Z and Y → Z transitions (as discussed in the simplified example above). In this realistic situation there are of course many more transitions (i.e. 13²); however, that we indeed observe a prevalence for these transitions can be demonstrated in two ways, using slightly modified definitions of the X, Y, Z classes. First (Fig. 4B, left), if we redefine the X, Y, Z classes with respect to r_S and r_E while ignoring r_I, i.e. {X: r_S < r_E}, {Y: r_S = r_E} and {Z: r_S > r_E}, then the transition distribution for cells with significant receptive field growth shows that in about 60% of these cells an X → Z or Y → Z transition occurs. Taken together with the fact that roughly 10% of the cells with significant receptive field growth (Figure 4A, bottom) have r_I ≤ r_S < r_E at high contrast and r_E < r_S ≤ r_I at low contrast, we can conclude that for more than 50% of the cells with significant receptive field growth, a transition takes place from a high contrast RF size less than or equal to the spatial summation extent of excitation and inhibition, to a low contrast receptive field size which exceeds both (by at least one aperture). Note that these transitions occur in addition to any growth of r_E or r_I. Secondly (Fig. 4B, right), the same conclusion is reached when we redefine the X, Y, Z classes with respect to r_S and r_I while ignoring r_E ({X: r_S < r_I}, {Y: r_S = r_I} and {Z: r_S > r_I}). Now an X → Z or Y → Z transition occurs in about 70% of the cells with significant receptive field growth, while about 20% of the cells with significant receptive field growth (Fig. 4A, bottom) have r_E ≤ r_S < r_I at high contrast and r_I < r_S ≤ r_E at low contrast. Finally, Fig. 4B also demonstrates the presence of a rich diversity in relative gain changes in our model, since all transitions (for all cells, unfilled histograms) occur with some reasonable probability.
6 Discussion
The DOG model suggests that growth in receptive field size at low contrast is due to an increase of the spatial summation extent of excitation [4] (i.e. an increase in the spatial extent parameter σ_E). This was partially confirmed experimentally in cat primary visual cortex [7]. Although it has been claimed [8] that the ROG model could explain receptive field growth solely from a change in the relative gain parameter k_s, we believe this is incorrect. Since there is a one-to-one relationship between k_s and surround suppression, this would imply that contrast dependent receptive field size simply results from contrast dependent surround suppression, which contradicts experimental data [4,8]. As does the DOG model, the ROG model, based on analysis of our data, also predicts that contrast dependent receptive field size is due to contrast dependence of the spatial summation extent of excitation. As we have shown, our simulations confirm an average growth of spatial summation extent of excitation (and inhibition) at low contrast. However, this growth is neither sufficient nor necessary to explain receptive field growth. For cells with significant receptive field growth (r−/r+ > 1.5) we were able to identify an additional property of the neural mechanisms. For more than 50% of such cells, a transition takes place from a high contrast RF size less than or equal to the spatial summation extent of excitation and inhibition, to a low contrast receptive field size which exceeds both.
An important characteristic of our model is that it is not specifically designed to produce the
phenomenon. Rather, the model parameters are set such that it produces realistic orientation
tuning and a realistic distribution of response modulations in response to drifting gratings
(simple & complex cells). Constructed in this way, our model then naturally produces a
wide variety of realistic response properties, classical as well as extraclassical, including the phenomenon discussed here. A prominent feature of the mechanisms we suggest is that, contrary to common belief, they require neither the long-range lateral connections in V1 [14–18] nor extrastriate feedback [6,8,19,20]. The average receptive field growth we see in our model is about a factor of two (r−/r+ ≈ 2). This is a little less than what is observed in experiments [5,8]. This leaves room for contributions from the LGN input. It seems reasonable to assume that contrast dependent receptive field size is not limited to V1 and is also a property of LGN cells. Somewhat surprisingly, this has to our knowledge not been verified yet for macaque. Contrast dependent receptive field size of LGN cells has been observed in marmoset, and an average growth ratio at low contrast of 1.3 was reported [21]. Receptive field growth of LGN cells in some sense introduces an overall geometric scaling factor on the entire visual input to V1. This observation ignores a great many details of course. For instance, the fact that the density of LGN cells (LGN receptive fields) is not known to change with contrast. On the other hand, it seems unlikely that a reasonable receptive field expansion of LGN cells would not be at least partially transferred to V1. Thus it seems reasonable to conclude from our work that the phenomenon in V1, in particular that seen in layer 4, may be attributed largely to isotropic short-range (< 0.5 mm) cortical connections and LGN input.
Acknowledgments
This work was supported by grants from ONR (MURI program, N00014-01-1-0625) and
NGA (HM1582-05-C-0008).
References
[1] Dow, B, Snyder, A, Vautin, R, & Bauer, R. (1981) Exp Brain Res 44, 213–228.
[2] Schiller, P, Finlay, B, & Volman, S. (1976) J Neurophysiol 39, 1288–1319.
[3] Sillito, A, Grieve, K, Jones, H, Cudeiro, J, & Davis, J. (1995) Nature 378, 492–496.
[4] Sceniak, M, Ringach, D, Hawken, M, & Shapley, R. (1999) Nat Neurosci 2, 733–739.
[5] Kapadia, M, Westheimer, G, & Gilbert, C. (1999) Proc Nat Acad Sci USA 96, 12073–12078.
[6] Sceniak, M, Hawken, M, & Shapley, R. (2001) J Neurophysiol 85, 1873–1887.
[7] Anderson, J, Lampl, I, Gillespie, D, & Ferster, D. (2001) J Neurosci 21, 2104–2112.
[8] Cavanaugh, J, Bair, W, & Movshon, J. (2002) J Neurophysiol 88, 2530–2546.
[9] Ozeki, H, Sadakane, O, Akasaki, T, Naito, T, Shimegi, S, & Sato, H. (2004) J Neurosci 24, 1428–1438.
[10] McLaughlin, D, Shapley, R, Shelley, M, & Wielaard, J. (2000) Proc Nat Acad Sci USA 97, 8087–8092.
[11] Benardete, E & Kaplan, E. (1999) Vis Neurosci 16, 355–368.
[12] Wielaard, J & Sajda, P. (2005) Cerebral Cortex, in press.
[13] Wielaard, J, Shelley, M, McLaughlin, D, & Shapley, R. (2001) J Neurosci 21(14), 5203–5211.
[14] DeAngelis, G, Freeman, R, & Ohzawa, I. (1994) J Neurophysiol 71, 347–374.
[15] Somers, D, Todorov, E, Siapas, A, Toth, L, Kim, D, & Sur, M. (1998) Cereb Cortex 8, 204–217.
[16] Dragoi, V & Sur, M. (2000) J Neurophysiol 83, 1019–1030.
[17] Hupé, J, James, A, Girard, P, & Bullier, J. (2001) J Neurophysiol 85, 146–163.
[18] Stettler, D, Das, A, Bennett, J, & Gilbert, C. (2002) Neuron 36, 739–750.
[19] Angelucci, A, Levitt, J, Walton, E, Hupé, J, Bullier, J, & Lund, J. (2002) J Neurosci 22, 8633–8646.
[20] Bair, W, Cavanaugh, J, & Movshon, J. (2003) J Neurosci 23(20), 7690–7701.
[21] Solomon, S, White, A, & Martin, P. (2002) J Neurosci 22(1), 338–349.
Efficient estimation of hidden state dynamics
from spike trains
Márton G. Danóczy
Inst. for Theoretical Biology
Humboldt University, Berlin
Invalidenstr. 43
10115 Berlin, Germany
[email protected]
Richard H. R. Hahnloser
Inst. for Neuroinformatics
UNIZH / ETHZ
Winterthurerstrasse 190
8057 Zurich, Switzerland
[email protected]
Abstract
Neurons can have rapidly changing spike train statistics dictated by the
underlying network excitability or behavioural state of an animal. To
estimate the time course of such state dynamics from single- or multiple neuron recordings, we have developed an algorithm that maximizes
the likelihood of observed spike trains by optimizing the state lifetimes
and the state-conditional interspike-interval (ISI) distributions. Our nonparametric algorithm is free of time-binning and spike-counting problems and has the computational complexity of a Mixed-state Markov
Model operating on a state sequence of length equal to the total number of recorded spikes. As an example, we fit a two-state model to paired
recordings of premotor neurons in the sleeping songbird. We find that the
two state-conditional ISI functions are highly similar to the ones measured during waking and singing, respectively.
1  Introduction
It is well known that neurons can suddenly change firing statistics to reflect a macroscopic
change of a nervous system. Often, firing changes are not accompanied by an immediate
behavioural change, as is the case, for example, in paralysed patients, during sleep [1],
during covert discriminative processing [2], and for all in-vitro studies [3]. In all of these
cases, changes in some hidden macroscopic state can only be detected by close inspection
of single or multiple spike trains. Our goal is to develop a powerful, but computationally
simple tool for point processes such as spike trains. From spike train data, we want to extract continuously evolving hidden variables, assuming a discrete set of possible states.
Our model for classifying spikes into discrete hidden states is based on three assumptions:
1. Hidden states form a continuous-time Markov process and thus have exponentially
distributed lifetimes
2. State switching can occur only at the time of a spike (where there is observable
evidence for a new state).
3. In each of the hidden states, spike trains are generated by mutually independent
renewal processes.
1. For a continuous-time Markov process, the probability of staying in state S = i for a
time interval T > t is given by Pi (t) = exp(?rit), where ri is the escape rate (or hazard rate)
of state i. The mean lifetime ?i is defined as the inverse of the escape rate, ?i = 1/ri .
As a corollary, it follows that the probability of staying in state i for a particular duration
equals the probability of surviving for a fraction of that duration times the probability of
surviving for the remaining time, i.e., the state survival probability Pi (t) satisfies the product
identity
P_i(t_1 + t_2) = P_i(t_1) P_i(t_2).
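The memoryless survival identity is easy to check numerically; the escape rate used below is an arbitrary illustrative value:

```python
import math

# Sketch of assumption 1: exponentially distributed state lifetimes.
def survival(t, rate):
    """P_i(t) = exp(-r_i t): probability of staying in state i longer than t."""
    return math.exp(-rate * t)

r_i = 0.5                      # escape rate (1/s); mean lifetime tau_i = 1/r_i = 2 s
t1, t2 = 0.7, 1.3

# Memoryless product identity: P_i(t1 + t2) = P_i(t1) * P_i(t2)
lhs = survival(t1 + t2, r_i)
rhs = survival(t1, r_i) * survival(t2, r_i)
```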
2. According to the second assumption, state switching can occur at any spike, irrespective of which neuron fired the spike. In the following, we shall refer to a spike fired by any
of the neurons as an event (where state switching might occur). Note that if two (or more)
neurons happen to fire a spike at exactly the same time, the respective spikes are regarded
as two (or more) distinct events. The collection of event times is denoted by te .
Combining the first two assumptions, we formulate the hidden state sequence at the events
(i.e. observation points) as a non-homogeneous discrete Markov chain. Accordingly, the
probability of remaining in state i for the duration of the interevent-interval (IEI) Δt_e = t_e − t_{e−1} is given by the state survival probability P_i(Δt_e). The probability to change state is then 1 − P_i(Δt_e).
3. In each state i, the spike trains are assumed to be generated by a renewal process that
randomly draws interspike-intervals (ISIs) t from a probability density function (pdf) h_i(t). Because every IEI is only a fraction of an ISI, instead of working with ISI distributions, we use an equivalent formulation based on the conditional intensity function (CIF) λ_i(τ) [4]. The CIF, also called hazard function in reliability theory, is a generalization of the Poisson firing rate. It is defined as the probability density of spiking in the time interval [τ, τ + dt], given that no spike has occurred in the interval [0, τ) since the last spike. In the following, the variable τ, i.e. the time that has elapsed since the last spike, shall be referred to as phase
[5]. Using the CIF, the ISI pdf can be expressed by the fundamental equation of renewal
theory,
h_i(t) = exp( −∫_0^t λ_i(τ) dτ ) λ_i(t).    (1)
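Equation (1) can be evaluated numerically for any CIF; the refractory-then-constant CIF used below is an illustrative assumption, not one fitted to data:

```python
import numpy as np

# Numerical sketch of Eq. (1): recover the ISI density h_i(t) from a CIF.
t = np.linspace(0.0, 0.5, 5001)
lam = 200.0 * (1.0 - np.exp(-t / 0.01))        # CIF: ~0 right after a spike, then ~200/s

# Trapezoidal cumulative integral of the CIF, int_0^t lambda(tau) dtau
cum = np.concatenate(([0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))))
h = np.exp(-cum) * lam                          # h_i(t) = exp(-integral) * lambda_i(t)

mass = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))   # ~1 if h is a proper density
```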
At each event e, we observe the phase trajectory of every neuron traced out since the last
event. It is clear that in multiple electrode recordings the phase trajectories between events
are not independent, since they have to start where the previous trajectory ended. Therefore,
our model violates the observation independence assumption of standard Hidden Markov
Models (HMMs). Our model is, in formal terms, a mixed-state Markov model [6], with the
architecture of a double-chain [7]. Such models are generalizations of HMMs in that the
observable outputs may not only be dependent on the current hidden state, but also on past
observations (formally, the mixed state is formed by combining the hidden and observable
states).
In our model, hidden state transition probabilities are characterized by the escape rates r_i and observable state transition probabilities by the CIFs λ_i^n for neuron n in hidden state i.
Our goal is to find a set θ of model parameters such that the likelihood

Pr{O | θ} = Σ_{S∈𝒮} Pr{S, O | θ}

of the observation sequence O is maximized.

As a first step, we will derive an expression for the combined likelihood Pr{S, O | θ}. Then, we will apply the expectation maximization (EM) algorithm to find the optimal parameter set.
2  Transition probabilities
The mixed state at event e shall be composed of the hidden state S_e and the observable outputs O_e^n (for neurons n ∈ {1, …, N}).
Hidden state transitions In classical mixed-state Markov models, the hidden state transition probabilities are constant. In our model, however, we describe time as a continuous
quantity and observe the system whenever a spike occurs, thus in non-equidistant intervals.
Consequently, hidden state transitions depend explicitly on the elapsed time since the last
observation, i.e., on the IEIs Δt_e. The transition probability a_ij(Δt_e) from hidden state i to
hidden state j is then given by
a_ij(Δt_e) = exp(−r_j Δt_e)   if i = j,
a_ij(Δt_e) = [1 − exp(−r_j Δt_e)] g_ij   otherwise,    (2)
where g_ij is the conditional probability of making a transition from state i into a new state j, given that j ≠ i. Thus, g_ij has to satisfy the constraint Σ_j g_ij = 1, with g_ii = 0.
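Equation (2) can be sketched as a small function. The escape rates below reuse the two lifetimes reported in Section 4 purely as plausible numbers, and in the two-state case the normalization constraint forces g_ij = 1 off the diagonal:

```python
import numpy as np

# Sketch of Eq. (2): hidden-state transition probabilities over one IEI.
def hidden_transition_matrix(dt, rates, g):
    """a_ij(dt): exp(-r_j dt) on the diagonal, [1 - exp(-r_j dt)] g_ij off it."""
    stay = np.exp(-np.asarray(rates) * dt)     # per-state survival over the IEI
    a = (1.0 - stay)[np.newaxis, :] * g        # switch into state j, weighted by g_ij
    np.fill_diagonal(a, stay)                  # remain in the same state
    return a

rates = np.array([1.0 / 1.18, 1.0 / 2.26])     # escape rates r_i = 1 / tau_i
g = np.array([[0.0, 1.0],                      # two states: off-diagonal g_ij = 1
              [1.0, 0.0]])
a = hidden_transition_matrix(0.05, rates, g)   # transition matrix for a 50 ms IEI
```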
Observable state transitions   The observation at event e is defined as O_e = {Φ_e^n, ν_e}, where ν_e contains the index of the neuron that has triggered event e by emitting a spike, and Φ_e^n = (inf Φ_e^n, sup Φ_e^n] is the phase interval traced out by neuron n since its last spike. Observations form a cascade. After a spike, the phase of the respective neuron is immediately reset to zero. The interval's bounds are thus defined by

sup Φ_e^n = inf Φ_e^n + Δt_e,   with   inf Φ_e^n = 0 if ν_{e−1} = n, and inf Φ_e^n = sup Φ_{e−1}^n otherwise.
The observable transition probability p_i(O_e) = Pr{O_e | O_{e−1}, S_e = i} is the probability of observing output O_e, given the previous output O_{e−1} and the current hidden state S_e. With our independence assumption (3.), we can give its density as the product of every neuron's probability of having survived the respective phase interval Φ_e^n that it has traced out since its last spike, multiplied by the spiking neuron's firing rate (compare equation 1):

p_i(O_e) = [ Π_n exp( −∫_{Φ_e^n} λ_i^n(τ) dτ ) ] λ_i^{ν_e}(sup Φ_e^{ν_e}).    (3)
Note that in case of a single neuron recording, this reduces to the ISI pdf.
To give a closed form of the observable transition pdf, several approaches are thinkable. Here, for the sake of flexibility and computational simplicity, we approximate the CIF λ_i^n for neuron n in state i by a step function, assuming that its value is constant inside small, arbitrarily spaced bins B^n(b), b ∈ {1, …, N^n_bins}. That is, λ_i^n(τ) ≈ ℓ_i^n(b) for τ ∈ B^n(b).

In order to use the discretized CIFs ℓ_i^n(b), we also discretize Φ_e^n: the fractions f_e^n(b) ∈ [0, 1] represent how much of neuron n's phase bin B^n(b) has been traced out since the last event. For example, if event e − 1 happened in the middle of neuron n's phase bin 2 and event e happened ten percent into its phase bin 4, then f_e^n(2) = 0.5, f_e^n(3) = 1, and f_e^n(4) = 0.1, whereas f_e^n(b) = 0 for all other b (Figure 1).
Making use of these discretizations, the integral in equation 3 is approximated by a sum:
"
!#
Nn
pi (Oe ) ?
? exp
n
bins
?
?
fen(b) `ni(b) kBn(b)k
??i e(sup ?e?e ),
(4)
b=1
with ‖B^n(b)‖ denoting the width of neuron n's phase bin b.
Equations 2 and 4 fully describe transitions in our mixed-state Markov model. Next, we apply the EM algorithm to find optimal values of the escape rates r_i, the conditional hidden state transition probabilities g_ij and the discretized CIFs ℓ_i^n(b), given a set of spike trains.
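Equation (4) is cheap to evaluate once the fractions f_e^n(b) are known. A sketch with assumed CIF values, bin widths, and fractions (none taken from the paper's data):

```python
import numpy as np

# Sketch of Eq. (4): observation likelihood from discretized CIFs.
def observation_likelihood(ell, widths, f, spiking_neuron, spike_bin):
    """ell[n][i, b]: CIF of neuron n, state i, phase bin b; widths[n][b]: bin
    widths ||B^n(b)||; f[n][b]: fractions traced out since the last event.
    Returns p_i(O_e) for every hidden state i."""
    n_states = ell[0].shape[0]
    p = np.ones(n_states)
    for n, ell_n in enumerate(ell):
        integral = (f[n] * ell_n * widths[n]).sum(axis=1)  # sum_b f * ell * ||B||
        p *= np.exp(-integral)                             # survival term of neuron n
    p *= ell[spiking_neuron][:, spike_bin]                 # rate of the neuron that fired
    return p

# Two neurons, two hidden states, three phase bins each (all values assumed)
ell = [np.array([[40.0, 20.0, 10.0], [5.0, 4.0, 3.0]]),
       np.array([[30.0, 15.0, 8.0], [6.0, 5.0, 2.0]])]
widths = [np.array([0.001, 0.01, 0.1])] * 2
f = [np.array([1.0, 0.4, 0.0]), np.array([0.0, 1.0, 0.2])]
p = observation_likelihood(ell, widths, f, spiking_neuron=0, spike_bin=1)
```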
Figure 1: Two spike trains are combined to form the event train shown in the bottom row. The phase bins are shown below the spike trains; they are labelled with the corresponding bin number. As an example, for the second neuron, the fractions f_e^2(b) of its phase bins that have been traced out since event e − 1 are indicated by the horizontal arrow. They are nonzero for b = 2, 3, and 4.
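The event construction of Figure 1 can be sketched directly: merge all spike trains into one event train, record which neuron fired each event, and track the phase interval each neuron traces out. The spike times below are illustrative:

```python
import numpy as np

# Sketch of the event/phase bookkeeping behind Figure 1.
spikes = [np.array([0.10, 0.35, 0.50]),   # neuron 1 (times assumed)
          np.array([0.20, 0.45])]         # neuron 2 (times assumed)

events = sorted((t, n) for n, train in enumerate(spikes) for t in train)
event_times = np.array([t for t, _ in events])
who_fired = [n for _, n in events]                    # nu_e: spiking neuron per event
iei = np.diff(event_times, prepend=event_times[0])    # Delta t_e (first entry 0)

phase = np.zeros(len(spikes))                         # time since each neuron's last spike
intervals = []                                        # (inf Phi_e^n, sup Phi_e^n] per event
for e, (t, n) in enumerate(events):
    dt = iei[e]
    intervals.append([(p, p + dt) for p in phase])
    phase += dt
    phase[n] = 0.0                                    # the spiking neuron's phase resets
```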
3  Parameter estimation
Our goal is to find model parameters θ = {r_i, g_ij, ℓ_i^n(b)} such that the likelihood Pr{O | θ} of observation sequence O is maximized. According to the EM algorithm, we can find such
values by iterating over models
θ_new = arg max_θ Σ_{S∈𝒮} Pr{S | O, θ_old} ln Pr{S, O | θ},    (5)
where 𝒮 is the set of all possible hidden state sequences. The product of equations 2 and 4 over all events is proportional to the combined likelihood Pr{S, O | θ}:

Pr{S, O | θ} ∝ Π_e a_{S_{e−1} S_e}(Δt_e) p_{S_e}(O_e).
Because of the logarithm in equation 5, the maximization over escape rates can be separated from the maximization over conditional intensity functions. We define the abbreviations ξ_ij(e) = Pr{S_{e−1} = i, S_e = j | O, θ_old} and γ_i(e) = Pr{S_e = i | O, θ_old} for the posterior probabilities appearing in equation 5. In practice, both expressions are computed in the expectation step by the classic forward-backward algorithm [8], using equations 2 and 4 as the transition probabilities. With the abbreviations defined above, equation 5 is split into
r_j^new = arg max_r { Σ_e ξ_jj(e) (−r Δt_e) + Σ_{e, i≠j} ξ_ij(e) ln[1 − exp(−r Δt_e)] },    (6)

ℓ_i^n(b)^new = arg max_ℓ { −ℓ Σ_e γ_i(e) f_e^n(b) ‖B^n(b)‖ + ln(ℓ) Σ_{e: ν_e = n ∧ sup Φ_e^n ∈ B^n(b)} γ_i(e) },    (7)

g_ij^new = arg max_g Σ_e ξ_ij(e) ln g_ij,   with g_ii^new = 0 and Σ_j g_ij^new = 1.    (8)
In order to perform the maximization in equation 6, we compute its derivative with respect to r and set it to zero:

0 = Σ_e ( −ξ_jj(e) Δt_e + Δt_e · [exp(−r_j^new Δt_e) / (1 − exp(−r_j^new Δt_e))] · Σ_{i≠j} ξ_ij(e) ).
This equation cannot be solved analytically, but being just a one dimensional optimization problem, a solution can be found using numerical methods, such as the LevenbergMarquardt algorithm. The singularity in case of ?te = 0, which arises when two or more
spikes occur at the same time, needs the special treatment of replacing the respective fraction by its limit: 1/rinew .
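Because the stationarity condition above is transcendental in r, the new escape rate has to be found numerically. A bisection sketch (the posterior weights in the example are synthetic placeholders, not values from the paper's data):

```python
import math

def solve_escape_rate(dts, g_stay, g_leave, lo=1e-8, hi=1e3, iters=200):
    """Bisection for the root of the derivative of equation 6.
    dts[e]     : inter-event interval Delta t_e
    g_stay[e]  : gamma_jj(e), posterior weight of staying in state j
    g_leave[e] : sum over i != j of gamma_ij(e), weight of leaving j"""
    def F(r):
        total = 0.0
        for dt, gs, gl in zip(dts, g_stay, g_leave):
            if dt == 0.0:
                frac = 1.0 / r      # limiting value of the singular fraction
            else:
                frac = dt * math.exp(-r * dt) / (1.0 - math.exp(-r * dt))
            total += -gs * dt + gl * frac
        return total
    # F is decreasing in r: positive near 0, negative for large r
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```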
To obtain the reestimation formula for the discretized CIFs, equation 7's derivative with
respect to ℓ is set to zero. The result can be solved directly and yields

  ℓ_i^n(b)^new = [ Σ_{e: ν_e=n ∧ sup φ_e^n ∈ B^n(b)} γ_i(e) ] / [ Σ_e γ_i(e) f_e^n(b) ‖B^n(b)‖ ].

Finally, to obtain the reestimation formula for the conditional hidden state transition probabilities g_ij, we solve equation 8 using Lagrange multipliers, resulting in

  g_ij^new = Σ_e γ_ij(e) / Σ_{e, k≠i} γ_ik(e)   for i ≠ j.
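The transition reestimate is a plain ratio of expected counts. A sketch, assuming the pairwise posteriors γ_ij(e) are stacked into an array xi[e, i, j]:

```python
import numpy as np

def reestimate_transitions(xi):
    """g_ij^new = sum_e gamma_ij(e) / sum_{e, k != i} gamma_ik(e),
    with the diagonal clamped to zero (no self-transitions).
    xi has shape (E, S, S)."""
    counts = xi.sum(axis=0)             # expected transition counts
    np.fill_diagonal(counts, 0.0)       # enforce g_ii = 0
    return counts / counts.sum(axis=1, keepdims=True)
```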
4
Application to spike trains from the sleeping songbird
We have applied our model to spike train data from sleeping songbirds [9]. It has been
found that during sleep, neurons in vocal premotor area RA exhibit spontaneous activity
that at times resembles premotor activity during singing [10, 9].
We train our model on the spike train of a single RA neuron in the sleeping bird with
Nbins = 100, where the first bin extends from the sample time to 1ms and the consecutive
99 steps are logarithmically spaced up to the largest ISI. After convergence, we find that
the ISI pdfs associated with the two hidden states qualitatively agree with the pdfs recorded
in the awake non-singing bird and the awake singing bird, respectively, Figure 2. ISI pdfs
were derived from the CIFs by using equation 1. For the state-conditional ISI histograms
we first ran the Viterbi algorithm to find the most likely hidden-state sequence and then
sorted spikes into two groups, for which the ISIs histograms were computed.
We find that sleep-related activity in the RA neuron of Figure 2 is best described by random
switching between a singing-like state of lifetime τ1 = 1.18 s ± 0.38 s and an awake, non-singing-like state of lifetime τ2 = 2.26 s ± 0.42 s. Standard deviations of lifetime estimates
were computed by dividing the spike train into 30 data windows of 10 s duration each
and computing the Jackknife variance [11] on the truncated spike trains. The difference
between the singing-like state in our model and the true singing ISI pdf shown in Figure 2
is more likely due to generally reduced burst rates during sleep, rather than to a particularity
of the examined neuron.
Next we applied our model to simultaneous recordings from pairs of RA neurons. By fitting
two separate models (with identical phase binning) to the two spike trains, and after running
the Viterbi algorithm to find the most likely hidden state sequences, we find good agreement
between the two sequences, Figure 3 (top row) and 4c. The correspondence of hidden state
sequences suggests a common network mechanism for the generation of the singing-like
states in both neurons. We thus applied a single model to both spike trains and found
again good agreement with hidden-state sequences determined for the separate models,
Figure 3 (bottom row) and 4f. The lifetime histograms for both states look approximatively
exponential, justifying our assumption for the state dynamics, Figure 4g and h.
For the model trained on neuron one we find lifetimes τ1 = 0.63 s ± 0.37 s and τ2 =
1.71 s ± 0.45 s, and for the model trained on neuron two we find τ1 = 0.42 s ± 0.11 s
and τ2 = 1.23 s ± 0.17 s. For the combined model, lifetimes are τ1 = 0.58 s ± 0.25 s and
Figure 2: (a): The two state-conditional ISI histograms of an RA neuron during sleep are
shown by the red and green curves, respectively. Gray patches represent Jackknife standard
deviations. (b): After waking up the bird by pinching his tail, the new ISI histogram shown
by the gray area becomes almost indistinguishable from the ISI histogram of state 1 (green
line). (c): In comparison to the average ISI histogram of many RA neurons during singing
(shown by the gray area, reproduced from [12]), the ISI histogram corresponding to state 2
(red line) is shifted to the right, but looks otherwise qualitatively similar.
τ2 = 1.13 s ± 0.15 s. Thus, hidden-state switching seems to occur more frequently in the
combined model. The reason for this increase might be that evidence for the song-like
state appears more frequently with two neurons, as a single neuron might not be able to
indicate song-like firing statistics with high temporal fidelity.
We have also analysed the correlations between state dynamics in the different models. The
hidden state function S(t) is a binary function that equals one when in hidden state 1 and
zero when in state 2. For the case where we modelled the two spike trains separately, we
have two such hidden state functions, S1(t) for neuron one and S2(t) for neuron two. We
find that all correlation functions C_{S S1}(t), C_{S S2}(t), and C_{S1 S2}(t) have a peak at zero time
lag, with a high peak correlation of about 0.7, Figure 4c and f (the correlation function is
defined as the cross-covariance function divided by the autocovariance functions).
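The correlation measure just described, cross-covariance normalized by the autocovariances, can be computed directly from two state functions sampled on a common time grid. An illustrative sketch:

```python
import numpy as np

def state_correlation(s1, s2, max_lag):
    """Cross-covariance of two binary state functions divided by the
    geometric mean of their zero-lag autocovariances, for lags
    -max_lag .. max_lag (in samples)."""
    s1 = np.asarray(s1, float) - np.mean(s1)
    s2 = np.asarray(s2, float) - np.mean(s2)
    norm = np.sqrt(np.dot(s1, s1) * np.dot(s2, s2))
    out = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(s1[lag:], s2[:len(s2) - lag])
        else:
            c = np.dot(s1[:lag], s2[-lag:])
        out.append(c / norm)
    return np.array(out)
```

For identical inputs this peaks at 1 at zero lag, matching the way the peak correlations of about 0.7 are reported above.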
We tested whether our model is a good generative model for the observed spike trains by
applying the time rescaling theorem, after which the ISIs of a good generative model with
known CIFs should reduce to a Poisson process with unit rate, which, after another transformation, should lead to a uniform probability density in the interval (0, 1) [4]. Performing
this test, we found that the transformed ISI densities of the combined model are uniform,
thus validating our model (95% Kolmogorov-Smirnov test, Figure 4i).
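The time-rescaling check can be sketched as follows for the special case of a constant rate: each ISI is mapped through 1 − exp(−rτ), and the resulting samples are compared with the uniform CDF via a Kolmogorov-Smirnov statistic. Function names are illustrative:

```python
import math

def rescaled_intervals(isis, rate):
    """Rescale ISIs of a constant-rate process; under a good generative
    model the result is uniform on (0, 1)."""
    return [1.0 - math.exp(-rate * t) for t in isis]

def ks_statistic(u):
    """Kolmogorov-Smirnov distance between samples u and Uniform(0, 1)."""
    u = sorted(u)
    n = len(u)
    return max(max((k + 1) / n - x, x - k / n) for k, x in enumerate(u))
```

In the paper's setting the constant rate is replaced by integrating the estimated CIFs over each ISI, but the uniformity test on the transformed intervals is the same.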
5
Discussion
We have presented a mixed-state Markov model for point processes, assuming generation
by random switching between renewal processes. Our algorithm is suited for systems in
which neurons make discrete state transitions simultaneously. Previous attempts of fitting
spike train data with Markov models exhibited weaknesses due to time binning. With large
time bins and the number of spikes per bin treated as observables [13, 14], state transitions
can only be detected when they are accompanied by firing rate changes. In our case, RA
neurons have a roughly constant firing rate throughout the entire recording, and so such
approaches fail.
We were able to model the hidden states in continuous time, but had to bin the ISIs in
order to deal with limited data. In principle, the algorithm can operate on any binning
scheme for the ISIs. Our choice of logarithmic bins keeps the number of parameters small
(proportional to Nbins ), but preserves a constant temporal resolution.
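The binning just described (a first bin from the sampling period up to 1 ms, then logarithmically spaced bins up to the largest observed ISI) takes only a few lines to construct; the sampling period and maximum ISI below are illustrative values:

```python
import numpy as np

def log_isi_bins(t_sample_ms, t_max_ms, n_bins=100):
    """Bin edges: [t_sample, 1 ms], then n_bins - 1 logarithmically
    spaced bins from 1 ms up to the largest observed ISI."""
    edges = np.logspace(0.0, np.log10(t_max_ms), n_bins)   # 1 ms .. t_max
    return np.concatenate(([t_sample_ms], edges))
```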
The hidden-state dynamics form Poisson processes characterized by a lifetime. By esti-
Figure 3: Shown are the instantaneous firing rate (IFR) functions of two simultaneously
recorded RA neurons (at any time, the IFR corresponds to the inverse of the current ISI).
The green areas show the times when in the first (awake-like) hidden state, and the red
areas when in the song-like hidden state. The top two rows show the result of computing
two independent models on the two neurons, whereas the bottom rows show the result of a
single model.
Figure 4: (a) and (b): State-conditional ISI pdfs for each of the two neurons. (d) and (e): ISI
histograms (blue and yellow) for neurons 1 and 2, respectively, as well as state-conditional
ISI histograms (red and green), computed as in Figure 2a. (g) and (h): State lifetime
histograms for the song-like state (red) and for the awake-like state (green). Theoretical
(exponential) histograms with escape rates r1 and r2 (fine black lines) show good agreement
with the measured histograms, especially in F. (c): Correlation between state functions of
the two separate models. (f): Correlation between the state functions of the combined
model with separate model 1 (blue) and separate model 2 (yellow). (i): Kolmogorov-Smirnov plot after time rescaling. After transforming the ISIs, the resulting densities for
both neurons remain within the 95% confidence bounds of the uniform density (gray area).
In (a)-(c) and (f)-(h), Jackknife standard deviations are shown by the gray areas.
mating this lifetime, we hope it might be possible to form a link between the hidden states
and the underlying physical process that governs the dynamics of switching. Despite the
apparent limitation of Poisson statistics, it is a simple matter to generalize our model to
hidden state distributions with long tails (e.g., power-law lifetime distributions): By cascading many hidden states into a chain (with fixed CIFs), a power-law distribution can be
approximated by the combination of multiple exponentials with different lifetimes. Our
code is available at http://www.ini.unizh.ch/~rich/software/.
Acknowledgements
We would like to thank Sam Roweis for advice on Hidden Markov models and Maria
Minkoff for help with the manuscript. R. H. is supported by the Swiss National Science
Foundation. M. D. is supported by Stiftung der Deutschen Wirtschaft.
References
[1] Z. Nádasdy, H. Hirase, A. Czurkó, J. Csicsvári, and G. Buzsáki. Replay and time compression of recurring spike sequences in the hippocampus. J Neurosci, 19(21):9497-9507, Nov 1999.
[2] K. G. Thompson, D. P. Hanes, N. P. Bichot, and J. D. Schall. Perceptual and motor processing stages identified in the activity of macaque frontal eye field neurons during visual search. J Neurophysiol, 76(6):4040-4055, Dec 1996.
[3] R. Cossart, D. Aronov, and R. Yuste. Attractor dynamics of network UP states in the neocortex. Nature, 423(6937):283-288, May 2003.
[4] E. N. Brown, R. Barbieri, V. Ventura, R. E. Kass, and L. M. Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neur Comp, 14(2):325-346, Feb 2002.
[5] J. Deppisch, K. Pawelzik, and T. Geisel. Uncovering the synchronization dynamics from correlated neuronal activity quantifies assembly formation. Biol Cybern, 71(5):387-399, 1994.
[6] A. M. Fraser and A. Dimitriadis. Forecasting probability densities by using hidden Markov models with mixed states. In Weigend and Gershenfeld, editors, Time Series Prediction: Forecasting the Future and Understanding the Past, pages 265-282. Addison-Wesley, 1994.
[7] A. Berchtold. The double chain Markov model. Comm Stat Theor Meths, 28:2569-2589, 1999.
[8] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE, 77(2):257-286, Feb 1989.
[9] R. H. R. Hahnloser, A. A. Kozhevnikov, and M. S. Fee. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902):65-70, Sep 2002.
[10] A. S. Dave and D. Margoliash. Song replay during sleep and computational rules for sensorimotor vocal learning. Science, 290(5492):812-816, Oct 2000.
[11] D. J. Thomson and A. D. Chave. Jackknifed error estimates for spectra, coherences, and transfer functions. In Simon Haykin, editor, Advances in Spectrum Analysis and Array Processing, volume 1, chapter 2, pages 58-113. Prentice Hall, 1991.
[12] A. Leonardo and M. S. Fee. Ensemble coding of vocal control in birdsong. J Neurosci, 25(3):652-661, Jan 2005.
[13] G. Radons, J. D. Becker, B. Dülfer, and J. Krüger. Analysis, classification, and coding of multielectrode spike trains with hidden Markov models. Biol Cybern, 71(4):359-373, 1994.
[14] I. Gat, N. Tishby, and M. Abeles. Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys. Network: Computation in Neural Systems, 8(3):297-322, 1997.
Context as Filtering
Daichi Mochihashi
ATR, Spoken Language Communication
Research Laboratories
Hikaridai 2-2-2, Keihanna Science City
Kyoto, Japan
[email protected]
Yuji Matsumoto
Graduate School of Information Science
Nara Institute of Science and Technology
Takayama 8916-5, Ikoma City
Nara, Japan
[email protected]
Abstract
Long-distance language modeling is important not only in speech recognition and machine translation, but also in high-dimensional discrete sequence modeling in general. However, the problem of context length has
almost been neglected so far and a naïve bag-of-words history has been
employed in natural language processing. In contrast, in this paper we
view topic shifts within a text as a latent stochastic process to give an explicit probabilistic generative model that has partial exchangeability. We
propose an online inference algorithm using particle filters to recognize
topic shifts to employ the most appropriate length of context automatically. Experiments on the BNC corpus showed consistent improvement
over previous methods involving no chronological order.
1
Introduction
Contextual effect plays an essential role in the linguistic behavior of humans. We infer
the context in which we are involved to make an adaptive linguistic response by selecting
an appropriate model from that information. In natural language processing research, such
models are called long-distance language models that incorporate distant effects of previous
words over the short-term dependencies between a few words, which are called n-gram
models. Besides apparent application in speech recognition and machine translation, we
note that many problems of discrete data processing reduce to language modeling, such as
information retrieval [1], Web navigation [2], human-machine interaction or collaborative
filtering and recommendation [3].
From the viewpoint of signal processing or control theory, context modeling is clearly a
filtering problem that estimates the states of a system sequentially along time to predict the
outputs according to them. However, for the problem of long-distance language modeling,
natural language processing has so far only provided simple averaging using a set of whole
words from the beginning of a text, totally dropping chronological order and implicitly
assuming that the text comes from a stationary information source [4, 5].
The inherent difficulties that have prevented filtering approaches to language modeling are
its discreteness and high dimensionality, which preclude Kalman filters and their extensions that are all designed for vector spaces and distributions like Gaussians. As we note in
the following, ordinary discrete HMMs are not powerful enough for this purpose because
their true state is restricted to a single hidden component [6].
In contrast, this paper proposes to solve the high-dimensional discrete filtering problem directly using a Particle Filter. By combining a multinomial Particle Filter recently proposed
in statistics for DNA sequence modeling [7] with Bayesian text models LDA and DM, we
introduce two models that can track multinomial stochastic processes of natural language
or similar high-dimensional discrete data domains that we often encounter.
2
2.1
Mean Shift Model of Context
HMM for Multinomial Distributions
The long-distance language models mentioned in Section 1 assume a hidden multinomial
distribution, such as a unigram distribution or a mixture distribution over the latent topics,
to predict the next word by updating its estimate according to the observations. Therefore,
to track context shifts, we need a model that describes changes of multinomial distributions.
One model for this purpose is a multinomial extension to the Mean shift model (MSM)
recently proposed in the field of statistics [7]. This is a kind of HMM, but note that it is
different from traditional discrete HMMs. In discrete HMMs, the true state is one of M
components and we estimate it stochastically as a multinomial over the M components.
On the other hand, since the true state here is itself a multinomial over the components, we
estimate it stochastically as (possibly a mixture of) a Dirichlet distribution, a distribution
of multinomial distributions on the (M − 1)-simplex. This HMM has some similarity to
the Factorial HMM [6] in that it has a combinatorial representational power through a
distributed state representation. However, because the true state here is a multinomial over
the latent variables, there are dependencies between the states that are assumed independent
in the FHMM. Below, we briefly introduce a multinomial Mean shift model following [7]
and an associated solution using a Particle Filter.
2.2
Multinomial Mean Shift Model
The MSM is a generative model that describes the intermittent changes of hidden states
and outputs according to them. Although there is a corresponding counterpart using Normal
distribution that was first introduced [8, 9], here we concentrate on a multinomial extension
of MSM, following [7] for DNA sequence modeling.
In a multinomial MSM, we assume time-dependent true multinomials θ_t that may change
occasionally and the following generative model for the discrete outputs y^t = y_1 y_2 … y_t
(y_t ∈ Σ; Σ is a set of symbols) according to θ_1 θ_2 … θ_t:

  θ_t ∼ Dir(α)    with probability ρ
  θ_t = θ_{t−1}   with probability (1 − ρ),
  y_t ∼ Mult(θ_t)   (1)

where Dir(α) and Mult(θ) are a Dirichlet and a multinomial distribution with parameters
α and θ, respectively. Here we assume that the hyperparameter α is known and fixed, an
assumption we will relax in Section 3.
This model first draws a multinomial θ from Dir(α) and samples output y according to θ
for a certain interval. When a change point occurs with probability ρ, a new θ is sampled
again from Dir(α) and subsequent y is sampled from the new θ. This process continues
recursively, throughout which neither θ_t nor the change points are known to us; all we know
is the output sequence y^t.
However, if we know that the change has occurred at time c, y can be predicted exactly.
Let I_t be a binary variable that represents whether a change occurred at time t: that is,
I_t = 1 means there was a change at t (θ_t ≠ θ_{t−1}), and I_t = 0 means there was no change
(θ_t = θ_{t−1}). When the last change occurred at time c,
1. For particles i = 1 … N,
   (a) Calculate f(t) and g(t) according to (6).
   (b) Sample I_t^(i) ∼ Bernoulli(f(t)/(f(t) + g(t))), and update I^{t−1,(i)} to I^{t,(i)}.
   (c) Update weight w_t^(i) = w_{t−1}^(i) · (f(t) + g(t)).
2. Find a predictive distribution using w_t^(1) … w_t^(N) and I^{t,(1)} … I^{t,(N)}:

     p(y_{t+1} | y^t) = Σ_{i=1}^N w_t^(i) p(y_{t+1} | y^t, I^{t,(i)}),   (4)

   where p(y_{t+1} | y^t, I^{t,(i)}) is given by (3).

Figure 1: Algorithm of the Multinomial Particle Filter.
  p(y_{t+1} = y | y^t, I_c = 1, I_{c+1} = ⋯ = I_t = 0) = ∫ p(y|θ) p(θ | y_c ⋯ y_t) dθ   (2)

  = (α_y + n_y) / Σ_y (α_y + n_y),   (3)

where α_y is the y-th element of α and n_y is the number of occurrences of y in y_c ⋯ y_t.
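Equation (3) is simply additive smoothing of the counts accumulated since the last change point. For instance:

```python
def predictive(alpha, counts):
    """p(y_{t+1} = y | last change at c), equation (3):
    (alpha_y + n_y) / sum_y (alpha_y + n_y)."""
    tot = sum(a + n for a, n in zip(alpha, counts))
    return [(a + n) / tot for a, n in zip(alpha, counts)]
```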
Therefore, the essence of this problem lies in how to detect a change point given the data
up to time t, a change point problem in discrete space. Actually, this problem can be solved
by an efficient Particle Filter algorithm [10] shown below.
2.3
Multinomial Particle Filter
Figure 2: Multinomial Particle Filter in work. [Diagram: particles #1 … #N; sampled change points segment each particle's history into pseudo-documents d_1, d_2, …, d_c; per-particle weights combine with the prior to predict y_{t+1}.]

The prediction problem above can be solved by the efficient Particle Filter algorithm shown in Figure 1, graphically displayed in Figure 2 (excluding prior updates). The main intricacy involved is as follows. Let us denote I^t = {I_1 … I_t}. By Bayes' theorem,
  p(I_t | I^{t−1}, y^t) ∝ p(I_t, y_t | I^{t−1}, y^{t−1}) = p(y_t | y^{t−1}, I^{t−1}, I_t) p(I_t | I^{t−1})   (5)

  = { p(y_t | y^{t−1}, I^{t−1}, I_t = 1) p(I_t = 1 | I^{t−1}) =: f(t)
    { p(y_t | y^{t−1}, I^{t−1}, I_t = 0) p(I_t = 0 | I^{t−1}) =: g(t),   (6)

leading to

  p(I_t = 1 | I^{t−1}, y^t) = f(t)/(f(t) + g(t)),
  p(I_t = 0 | I^{t−1}, y^t) = g(t)/(f(t) + g(t)).   (7)
In Expression (5), the first term is the likelihood of observation y_t when I_t has been fixed,
which can be obtained through (3). The second term is a prior probability of change, which
can be set tentatively by a constant ρ. However, when we endow ρ with a prior Beta
distribution Be(a, b), a posterior estimate of ρ_t given the binary change point history I^{t−1}
can be obtained using the number of 1's in I^{t−1}, n_{t−1}(1), following a standard Bayesian
method:

  E[ρ_t | I^{t−1}] = (a + n_{t−1}(1)) / (a + b + t − 1).   (8)

This means that we can estimate a "rate of topic shifts" as time proceeds in a Bayesian
fashion. Throughout the following experiments, we used this online estimate of ρ_t.
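The online estimate of equation (8) is a one-line Beta-Bernoulli posterior mean. A sketch, where a and b are the Beta prior parameters (the default values are illustrative):

```python
def change_rate_estimate(change_history, a=1.0, b=1.0):
    """Posterior mean of the change probability under a Beta(a, b) prior,
    equation (8): E[rho_t | I^{t-1}] = (a + n_{t-1}(1)) / (a + b + t - 1),
    where change_history is the list I_1 .. I_{t-1} of 0/1 indicators."""
    n1 = sum(change_history)
    t = len(change_history) + 1
    return (a + n1) / (a + b + t - 1)
```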
The above algorithm runs for each observation y_t (t = 1 … T). If we observe a "strange"
word that is more predictable from the prior than from the contextual distribution, (6) makes
f(t) larger than g(t), which leads to a higher probability that I_t = 1 will be sampled in the
Bernoulli trial of Algorithm 1(b).
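Putting Figure 1 together with equations (3)-(7), a minimal multinomial Particle Filter might look like the sketch below. It fixes the change probability ρ, uses a symmetric Dirichlet prior, and omits resampling and the online ρ update of equation (8); it is an illustration, not the authors' implementation:

```python
import numpy as np

class MultinomialParticleFilter:
    def __init__(self, n_particles, vocab_size, alpha=0.5, rho=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.alpha = np.full(vocab_size, alpha)   # Dirichlet prior
        self.rho = rho                            # change probability
        # per-particle counts since the last sampled change point
        self.counts = np.zeros((n_particles, vocab_size))
        self.w = np.full(n_particles, 1.0 / n_particles)

    def _pred(self, counts):
        """Dirichlet-multinomial predictive, equation (3)."""
        p = self.alpha + counts
        return p / p.sum(axis=-1, keepdims=True)

    def observe(self, y):
        """One step of Figure 1 for observation y (a word index)."""
        # f(t): change at t, so predict from the prior alone
        f = self.rho * self.alpha[y] / self.alpha.sum()
        # g(t): no change, so predict from counts since the last change
        g = (1.0 - self.rho) * self._pred(self.counts)[:, y]
        change = self.rng.random(len(self.w)) < f / (f + g)
        self.counts[change] = 0.0                 # a new segment starts
        self.counts[:, y] += 1.0
        self.w *= f + g                           # weight update, step (c)
        self.w /= self.w.sum()

    def predict(self):
        """Equation (4): weighted mixture of particle predictives."""
        return self.w @ self._pred(self.counts)
```

With MSM-LDA or MSM-DM introduced below, _pred would be replaced by the corresponding model's predictive distribution.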
3
Mean Shift Model of Natural Language
Chen and Lai [7] recently proposed the above algorithm to analyze DNA sequences. However, when extending this approach to natural language, i.e. word sequences, we meet two
serious problems.
The first problem is that in a natural language the number of words is extremely large.
As opposed to DNA, which has only four letters of A/T/G/C, a natural language usually
contains a minimum of some tens of thousands of words and there are strong correlations
between them. For example, if "nurse" follows "hospital," we believe that there has been
no context shift; however, if "university" follows "hospital," the context probably has been
shifted to a "medical school" subtopic, even though the two words are equally distinct from
"hospital." Of course, this is due to the semantic relationship we can assume between these
it treats the words independently. To incorporate this relationship, we require an extensive
prior knowledge of words as a probabilistic model.
The second problem is that in model equation (1), the hyperparameter α of the prior Dirichlet
distribution of the latent multinomials is assumed to be known. In the case of natural
language, this means we know beforehand what words or topics will be spoken for all the
texts. Apparently, this is not a natural assumption: we need an online estimation of α as
well when we want to extend MSM to natural languages.
To solve these problems, we extended a multinomial MSM using two probabilistic text
models, LDA and DM. Below we introduce MSM-LDA and MSM-DM, in this order.
3.1
MSM-LDA
Latent Dirichlet Allocation (LDA) [3] is a probabilistic text model that assumes a hidden
multinomial topic distribution θ over the M topics on a document d to estimate it stochastically as a Dirichlet distribution p(θ|d). Context modeling using LDA [5] regards a history
h = w_1 … w_h as a pseudo-document and estimates a variational approximation q(θ|h) of
a topic distribution p(θ|h) through a variational Bayes EM algorithm on a document [3].
After obtaining topic distribution q(θ|h), we can predict the next word as follows.
  p(y|h) = ∫ p(y|θ) q(θ|h) dθ = Σ_{i=1}^M p(y|β_i) ⟨θ_i⟩_{q(θ|h)}   (9)
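Equation (9) is a dot product between the topic-word probabilities and the posterior mean of the topic mixture. A sketch, where gamma denotes the variational Dirichlet parameters of q(θ|h) (the name is an assumption):

```python
import numpy as np

def dirichlet_mean(gamma):
    """Posterior mean of theta under Dir(gamma)."""
    g = np.asarray(gamma, float)
    return g / g.sum()

def lda_predict(beta, gamma):
    """Equation (9): p(y|h) = sum_i p(y|beta_i) <theta_i>_{q(theta|h)},
    with q(theta|h) = Dir(gamma); beta has shape (M topics, V words)."""
    return dirichlet_mean(gamma) @ np.asarray(beta, float)
```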
When we use this prediction with an associated VB-EM algorithm in place of the naïve
Dirichlet model (3) of MSM, we get an MSM-LDA that tracks a latent topic distribution θ
instead of a word distribution. Since each particle computes a Dirichlet posterior of the topic
distribution, the final topic distribution of MSM-LDA is a mixture of Dirichlet distributions
for predicting the next word through (4) and (9), as shown in Figure 3(a). Note that MSM-LDA has an implicit generative model corresponding to (1) in topic space. However, here
we use a conditional model where the LDA parameters are already known in order to estimate
the context online.
In MSM-LDA, we can also update the hyperparameter α sequentially from the history. As
seen in Figure 2, each particle has a history that has been segmented into pseudo-"documents" d_1 … d_c by the change points sampled so far. Since each pseudo-"document" has
a Dirichlet posterior q(θ|d_i) (i = 1 … c), a common Dirichlet prior can be inferred by
a linear-time Newton-Raphson algorithm [3]. Note that this computation only needs to be
run when a change point has been sampled. For this purpose, only the sufficient statistics
q(θ|d_i) must be stored for each particle to render itself an online algorithm.
Note in passing that MSM-LDA is a model that only tracks a mixing distribution of a mixture model. Therefore, in principle this model is also applicable to other mixture models,
e.g. Gaussian mixtures, where mixing distribution is not static but evolves according to (1).
Figure 3: MSM-LDA and MSM-DM in work.
However, in terms of multinomial estimation, this generality has a drawback because it
uses a lower-dimensional topic representation to predict the next word, which may cause
a loss of information. In contrast, MSM-DM is a model that works directly on the word
space to predict the next word with no loss of information.
3.2 MSM-DM
Dirichlet Mixtures (DM) [11] is a novel Bayesian text model that has the lowest perplexity
reported so far in context modeling. DM uses no intermediate "topic" variables, but places
a mixture of Dirichlet distributions directly on the word simplex to model word correlations.
Specifically, DM assumes the following generative model for a document w = w1 . . . wN:1
1. Draw m ∼ Mult(λ).
2. Draw p ∼ Dir(α_m).
3. For n = 1 . . . N,
   a. Draw w_n ∼ Mult(p).
Figure 4: Graphical models of UM and DM.
where p is a V-dimensional unigram distribution over words, α_1 . . . α_M = α_1^M are parameters of the Dirichlet prior distributions of p, and λ is an M-dimensional prior mixing
distribution over them. This model is considered a Bayesian extension of the Unigram
Mixture [12] and has the graphical model shown in Figure 4. Given a set of documents
D = {w_1, w_2, . . . , w_D}, the parameters λ and α_1^M can be iteratively estimated by a combination of the EM algorithm and the modified Newton-Raphson method shown in Figure 5,
which is a straightforward extension of the estimation of a Polya mixture [13].2
Under DM, the predictive probability p(y|h) is (omitting dependencies on λ and α_1^M):

    p(y|h) = Σ_{m=1}^M p(y|m, h) p(m|h) = Σ_{m=1}^M [ ∫ p(y|p) p(p|α_m, h) dp ] p(m|h)
           = Σ_{m=1}^M C_m (α_{my} + n_y) / Σ_{y'} (α_{my'} + n_{y'}),        (10)
1 Step 1 of the generative model can in fact be replaced by a Dirichlet process prior. A full Bayesian
treatment of DM through Dirichlet processes is currently under development.
2 DM is an extension of the model for amino acids [14] to natural language with a huge number of
parameters, which precludes the ordinary Newton-Raphson algorithm originally proposed in [14].
E step:
    p(m|w_i) ∝ λ_m · [Γ(Σ_v α_mv) / Γ(Σ_v α_mv + Σ_v n_iv)] · Π_{v=1}^V Γ(α_mv + n_iv) / Γ(α_mv)      (13)

M step:
    λ_m ∝ Σ_{i=1}^D p(m|w_i)      (14)
    α'_mv = α_mv · [Σ_i p(m|w_i) · n_iv / (α_mv + n_iv − 1)] / [Σ_i p(m|w_i) · Σ_v n_iv / (Σ_v α_mv + Σ_v n_iv − 1)]      (15)

Figure 5: EM-Newton algorithm of Dirichlet Mixtures.
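The EM iteration of Figure 5 can be sketched directly from eqs. (13)-(15). This is an illustrative pure-Python transcription (the function name `dm_em_step` is ours, not the authors' implementation); the E step is evaluated in log space with `math.lgamma` for numerical stability.

```python
import math

def dm_em_step(docs, lam, alpha):
    """One EM iteration for Dirichlet Mixtures, following eqs. (13)-(15).
    docs  : list of count vectors n_i (length V)
    lam   : mixing weights lambda_m (length M)
    alpha : list of M Dirichlet parameter vectors (length V)"""
    M, V = len(lam), len(alpha[0])
    # E step (13): responsibilities via the log Dirichlet-multinomial
    resp = []
    for n in docs:
        total = sum(n)
        logp = []
        for m in range(M):
            a = alpha[m]
            s = sum(a)
            lp = math.log(lam[m]) + math.lgamma(s) - math.lgamma(s + total)
            lp += sum(math.lgamma(a[v] + n[v]) - math.lgamma(a[v])
                      for v in range(V))
            logp.append(lp)
        mx = max(logp)
        w = [math.exp(l - mx) for l in logp]
        z = sum(w)
        resp.append([x / z for x in w])
    # M step (14): lambda_m proportional to sum_i p(m|w_i)
    lam_new = [sum(r[m] for r in resp) / len(docs) for m in range(M)]
    # M step (15): leave-one-out fixed-point update for alpha_mv
    alpha_new = []
    for m in range(M):
        a = alpha[m]
        s = sum(a)
        den = sum(r[m] * sum(n) / (s + sum(n) - 1.0)
                  for r, n in zip(resp, docs))
        row = []
        for v in range(V):
            num = sum(r[m] * n[v] / (a[v] + n[v] - 1.0)
                      for r, n in zip(resp, docs) if n[v] > 0)
            row.append(a[v] * num / den if den > 0.0 else a[v])
        alpha_new.append(row)
    return lam_new, alpha_new, resp
```

On a toy corpus with two clusters of documents, the responsibilities separate the clusters after a single iteration, which is the behavior the fixed-point update relies on.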
where

    C_m ∝ λ_m · [Γ(Σ_v α_mv) / Γ(Σ_v α_mv + h)] · Π_{v=1}^V Γ(α_mv + n_v) / Γ(α_mv)      (11)

and n_v is the number of occurrences of v in the history h (whose length appears as h in the Gamma function). This prediction can also be considered an
extension of Dirichlet smoothing [15] with multiple hyperparameters α_m, weighted
accordingly by C_m.3
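The prediction (10)-(11) is cheap to compute once the history counts n_v are available. A hedged sketch (the helper name `dm_predict` is ours; responsibilities are computed in log space):

```python
import math

def dm_predict(alpha, lam, counts):
    """Predictive distribution p(y|h) of Dirichlet Mixtures,
    eqs. (10)-(11).
    alpha : list of M Dirichlet parameter vectors over the vocabulary
    lam   : prior mixing weights lambda_m
    counts: occurrence counts n_v of each word in the history h"""
    V, M = len(counts), len(lam)
    h = sum(counts)
    # eq. (11): mixture responsibilities C_m given the history
    logc = []
    for m in range(M):
        s = sum(alpha[m])
        lc = math.log(lam[m]) + math.lgamma(s) - math.lgamma(s + h)
        lc += sum(math.lgamma(alpha[m][v] + counts[v])
                  - math.lgamma(alpha[m][v]) for v in range(V))
        logc.append(lc)
    mx = max(logc)
    c = [math.exp(l - mx) for l in logc]
    z = sum(c)
    c = [x / z for x in c]
    # eq. (10): mixture of smoothed unigram predictions
    return [sum(c[m] * (alpha[m][y] + counts[y]) / (sum(alpha[m]) + h)
                for m in range(M)) for y in range(V)]
```

Because each component contributes a properly normalized smoothed unigram and the C_m sum to one, the returned vector is itself a distribution.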
When we replace the naïve Dirichlet model (3) by a DM prediction (10), we get a flexible
MSM-DM dynamic model that works directly on the word simplex. Since the original multinomial MSM places a Dirichlet prior in the model (1), MSM-DM is a natural
extension of MSM, placing a mixture of Dirichlet priors rather than a single Dirichlet
prior on the multinomial unigram distribution. Because each particle calculates a mixture of
Dirichlet posteriors for the current context, the final MSM-DM estimate is a mixture of
them, again a mixture of Dirichlet distributions as shown in Figure 3(b).
In this case, we can also update the mixture prior λ sequentially. Because each particle has
"pseudo documents" w_1 . . . w_c segmented by its individually sampled change points, the posterior λ_m can
be obtained similarly to (14),

    λ_m ∝ Σ_{i=1}^c p(m|w_i)      (12)

where p(m|w_i) is obtained from (13). Also in this case, only the sufficient statistics
p(m|w_i) (i = 1 . . . c) must be stored to make MSM-DM a filtering algorithm.
4 Experiments
We conducted experiments using a standard British National Corpus (BNC). We randomly
selected 100 files of BNC written texts as an evaluation set, and the remaining 2,943 files
as a training set for parameter estimation of LDA and DM in advance.
4.1 Training and evaluation data
Since LDA and DM did not converge on the long texts like BNC, we divided training texts
into pseudo documents with a minimum of ten sentences for parameter estimation. Due
to the huge size of BNC, we randomly selected a maximum of 20 pseudo documents from
each of the 2,943 files to produce a final corpus of 56,939 pseudo documents comprising
11,032,233 words. We used a lexicon of 52,846 words with a frequency ≥ 5. Note that
this segmentation is optional and has only an indirect influence on the experiments. It only
affects the clustering of LDA and DM: in fact, we could use another corpus, e.g. newspaper
corpus, to estimate the parameters without any preprocessing.
Since the proposed method is an algorithm that simultaneously captures topic shifts and
their rate in a text to predict the next word, we need evaluation texts that have different
rates of topic shifts. For this purpose, we prepared four different text sets by sampling
3 Therefore, MSM-DM can be considered an ingenious dynamic Dirichlet smoothing as well as a context model.
Text     MSM-DM              DM         MSM-LDA    LDA
Raw      870.06 (−6.02%)     925.83     1028.04    1037.42
Slow     893.06 (−8.31%)     974.04     1047.08    1060.56
Fast     898.34 (−9.10%)     988.26     1044.56    1061.01
VFast    960.26 (−7.57%)    1038.89     1065.15    1050.83

Table 2: Contextual Unigram Perplexities for Evaluation Texts. (Percentages are reductions of MSM-DM relative to DM.)
from the long BNC texts. Specifically, we conducted sentence-based random sampling as
follows.
(1) Select a first sentence randomly for each text.
(2) Sample contiguous X sentences from that sentence.
(3) Skip Y sentences.
(4) Continue steps (2) and (3) until a desired length of text is obtained.
In the procedure above, X and Y are random variables that have uniform distributions
given in Table 1. We sampled 100 sentences from each of the 100 files by this procedure to
create the four evaluation text sets listed in the table.
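The sampling procedure (1)-(4) is straightforward to reproduce; a minimal sketch (the function name and the wrap-around behaviour when the end of a text is reached are our own assumptions, not specified in the paper):

```python
import random

def sample_eval_text(sentences, x_max, y_max, x_min=1, y_min=1,
                     length=100, seed=0):
    """Sentence-based random sampling, steps (1)-(4): pick a random
    start, take X contiguous sentences, skip Y, and repeat until
    `length` sentences have been collected."""
    rng = random.Random(seed)
    out = []
    i = rng.randrange(len(sentences))        # (1) random first sentence
    while len(out) < length:
        x = rng.randint(x_min, x_max)        # (2) X contiguous sentences
        out.extend(sentences[i:i + x])
        i += x + rng.randint(y_min, y_max)   # (3) skip Y sentences
        if i >= len(sentences):              # wrap around (our assumption)
            i = rng.randrange(len(sentences))
    return out[:length]                      # (4) stop at desired length
```

Varying (x_min, x_max, y_min, y_max) reproduces the Raw/Slow/Fast/VeryFast settings of Table 1.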
4.2 Parameter settings

Name       Property
Raw        X = 100, Y = 0
Slow       1 ≤ X ≤ 10, 1 ≤ Y ≤ 3
Fast       1 ≤ X ≤ 10, 1 ≤ Y ≤ 10
VeryFast   X = 1, 1 ≤ Y ≤ 10

Table 1: Types of Evaluation Texts.

The number of latent classes in LDA and DM are set to 200 and 50, respectively.4 The number of particles is set to N = 20, a relatively small number because each particle executes an exact Bayesian prediction once previous change points have been sampled. The Beta prior distribution of context change can be initialized as a uniform distribution, (α, β) = (1, 1). However, based on a preliminary experiment we set it to (α, β) = (1, 50): this means we initially assume a context change rate of once every 50 words on average, which will be updated adaptively.
4.3 Experimental results
Table 2 shows the unigram perplexity of contextual prediction for each type of evaluation
set. Perplexity is the reciprocal of the geometric average of the contextual predictions, thus
better predictions yield lower perplexity. While MSM-LDA slightly improves on LDA due
to the topic space compression explained in Section 3.1, MSM-DM yields a consistently
better prediction, and its advantage is larger for texts whose subtopics change
faster.
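For concreteness, the unigram perplexity used here is just the reciprocal geometric mean of the per-word predictive probabilities; a minimal sketch:

```python
import math

def perplexity(pred_probs):
    """Reciprocal geometric mean of the per-word predictive
    probabilities p(w_t | h_t); lower is better."""
    n = len(pred_probs)
    return math.exp(-sum(math.log(p) for p in pred_probs) / n)
```

For example, assigning every word a probability of 1/1000 gives a perplexity of exactly 1000, the "effective vocabulary size" interpretation.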
Figure 6 shows a plot of the actual improvements relative to DM, PPL_MSM − PPL_DM. We
can see that prediction improves for most documents by automatically selecting appropriate
contexts. The maximum improvement was −365 in PPL for one of the evaluation texts.
Finally, we show in Figure 7 a sequential plot of the context change probabilities p^(i)(I_t = 1)
(i = 1..N, t = 1..T ) calculated by each particle for the first 1,000 words of one of the
evaluation texts.
[Figure 6: Perplexity reductions of MSM relative to DM. Figure 7: Context change probabilities for 1,000 words text, sampled by the particles.]

5 Conclusion and Future Work

In this paper, we extended the multinomial Particle Filter from a small number of symbols to
natural language with an extremely large number of symbols. By combining the original filter
with the Bayesian text models LDA and DM, we obtain two models, MSM-LDA and MSM-DM,
that can incorporate semantic relationships between words and can update their hyperparameter sequentially. According to this model, prediction is made using a mixture of different
context lengths sampled by each Monte Carlo particle.
Although the proposed method is still in its fundamental stage, we are planning to extend
it to larger units of change points beyond words, and to use a forward-backward MCMC or
Expectation Propagation to model the semantic structure of text more precisely.

4 We deliberately chose a smaller number of mixtures in DM because it is reported to have a better
performance with small mixtures, since it is essentially a unitopic model, in contrast to LDA.
References
[1] Jay M. Ponte and W. Bruce Croft. A Language Modeling Approach to Information Retrieval. In Proc. of SIGIR '98, pages 275-281, 1998.
[2] David Cohn and Thomas Hofmann. The Missing Link: a probabilistic model of document content and hypertext connectivity. In NIPS 2001, 2001.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] Daniel Gildea and Thomas Hofmann. Topic-based Language Models Using EM. In Proc. of EUROSPEECH '99, pages 2167-2170, 1999.
[5] Takuya Mishina and Mikio Yamamoto. Context adaptation using variational Bayesian learning for ngram models based on probabilistic LSA. IEICE Trans. on Inf. and Sys., J87-D-II(7):1409-1417, 2004.
[6] Zoubin Ghahramani and Michael I. Jordan. Factorial Hidden Markov Models. In Advances in Neural Information Processing Systems (NIPS), volume 8, pages 472-478. MIT Press, 1995.
[7] Yuguo Chen and Tze Leung Lai. Sequential Monte Carlo Methods for Filtering and Smoothing in Hidden Markov Models. Discussion Paper 03-19, Institute of Statistics and Decision Sciences, Duke University, 2003.
[8] H. Chernoff and S. Zacks. Estimating the Current Mean of a Normal Distribution Which is Subject to Changes in Time. Annals of Mathematical Statistics, 35:999-1018, 1964.
[9] Yi-Chin Yao. Estimation of a noisy discrete-time step function: Bayes and empirical Bayes approaches. Annals of Statistics, 12:1434-1447, 1984.
[10] Arnaud Doucet, Nando de Freitas, and Neil Gordon. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer-Verlag, 2001.
[11] Mikio Yamamoto and Kugatsu Sadamitsu. Dirichlet Mixtures in Text Modeling. CS Technical Report CS-TR-05-1, University of Tsukuba, 2005. http://www.mibel.cs.tsukuba.ac.jp/~myama/pdf/dm.pdf.
[12] Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 39(2/3):103-134, 2000.
[13] Thomas P. Minka. Estimating a Dirichlet distribution, 2000. http://research.microsoft.com/~minka/papers/dirichlet/.
[14] K. Sjölander, K. Karplus, M.P. Brown, R. Hughey, R. Krogh, I.S. Mian, and D. Haussler. Dirichlet Mixtures: A Method for Improved Detection of Weak but Significant Protein Sequence Homology. Computing Applications in the Biosciences, 12(4):327-345, 1996.
[15] D. J. C. MacKay and L. Peto. A Hierarchical Dirichlet Language Model. Natural Language Engineering, 1(3):1-19, 1994.
single neurons via constrained linear regression
Misha B. Ahrens*, Quentin J.M. Huys*, Liam Paninski
Gatsby Computational Neuroscience Unit
University College London
{ahrens, qhuys, liam}@gatsby.ucl.ac.uk
Abstract
Our understanding of the input-output function of single cells has been
substantially advanced by biophysically accurate multi-compartmental
models. The large number of parameters needing hand tuning in these
models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell?s membrane; 2) the spatiotemporal
pattern of synaptic input; 3) the channels? reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage
signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description
of the channels and synapses present in each compartment, and c) the
morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this
regression, in turn, is efficiently solved using standard algorithms, without any ?local minima? problems despite the large number of parameters
and complex dynamics. The noise level 5) may also be estimated by
standard techniques. We demonstrate the method?s accuracy on several
model datasets, and describe techniques for quantifying the uncertainty
in our estimates.
1 Introduction
The usual tradeoff in parameter estimation for single neuron models is between realism and
tractability. Typically, the more biophysical accuracy one tries to inject into the model, the
harder the computational problem of fitting the model?s parameters becomes, as the number
of (nonlinearly interacting) parameters increases (sometimes even into the thousands, in the
case of complex multicompartmental models).
* These authors contributed equally. Support contributed by the Gatsby Charitable Foundation (LP,
MA), a Royal Society International Fellowship (LP), the BIBA consortium and the UCL School of
Medicine (QH). We are indebted to P. Dayan, M. Häusser, M. London, A. Roth, and S. Roweis for
helpful and interesting discussions, and to R. Wood for channel definitions.
Previous authors have noted the difficulties of this large-scale, simultaneous parameter estimation problem, which are due both to the highly nonlinear nature of the ?cost functions?
minimized (e.g., the percentage of correctly-predicted spike times [1]) and the abundance
of local minima on the very large-dimensional allowed parameter space [2, 3].
Here we present a method that is both computationally tractable and biophysically detailed.
Our goal is to simultaneously infer the following dendritic parameters: 1) the spatial distribution of channel densities on the cell?s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels? reversal potentials; 4) the intercompartmental conductances; and
5) the noise level in each compartment. Achieving this somewhat ambitious goal comes at
a price: our method assumes that the experimenter a) knows the geometry of the cell, b) has
a good understanding of the kinetics of the channels present in each compartment, and c)
most importantly, is able to observe the spatiotemporal voltage signal on the dendritic tree,
or at least a fraction thereof (e.g. by voltage-sensitive imaging methods; in electrotonically
compact cells, single electrode recordings can be used).
The key to the proposed method is to recognise that, when we condition on data a)-c),
the dynamics governing this observed spatiotemporal voltage signal become linear in the
parameters we are seeking to estimate (even though the system itself may behave highly
nonlinearly), so that the parameter estimation can be recast into a simple constrained linear regression problem (see also [4, 5]). This implies, somewhat counterintuitively, that
optimizing the likelihood of the parameters in this setting is a convex problem, with no
non-global local extrema. Moreover, linearly constrained quadratic optimization is an extremely well-studied problem, with many efficient algorithms available. We give examples
of the resulting methods successfully applied to several types of model data below. In addition, we discuss methods for incorporating prior knowledge and analyzing uncertainty
in our estimates, again basing our techniques on the well-founded probabilistic regression
framework.
2 Methods
Biophysically accurate models of single cells are typically formulated compartmentally ? a
set of first-order coupled differential equations that form a spatially discrete approximation
to the cable equations. Modeling the cell under investigation in this discretized manner, a
typical equation describing the voltage in compartment x is
    C_x dV_x(t) = ( Σ_i a_{i,x} J_{i,x}(t) + I_x(t) ) dt + σ_x dN_{x,t}.      (1)

Here σ_x N_{x,t} is evolution (current) noise and I_x(t) is externally injected current. Dropping
the subscript x where possible, the terms a_i · J_i(t) represent currents due to:
1. voltage mismatch in neighbouring compartments, f_{x,y}(V_y(t) − V_x(t)),
2. synaptic input, g_s(t)(E_s − V(t)),
3. membrane channels, active (voltage-dependent) or passive, ḡ_j g_j(t)(E_j − V(t)).
Here ai are parameters to be inferred:
1. the intercompartmental conductances f_{x,y},
2. the spatiotemporal input from synapse s, u_s(t), from which g_s(t) is obtained by

       dg_s(t)/dt = −g_s(t)/τ_s + u_s(t),      (2)

   a linear convolution operation (the synaptic kinetic parameter τ_s is assumed
   known) which may be written in matrix notation g_s = Ku.
3. the ion channel concentrations ḡ_j. The open probabilities of channel j, g_j(t), are
   obtained from the channel kinetics, which are assumed to evolve deterministically,
   with a known dependence on V, as in the Hodgkin-Huxley model, g_Na = m³h,

       τ_m dm(t)/dt = m_∞(V) − m,      (3)

   and similarly for h. Again, we emphasize that the kinetic parameters τ_m and
   m_∞(V) are assumed known; only the inhomogeneous concentrations are unknown. (For passive channels g_j is taken constant and independent of voltage.)
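Both kinetic equations are simple first-order filters of known quantities once the voltage is observed; a forward-Euler sketch (the discretization scheme and the function names are our own choices, not the authors'):

```python
def synaptic_conductance(u, tau, dt):
    """Forward-Euler discretization of dg/dt = -g/tau + u (eq. 2); the
    map from the input u to g is linear, i.e. g = K u with K a
    lower-triangular convolution matrix."""
    g, out = 0.0, []
    for ut in u:
        g += dt * (-g / tau + ut)
        out.append(g)
    return out

def hh_gate(v_trace, m_inf, tau_m, dt, m0=0.0):
    """Integrate tau_m dm/dt = m_inf(V) - m (eq. 3) along an observed
    voltage trace; with V(t) given, the gate evolves deterministically."""
    m, out = m0, []
    for v in v_trace:
        m += dt * (m_inf(v) - m) / tau_m
        out.append(m)
    return out
```

For constant input u the conductance relaxes to the steady state τ·u, and for constant voltage the gate relaxes to m_∞(V), which is exactly why these traces can be precomputed from the observed voltage before the regression.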
The parameters 1-3 are relative to membrane capacitance Cx .1
When modeling the dynamics of a single neuron according to (1), the voltage V (t) and
channel kinetics gj (t) are typically evolved in parallel, according to the injected current
I(t) and synaptic inputs us (t). Suppose, on the other hand, that we have observed the
voltage Vx (t) in each compartment. Since we have assumed we also know the channel
kinetics (equation 3), the synaptic kinetics (equation 2) and the reversal potentials Ej of the
channels present in each compartment, we may decouple the equations and determine the
open probabilities g_{j,x}(t) for t ∈ [0, T]. This, in turn, implies that the currents J_{i,x}(t) and
voltage differentials V̇_x(t) are all known, and we may interpret equation 1 as a regression
equation, linear in the unknown parameters ai , instead of an evolution equation. This is the
key observation of this work.
Thus we can use linear regression methods to simultaneously infer optimal values of the
parameters {ḡ_{j,x}, u_{s,x}(t), f_{x,y}}.2 More precisely, rewrite equation (1) in matrix form,
V̂ = Ma + σε, where each column of the matrix M is composed of one of the known currents
{J_i(t), t ∈ [0, T]} (with T the length of the experiment) and the column vectors V̂, a, and
ε are defined in the obvious way. Then

    â_opt = arg min_a ||V̂ − Ma||_2^2 .      (4)

In addition, since on physical grounds the channel concentrations, synaptic input, and conductances must be non-negative, we require our solution a_i ≥ 0. The resulting linearly-constrained quadratic optimization problem has no local minima (due to the convexity of
the objective function and of the domain a_i ≥ 0), and allows quadratic programming (QP)
tools (e.g., quadprog.m in Matlab) to be employed for highly efficient optimization.
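As an illustration of why this optimization is benign, here is a minimal pure-Python non-negative least-squares solver by projected coordinate descent; it is a stand-in for a proper QP routine such as quadprog, and the function name is ours:

```python
def nnls_cd(M, v, n_iter=500):
    """Non-negative least squares, min_a ||v - M a||^2 s.t. a >= 0
    (eq. 4), by projected coordinate descent on the residual
    r = v - M a.  Each coordinate update is the unconstrained optimum
    projected back to a_j >= 0; convexity guarantees convergence to
    the global constrained minimum."""
    T, d = len(M), len(M[0])
    a = [0.0] * d
    col_sq = [sum(M[t][j] ** 2 for t in range(T)) for j in range(d)]
    r = list(v)                      # residual, since a starts at 0
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            grad = sum(M[t][j] * r[t] for t in range(T))
            new = max(0.0, a[j] + grad / col_sq[j])
            delta = new - a[j]
            if delta != 0.0:
                for t in range(T):
                    r[t] -= delta * M[t][j]
                a[j] = new
    return a
```

When the unconstrained least-squares solution is already non-negative the projection is inactive and the ordinary regression estimate is recovered; otherwise the offending coordinates are clamped to zero.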
Quadratic programming tactics: As emphasized above, the dimension d of the parameter
space to be optimized over in this application is quite large (d ≈ N_comp(T·N_syn + N_chan),
with N denoting the number of compartments, synapse types, and membrane channel types
respectively). While our problem is convex, and therefore tractable in the sense of having
no nonglobal local optima, the time-complexity of QP, implemented naively, is O(d3 ),
which is too slow for our purposes.
Fortunately, the correlational structure of the parameters allows us to perform this optimization more efficiently, by several natural decompositions: in particular, given the spatiotemporal voltage signal Vx (t), parameters which are distant in space (e.g., the densities
of channels in widely-separated compartments) and time (i.e., the synaptic input us,x (t) for
t = ti and tj with |ti ? tj | large) may be optimized independently. This amounts to a kind
of ?coordinate descent? algorithm, in which we decompose our parameter set into a set
of (not necessarily disjoint) subsets, and iteratively optimize the parameters in each subset
while holding all the other parameters fixed. (The quadratic nature of the original problem
guarantees that each of these subset problems will be quadratic, with no local minima.)
Empirically, we found that this decomposition / sequential optimization approach reduced
the computation time from O(d³) to near O(d).

1 Note that C_x is the proportionality constant between the externally injected electrode current
and dV/dt. It is linear in the data and can be included with the other parameters a_i in the joint estimation.
2 In the case that the reversal potentials E_j are unknown as well, we may estimate these terms by
separating the term ḡ_j g_j(t)(V(t) − E_j) into ḡ_j g_j(t)V(t) and (ḡ_j E_j)g_j(t), thereby increasing the
number of parameters in the regression by one per channel; E_j is then set to (ḡ_j E_j)/ḡ_j.
2.1 The probabilistic framework
If we assume the noise N_{x,t} is Gaussian and white, then the mean-square regression solution for a described above coincides exactly with the (constrained) maximum likelihood
estimate, â_ML = arg min_a ||V̂ − Ma||_2^2 / 2σ². (The noise scale σ may also be estimated via
maximum likelihood.) This suggests several straightforward likelihood-based techniques
for representing the uncertainty in our estimates.
Posterior confidence intervals: The assumption of Gaussian noise implies that the posterior distribution of the parameters a is of the form p(a|V) = (1/Z) p(a) G_{μ,Σ}(a), with Z a normalizing constant, the prior p(a) supported on a_i ≥ 0, and the mean and covariance of the
likelihood Gaussian G(a) given by μ = (MᵀM)⁻¹ MᵀV̂ and Σ⁻¹ = MᵀM/σ². We will
assume a flat prior distribution p(a) (that is, no prior knowledge) on the non-synaptic parameters {ḡ_{j,x}, f_{x,y}} (although clearly non-flat priors can be easily incorporated here [6]);
for the synaptic parameters u_{s,x}(t) it will be convenient to use a product-of-exponentials
prior, p(u) = Π_i λ_i exp(−λ_i u_i). In each case, computing confidence intervals for a_i
reduces to computing moments of multidimensional Gaussian distributions, truncated to
a_i ≥ 0.
We use importance sampling methods [7] to compute these moments for the channel parameters. Sampling from high-dimensional truncated Gaussians via sample-reject is inefficient
(since samples from the non-truncated Gaussian, call this distribution p*(a|V), may violate the constraint a_i ≥ 0 with high probability). Therefore we sample instead from a
proposal density q(a) with support on a_i ≥ 0 (specifically, a product of univariate truncated Gaussians with mean a_i and appropriate variance) and evaluate the second moments
around a_ML by

    E[(a_i − a_{ML,i})² | V] ≈ (1/Z) Σ_{n=1}^N [p*(a^n|V) / q(a^n)] (a_i^n − a_{ML,i})² ,
    where  Z = Σ_{n=1}^N p*(a^n|V) / q(a^n).      (5)
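A one-dimensional sketch of this estimator: self-normalized importance sampling with a truncated-Gaussian proposal. The normalizing constants of the truncated densities cancel in the weight ratio, so only the Gaussian exponents are needed; the function names and the 1-D simplification are our own.

```python
import math
import random

def truncated_gauss_sample(rng, mean, sd):
    """Rejection-sample a univariate Gaussian truncated to x >= 0."""
    while True:
        x = rng.gauss(mean, sd)
        if x >= 0.0:
            return x

def posterior_second_moment(mu, sd_like, sd_prop, n=20000, seed=0):
    """Self-normalized importance-sampling estimate of
    E[(a - a_ML)^2 | V] (eq. 5) in 1-D: target p* = N(mu, sd_like^2)
    restricted to a >= 0, proposal q = N(a_ML, sd_prop^2) truncated to
    a >= 0.  The truncation constants cancel in p*/q."""
    rng = random.Random(seed)
    a_ml = max(mu, 0.0)
    num = den = 0.0
    for _ in range(n):
        a = truncated_gauss_sample(rng, a_ml, sd_prop)
        logw = (-(a - mu) ** 2 / (2.0 * sd_like ** 2)
                + (a - a_ml) ** 2 / (2.0 * sd_prop ** 2))
        w = math.exp(logw)
        num += w * (a - a_ml) ** 2
        den += w
    return num / den
```

With the constraint far from the posterior mass (e.g. mu = 2, sd_like = 0.5) the estimate reduces to the untruncated second moment sd_like², as expected.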
Hessian Principal Components Analysis: The procedure described above allows us to
quantify the uncertainty of individual estimated parameters a_i. We are also interested in the
uncertainty of our estimates in a joint sense (e.g., in the posterior covariance instead of just
the individual variances). The negative Hessian of the log-likelihood function, A ≡ MᵀM,
contains a great deal of this information, which may be extracted via a kind of principal components analysis: the eigenvectors of A corresponding to the greatest eigenvalues
tell us in which directions the model is most strongly constrained by the data, while low
eigenvalues correspond to directions in which the likelihood changes relatively slowly, e.g.
channels whose corresponding currents are highly correlated (and therefore approximately
interchangeable). These ideas will be illustrated in section 3.4.
3 Results
To test the validity, efficiency and accuracy of the proposed method we apply it to model
data of varying complexity.
3.1 Inferring channel conductances in a multicompartmental model
We take a simple 14-compartment model neuron, described by

    C_x dV_x/dt = Σ_{c=1}^{N_chan} ḡ_c g_c(V_x, t)(E_c − V_x(t)) + Σ_y f_{x,y}(V_y(t) − V_x(t)) + I_x(t) + σ_x dN_{x,t};
recall f_{x,y} are the intercompartmental conductances, g_c(V, t) is channel c's conductance
state given the voltage history up to time t, and ḡ_c is the channel concentration. We minimize a vectorized expression as above (equation 4). On biophysical grounds we require
fx,y = fy,x ; we enforce this (linear) constraint by only including one parameter for each
connected pair of compartments (x, y). In this case the true channel kinetics were of standard Hodgkin-Huxley form (Na+ , K+ and leak), with inhomogeneous densities (figure 1).
To test the selectivity of the estimation procedure, we fitted Nchan = 8 candidate channels from [8, 9, 10] (five of which were absent in the true model cell). Figure 1 shows the
performance of the inference; despite the fact that we used only 20 ms of model data, the
last 7 ms of which were used for the actual fitting (the first 13 ms were used to evolve the
random initial conditions to an approximately correct value), the fit is near perfect in the
σ = 0 case, with vanishingly small errorbars. The concentrations of the five channels that
were not present when generating the data were set to approximately zero, as desired (data
not shown). The lower panels demonstrate the robustness of the methods on highly noisy
(large σ) data, in which case the estimated errorbars become significant, but the performance degrades only slightly.
Figure 1: Top panels: σ = 0. 14-compartment model neuron, Na+ channel concentration indicated by grey scale; estimated Na+ channel concentrations in the noiseless case; observed voltage
traces (one per compartment); estimated concentrations. Bottom panels: σ large. Na+
channel concentration legend, values relative to Cm (e.g. in mS/cm² if Cm = 1 µF/cm²); estimated Na+
concentrations in the noisy case; noisy voltage traces; estimated channel concentrations. K+ channel
concentrations and intercompartmental conductances f_{x,y} not shown (similar performance).
3.2
Inferring synaptic input in a passive model
Next we simulated a single-compartment, leaky neuron (i.e., no voltage-sensitive membrane channels) with synaptic input from three synapses, two excitatory (glutamatertic;
τ = 3 ms, E = 0 mV) and one inhibitory (GABA-A; τ = 5 ms, E = −75 mV). When
we attempted to estimate the synaptic input us (t) via the ML estimator described above
(figure 2, left), we observe an overfitting phenomenon: the current noise due to Nt is being
"explained" by competing balanced excitatory and inhibitory synaptic inputs. This overfitting is unsurprising, given that we are modeling a T-dimensional observation, V̂, with 2T regressor variables, u−(t) and u+(t), 0 < t < T (indeed, overfitting is much less apparent
in the case that only one synapse is modeled, where no balance of excitation and inhibition
is possible; data not shown).
Once again, we may make use of well-known techniques from the regression literature to
solve this problem: in this case, we need to regularize our estimated synaptic parameters. Instead of maximizing the likelihood, u_ML, we maximize the posterior likelihood

$$\hat{u}_{MAP} = \arg\min_{u} \frac{1}{2\sigma^2}\left\|\hat{V} - MKu\right\|_2^2 + \lambda\, u^\top n \quad \text{with } u_t \geq 0\ \forall t, \qquad (6)$$

where n is a vector of ones and λ is the Lagrange multiplier for the regularizer, or equivalently parametrizes the exponential prior distribution over u(t). As mentioned above, this maximum a posteriori (MAP) estimate corresponds to a product exponential prior on the synaptic input u_t; the multiplier λ may be chosen as the expected synaptic input per unit time. It is well known that this type of prior has a sparsening effect, shrinking small values of u_ML(t) to zero. This is visible in figure 2 (right); we see that the small, noise-matching synaptic activity is effectively suppressed, permitting much more accurate detection of the true input spike timing.
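The constrained MAP problem is a nonnegative least-squares objective with a linear (L1-like) penalty, so a short projected-gradient sketch already shows the shrinkage behaviour described above. The matrix A below is a random stand-in for the actual MK operator, and all sizes and values are illustrative:

```python
import numpy as np

def map_estimate(A, v, lam, sigma2=1.0, steps=5000, lr=None):
    """Minimize (1/(2*sigma2))||v - A u||^2 + lam * sum(u) subject to u >= 0
    by projected gradient descent (clipping to the nonnegative orthant)."""
    if lr is None:
        lr = sigma2 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ u - v) / sigma2 + lam
        u = np.maximum(u - lr * grad, 0.0)        # projection onto u >= 0
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
u_true = np.zeros(20)
u_true[[3, 11]] = [2.0, 1.0]                      # sparse "synaptic input"
v = A @ u_true + 0.1 * rng.standard_normal(40)
u_hat = map_estimate(A, v, lam=0.5)
```

The penalty shrinks the many small noise-matching coefficients to (or near) zero while leaving the two true inputs largely intact, mirroring the ML-versus-MAP comparison in figure 2.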
Figure 2: Inferring synaptic inputs to a passive membrane. Top traces: excitatory inputs; bottom:
inhibitory inputs; middle: the resulting voltage trace. Left panels: synaptic inputs inferred by ML;
right: MAP estimates under the exponential (shrinkage) prior. Note the overfitting by the ML estimate (left) and the higher accuracy under the MAP estimate (right); in particular note that the two
excitatory synapses of differing magnitudes may easily be distinguished.
3.3
Inferring synaptic input and channel distribution in an active model
The optimization is, as mentioned earlier, jointly convex in both channel densities and
synaptic input. We illustrate the simultaneous inference of channel densities and synaptic
inputs in a single compartment, writing the model as:
$$\frac{dV}{dt} = \sum_{c=1}^{N_{chan}} \bar{g}_c\, g_c(V,t)\,(V_c - V(t)) + \sum_{s=1}^{N_S} g_s(t)\,(V_s - V(t)) + \sigma\, dN(t), \qquad (7)$$
with the same channels and synapse types as above. The combination of leak conductance
and inhibitory synaptic input leads to very small eigenvalues in A and slow convergence
when applying the above decomposition; thus, to speed convergence here we coarsened
the time resolution of the synaptic input from 0.1 ms to 0.2 ms. Figure 3 demonstrates the
accuracy of the results.
Figure 3: Joint inference of synaptic input and channel densities. The true parameters are in blue,
the inferred parameters in red. The top left panel shows the excitatory synaptic input, the middle left
panel the voltage trace (the only data) and the bottom left traces the inhibitory synaptic input. The
right panel shows the true and inferred channel densities; channels are the same as in 3.1.
3.4
Eigenvector analysis for a single-compartment model
Finally, as discussed above, the eigenvectors ("principal components") of the log-likelihood Hessian A carry significant information about the dependence and redundancy of the parameters under study here. An example is given in figure 4; for simplicity, we restrict our attention again to the single-compartment case. In the leftmost panels, we see that the direction a_most most highly constrained by the data (the eigenvector corresponding to the largest eigenvalue of A) turns out to have the intuitive form of the balance between Na+ and K+ channels. When we perturb this balance slightly (that is, when we shift the model parameters slightly along this direction in parameter space, a_ML → a_ML + εa_most), the cell's behavior changes dramatically. Conversely, the least-sensitive direction, a_least, corresponds roughly to the balance between the concentrations of two Na+ channels with similar kinetics, and moving in this direction in parameter space (a_ML → a_ML + εa_least) has a negligible effect on the model's dynamical behavior.
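This eigenvector analysis can be mimicked on a toy quadratic log-likelihood, where the Hessian is simply the Gram matrix of the regressors. Below, two of three invented "channel" regressors have nearly identical kinetics, so the smallest-eigenvalue direction is their balance and perturbing along it barely changes the model output, while the largest-eigenvalue direction changes it strongly (all regressors and values are illustrative):

```python
import numpy as np

t = np.linspace(0, 1, 200)
# Two "channel" regressors with very similar kinetics, plus one distinct one:
X = np.column_stack([np.exp(-t / 0.20), np.exp(-t / 0.22), np.sin(6 * t)])
a_true = np.array([1.0, 0.5, 2.0])
y = X @ a_true

H = X.T @ X                      # Hessian of the squared-error loss
w, V = np.linalg.eigh(H)         # eigenvalues in ascending order
a_least, a_most = V[:, 0], V[:, -1]

eps = 0.5                        # equal-sized perturbations along each direction
dy_least = np.linalg.norm(X @ (a_true + eps * a_least) - y)
dy_most = np.linalg.norm(X @ (a_true + eps * a_most) - y)
```

Since the output change along an eigendirection scales as the square root of its eigenvalue, dy_most dwarfs dy_least, and inspecting a_least shows the expected opposite-signed balance between the two near-redundant regressors.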
Figure 4: Eigenvectors of A corresponding to largest (a_most, left) and smallest (a_least, right) eigenvalues, and voltage traces of the model neuron after equal-sized perturbations by both (solid line: perturbed model; dotted line: original model). The first four parameters are the concentrations of four Na+ channels (the first two of which are in fact the same Hodgkin-Huxley channel, but with slightly different kinetic parameters); the next four of K+ channels; the next of the leak channel; the last of 1/C.
4
Discussion and future work
We have developed a probabilistic regression framework for estimation of biophysical
single neuron properties and synaptic input. This framework leads directly to efficient,
globally-convergent algorithms for determining these parameters, and also to well-founded
methods for analyzing the uncertainty of the estimates. We believe this is a key first step
towards applying these techniques in detailed, quantitative studies of dendritic input and
processing in vitro and in vivo. However, some important caveats, and directions for necessary future work, should be emphasized.
Observation noise: While we have explicitly allowed current noise in our main evolution equation (1) (and experimented with a variety of other current- and conductance-noise
terms; data not shown), we have assumed that the resulting voltage V (t) is observed noiselessly, with sufficiently high sampling rates. This is a reasonable assumption when voltage
is recorded directly, via patch-clamp methods. However, while voltage-sensitive imaging
techniques have seen dramatic improvements over the last few years (and will continue to
do so in the near future), currently these methods still suffer from relatively low signal-to-noise ratios and spatiotemporal sampling rates. While the procedure proved to be robust to low-level noise of various forms (data not shown), it will be important to relax the
noiseless-observation assumption, most likely by adapting standard techniques from the
hidden Markov model signal processing literature [11].
Hidden branches: Current imaging and dye technologies allow for the monitoring of only
a fraction of a dendritic tree; therefore our focus will be on estimating the properties of these
sub-structures. Furthermore, these dyes diffuse very slowly and may miss small branches
of dendrites, thereby effectively creating unobserved current sources.
Misspecified channel kinetics and channels with chemical dependence: Channels dependent on unobserved variables (e.g., Ca++-dependent K+ channels) have not been included in the model. The techniques described here may thus be applied unmodified to experimental data for which such channels have been blocked pharmacologically. However,
we should note that our methods extend directly to the case where simultaneous access to
voltage and calcium signals is possible; more generally, one could develop a semi-realistic
model of calcium concentration, and optimize over the parameters of this model as well.
We have discussed in some detail (e.g. figure 1) the effect of misspecifications of voltage-dependent channel kinetics and how the most relevant channels may be selected by supplying sufficiently rich "channel libraries". Such libraries can also contain several "copies" of
the same channel, with one or more systematically varying parameters, thus allowing for
a limited search in the nonlinear space of channel kinetics. Finally, in our discussion of
"equivalence classes" of channels (figure 4), we illustrate how eigenvector analysis of our
objective function allows for insights into the joint behaviour of channels.
References
[1] Jolivet, Lewis, and Gerstner, 2004. J. Neurophysiol., 92, 959-976.
[2] Vanier and Bower, 1999. J. Comput. Neurosci., 7(2), 149-171.
[3] Goldman, Golowasch, Marder and Abbott, 2001. J. Neurosci., 21(14), 5229-5238.
[4] Wood, Gurney and Wilson, 2004. Neurocomputing, 58-60, 1109-1116.
[5] Morse, Davison and Hines, 2001. Soc. Neurosci. Abs., 606.5.
[6] Baldi, Vanier and Bower, 1998. J. Comp. Neurosci., 5(3), 285-314.
[7] Press et al., 1992. Numerical Recipes in C, CUP.
[8] Hodgkin and Huxley, 1952. J. Physiol., 117.
[9] Poirazi, Brannon and Mel, 2003. Neuron, 37(6), 977-87.
[10] Mainen, Joerges, Huguenard, and Sejnowski, 1995. Neuron, 15(6), 1427-39.
[11] Rabiner, 1989. Proc. IEEE, 77(2), 257-286.
AER Building Blocks for Multi-Layer Multi-Chip
Neuromorphic Vision Systems
R. Serrano-Gotarredona1, M. Oster2, P. Lichtsteiner2, A. Linares-Barranco4, R. Paz-Vicente4, F. Gómez-Rodríguez4, H. Kolle Riis3, T. Delbrück2, S. C. Liu2, S. Zahnd2, A. M. Whatley2, R. Douglas2, P. Häfliger3, G. Jimenez-Moreno4, A. Civit4, T. Serrano-Gotarredona1, A. Acosta-Jiménez1, B. Linares-Barranco1

1Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC), Sevilla, Spain; 2Institute of Neuroinformatics (INI-ETHZ), Zurich, Switzerland; 3University of Oslo (UIO), Norway; 4University of Sevilla (USE), Spain.
Abstract
A 5-layer neuromorphic vision processor whose components
communicate spike events asynchronously using the address-event representation (AER) is demonstrated. The system includes a retina
chip, two convolution chips, a 2D winner-take-all chip, a delay line
chip, a learning classifier chip, and a set of PCBs for computer
interfacing and address space remappings. The components use a
mixture of analog and digital computation and will learn to classify
trajectories of a moving object. A complete experimental setup and
measurements results are shown.
1 Introduction
The Address-Event-Representation (AER) is an event-driven asynchronous inter-chip
communication technology for neuromorphic systems [1][2]. Senders (e.g. pixels or
neurons) asynchronously generate events that are represented on the AER bus by the
source addresses. AER systems can be easily expanded. The events can be merged with
events from other senders and broadcast to multiple receivers [3]. Arbitrary connections,
remappings and transformations can be easily performed on these digital addresses.
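In software, AER traffic reduces to streams of timestamped addresses, and merging or remapping become ordinary list operations. A minimal sketch (the addresses and routing table below are invented for illustration):

```python
import heapq

def merge(*streams):
    """Merge time-sorted AER event streams of (timestamp, address) tuples."""
    return list(heapq.merge(*streams))

def remap(events, table):
    """Route each event through an address look-up table, dropping
    addresses with no mapping (a simple 'mapper' board in software)."""
    return [(t, table[a]) for t, a in events if a in table]

sender_a = [(0, 5), (10, 7)]
sender_b = [(3, 5), (12, 2)]
merged = merge(sender_a, sender_b)        # [(0,5), (3,5), (10,7), (12,2)]
routed = remap(merged, {5: 100, 7: 101})  # address 2 has no route
```

This is the whole conceptual content of the merger and mapper boards described later: interleave handshaked event streams by time, and transform addresses through a table.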
A potentially huge advantage of AER systems is that computation is event driven and thus
can be very fast and efficient. Here we describe a set of AER building blocks and how we
assembled them into a prototype vision system that learns to classify trajectories of a
moving object. All modules communicate asynchronously using AER. The building
blocks and demonstration system have been developed in the EU funded research project
CAVIAR (Convolution AER VIsion Architecture for Real-time). The building blocks
(Fig. 1) consist of: (1) a retina loosely modeled on the magnocellular pathway that
responds to brightness changes, (2) a convolution chip with programmable convolution
kernel of arbitrary shape and size, (3) a multi-neuron 2D competition chip, (4) a spatiotemporal pattern classification learning module, and (5) a set of FPGA-based PCBs for
address remapping and computer interfaces.
Using these AER building blocks and tools we built the demonstration vision system
shown schematically in Fig. 1, that detects a moving object and learns to classify its
Fig. 1: Demonstration AER vision system
trajectories. It has a front end retina, followed by an array of convolution chips, each
programmed to detect a specific feature with a given spatial scale. The competition or
?object? chip selects the most salient feature and scale. A spatio-temporal pattern
classification module categorizes trajectories of the object chip outputs.
2 Retina
Biological vision uses asynchronous events (spikes) delivered from the retina. The stream
of events encodes dynamic scene contrast. Retinas are optimized to deliver relevant
information and to discard redundancy. CAVIAR's input is a dynamic visual scene. We developed an AER silicon retina chip "TMPDIFF" that generates events corresponding to
relative changes in image intensity [8]. These address-events are broadcast
asynchronously on a shared digital bus to the convolution chips. Static scenes produce no
output. The events generated by TMPDIFF represent relative changes in intensity that
exceed a user-defined threshold and are ON or OFF type depending on the sign of the
change since the last event. This silicon retina loosely models the magnocellular retinal
pathway.
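The pixel's behaviour can be summarized functionally in a few lines: emit an ON or OFF event whenever the log intensity has moved by more than a threshold since the last event, then rebalance. This is a behavioural sketch only, not a model of the circuit; the sample sequence and threshold are invented:

```python
import math

def temporal_contrast_events(samples, theta=0.2):
    """Emit (t, polarity) events whenever log intensity changes by more
    than theta since the last event, as in a change-detecting pixel."""
    events = []
    ref = math.log(samples[0])
    for t, I in enumerate(samples[1:], start=1):
        d = math.log(I) - ref
        if abs(d) >= theta:
            events.append((t, 'ON' if d > 0 else 'OFF'))
            ref = math.log(I)   # amplifier rebalanced after each event
    return events

# Brightening then dimming ramp; the static tail produces no events.
ramp = [1.0, 1.3, 1.7, 2.2, 1.7, 1.3, 1.0, 1.0, 1.0]
evts = temporal_contrast_events(ramp)
```

The log-domain difference makes the event rate depend on scene contrast rather than absolute illumination, and a static input (the tail of the ramp) generates no output, matching the description above.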
The front-end of the pixel core (see Fig. 2a) is an active unity-gain logarithmic
photoreceptor that can be self-biased by the average photocurrent [7]. The active feedback
speeds up the response compared to a passive log photoreceptor and greatly increases
bandwidth at low illumination. The photoreceptor output is buffered to a voltage-mode
capacitive-feedback amplifier with closed-loop gain set by a well-matched capacitor ratio.
The amplifier is balanced after transmission of each event by the AER handshake. ON and
OFF events are detected by the comparators that follow. Mismatch of the event threshold
is determined by only 5 transistors and is effectively further reduced by the gain of the
amplifier. Much higher contrast resolution than in previous work [6] is obtained by using
the excellent matching between capacitors to form a self-clocked switched-capacitor
change amplifier, allowing for operation with scene contrast down to about 20%. A chip
photo is shown in Fig. 2b.
Fig. 2. Retina. a) core of pixel circuit, b) chip photograph.
Fig. 3. Convolution chip (a) architecture of the convolution chip. (b) microphotograph of
fabricated chip. (c) kernel for detecting circumferences of radius close to 4 pixels and (d) close to 9
pixels.
TMPDIFF has 64x64 pixels, each with 2 outputs (ON and OFF), which are communicated
off-chip on a 16-bit AER bus. It is fabricated in a 0.35 µm process. Each pixel is 40x40 µm2 and has 28 transistors and 3 capacitors. The operating range is at least 5 decades and minimum scene illumination with an f/1.4 lens is less than 10 lux.
3 Convolution Chip
The convolution chip is an AER transceiver with an array of event integrators. For each
incoming event, integrators within a projection field around the addressed pixel compute a
weighted event integration. The weight of this integration is defined by the convolution
kernel [4]. This event-driven computation puts the kernel onto the integrators.
Fig. 3a shows the block diagram of the convolution chip. The main parts of the chip are:
(1) An array of 32x32 pixels. Each pixel contains a binary weighted signed current source
and an integrate-and-fire signed integrator [5]. The current source is controlled by the
kernel weight read from the RAM and stored in a dynamic register. (2) A 32x32 kernel
RAM. Each kernel weight value is stored with signed 4-bit resolution. (3) A digital
controller handles all sequence of operations. (4) A monostable. For each incoming event,
it generates a pulse of fixed duration that enables the integration simultaneously in all the
pixels. (5) X-Neighborhood Block. This block performs a displacement of the kernel in
the x direction. (6) Arbitration and decoding circuitry that generate the output address
events. It uses Boahen?s burst mode fully parallel AER [2].
The chip operation sequence is as follows: (1) Each time an input address event is
received, the digital control block stores the (x,y) address and acknowledges reception of
the event. (2) The control block computes the x-displacement that has to be applied to the
kernel and the limits in the y addresses where the kernel has to be copied. (3) The
Afterwards, the control block generates signals that control on a row-by-row basis the
copy of the kernel to the corresponding rows in the pixel array. (4) Once the kernel copy is
finished, the control block activates the generation of a monostable pulse. This way, in
each pixel a current weighted by the corresponding kernel weight is integrated during a
fixed time interval. Afterwards, kernel weights in the pixels are erased. (5) When the
integrator voltage in a pixel reaches a threshold, that pixel asynchronously sends an event,
which is arbitrated and decoded in the periphery of the array. The pixel voltage is reset
upon reception of the acknowledge from the periphery.
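The same event-driven convolution can be sketched in software: each incoming address event stamps the kernel (centered on the event's address) onto a signed integrator array, and any integrator crossing threshold fires an output event and is reset. The kernel, threshold, and array size below are toy values, not the chip's:

```python
import numpy as np

def process_event(integ, kernel, x, y, theta=4.0):
    """One event-driven convolution step: accumulate the kernel around the
    incoming event's address, then fire-and-reset any integrator that has
    crossed threshold.  Returns the list of output event addresses."""
    kh, kw = kernel.shape
    H, W = integ.shape
    for dy in range(kh):
        for dx in range(kw):
            yy, xx = y + dy - kh // 2, x + dx - kw // 2
            if 0 <= yy < H and 0 <= xx < W:
                integ[yy, xx] += kernel[dy, dx]
    out = list(zip(*np.where(integ >= theta)))
    for yy, xx in out:
        integ[yy, xx] = 0.0          # reset on output spike
    return out

integ = np.zeros((8, 8))
kernel = np.ones((3, 3))             # toy blob-detector kernel
events_out = []
for _ in range(4):                   # the same input address arrives repeatedly
    events_out += process_event(integ, kernel, 4, 4)
```

Because the computation happens per input event, quiet regions of the address space cost nothing, which is the efficiency argument for the event-driven chip.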
A prototype convolution chip has been fabricated in a CMOS 0.35 µm process. Both the
size of the pixel array and the size of the kernel storage RAM are 32x32. The input address
space can be up to 128x128. In the experimental setup of Section 7, the 64x64 retina
output is fed to the convolution chip, whose pixel array addresses are centered on that of
the retina. The pixel size is 92.5 µm x 95 µm. The total chip area is 5.4x4.2 mm2. Fig. 3b
shows the microphotograph of the fabricated chip. AER events can be fed-in up to a peak
rate of 50 Mevent/s. Output event rate depends on kernel lines nk. The measured output
AER peak delay is (40 + 20 x nk) ns/event.
4 Competition "Object" Chip
This AER transceiver chip consists of a group of VLSI integrate-and-fire neurons with
various types of synapses [9]. It reduces the dimensionality of the input space by
preserving the strongest input and suppressing all other inputs. The strongest input is
determined by configuring the architecture on the "Object" chip as a spiking winner-take-all network. Each convolution chip convolves the output spikes of the retina with its
preprogrammed feature kernel (in our example, this kernel consists of a ring filter of a
particular resolution). The "Object" chip receives the outputs of several convolution chips
and computes the winner (strongest input) in two dimensions. First, it determines the
strongest input in each feature map and in addition, it determines the strongest feature.
The computation to determine the strongest input in each feature map is carried out using
a two-dimensional winner-take-all circuit as shown in Fig. 4. The network is configured so
that it implements a hard winner-take-all, that is, only one neuron is active at a time. The
activity of the winner is proportional to the winner's input activity.
The winner-take-all circuit can reliably select the winner given a difference of input firing
rate of only 10% assuming that it receives input spike trains having a regular firing rate
[10]. Each excitatory input spike charges the membrane of the post-synaptic neuron until
one neuron in the array (the winner) reaches threshold and is reset. All other neurons are
then inhibited via a global inhibitory neuron which is driven by all the excitatory neurons.
Self-excitation provides hysteresis for the winning neuron by facilitating the selection of
this neuron as the next winner.
Because of the moving stimulus, the network has to determine the winner using an
estimate of the instantaneous input firing rates. The number of spikes that the neuron must
integrate before eliciting an output spike can be adjusted by varying the efficacies of the
input synapses.
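A behavioural sketch of the hard winner-take-all (ignoring the analog membrane dynamics): each input spike charges its neuron, and the first neuron to reach threshold fires and, through the global inhibitory neuron, discharges all competitors. The rates, weights, and threshold below are illustrative:

```python
import numpy as np

def winner_take_all(spike_train, n=4, theta=5.0, w_exc=1.0, w_inh=5.0):
    """Hard spiking winner-take-all: each input spike charges its neuron;
    the first neuron to reach threshold fires, is reset, and discharges
    all competitors via a global inhibitory neuron."""
    V = np.zeros(n)
    out = []
    for t, i in spike_train:
        V[i] += w_exc
        if V[i] >= theta:
            out.append((t, i))
            V[i] = 0.0
            V = np.maximum(V - w_inh, 0.0)   # global inhibition
    return out

rng = np.random.default_rng(2)
# Neuron 1 receives ~3x the input rate of the others:
rates = np.array([1.0, 3.0, 1.0, 1.0])
train = sorted((t, i) for i in range(4)
               for t in rng.uniform(0, 100, int(50 * rates[i])))
out = winner_take_all(train)
winners = [i for _, i in out]
```

Because each output spike requires integrating several input spikes (theta / w_exc of them), the circuit selects the strongest instantaneous input rate rather than reacting to single spikes, as described above.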
To determine the winning feature, we use the activity of the global inhibitory neuron
(which reflects the activity of the strongest input within a feature map) of each feature map
in a second layer of competition. By adding a second global inhibitory neuron to each
feature map and by driving this neuron through the outputs of the first global inhibitory
neurons of all feature maps, only the strongest feature map will survive. The output of the
object chip will be spikes encoding the spatial location of the stimulus and the identity of
the winning feature. (In the characterization shown in Section 7, the global competition
was disabled, so both objects could be simultaneously localized by the object chip).
We integrated the winner-take-all circuits for four feature maps on a single chip with a
total of 16x16 neurons; each feature uses an 8x8 array. The chip was fabricated in a
0.35 µm CMOS process with an area of 8.5 mm2.
5 Learning Spatio-Temporal Pattern Classification
The last step of data reduction in the CAVIAR demonstrator is a subsystem that learns to
classify the spatio-temporal patterns provided by the object chip. It consists of three
Fig. 4: Architecture of "Object" chip configured
for competition within two feature maps and
competition across the two feature maps.
Fig. 5: System setup for learning direction of
motion
components: a delay line chip, a competitive Hebbian learning chip [11], and an AER
mapper that connects the two. The task of the delay line chip is to project the temporal
dimension into a spatial dimension. The competitive Hebbian learning chip will then learn
to classify the resulting patterns. The delay line chip consists of one cascade of 880 delay
elements. 16 monostables in series form one delay element. The output of every delay
element produces an output address event. A pulse can be inserted at every delay-element
by an input address event. The cascade can be programmed to be interrupted or connected
between any two subsequent delay-elements. The associative Hebbian learning chip
consists of 32 neurons with 64 learning synapses each. Each synapse includes learning
circuitry with a weak multi-level memory cell for spike-based learning [11].
A simple example of how this system may be configured is depicted in Fig. 5: the mapper
between the object chip and the delay line chip is programmed to project all activity from
the left half of the field of vision onto the input of one delay line, and from the right half of
vision onto another. The mapper between the delay line chip and the competitive Hebbian
learning chip taps these two delay lines at three different delays and maps these 6 outputs
onto 6 synapses of each of the 32 neurons in the competitive Hebbian learning chip. This
configuration lets the system learn the direction of motion.
Fig. 6: Developed AER interfacing PCBs. (a) PCI-AER, (b) USB-AER, (c) AER-switch, (d) mini-USB
Fig. 7: Experimental setup of multi-layered AER vision system for ball tracking (white boxes include
custom designed chips, blue boxes are interfacing PCBs). (a) block diagram, (b) photograph of setup.
6 Computer Interfaces
When developing and tuning complex hierarchical multi-chip AER systems it is crucial
to have available proper computer interfaces for (a) reading AER traffic and visualizing it,
and (b) for injecting synthesized or recorded AER traffic into AER buses. We developed
several solutions. Fig. 6(a) shows a PCI-AER interfacing PCB capable of transmitting
AER streams from within the computer or, vice versa, capturing them from an AER bus
and into computer memory. It uses a Spartan-II FPGA, and can achieve a peak rate of
15 Mevent/s using PCI mastering. Fig. 6(b) shows a USB-AER board that does not require
a PCI slot and can be controlled through a USB port. It uses a Spartan II 200 FPGA with a
Silicon Labs C8051F320 microcontroller. Depending on the FPGA firmware, it can be
used to perform five different functions: (a) transform sequence of frames into AER in real
time [13], (b) histogram AER events into sequences of frames in real time, (c) do
remappings of addresses based on look-up-tables, (d) capture timestamped events for offline analysis, (e) reproduce time-stamped sequences of events in real time. This board can
also work without a USB connection (stand-alone mode) by loading the firmware through
MMC/SD cards, used in commercial digital cameras. This PCB can handle AER traffic of
up to 25 Mevent/s. It also includes a VGA output for visualizing histogrammed frames.
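For intuition, the look-up-table mode (c) amounts to rewriting each event's address on the fly as it passes through the board. The minimal sketch below is our own illustration; the event format and the polarity-bit convention are assumed, not taken from the board's firmware.

```python
# Illustrative sketch (assumed event format, not the USB-AER firmware):
# look-up-table remapping rewrites each address-event's address, e.g. to
# merge ON/OFF polarity bits or re-route pixels between chips.

def remap_events(events, lut):
    """events: list of (address, timestamp); lut: address -> new address."""
    return [(lut[addr], t) for addr, t in events]

# Example: drop the polarity bit (LSB) so ON and OFF events share an address.
lut = {a: a >> 1 for a in range(8)}
events = [(4, 0.1), (5, 0.2), (6, 0.3)]
print(remap_events(events, lut))  # [(2, 0.1), (2, 0.2), (3, 0.3)]
```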
The third PCB, based on a simple CPLD, is shown in Fig. 6(c). It splits one AER bus into
2, 3 or 4 buses, and vice versa, merges 2, 3 or 4 buses into a single bus, with proper
handling of handshaking signals. The last board in Fig. 6(d) is a lower performance but
more compact single-chip bus-powered USB interface based on a C8051F320
microcontroller. It captures timestamped events to a computer at rates of up to 100 kevent/
s and is particularly useful for demonstrations and field capture of retina output.
7 Demonstration Vision System
To test CAVIAR's capabilities, we built a demonstration system that could simultaneously
track two objects of different size. A block diagram of the complete system is shown in
Fig. 7(a), and a photograph of the complete experimental setup is given in Fig. 7(b). The
Fig. 8: Captured AER outputs at different stages of the processing chain. (a) At the retina output, (b) at the output of the 2 convolution chips, (c) at the output of the object chip. "I" labels the activity of the inhibitory neurons.
complete chain consisted of 17 pieces (chips and PCBs), all numbered in Fig. 7: (1) The
rotating wheel stimulus. (2) The retina. The retina looked at a rotating disc with two solid
circles on it of two different radii. (3) A USB-AER board as mapper to reassign addresses
and eliminate the polarity of brightness change. (4) A 1-to-3 splitter (one output for the
PCI-AER board (7) to visualize the retina output, as shown in Fig. 8(a), and two outputs
for two convolution chips). (5-6) Two convolution chips programmed with the kernels in
Fig. 3c-d, to detect circumferences of radius 4 pixels and 9 pixels, respectively. They see
the complete 64x64 retina image (with rectified activity; polarity is ignored) but provide a
32x32 output for only the central part of the retina image. This eliminates convolution
edge effects. The output of each convolution chip is fed to a USB-AER board working as a
monitor (8-9) to visualize their outputs (Fig. 8b). The left half is for the 4-radius kernel
and the right half for the 9-radius kernel. The outputs of the convolution chips provide the
center of the circumferences only if they have radius close to 4 pixels or 9 pixels,
respectively. As can be seen, each convolution chip detects correctly the center of its
corresponding circumference, but not the other. Both chips are tuned for the same feature
but with different spatial scale. Both convolution chips outputs are merged onto a single
AER bus using a merger (10) and then fed to a mapper (11) to properly reassign the
address and bit signs for the winner-take-all "object" chip (12), which correctly decides the
centers of the convolution chip outputs. The object chip output is fed to a monitor (13) for
visualization purposes. This output is shown in Fig. 8(c). The output of this chip is
transformed using a mapper (14) and fed to the delay line chip (15), the outputs of which
are fed through a mapper (16) to the learning (17) chip. The system as characterized can
simultaneously track two objects of different size; we have connected but not yet studied
trajectory learning and classification.
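The role of the two convolution chips in this chain can be reproduced in software: correlating the spike image with a ring kernel of matching radius produces a peak at the circle's center, while a mismatched radius does not. The toy sketch below is our own illustration (a small software convolution with made-up kernel tolerances), not the chips' actual kernels or resolution.

```python
import numpy as np

# Toy sketch (not the chip kernels): a ring kernel of matching radius
# responds maximally at the circle's center, which is how each convolution
# chip localizes the circle of its own size.

def ring_kernel(radius, size=11):
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(yy - c, xx - c)
    return (np.abs(r - radius) < 0.7).astype(float)  # thin annulus

def correlate2d(img, k):
    # direct (unflipped) 2-D correlation with zero padding
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

# A 32x32 spike image containing one circle outline of radius 4 at (16, 16).
img = np.zeros((32, 32))
yy, xx = np.mgrid[:32, :32]
img[np.abs(np.hypot(yy - 16, xx - 16) - 4) < 0.7] = 1.0

resp = correlate2d(img, ring_kernel(4))
peak = tuple(int(v) for v in np.unravel_index(resp.argmax(), resp.shape))
print(peak)  # (16, 16): strongest response at the circle's center
```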
8 Conclusions
In terms of the number of independent components, CAVIAR demonstrates the largest
AER system yet assembled. It consists of 5 custom neuromorphic AER chips and at least
6 custom AER digital boards. Its functioning shows that AER can be used for assembling
complex real time sensory processing systems and that relevant information about object
size and location can be extracted and restored through a chain of feedforward stages. The
CAVIAR system is a useful environment to develop reusable AER infrastructure and is
capable of fast visual computation that is not limited by normal imager frame rate. Its
continued development will result in insights about spike coding and representation.
Acknowledgements
This work was sponsored by EU grant IST-2001-34124 (CAVIAR), and Spanish grant
TIC-2003-08164-C03 (SAMANTA). We thank K. Boahen for sharing AER interface
technology and the EU project ALAVLSI for sharing chip development and other AER
computer interfaces [14].
References
[1] M. Sivilotti, Wiring Considerations in Analog VLSI Systems with Application to Field-Programmable Networks, Ph.D. Thesis, California Institute of Technology, Pasadena CA, 1991.
[2] K. Boahen, "Point-to-Point Connectivity Between Neuromorphic Chips Using Address Events," IEEE Trans. on Circuits and Systems Part-II, vol. 47, no. 5, pp. 416-434, May 2000.
[3] J. P. Lazzaro and J. Wawrzynek, "A Multi-Sender Asynchronous Extension to the Address-Event Protocol," 16th Conference on Advanced Research in VLSI, W. J. Dally, J. W. Poulton, and A. T. Ishii (Eds.), pp. 158-169, 1995.
[4] T. Serrano-Gotarredona, A. G. Andreou, and B. Linares-Barranco, "AER Image Filtering Architecture for Vision Processing Systems," IEEE Trans. Circuits and Systems (Part II): Analog and Digital Signal Processing, vol. 46, no. 9, pp. 1064-1071, September 1999.
[5] R. Serrano-Gotarredona, B. Linares-Barranco, and T. Serrano-Gotarredona, "A New Charge-Packet Driven Mismatch-Calibrated Integrate-and-Fire Neuron for Processing Positive and Negative Signals in AER-based Systems," in Proc. of the IEEE Int. Symp. Circ. Syst. (ISCAS04), vol. 5, pp. 744-747, Vancouver, Canada, May 2004.
[6] P. Lichtsteiner, T. Delbrück, and J. Kramer, "Improved ON/OFF temporally differentiating address-event imager," in 11th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2004), Tel Aviv, Israel, 2004, pp. 211-214.
[7] T. Delbrück and D. Oberhoff, "Self-biasing low-power adaptive photoreceptor," in Proc. of the IEEE Int. Symp. Circ. Syst. (ISCAS04), pp. IV-844-847, 2004.
[8] P. Lichtsteiner and T. Delbrück, "64x64 AER Logarithmic Temporal Derivative Silicon Retina," Research in Microelectronics and Electronics, vol. 2, pp. 202-205, July 2005.
[9] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, T. Burg, and R. Douglas, "Orientation-selective aVLSI spiking neurons," Neural Networks, 14(6/7):629-643, Jul. 2001.
[10] M. Oster and S.-C. Liu, "A Winner-take-all Spiking Network with Spiking Inputs," in 11th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2004), Tel Aviv, pp. 203-206, 2004.
[11] H. Kolle Riis and P. Häfliger, "Spike based learning with weak multi-level static memory," in Proc. of the IEEE Int. Symp. Circ. Syst. (ISCAS04), vol. 5, pp. 393-395, Vancouver, Canada, May 2004.
[12] P. Häfliger and H. Kolle Riis, "A Multi-Level Static Memory Cell," in Proc. of the IEEE Int. Symp. Circ. Syst. (ISCAS04), vol. 1, pp. 22-25, Bangkok, Thailand, May 2003.
[13] A. Linares-Barranco, G. Jiménez-Moreno, B. Linares-Barranco, and A. Civit-Ballcels, "On Algorithmic Rate-Coded AER Generation," accepted for publication in IEEE Trans. Neural Networks, May 2006 (tentatively).
[14] V. Dante, P. Del Giudice, and A. M. Whatley, "PCI-AER Hardware and Software for Interfacing to Address-Event Based Neuromorphic Systems," The Neuromorphic Engineer, 2(1):5-6, 2005.
Atkeson
Using Local Models to Control Movement
Christopher G. Atkeson
Department of Brain and Cognitive Sciences
and the Artificial Intelligence Laboratory
Massachusetts Institute of Technology
NE43-771, 545 Technology Square
Cambridge, MA 02139
[email protected]
ABSTRACT
This paper explores the use of a model neural network for motor
learning. Steinbuch and Taylor presented neural network designs to
do nearest neighbor lookup in the early 1960s. In this paper their
nearest neighbor network is augmented with a local model network,
which fits a local model to a set of nearest neighbors. The network
design is equivalent to local regression. This network architecture
can represent smooth nonlinear functions, yet has simple training
rules with a single global optimum. The network has been used
for motor learning of a simulated arm and a simulated running
machine.
1 INTRODUCTION
A common problem in motor learning is approximating a continuous function from
samples of the function's inputs and outputs. This paper explores a neural network architecture that simply remembers experiences (samples) and builds a local
model to answer any particular query (an input for which the function's output is
desired). This network design can represent smooth nonlinear functions, yet has
simple training rules with a single global optimum for building a local model in
response to a query. Our approach is to model complex continuous functions using simple local models. This approach avoids the difficult problem of finding an
appropriate structure for a global model. A key idea is to form a training set for
the local model network after a query to be answered is known. This approach
allows us to include in the training set only relevant experiences (nearby samples).
The local model network, which may be a simple network architecture such as a
perceptron, forms a model of the portion of the function near the query point. This
local model is then used to predict the output of the function, given the input. The
local model network is retrained with a new training set to answer the next query.
This approach minimizes interference between old and new data, and allows the
range of generalization to depend on the density of the samples.
Steinbuch (Steinbuch 1961, Steinbuch and Piske 1963) and Taylor (Taylor 1959,
Taylor 1960) independently proposed neural network designs that used a local representation to do nearest neighbor lookup and pointed out that this approach could
be used for control. They used a layer of hidden units to compute an inner product
of each stored vector with the input vector. A winner-take-all circuit then selected
the hidden unit with the highest activation. This type of network can find nearest neighbors or best matches using a Euclidean distance metric (Kazmierczak and
Steinbuch 1963). In this paper their nearest neighbor lookup network (which I will
refer to as the memory network) is augmented with a local model network, which
fits a local model to a set of nearest neighbors.
The ideas behind the network design used in this paper have a long history. Approaches which represent previous experiences directly and use a similar experience
or similar experiences to form a local model are often referred to as nearest neighbor
or k-nearest neighbor approaches. Local models (often polynomials) have been used
for many years to smooth time series (Sheppard 1912, Sherriff 1920, Whittaker and
Robinson 1924, Macauley 1931) and interpolate and extrapolate from limited data.
Lancaster and Salkauskas (1986) refer to nearest neighbor approaches as "moving
least squares" and survey their use in fitting surfaces to arbitrarily spaced points.
Eubank (1988) surveys the use of nearest neighbor estimators in nonparametric
regression. Farmer and Sidorowich (1988) survey the use of nearest neighbor and
local model approaches in modeling chaotic dynamic systems.
Crain and Bhattacharyya (1967), Falconer (1971), and McLain (1974) suggested
using a weighted regression to fit a local polynomial model at each point a function
evaluation was desired. All of the available data points were used. Each data point
was weighted by a function of its distance to the desired point in the regression.
McIntyre, Pollard, and Smith (1968), Pelto, Elkins, and Boyd (1968), Legg and
Brent (1969), Palmer (1969), Walters (1969), Lodwick and Whittle (1970), Stone
(1975) and Franke and Nielson (1980) suggested fitting a polynomial surface to a set
of nearest neighbors, also using distance weighted regression. Cleveland (1979) proposed using robust regression procedures to eliminate outlying or erroneous points
in the regression process. A program implementing a refined version of this approach (LOESS) is available by sending electronic mail containing the single line,
send dloess from a, to the address [email protected] (Grosse 1989). Cleveland, Devlin and Grosse (1988) analyze the statistical properties of the LOESS
algorithm and Cleveland and Devlin (1988) show examples of its use. Stone (1977,
1982), Devroye (1981), Cheng (1984), Li (1984), Farwig (1987), and Miiller (1987)
provide analyses of nearest neighbor approaches. Franke (1982) compares the performance of nearest neighbor approaches with other methods for fitting surfaces to
data.
2 THE NETWORK ARCHITECTURE
The memory network of Steinbuch and Taylor is used to find the nearest stored
vectors to the current input vector. The memory network computes a measure of
the distance between each stored vector and the input vector in parallel, and then a
"winner take all" network selects the nearest vector (nearest neighbor). Euclidean
distance has been chosen as the distance metric, because the Euclidean distance is
invariant under rotation of the coordinates used to represent the input vector.
The memory network consists of three layers of units: input units, hidden or memory
units, and output units. The squared Euclidean distance between the input vector
(i) and a weight vector (Wk) for the connections of the input units to hidden unit
k is given by:

d_k^2 = (i - Wk)^T (i - Wk) = i^T i - 2 i^T Wk + Wk^T Wk

Since the quantity i^T i is the same for all hidden units, minimizing the distance between the input vector and the weight vector for each hidden unit is equivalent to maximizing:

i^T Wk - (1/2) Wk^T Wk
This quantity is the inner product of the input vector and the weight vector for
hidden unit k, biased by half the squared length of the weight vector.
Dynamics of the memory network neurons allow the memory network to output a
sequence of nearest neighbors. These nearest neighbors form the selected training
sequence for the local model network. Memory unit dynamics can be used to allocate
"free" memory units to new experiences, and to forget old training points when the
capacity of the memory network is fully utilized.
The local model network consists of only one layer of modifiable weights preceded by
any number of layers with fixed connections. There may be arbitrary preprocessing
of the inputs of the local model, but the local model is linear in the parameters
used to form the fit. The local model network using the LMS training algorithm
performs a linear regression of the transformed inputs against the desired outputs.
Thus, the local model network can be used to fit a linear regression model to the
selected training set. With multiplicative interactions between inputs the local
model network can be used to fit a polynomial surface (such as a quadratic) to the
selected training set. An alternative implementation of the local model network
could use a single layer of "sigma-pi" units.
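Putting the two networks together, a query is answered by retrieving the nearest stored samples and fitting a linear-in-the-weights model to just that selected training set. The sketch below is our reading of that procedure; the sample data and the use of numpy's least-squares solver are illustrative, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of answering a query: select the k nearest stored samples,
# fit a local linear model (bias + linear terms) by least squares, and
# evaluate that model at the query point.

def local_linear_predict(X, y, q, k=3):
    X, y, q = np.asarray(X, float), np.asarray(y, float), np.asarray(q, float)
    idx = np.argsort(((X - q) ** 2).sum(axis=1))[:k]      # k nearest neighbors
    A = np.hstack([np.ones((len(idx), 1)), X[idx]])        # bias column + inputs
    w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)         # linear-in-weights fit
    return float(np.r_[1.0, q] @ w)                        # evaluate at the query

# Samples drawn from f(x) = 2x + 1; the local fit recovers the line exactly.
X = [[0.0], [1.0], [2.0], [5.0]]
y = [1.0, 3.0, 5.0, 11.0]
print(local_linear_predict(X, y, [1.5]))  # ~ 4.0
```

Swapping the design matrix for one with quadratic and cross terms gives the quadratic local model described above; the fit stays linear in the unknown weights.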
This network design has simple training rules. In the memory network the weights
are simply the values of the components of input and output vectors, and the bias
for each memory unit is just half the squared length of the corresponding input
weight vector. No search for weights is necessary, since the weights are directly
Figure 1: Simulated Planar Two-joint Arm
given by the data to be stored. The local model network is linear in the weights,
leading to a single optimum which can be found by linear regression or gradient
descent. Thus, convergence to the global optimum is guaranteed when forming a
local model to answer a particular query.
This network architecture was simulated using k-d tree data structures (Friedman,
Bentley, and Finkel 1977) on a standard serial computer and also using parallel
search on a massively parallel computer, the Connection Machine (Hillis 1985). A
special purpose computer is being built to implement this network in real time.
3 APPLICATIONS
The network has been used for motor learning of a simulated arm and a simulated
running machine. The network performed surprisingly well in these simple evaluations. The simulated arm was able to follow a desired trajectory after only a few
practice movements. Performance of the simulated running machine in following a
series of desired velocities was also improved. This paper will report only on the
arm trajectory learning.
Figure 1 shows the simulated 2-joint planar arm. The problem faced in this simulation is to learn the correct joint torques to drive the arm along the desired
trajectory (the inverse dynamics problem). In addition to the feedforward control
signal produced by the network described in this paper, a feedback controller was
also used.
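The way the feedback controller supplies training data can be illustrated with a toy one-dimensional analogue; the script below is our own construction (made-up gains, a scalar "plant" with inverse dynamics tau = 2*a), not the paper's arm simulation.

```python
# Toy 1-D analogue of the learning scheme: on each practice movement the
# feedforward torque comes from memory, the feedback controller corrects the
# residual error, and the total applied torque is stored as a new sample --
# so the feedback controller acts as the teacher.

memory = {}  # desired acceleration -> remembered total torque

def feedforward(a):
    if not memory:
        return 0.0
    nearest = min(memory, key=lambda x: abs(x - a))  # nearest-neighbor lookup
    return memory[nearest]

true_gain = 2.0  # plant: a_actual = tau / true_gain
errors = []
for movement in range(3):
    a_desired = 1.0
    tau_ff = feedforward(a_desired)
    a_actual = tau_ff / true_gain
    tau_fb = 1.5 * (a_desired - a_actual)     # feedback correction
    memory[a_desired] = tau_ff + tau_fb       # store total torque as data
    errors.append(abs(a_desired - a_actual))

print(errors)  # [1.0, 0.25, 0.0625] -- the error shrinks with practice
```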
Figure 2 shows several learning curves for this problem. The first point in each
of the curves shows the performance generated by the feedback controller alone.
The error measure is the RMS torque error during the movement. The highest
curve shows the performance of a nearest neighbor method without a local model.
The nearest point was used to generate the torques for the feedforward command,
which were then summed with the output from the feedback controller. The second
[Figure 2 plot omitted: RMS torque error during the movement versus practice movement number, with curves for the nearest neighbor method, the linear local model, and the quadratic local model.]
Figure 2: Learning curves from 3 different network designs on the two joint arm
trajectory learning problem.
curve shows the performance using a linear local model. The third curve shows
the performance using a quadratic local model. Adding the local model network
greatly speeds up learning. The network with the quadratic local model learned
more quickly than the one with the local linear model.
4 WHY DOES IT WORK?
In this learning paradigm the feedback controller serves as the teacher, or source of
new data for the network. If the feedback controller is of poor quality, the nearest
neighbor function approximation method tends to get "stuck" with a non-zero error
level. The use of a local model seems to eliminate this stuck state, and reduce the
dependence on the quality of the feedback controller.
Fast training is achieved by modularizing the network: the memory network does
not need to search for weights in order to store the samples, and local models can
be linear in the unknown parameters, leading to a single optimum which can be
found by linear regression or gradient descent.
The combination of storing all the data and only using a certain number of nearby
samples to form a local model minimizes interference between old and new data,
and allows the range of generalization to depend on the density of the samples.
There are many issues left to explore. A disadvantage of this approach is the limited
capacity of the memory network. In this version of the proposed neural network
architecture, every experience is stored. Eventually all the memory units will be
used up. To use memory units more sparingly, only the experiences which are sufficiently different from previous experiences could be stored. Memory requirements
could also be reduced by "forgetting" certain experiences, perhaps those that have
not been referenced for a long time, or a randomly selected experience. It is an
empirical question as to how large a memory capacity is necessary for this network
design to be useful.
How should the distance metric be chosen? So far distance metrics have been
devised by hand. Better distance metrics may be based on the stored data and
a particular query. How far will this approach take us? Experiments using more
complex systems and actual physical implementations, with the inevitable noise and
high order dynamics, need to be done.
Acknowledgments
B. Widrow and J. D. Cowan made the author aware of the work of Steinbuch and
Taylor (Steinbuch and Widrow 1965, Cowan and Sharp 1988).
This paper describes research done at the Whitaker College, Department of Brain
and Cognitive Sciences, Center for Biological Information Processing and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support
was provided under Office of Naval Research contract N00014-88-K-0321 and under
Air Force Office of Scientific Research grant AFOSR-89-0500. Support for CGA
was provided by a National Science Foundation Engineering Initiation A ward and
Presidential Young Investigator Award, an Alfred P. Sloan Research Fellowship, the
W. M. Keck Foundation Assistant Professorship in Biomedical Engineering, and a
Whitaker Health Sciences Fund MIT Faculty Research Grant.
References
Cheng, P.E. (1984), "Strong Consistency of Nearest Neighbor Regression Function Estimators", Journal of Multivariate Analysis, 15:63-72.
Cleveland, W.S. (1979), "Robust Locally Weighted Regression and Smoothing
Scatterplots", Journal of the American Statistical Association, 74:829-836.
Cleveland, W.S. and S.J. Devlin (1988), "Locally Weighted Regression: An
Approach to Regression Analysis by Local Fitting", Journal of the American Statistical Association, 83:596-610.
Cleveland, W.S., S.J. Devlin and E. Grosse (1988), "Regression by Local
Fitting: Methods, Properties, and Computational Algorithms", Journal of Econometrics, 37:87-114.
Cowan, J.D. and D.H. Sharp (1988), "Neural Nets", Quarterly Reviews of
Biophysics, 21(3):365-427.
Crain, I.K. and B.K. Bhattacharyya (1967), "Treatment of nonequispaced
two dimensional data with a digital computer", Geoexploration, 5:173-194.
Devroye, L.P. (1981), "On the Almost Everywhere Convergence of Nonparametric Regression Function Estimates", The Annals of Statistics, 9(6):1310-1319.
Eubank, R.L. (1988), Spline Smoothing and Nonparametric Regression, Marcel
Dekker, New York, pp. 384-387.
Falconer, K.J. (1971), "A general purpose algorithm for contouring over scattered data points", Nat. Phys. Lab. Report NAC 6.
Farmer, J.D., and J.J. Sidorowich (1988), "Predicting Chaotic Dynamics",
in Dynamic Patterns in Complex Systems, J .A.S. Kelso, A.J. Mandell, and M.F.
Shlesinger, (eds.), World Scientific, New Jersey, pp. 265-292.
Farwig, R. (1987), "Multivariate Interpolation of Scattered Data by Moving Least
Squares Methods", in J .C. Mason and M.G. Cox (eds), Algorithms for Approximation, Clarendon Press, Oxford, pp. 193-21l.
Franke, R. (1982), "Scattered Data Interpolation: Tests of Some Methods",
Mathematics of Computation, 38(157):181-200.
Franke, R. and G. Nielson (1980), "Smooth Interpolation of Large Sets of
Scattered Data", International Journal Numerical Methods Engineering, 15:1691-1704.
Friedman, J.H., J.L. Bentley, and R.A. Finkel (1977), "An Algorithm for
Finding Best Matches in Logarithmic Expected Time", ACM Trans. on Mathematical Software, 3(3):209-226.
Grosse, E. (1989), "LOESS: Multivariate Smoothing by Moving Least Squares",
in C.K. Chui, L.L. Schumaker, and J.D. Ward (eds.), Approximation Theory VI,
Academic Press, Boston, pp. 1-4.
Hillis, D. (1985), The Connection Machine, MIT Press, Cambridge, Mass.
Kazmierczak, H. and K. Steinbuch (1963), "Adaptive Systems in Pattern
Recognition" , IEEE Transactions on Electronic Computers, EC-12:822-835.
Lancaster, P. and K. Salkauskas (1986), Curve And Surface Fitting, Academic
Press, New York.
Legg, M.P.C. and R.P. Brent (1969), "Automatic Contouring", Proc. 4th
A ustralian Computer Conference, 467-468.
Li, K.C. (1984), "Consistency for Cross-Validated Nearest Neighbor Estimates in
Nonparametric Regression", The Annals of Statistics, 12:230-240.
Lodwick, G.D., and J. Whittle (1970), "A technique for automatic contouring
field survey data", Australian Computer Journal, 2:104-109.
Macauley, F.R. (1931), The Smoothing of Time Series, National Bureau of Economic Research, New York.
McIntyre, D.B., D.D. Pollard, and R. Smith (1968), "Computer Programs
For Automatic Contouring" , Kansas Geological Survey Computer Contributions 23,
University of Kansas, Lawrence, Kansas.
McLain, D.H. (1974), "Drawing Contours From Arbitrary Data Points", The
Computer Journal, 17(4):318-324.
Miiller, H.G. (1987), "Weighted Local Regression and Kernel Methods for Nonparametric Curve Fitting", Journal of the American Statistical Association, 82:231-238.
Palmer, J.A.B. (1969), "Automated mapping", Proc. 4th Australian Computer
Conference, 463-466 .
Pelto, C.R., T.A. Elkins, and H.A. Boyd (1968), "Automatic contouring of
irregularly spaced data", Geophysics, 33:424-430.
Sheppard, W.F. (1912), "Reduction of Errors by Means of Negligible Differences", Proceedings of the Fifth International Congress of Mathematicians, E. W.
Hobson and A. E. H. Love (eds), Cambridge University Press, 11:348-384.
Sherriff, C.W.M. (1920), "On a Class of Graduation Formulae", Proceedings of
the Royal Society of Edinburgh, XL:112-128.
Steinbuch, K. (1961), "Die lernmatrix", Kybernetik, 1:36-45.
Steinbuch, K. and U.A.W. Piske (1963), "Learning Matrices and Their Applications" , IEEE Transactions on Electronic Computers, EC-12:846-862.
Steinbuch, K. and B. Widrow (1965), "A Critical Comparison of Two Kinds
of Adaptive Classification Networks" , IEEE Transactions on Electronic Computers,
EC-14:737-740.
Stone, C.J. (1975), "Nearest Neighbor Estimators of a Nonlinear Regression
Function", Proc. of Computer Science and Statistics: 8th Annual Symposium on
the Interface, pp. 413-418.
Stone, C.J. (1977), "Consistent Nonparametric Regression", The Annals of Statistics, 5:595-645.
Stone, C.J. (1982), "Optimal Global Rates of Convergence for Nonparametric
Regression", The Annals of Statistics, 10(4):1040-1053.
Taylor, W.K. (1959), "Pattern Recognition By Means Of Automatic Analogue
Apparatus", Proceedings of The Institution of Electrical Engineers, 106B:198-209.
Taylor, W.K. (1960), "A parallel analogue reading machine", Control, 3:95-99.
Taylor, W.K. (1964), "Cortico-thalamic organization and memory", Proc. Royal
Society B, 159:466-478.
Walters, R.F. (1969), "Contouring by Machine: A User's Guide", American
Association of Petroleum Geologists Bulletin, 53(11):2324-2340.
Whittaker, E., and G. Robinson (1924), The Calculus of Observations, Blackie
& Son, London.
323
A General and Efficient Multiple Kernel Learning Algorithm

Sören Sonnenburg
Fraunhofer FIRST
Kekuléstr. 7
12489 Berlin, Germany
[email protected]

Gunnar Rätsch
Friedrich Miescher Lab
Max Planck Society
Spemannstr. 39
Tübingen, Germany
[email protected]

Christin Schäfer
Fraunhofer FIRST
Kekuléstr. 7
12489 Berlin, Germany
[email protected]
Abstract

While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm helps with automatic model selection, improves the interpretability of the learning result, and works for hundreds of thousands of examples or hundreds of kernels to be combined.
1 Introduction
Kernel based methods such as Support Vector Machines (SVMs) have proven to be powerful for a wide range of different data analysis problems. They employ a so-called kernel function k(x_i, x_j) which intuitively computes the similarity between two examples x_i and x_j. The result of SVM learning is an α-weighted linear combination of kernel elements and the bias b:

  f(x) = sign( Σ_{i=1}^N α_i y_i k(x_i, x) + b ),   (1)

where the x_i's are the N labeled training examples (y_i ∈ {±1}).
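For concreteness, the decision function (1) can be evaluated in a few lines of NumPy. The Gaussian kernel and the toy values below are illustrative stand-ins and not the paper's setup:

```python
import numpy as np

def rbf_kernel(a, b, width=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * width^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * width ** 2))

def svm_decision(x, X_train, y, alpha, b, kernel=rbf_kernel):
    """Evaluate f(x) = sign(sum_i alpha_i * y_i * k(x_i, x) + b), as in Eq. (1)."""
    score = sum(a_i * y_i * kernel(x_i, x)
                for a_i, y_i, x_i in zip(alpha, y, X_train)) + b
    return np.sign(score)

# Tiny illustrative problem: two training points, one per class.
X_train = np.array([[0.0, 0.0], [2.0, 2.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])   # dual coefficients, made up for illustration
b = 0.0
```

A query point near the positive example gets label +1, one near the negative example gets -1, since the RBF kernel decays with distance.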
Recent developments in the literature on the SVM and other kernel methods have shown
the need to consider multiple kernels. This provides flexibility, and also reflects the fact
that typical learning problems often involve multiple, heterogeneous data sources. While
this so-called "multiple kernel learning" (MKL) problem can in principle be solved via
cross-validation, several recent papers have focused on more efficient methods for multiple
kernel learning [4, 5, 1, 7, 3, 9, 2].
One of the problems with kernel methods compared to other techniques is that the resulting decision function (1) is hard to interpret and, hence, is difficult to use in order to extract relevant knowledge about the problem at hand.* One can approach this problem by considering convex combinations of K kernels, i.e.

  k(x_i, x_j) = Σ_{k=1}^K β_k k_k(x_i, x_j)   (2)

with β_k ≥ 0 and Σ_k β_k = 1, where each kernel k_k uses only a distinct set of features of each instance. For appropriately designed sub-kernels k_k, the optimized combination coefficients can then be used to understand which features of the examples are of importance for discrimination: if one can obtain an accurate classification with a sparse weighting β_k, then one can quite easily interpret the resulting decision function. We will illustrate that the considered MKL formulation provides useful insights and is at the same time very efficient. This is an important property missing in current kernel based algorithms.

* For more details, datasets and pseudocode see http://www.fml.tuebingen.mpg.de/raetsch/projects/mkl_silp.
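The convex combination (2) is straightforward to realize on precomputed kernel matrices. The sketch below uses made-up weights; it also shows why a sparse β is interpretable, since sub-kernels with β_k = 0 drop out of the combination entirely:

```python
import numpy as np

def combine_kernels(kernels, beta):
    """Return K = sum_k beta_k * K_k for Gram matrices K_k, as in Eq. (2).

    beta must lie on the simplex: beta_k >= 0 and sum_k beta_k = 1.
    """
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and np.isclose(beta.sum(), 1.0)
    return sum(b * K for b, K in zip(beta, kernels))

# Three sub-kernels on the same four points (illustrative random PSD matrices).
rng = np.random.default_rng(0)
kernels = []
for _ in range(3):
    A = rng.standard_normal((4, 4))
    kernels.append(A @ A.T)          # A @ A.T is symmetric positive semidefinite

beta = [0.7, 0.3, 0.0]               # sparse weighting: the third kernel is unused
K = combine_kernels(kernels, beta)
```

The combined K is again a valid (symmetric PSD) kernel matrix, since the simplex constraint keeps the combination conic.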
We consider the framework proposed by [7], which results in a convex optimization problem - a quadratically-constrained quadratic program (QCQP). This problem is more challenging than the standard SVM QP, but it can in principle be solved by general-purpose
optimization toolboxes. Since the use of such algorithms will only be feasible for small
problems with few data points and kernels, [1] suggested an algorithm based on sequential minimal optimization (SMO) [10]. While the kernel learning problem is convex, it
is also non-smooth, making the direct application of simple local descent algorithms such
as SMO infeasible. [1] therefore considered a smoothed version of the problem to which
SMO can be applied.
In this work we follow a different direction: We reformulate the problem as a semi-infinite
linear program (SILP), which can be efficiently solved using an off-the-shelf LP solver and
a standard SVM implementation (cf. Section 2 for details). Using this approach we are
able to solve problems with more than a hundred thousand examples or with several hundred kernels quite efficiently. We have used it for the analysis of sequence analysis problems, leading to a better understanding of the biological problem at hand [16, 13]. We extend
our previous work and show that the transformation to a SILP works with a large class of
convex loss functions (cf. Section 3). Our column-generation based algorithm for solving
the SILP works by repeatedly using an algorithm that can efficiently solve the single kernel
problem in order to solve the MKL problem. Hence, if there exists an algorithm that solves
the simpler problem efficiently (like SVMs), then our new algorithm can efficiently solve
the multiple kernel learning problem.
We conclude the paper by illustrating the usefulness of our algorithms in several examples
relating to the interpretation of results and to automatic model selection.
2 Multiple Kernel Learning for Classification using SILP

In the Multiple Kernel Learning (MKL) problem for binary classification one is given N data points (x_i, y_i) (y_i ∈ {±1}), where x_i is translated via a mapping Φ_k(x) ↦ R^{D_k}, k = 1, ..., K, from the input into K feature spaces (Φ_1(x_i), ..., Φ_K(x_i)), where D_k denotes the dimensionality of the k-th feature space. Then one solves the following optimization problem [1], which is equivalent to the linear SVM for K = 1:¹

  min_{w_k ∈ R^{D_k}, ξ ∈ R^N_+, β ∈ R^K_+, b ∈ R}  (1/2) ( Σ_{k=1}^K β_k ||w_k||_2 )² + C Σ_{i=1}^N ξ_i   (3)

  s.t.  y_i ( Σ_{k=1}^K β_k ⟨w_k, Φ_k(x_i)⟩ + b ) ≥ 1 − ξ_i   and   Σ_{k=1}^K β_k = 1.

¹ [1] used a slightly different but equivalent (assuming tr(K_k) = 1, k = 1, ..., K) formulation without the β's, which we introduced for illustration.
Note that the ℓ1-norm of β is constrained to one, while the ℓ2-norm of w_k is penalized in each block k separately. The idea is that ℓ1-norm constrained or penalized variables tend to have sparse optimal solutions, while ℓ2-norm penalized variables do not [11]. Thus the above optimization problem offers the possibility to find sparse solutions on the block level with non-sparse solutions within the blocks.

Bach et al. [1] derived the dual for problem (3), which can be equivalently written as:

  min_{γ ∈ R, C1 ≥ α ∈ R^N_+}  γ   s.t.  Σ_{i=1}^N α_i y_i = 0  and   (4)

  (1/2) Σ_{i,j=1}^N α_i α_j y_i y_j k_k(x_i, x_j) − Σ_{i=1}^N α_i  =: S_k(α)  ≤ γ   for k = 1, ..., K,

where k_k(x_i, x_j) = ⟨Φ_k(x_i), Φ_k(x_j)⟩. Note that we have one quadratic constraint per kernel (S_k(α) ≤ γ). In the case of K = 1, the above problem reduces to the original SVM dual.
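Each quadratic constraint in (4) only needs the Gram matrix of its sub-kernel. A direct NumPy transcription of S_k(α), evaluated on made-up values, looks like this:

```python
import numpy as np

def S_k(alpha, y, K_k):
    """S_k(alpha) = 1/2 sum_{ij} alpha_i alpha_j y_i y_j k_k(x_i, x_j) - sum_i alpha_i,
    with K_k the Gram matrix of sub-kernel k."""
    ay = alpha * y                     # elementwise alpha_i * y_i
    return 0.5 * ay @ K_k @ ay - alpha.sum()

# Illustrative values: 3 points, identity Gram matrix.
alpha = np.array([0.5, 1.0, 0.5])
y = np.array([1.0, -1.0, 1.0])
K_id = np.eye(3)
```

With the identity Gram matrix the quadratic term reduces to (1/2) Σ_i α_i², so the value here is 0.5 · 1.5 − 2 = −1.25.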
In order to solve (4), one may solve the following saddle point problem (Lagrangian):

  L := γ + Σ_{k=1}^K β_k ( S_k(α) − γ ),   (5)

minimized w.r.t. α ∈ R^N_+, γ ∈ R (subject to α ≤ C1 and Σ_i α_i y_i = 0) and maximized w.r.t. β ∈ R^K_+. Setting the derivative w.r.t. γ to zero, one obtains the constraint Σ_{k=1}^K β_k = 1, and (5) simplifies to L = S(α, β) := Σ_{k=1}^K β_k S_k(α). This leads to a min-max problem:

  max_{β ∈ R^K_+}  min_{C1 ≥ α ∈ R^N_+}  Σ_{k=1}^K β_k S_k(α)   s.t.  Σ_{i=1}^N α_i y_i = 0  and  Σ_{k=1}^K β_k = 1.   (6)
Assume α* were the optimal solution; then θ* := S(α*, β) is minimal and, hence, S(α, β) ≥ θ* for all α (subject to the above constraints). Finding a saddle point of (5) is therefore equivalent to solving the following semi-infinite linear program:

  max_{θ ∈ R, β ∈ R^K_+}  θ   s.t.  Σ_{k=1}^K β_k = 1  and  Σ_{k=1}^K β_k S_k(α) ≥ θ   (7)

  for all α with 0 ≤ α ≤ C1 and Σ_i y_i α_i = 0.

Note that this is a linear program, as θ and β are only linearly constrained. However, there are infinitely many constraints: one for each α ∈ R^N satisfying 0 ≤ α ≤ C1 and Σ_{i=1}^N α_i y_i = 0. Both problems (6) and (7) have the same solution. To illustrate this, fix β and let α* be the minimizer of the inner problem in (6). Then we can decrease the value of θ in (7) just until no α-constraint of (7) is violated, i.e. down to θ = Σ_{k=1}^K β_k S_k(α*), which is the largest feasible θ for this β. Maximizing θ over β then recovers the maximizing β of (6). We will discuss in Section 4 how to solve such semi-infinite linear programs.
3 Multiple Kernel Learning with General Cost Functions

In this section we consider the more general class of MKL problems, where one is given an arbitrary strictly convex differentiable loss function, for which we derive its MKL SILP formulation. We will then instantiate this general MKL SILP with different loss functions, in particular the soft-margin loss, the ε-insensitive loss and the quadratic loss.
We define the MKL primal formulation for a strictly convex and differentiable loss function L as (for simplicity we omit a bias term):

  min_{w_k ∈ R^{D_k}}  (1/2) ( Σ_{k=1}^K ||w_k||_2 )² + Σ_{i=1}^N L(f(x_i), y_i)   s.t.  f(x_i) = Σ_{k=1}^K ⟨Φ_k(x_i), w_k⟩.   (8)
In analogy to [1] we treat problem (8) as a second order cone program (SOCP), leading to the following dual (see the supplementary website or [17] for details):

  min_{γ ∈ R, α ∈ R^N}  γ − Σ_{i=1}^N L( L'^{-1}(α_i, y_i), y_i ) + Σ_{i=1}^N α_i L'^{-1}(α_i, y_i)   (9)

  s.t.  (1/2) || Σ_{i=1}^N α_i Φ_k(x_i) ||²_2 ≤ γ,   k = 1, ..., K,

where L' denotes the derivative of the loss with respect to its first argument and L'^{-1}(·, y) its inverse. To derive the SILP formulation we follow the same recipe as in Section 2: deriving the Lagrangian leads to a max-min problem formulation, which is eventually reformulated as a SILP:
  max_{θ ∈ R, β ∈ R^K_+}  θ   s.t.  Σ_{k=1}^K β_k = 1  and  Σ_{k=1}^K β_k S_k(α) ≥ θ  for all α ∈ R^N,

where

  S_k(α) = − Σ_{i=1}^N L( L'^{-1}(α_i, y_i), y_i ) + Σ_{i=1}^N α_i L'^{-1}(α_i, y_i) + (1/2) || Σ_{i=1}^N α_i Φ_k(x_i) ||²_2.
We assumed that L(x, y) is strictly convex and differentiable in x. Unfortunately, the soft
margin and ε-insensitive loss do not have these properties. We therefore consider them
separately in the sequel.
Soft Margin Loss  We use the following loss in order to approximate the soft margin loss: L_σ(x, y) = (C/σ) log(1 + exp(σ(1 − xy))). It is easy to verify that lim_{σ→∞} L_σ(x, y) = C(1 − xy)_+. Moreover, L_σ is strictly convex and differentiable for σ < ∞. Using this loss and assuming y_i ∈ {±1}, we obtain

  S_k(α) = − Σ_{i=1}^N [ (C/σ) log( Cy_i/(α_i + Cy_i) ) + (α_i/σ) log( α_i/(α_i + Cy_i) ) ] + Σ_{i=1}^N α_i y_i + (1/2) || Σ_{i=1}^N α_i Φ_k(x_i) ||²_2.

If σ → ∞, then the first two terms vanish, provided that −C ≤ α_i ≤ 0 if y_i = 1 and 0 ≤ α_i ≤ C if y_i = −1. Substituting ᾱ_i = −α_i y_i, we then obtain

  S̃_k(ᾱ) = − Σ_{i=1}^N ᾱ_i + (1/2) || Σ_{i=1}^N ᾱ_i y_i Φ_k(x_i) ||²_2,   with 0 ≤ ᾱ_i ≤ C (i = 1, ..., N),

which is very similar to (4): only the Σ_i ᾱ_i y_i = 0 constraint is missing, since we omitted the bias.
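The convergence of the smoothed loss L_σ to the hinge loss C(1 − xy)_+ is easy to check numerically. The sketch below uses a numerically stable softplus (log(1 + e^t) via logaddexp, so large σ does not overflow); the grid and σ value are arbitrary choices for illustration:

```python
import numpy as np

def smoothed_hinge(x, y, C=1.0, sigma=1.0):
    """L_sigma(x, y) = (C / sigma) * log(1 + exp(sigma * (1 - x * y))),
    computed stably as (C / sigma) * logaddexp(0, sigma * (1 - x * y))."""
    return (C / sigma) * np.logaddexp(0.0, sigma * (1.0 - x * y))

def hinge(x, y, C=1.0):
    """Soft margin loss C * max(0, 1 - x * y)."""
    return C * np.maximum(0.0, 1.0 - x * y)

# Pointwise gap between the smoothed loss and the hinge loss for large sigma.
xs = np.linspace(-2.0, 2.0, 9)
gap = np.max(np.abs(smoothed_hinge(xs, 1.0, sigma=200.0) - hinge(xs, 1.0)))
```

The worst-case gap is log(2)/σ (attained at the hinge point xy = 1), so for σ = 200 it is about 0.0035, and the smoothed loss always lies on or above the hinge.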
One-Class Soft Margin Loss  The one-class SVM soft margin (e.g. [15]) is very similar to the two-class case and leads to

  S_k(α) = (1/2) || Σ_{i=1}^N α_i Φ_k(x_i) ||²_2,   subject to  0 ≤ α ≤ (1/(νN)) 1  and  Σ_{i=1}^N α_i = 1.
ε-insensitive Loss  Using the same technique for the ε-insensitive loss L(x, y) = C(|x − y| − ε)_+, we obtain

  S_k(α, α*) = (1/2) || Σ_{i=1}^N (α_i − α_i*) Φ_k(x_i) ||²_2 + Σ_{i=1}^N (α_i + α_i*) ε − Σ_{i=1}^N (α_i − α_i*) y_i,

with 0 ≤ α, α* ≤ C1. When including a bias term, we additionally have the constraint Σ_{i=1}^N (α_i − α_i*) = 0.
It is straightforward to derive the dual problem for other loss functions such as the quadratic loss. Note that the dual SILPs only differ in the definition of S_k and the domains of the α's.
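To illustrate how little changes between losses, here is the ε-insensitive S_k from the previous paragraph in NumPy, given an explicit feature matrix for sub-kernel k. All names and numbers below are made up for illustration:

```python
import numpy as np

def S_k_eps(alpha, alpha_star, y, Phi_k, eps):
    """S_k(alpha, alpha*) for the eps-insensitive loss:

    1/2 * ||sum_i (alpha_i - alpha*_i) Phi_k(x_i)||^2
      + eps * sum_i (alpha_i + alpha*_i) - sum_i (alpha_i - alpha*_i) y_i.

    Phi_k holds one feature vector Phi_k(x_i) per row.
    """
    d = alpha - alpha_star
    w = Phi_k.T @ d                      # sum_i d_i * Phi_k(x_i)
    return 0.5 * w @ w + eps * np.sum(alpha + alpha_star) - d @ y

# Two points with orthonormal features, illustrative dual variables.
alpha = np.array([0.5, 0.0])
alpha_star = np.array([0.0, 0.5])
y = np.array([1.0, -1.0])
Phi_k = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
val = S_k_eps(alpha, alpha_star, y, Phi_k, eps=0.1)
```

Here d = (0.5, −0.5), so the quadratic term is 0.25, the ε term adds 0.1, and the target term subtracts 1.0, giving −0.65.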
4 Algorithms to solve SILPs

The SILPs considered in this work all have the following form:

  max_{θ ∈ R, β ∈ R^K_+}  θ   s.t.  Σ_{k=1}^K β_k = 1  and  Σ_{k=1}^K β_k S_k(α) ≥ θ  for all α ∈ C   (10)

for some appropriate S_k(α) and a feasible set C ⊆ R^N of α depending on the choice of the cost function. Using Theorem 5 in [12] one can show that the above SILP has a solution if the corresponding primal is feasible and bounded. Moreover, there is no duality gap if M = co{ [S_1(α), ..., S_K(α)]^T | α ∈ C } is a closed set. For all loss functions considered in this paper this holds true. We propose to use a technique called Column Generation to solve (10). The basic idea is to compute the optimal (β, θ) in (10) for a restricted subset of constraints; this is called the restricted master problem. Then a second algorithm generates a new constraint determined by α. In the best case, that algorithm finds the constraint which maximally violates the given intermediate solution (β, θ), i.e.

  α* := argmin_{α ∈ C} Σ_k β_k S_k(α).   (11)

If α* satisfies the constraint Σ_{k=1}^K β_k S_k(α*) ≥ θ, then the solution is optimal. Otherwise, the constraint is added to the set of constraints.
Algorithm 1 is a special case of the set of SILP algorithms known as exchange methods. These methods are known to converge (cf. Theorem 7.2 in [6]); however, no convergence rates for such algorithms are so far known.² Since it is often sufficient to obtain an approximate solution, we have to define a suitable convergence criterion. Note that the problem is solved when all constraints are satisfied. Hence, it is a natural choice to use the normalized maximal constraint violation as a convergence criterion, i.e.

  ε_t := | 1 − ( Σ_{k=1}^K β_k^t S_k(α^t) ) / θ^t |,

where (β^t, θ^t) is the optimal solution at iteration t − 1 and α^t corresponds to the newly found maximally violating constraint of the next iteration.
We need an algorithm to identify unsatisfied constraints, which, fortunately, turns out to be particularly simple. Note that (11) is, for all considered cases, exactly the dual optimization problem of the single kernel case for fixed β. For instance, for binary classification, (11) reduces to the standard SVM dual using the kernel k(x_i, x_j) = Σ_k β_k k_k(x_i, x_j):

  min_{α ∈ R^N}  (1/2) Σ_{i,j=1}^N α_i α_j y_i y_j k(x_i, x_j) − Σ_{i=1}^N α_i   with  0 ≤ α ≤ C1  and  Σ_{i=1}^N α_i y_i = 0.

We can therefore use a standard SVM implementation in order to identify the most violated constraint. Since there exists a large number of efficient algorithms to solve the single kernel problems for all sorts of cost functions, we have found an easy way to extend their applicability to the problem of Multiple Kernel Learning. In some cases it is possible to extend existing SMO based implementations to simultaneously optimize α and β. In [16] we considered such an algorithm for the binary classification case that frequently recomputes the β's.³ Empirically it is a few times faster than the column generation algorithm, but it is on the other hand much harder to implement.
5 Experiments

In this section we will discuss toy examples for binary classification and regression, demonstrating that MKL can recover information about the problem at hand, followed by a brief review of problems for which MKL has been successfully used.

5.1 Classification
In Figure 1 we consider a binary classification problem, where we used MKL-SVMs with five RBF kernels of different widths to distinguish the dark star-like shape from the

² It has been shown that solving semi-infinite problems like (7) using a method related to boosting (e.g. [8]) requires at most T = O(log(M)/ε̂²) iterations, where ε̂ is the remaining constraint violation and the constants may depend on the kernels and the number of examples N [11, 14]. At least for not too small values of ε̂ this technique produces reasonably fast, good approximate solutions.
³ Simplex-based LP solvers often offer the possibility to efficiently restart the computation when only a few constraints are added.
Algorithm 1  The column generation algorithm employs a linear programming solver to iteratively solve the semi-infinite linear optimization problem (10). The accuracy parameter ε is a parameter of the algorithm. S_k(α) and C are determined by the cost function.

  S⁰ = 1,  θ¹ = −∞,  β_k¹ = 1/K  for k = 1, ..., K
  for t = 1, 2, ... do
      compute α^t = argmin_{α ∈ C} Σ_{k=1}^K β_k^t S_k(α) by a single kernel algorithm with K = Σ_{k=1}^K β_k^t K_k
      S^t = Σ_{k=1}^K β_k^t S_k(α^t)
      if |1 − S^t/θ^t| ≤ ε then break
      (β^{t+1}, θ^{t+1}) = argmax θ  w.r.t. β ∈ R^K_+, θ ∈ R  with  Σ_{k=1}^K β_k = 1  and  Σ_{k=1}^K β_k S_k^r ≥ θ  for r = 1, ..., t
  end for

where S_k^r := S_k(α^r).
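The restricted master problem in Algorithm 1 is a small LP over (β, θ) and can be handed to any off-the-shelf solver. Below is a sketch using scipy.optimize.linprog; the helper name and the constraint values S_k(α^r) are made up for illustration and are not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

def restricted_master(S):
    """Solve  max theta  s.t.  sum_k beta_k = 1,  beta >= 0,
    and  sum_k beta_k * S[r, k] >= theta  for every stored row r.

    S is a (T, K) array; row r holds S_k(alpha^r) for the T alphas
    (constraints) generated so far.  LP variables: x = (beta_1..beta_K, theta).
    """
    T, K = S.shape
    c = np.zeros(K + 1)
    c[-1] = -1.0                                  # maximize theta == minimize -theta
    A_ub = np.hstack([-S, np.ones((T, 1))])       # theta - beta @ S_r <= 0
    b_ub = np.zeros(T)
    A_eq = np.concatenate([np.ones(K), [0.0]])[None, :]   # sum_k beta_k = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * K + [(None, None)]   # beta >= 0, theta free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:K], res.x[K]

# Two kernels, three stored constraints; kernel 2 dominates row-wise,
# so the LP should put all weight on it and set theta to its worst row.
S = np.array([[-3.0, -1.0],
              [-2.5, -0.5],
              [-4.0, -1.5]])
beta, theta = restricted_master(S)
```

In a full implementation, each iteration would append the row (S_1(α^t), ..., S_K(α^t)) returned by the single-kernel solver and re-solve this LP, which simplex-based solvers can warm-start cheaply.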
light star. (The distance between the stars increases from left to right.) Shown are the obtained kernel weightings for the five kernels and the test error, which quickly drops to zero as the problem becomes separable. Note that the RBF kernel with the largest width was not appropriate and thus never chosen. Also, with increasing distance between the stars, kernels with greater widths are used. This illustrates that with MKL one can indeed recover such tendencies.
5.2 Regression
We applied the newly derived MKL support vector regression formulation to the task of learning a sine function, using three RBF kernels with different widths. We then increased the frequency of the sine wave. As can be seen in Figure 2, MKL-SV regression abruptly switches to the width of the RBF kernel that fits the regression problem best. In another regression experiment, we combined a linear function with two sine waves, one of lower frequency and one of high frequency, i.e. f(x) = c · x + sin(ax) + sin(bx). Using ten RBF kernels of different widths (see Figure 3) we trained an MKL-SVR and display the learned weights (a column in the figure). The largest selected width (100) models the linear component (since RBF kernels with large widths are effectively linear) and the medium width (1) corresponds to the lower-frequency sine. We varied the frequency of the high-frequency sine wave from low to high (left to right in the figure). One observes that MKL determines
Figure 1: A 2-class toy problem where the dark grey star-like shape is to be distinguished
from the light grey star inside of the dark grey star. For details see text.
[Figure 2 plot: kernel weight (y-axis, 0 to 1) versus frequency (x-axis, 0 to 5) for RBF kernels of widths 0.005, 0.05, 0.5, 1 and 10.]
Figure 2: MKL-Support Vector Regression for the task of learning a sine wave (please see
text for details).
an appropriate combination of kernels of low and high widths, while decreasing the RBF-kernel width with increased frequency. This shows that MKL can be more powerful than cross-validation: to achieve a similar result with cross-validation one has to use 3 nested loops to tune 3 RBF-kernel sigmas, e.g. train 10 · 9 · 8/6 = 120 SVMs, which in preliminary experiments was much slower than using MKL (800 vs. 56 seconds).
[Figure 3 plot: learned kernel weights shown as a heatmap over RBF kernel widths 0.001 to 1000 (rows) and frequencies 2 to 20 (columns).]
Figure 3: MKL support vector regression on a linear combination of three functions: f(x) = c · x + sin(ax) + sin(bx). MKL recovers that the original function is a combination of functions of low and high complexity. For more details see text.
5.3 Applications in the Real World
MKL has been successfully used on real-world datasets in the field of computational biology [7, 16]. It was shown to improve classification performance on the tasks of ribosomal and membrane protein prediction, where a weighting over different kernels, each corresponding to a different feature set, was learned. Random channels obtained low kernel weights. Moreover, on a splice site recognition task we used MKL as a tool for interpreting the SVM classifier [16], as displayed in Figure 4. Using specifically optimized string kernels, we were able to solve the classification MKL SILP for N = 1,000,000 examples and K = 20 kernels, as well as for N = 10,000 examples and K = 550 kernels.
[Figure 4 plot: learned importance weights (0 to 0.05) over sequence positions −50 to +50 relative to the exon start.]
Figure 4: The figure shows an importance weighting for each position in a DNA sequence (around a so-called splice site). MKL was used to learn these weights, each corresponding to a sub-kernel which uses information at that position to discriminate true splice sites from fake ones. Different peaks correspond to different biologically known signals (see [16] for details). We used 65,000 examples for training with 54 sub-kernels.
6 Conclusion
We have proposed a simple, yet efficient algorithm to solve the multiple kernel learning problem for a large class of loss functions. The proposed method is able to exploit existing single kernel algorithms, thereby extending their applicability. In experiments we have illustrated that MKL for classification and regression can be useful for automatic model selection and for obtaining comprehensible information about the learning problem at hand. It is future work to evaluate MKL algorithms for unsupervised learning such as kernel PCA and one-class classification.
Acknowledgments

The authors gratefully acknowledge partial support from the PASCAL Network of Excellence (EU #506778), DFG grants JA 379/13-2 and MU 987/2-1. We thank Guido Dornhege, Olivier Chapelle, Olaf Weiss, Joaquin Quiñonero Candela, Sebastian Mika and K.-R. Müller for great discussions.
References

[1] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Twenty-first International Conference on Machine Learning. ACM Press, 2004.
[2] Kristin P. Bennett, Michinari Momma, and Mark J. Embrechts. MARK: a boosting algorithm for heterogeneous kernel models. KDD, pages 24-31, 2002.
[3] Jinbo Bi, Tong Zhang, and Kristin P. Bennett. Column-generation boosting methods for mixture of kernels. KDD, pages 521-526, 2004.
[4] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1-3):131-159, 2002.
[5] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. In Advances in Neural Information Processing Systems, 2002.
[6] R. Hettich and K. O. Kortanek. Semi-infinite programming: theory, methods and applications. SIAM Review, 3:380-429, September 1993.
[7] G. R. G. Lanckriet, T. De Bie, N. Cristianini, M. I. Jordan, and W. S. Noble. A statistical framework for genomic data fusion. Bioinformatics, 2004.
[8] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In S. Mendelson and A. Smola, editors, Proc. of the First Machine Learning Summer School in Canberra, LNCS, pages 119-184. Springer, 2003. In press.
[9] C. S. Ong, A. J. Smola, and R. C. Williamson. Hyperkernels. In Advances in Neural Information Processing Systems, volume 15, pages 495-502, 2003.
[10] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208, Cambridge, MA, 1999. MIT Press.
[11] G. Rätsch. Robust Boosting via Convex Optimization. PhD thesis, University of Potsdam, Computer Science Dept., August-Bebel-Str. 89, 14482 Potsdam, Germany, 2001.
[12] G. Rätsch, A. Demiriz, and K. Bennett. Sparse regression ensembles in infinite and finite hypothesis spaces. Machine Learning, 48(1-3):193-221, 2002. Special issue on new methods for model selection and model combination. Also NeuroCOLT2 Technical Report NC-TR-2000-085.
[13] G. Rätsch, S. Sonnenburg, and C. Schäfer. Learning interpretable SVMs for biological sequence classification. BMC Bioinformatics, special issue from the NIPS workshop on New Problems and Methods in Computational Biology, Whistler, Canada, 18 December 2004, 7(Suppl. 1):S9, February 2006.
[14] G. Rätsch and M. K. Warmuth. Marginal boosting. NeuroCOLT2 Technical Report 97, Royal Holloway College, London, July 2001.
[15] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[16] S. Sonnenburg, G. Rätsch, and C. Schäfer. Learning interpretable SVMs for biological sequence classification. In RECOMB 2005, LNBI 3500, pages 389-407. Springer-Verlag Berlin Heidelberg, 2005.
[17] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 2006. Accepted.
Statistical Convergence of Kernel CCA
Kenji Fukumizu
Institute of Statistical Mathematics
Tokyo 106-8569 Japan
[email protected]
Francis R. Bach
Centre de Morphologie Mathematique
Ecole des Mines de Paris, France
[email protected]
Arthur Gretton
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
[email protected]
Abstract
While kernel canonical correlation analysis (kernel CCA) has been
applied in many problems, the asymptotic convergence of the functions estimated from a finite sample to the true functions has not
yet been established. This paper gives a rigorous proof of the statistical convergence of kernel CCA and a related method (NOCCO),
which provides a theoretical justification for these methods. The
result also gives a sufficient condition on the decay of the regularization coefficient in the methods to ensure convergence.
1 Introduction
Kernel canonical correlation analysis (kernel CCA) has been proposed as a nonlinear
extension of CCA [1, 11, 3]. Given two random variables, kernel CCA aims at
extracting the information which is shared by the two random variables, and has
been successfully applied in various practical contexts. More precisely, given two
random variables X and Y , the purpose of kernel CCA is to provide nonlinear
mappings f (X) and g(Y ) such that their correlation is maximized.
As in many statistical methods, the desired functions are in practice estimated from
a finite sample. Thus, the convergence of the estimated functions to the population
ones with increasing sample size is very important to justify the method. Since the
goal of kernel CCA is to estimate a pair of functions, the convergence should be
evaluated in an appropriate functional norm: thus, we need tools from functional
analysis to characterize the type of convergence.
The purpose of this paper is to rigorously prove the statistical convergence of kernel
CCA, and of a related method. The latter uses a NOrmalized Cross-Covariance
Operator, and we call it NOCCO for short. Both kernel CCA and NOCCO require a
regularization coefficient to enforce smoothness of the functions in the finite sample
case (thus avoiding a trivial solution), but the decay of this regularisation with
increased sample size has not yet been established. Our main theorems give a
sufficient condition on the decay of the regularization coefficient for the finite sample
estimates to converge to the desired functions in the population limit. Another
important issue in establishing the convergence is an appropriate distance measure
for functions. For NOCCO, we obtain convergence in the norm of reproducing kernel
Hilbert spaces (RKHS) [2]. This norm is very strong: if the positive definite (p.d.)
kernels are continuous and bounded, it is stronger than the uniform norm in the
space of continuous functions, and thus the estimated functions converge uniformly
to the desired ones. For kernel CCA, we show convergence in the L2 norm, which
is a standard distance measure for functions. We also discuss the relation between
our results and two relevant studies: COCO [9] and CCA on curves [10].
2 Kernel CCA and related methods
In this section, we review kernel CCA as presented by [3], and then formulate it
with covariance operators on RKHS. In this paper, a Hilbert space always refers to
a separable Hilbert space, and an operator to a linear operator. ‖T‖ denotes the
operator norm sup_{‖φ‖=1} ‖Tφ‖, and R(T) denotes the range of an operator T.
Throughout this paper, (H_X, k_X) and (H_Y, k_Y) are RKHS of functions on measurable spaces X and Y, respectively, with measurable p.d. kernels k_X and k_Y. We
consider a random vector (X, Y) : Ω → X × Y with distribution P_XY. The marginal
distributions of X and Y are denoted P_X and P_Y. We always assume

    E_X[k_X(X, X)] < ∞  and  E_Y[k_Y(Y, Y)] < ∞.        (1)
Note that under this assumption it is easy to see H_X and H_Y are continuously
included in L²(P_X) and L²(P_Y), respectively, where L²(μ) denotes the Hilbert
space of square integrable functions with respect to the measure μ.
2.1 CCA in reproducing kernel Hilbert spaces
Classical CCA provides the linear mappings aᵀX and bᵀY that achieve maximum
correlation. Kernel CCA extends this by looking for functions f and g such that
f(X) and g(Y) have maximal correlation. More precisely, kernel CCA solves

    max_{f∈H_X, g∈H_Y}  Cov[f(X), g(Y)] / ( Var[f(X)]^{1/2} Var[g(Y)]^{1/2} ).        (2)
In practice, we have to estimate the desired function from a finite sample. Given
an i.i.d. sample (X_1, Y_1), ..., (X_n, Y_n) from P_XY, an empirical solution of Eq. (2) is

    max_{f∈H_X, g∈H_Y}  Ĉov[f(X), g(Y)] / ( (V̂ar[f(X)] + ε_n ‖f‖²_{H_X})^{1/2} (V̂ar[g(Y)] + ε_n ‖g‖²_{H_Y})^{1/2} ),        (3)

where Ĉov and V̂ar denote the empirical covariance and variance, such as

    Ĉov[f(X), g(Y)] = (1/n) Σ_{i=1}^n ( f(X_i) − (1/n) Σ_{j=1}^n f(X_j) ) ( g(Y_i) − (1/n) Σ_{j=1}^n g(Y_j) ).

The positive constant ε_n is a regularization coefficient. As we shall see, the regularization terms ε_n ‖f‖²_{H_X} and ε_n ‖g‖²_{H_Y} make the problem well-formulated statistically,
enforce smoothness, and enable operator inversion, as in Tikhonov regularization.
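To make Eq. (3) concrete, the sketch below (an illustration, not code from the paper) evaluates the regularized empirical correlation for a fixed pair of functions, given their values on the sample; the squared RKHS norms are assumed to be supplied by the caller.

```python
import numpy as np

def regularized_corr(fvals, gvals, f_rkhs_sq, g_rkhs_sq, eps):
    """Objective of Eq. (3) for fixed f and g: empirical covariance over
    regularized empirical standard deviations."""
    fc = fvals - fvals.mean()
    gc = gvals - gvals.mean()
    cov = np.mean(fc * gc)
    var_f = np.mean(fc ** 2) + eps * f_rkhs_sq
    var_g = np.mean(gc ** 2) + eps * g_rkhs_sq
    return cov / np.sqrt(var_f * var_g)

vals = np.array([1.0, 2.0, 4.0, 8.0])
print(regularized_corr(vals, vals, 1.0, 1.0, 0.0))  # unregularized perfect correlation
print(regularized_corr(vals, vals, 1.0, 1.0, 0.5))  # regularization shrinks the score
```

With ε_n = 0 and identical function values the score is exactly 1; any positive ε_n inflates the denominators and shrinks it below 1, which is how the regularizer avoids a trivial solution.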
2.2 Representation with cross-covariance operators
Kernel CCA and related methods can be formulated using covariance operators [4, 7, 8], which make theoretical discussions easier. It is known that there exists a
unique cross-covariance operator Σ_YX : H_X → H_Y for (X, Y) such that

    ⟨g, Σ_YX f⟩_{H_Y} = E_XY[ (f(X) − E_X[f(X)]) (g(Y) − E_Y[g(Y)]) ]   (= Cov[f(X), g(Y)])

holds for all f ∈ H_X and g ∈ H_Y. The cross-covariance operator represents the
covariance of f(X) and g(Y) as a bilinear form of f and g. In particular, if Y is
equal to X, the self-adjoint operator Σ_XX is called the covariance operator.
Let (X_1, Y_1), ..., (X_n, Y_n) be i.i.d. random vectors on X × Y with distribution P_XY.
The empirical cross-covariance operator Σ̂_YX^{(n)} is defined as the cross-covariance
operator with the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i} δ_{Y_i}. By definition, for any f ∈ H_X and g ∈ H_Y, the operator Σ̂_YX^{(n)} gives the empirical covariance as follows:

    ⟨g, Σ̂_YX^{(n)} f⟩_{H_Y} = Ĉov[f(X), g(Y)].
Let Q_X and Q_Y be the orthogonal projections which respectively map H_X onto
R(Σ_XX) and H_Y onto R(Σ_YY). It is known [4] that Σ_YX can be represented as

    Σ_YX = Σ_YY^{1/2} V_YX Σ_XX^{1/2},        (4)

where V_YX : H_X → H_Y is a unique bounded operator such that ‖V_YX‖ ≤ 1
and V_YX = Q_Y V_YX Q_X. We often write V_YX as Σ_YY^{−1/2} Σ_YX Σ_XX^{−1/2} in an abuse of
notation, even when Σ_XX^{−1/2} or Σ_YY^{−1/2} are not appropriately defined as operators.
With cross-covariance operators, the kernel CCA problem can be formulated as

    sup_{f∈H_X, g∈H_Y} ⟨g, Σ_YX f⟩_{H_Y}   subject to   ⟨f, Σ_XX f⟩_{H_X} = 1,  ⟨g, Σ_YY g⟩_{H_Y} = 1.        (5)

As with classical CCA, the solution of Eq. (5) is given by the eigenfunctions corresponding to the largest eigenvalue of the following generalized eigenproblem:

    ( O      Σ_XY ) ( f )         ( Σ_XX   O    ) ( f )
    ( Σ_YX   O    ) ( g )  = ρ_1  ( O      Σ_YY ) ( g ).        (6)
Similarly, the empirical estimator in Eq. (3) is obtained by solving

    sup_{f∈H_X, g∈H_Y} ⟨g, Σ̂_YX^{(n)} f⟩_{H_Y}   subject to   ⟨f, (Σ̂_XX^{(n)} + ε_n I) f⟩_{H_X} = 1,  ⟨g, (Σ̂_YY^{(n)} + ε_n I) g⟩_{H_Y} = 1.        (7)
Let us assume that the operator V_YX is compact,¹ and let φ and ψ be the unit
eigenfunctions of V_YX corresponding to the largest singular value; that is,

    ⟨ψ, V_YX φ⟩_{H_Y} = max_{f∈H_X, g∈H_Y, ‖f‖_{H_X}=‖g‖_{H_Y}=1} ⟨g, V_YX f⟩_{H_Y}.        (8)

Given φ ∈ R(Σ_XX) and ψ ∈ R(Σ_YY), the kernel CCA solution in Eq. (6) is

    f = Σ_XX^{−1/2} φ,   g = Σ_YY^{−1/2} ψ.        (9)
In the empirical case, let φ̂_n ∈ H_X and ψ̂_n ∈ H_Y be the unit eigenfunctions corresponding to the largest singular value of the finite rank operator

    V̂_YX^{(n)} := (Σ̂_YY^{(n)} + ε_n I)^{−1/2} Σ̂_YX^{(n)} (Σ̂_XX^{(n)} + ε_n I)^{−1/2}.        (10)

As in Eq. (9), the empirical estimators f̂_n and ĝ_n in Eq. (7) are equal to

    f̂_n = (Σ̂_XX^{(n)} + ε_n I)^{−1/2} φ̂_n,   ĝ_n = (Σ̂_YY^{(n)} + ε_n I)^{−1/2} ψ̂_n.        (11)

¹A bounded operator T : H_1 → H_2 is called compact if any bounded sequence {u_n} ⊂ H_1 has a subsequence {u_{n′}} such that Tu_{n′} converges in H_2. One of the useful properties of a compact operator is that it admits a singular value decomposition (see [5, 6]).
Note that all the above empirical operators and the estimators can be expressed in
terms of Gram matrices. The solutions f̂_n and ĝ_n are exactly the same as those
given in [3], and are obtained by linear combinations of k_X(·, X_i) − (1/n) Σ_{j=1}^n k_X(·, X_j)
and k_Y(·, Y_i) − (1/n) Σ_{j=1}^n k_Y(·, Y_j). The functions φ̂_n and ψ̂_n are obtained similarly.
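For concreteness, the following sketch (an illustration with toy data and assumed Gaussian kernels, not code from the paper) carries out this Gram-matrix computation. Expanding f and g in the centered kernel functions turns the objective of Eq. (3) into a generalized eigenproblem in the coefficient vectors (α, β), built from centered Gram matrices G = HKH; the small `jitter` term only keeps the right-hand side positive definite numerically.

```python
import numpy as np
from scipy.linalg import eigh

def centered_gram(x, sigma=1.0):
    """Centered Gaussian Gram matrix G = H K H, with H = I - (1/n) 11^T."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma ** 2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_cca(x, y, eps=0.05, sigma=1.0, jitter=1e-9):
    """First canonical correlation of the regularized problem, Eq. (3)."""
    n = len(x)
    Gx, Gy = centered_gram(x, sigma), centered_gram(y, sigma)
    A = np.zeros((2 * n, 2 * n))
    A[:n, n:] = Gx @ Gy / n          # empirical covariance term
    A[n:, :n] = A[:n, n:].T
    B = np.zeros((2 * n, 2 * n))
    B[:n, :n] = Gx @ Gx / n + eps * Gx + jitter * np.eye(n)
    B[n:, n:] = Gy @ Gy / n + eps * Gy + jitter * np.eye(n)
    # Largest generalized eigenvalue of A v = rho B v.
    return eigh(A, B, eigvals_only=True)[-1]

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
rho_dep = kernel_cca(x, x + 0.1 * rng.standard_normal(100))
rho_ind = kernel_cca(x, rng.standard_normal(100))
```

Because of the regularization the estimate is shrunk below the raw empirical correlation, but strongly dependent data should still score well above independent data.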
There exist additional, related methods to extract nonlinear dependence. The constrained covariance (COCO) [9] uses the unit eigenfunctions of Σ_YX:

    max_{f∈H_X, g∈H_Y, ‖f‖_{H_X}=‖g‖_{H_Y}=1} ⟨g, Σ_YX f⟩_{H_Y} = max_{f∈H_X, g∈H_Y, ‖f‖_{H_X}=‖g‖_{H_Y}=1} Cov[f(X), g(Y)].
The statistical convergence of COCO has been proved in [8]. Instead of normalizing
the covariance by the variances, COCO normalizes it by the RKHS norms of f and
g. Kernel CCA is a more direct nonlinear extension of CCA than COCO. COCO
tends to find functions with large variance for f (X) and g(Y ), which may not be the
most correlated features. On the other hand, kernel CCA may encounter situations
where it finds functions with moderately large covariance but very small variance
for f(X) or g(Y), since Σ_XX and Σ_YY can have arbitrarily small eigenvalues.
A possible compromise is to use φ and ψ for V_YX, the NOrmalized Cross-Covariance
Operator (NOCCO). While the statistical meaning of NOCCO is not as direct as
kernel CCA, it can incorporate the normalization by Σ_XX and Σ_YY. We will
establish the convergence of kernel CCA and NOCCO in Section 3.
3 Main theorems: convergence of kernel CCA and NOCCO
We show the convergence of NOCCO in the RKHS norm, and that of kernel CCA in
the L² sense. The results may easily be extended to the convergence of the eigenspace
corresponding to the m-th largest eigenvalue.
Theorem 1. Let (ε_n)_{n=1}^∞ be a sequence of positive numbers such that

    lim_{n→∞} ε_n = 0,   lim_{n→∞} n^{1/3} ε_n = ∞.        (12)

Assume V_YX is compact, and the eigenspaces given by Eq. (8) are one-dimensional.
Let φ, ψ, φ̂_n, and ψ̂_n be the unit eigenfunctions of Eqs. (8) and (10). Then

    |⟨φ̂_n, φ⟩_{H_X}| → 1,   |⟨ψ̂_n, ψ⟩_{H_Y}| → 1

in probability, as n goes to infinity.
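As a quick numeric illustration of condition (12) (my example): the power schedule ε_n = n^{−a} satisfies both requirements exactly when 0 < a < 1/3, since then ε_n → 0 while n^{1/3} ε_n = n^{1/3−a} → ∞.

```python
import numpy as np

ns = np.array([1e2, 1e4, 1e6, 1e8])

def check_schedule(a):
    """Return (eps_n, n^(1/3) * eps_n) for the schedule eps_n = n^(-a)."""
    eps = ns ** (-a)
    return eps, ns ** (1.0 / 3.0) * eps

eps_ok, prod_ok = check_schedule(1.0 / 4.0)    # a < 1/3: condition (12) holds
eps_bad, prod_bad = check_schedule(1.0 / 2.0)  # a > 1/3: n^(1/3) * eps_n -> 0
```

For a = 1/4 both sequences behave as required, while a = 1/2 decays too fast for the second limit.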
Theorem 2. Let (ε_n)_{n=1}^∞ be a sequence of positive numbers which satisfies Eq. (12).
Assume that φ and ψ are included in R(Σ_XX) and R(Σ_YY), respectively, and that
V_YX is compact. Then, for f, g, f̂_n, and ĝ_n in Eqs. (9), (11), we have

    ‖(f̂_n − E_X[f̂_n(X)]) − (f − E_X[f(X)])‖_{L²(P_X)} → 0,
    ‖(ĝ_n − E_Y[ĝ_n(Y)]) − (g − E_Y[g(Y)])‖_{L²(P_Y)} → 0

in probability, as n goes to infinity.
The convergence of NOCCO in the RKHS norm is a very strong result. If kX and
kY are continuous and bounded, the RKHS norm is stronger than the uniform norm
of the continuous functions. In such cases, Theorem 1 implies φ̂_n and ψ̂_n converge
uniformly to φ and ψ, respectively. This uniform convergence is useful in practice,
because in many applications the function value at each point is important.
For any complete orthonormal systems (CONS) {φ_i}_{i=1}^∞ of H_X and {ψ_i}_{i=1}^∞ of H_Y,
the compactness assumption on V_YX requires that the correlation of Σ_XX^{−1/2} φ_i(X)
and Σ_YY^{−1/2} ψ_i(Y) decay to zero as i → ∞. This is not necessarily satisfied in general.
A trivial example is the case of variables with Y = X, in which V_YX = I is not
compact. In this case, NOCCO is solved by an arbitrary function. Moreover, the
kernel CCA does not have solutions, if Σ_XX has arbitrarily small eigenvalues.
Leurgans et al. [10] discuss CCA on curves, which are represented by stochastic
processes on an interval, and use the Sobolev space of functions with square integrable second derivative. Since the Sobolev space is an RKHS, their method is an
example of kernel CCA. They also show the convergence of estimators under the
condition n^{1/2} ε_n → ∞. Although the proof can be extended to a general RKHS,
convergence is measured by the correlation,

    |⟨f̂_n, Σ_XX f⟩_{H_X}| / ( ⟨f̂_n, Σ_XX f̂_n⟩_{H_X}^{1/2} ⟨f, Σ_XX f⟩_{H_X}^{1/2} ) → 1,

which is weaker than the L² convergence in Theorem 2.
In fact, using ⟨f, Σ_XX f⟩_{H_X} = 1, it is easy to derive the above convergence from Theorem 2. On the other hand, convergence of the correlation does not necessarily imply
⟨(f̂_n − f), Σ_XX (f̂_n − f)⟩_{H_X} → 0. From the equality

    ⟨(f̂_n − f), Σ_XX (f̂_n − f)⟩_{H_X} = ( ⟨f̂_n, Σ_XX f̂_n⟩_{H_X}^{1/2} − ⟨f, Σ_XX f⟩_{H_X}^{1/2} )²
        + 2 { 1 − ⟨f̂_n, Σ_XX f⟩_{H_X} / ( ‖Σ_XX^{1/2} f̂_n‖_{H_X} ‖Σ_XX^{1/2} f‖_{H_X} ) } ‖Σ_XX^{1/2} f̂_n‖_{H_X} ‖Σ_XX^{1/2} f‖_{H_X},

we require ⟨f̂_n, Σ_XX f̂_n⟩_{H_X} → ⟨f, Σ_XX f⟩_{H_X} = 1 in order to guarantee the left hand
side converges to zero. However, with the normalization ⟨f̂_n, (Σ̂_XX^{(n)} + ε_n I) f̂_n⟩_{H_X} = 1, convergence of ⟨f̂_n, Σ_XX f̂_n⟩_{H_X} is not clear. We use the stronger assumption
n^{1/3} ε_n → ∞ to prove ⟨(f̂_n − f), Σ_XX (f̂_n − f)⟩_{H_X} → 0 in Theorem 2.
4 Outline of the proof of the main theorems
We show only the outline of the proof in this paper. See [6] for the detail.
4.1 Preliminary lemmas
We introduce some definitions for our proofs. Let H_1 and H_2 be Hilbert spaces.
An operator T : H_1 → H_2 is called Hilbert-Schmidt if Σ_{i=1}^∞ ‖Tφ_i‖²_{H_2} < ∞ for a
CONS {φ_i}_{i=1}^∞ of H_1. Obviously ‖T‖ ≤ ‖T‖_HS. For Hilbert-Schmidt operators,
the Hilbert-Schmidt norm and inner product are defined as

    ‖T‖²_HS = Σ_{i=1}^∞ ‖Tφ_i‖²_{H_2},   ⟨T_1, T_2⟩_HS = Σ_{i=1}^∞ ⟨T_1 φ_i, T_2 φ_i⟩_{H_2}.

These definitions are independent of the CONS. For more details, see [5] and [8].
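A finite-dimensional sanity check of these definitions (my illustration): for a matrix T, summing ‖Tφ_i‖² over the columns of any orthogonal Q gives ‖TQ‖_F² = ‖T‖_F², so the Hilbert-Schmidt (Frobenius) norm does not depend on the chosen CONS, and it dominates the operator norm.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 4))

# Hilbert-Schmidt norm in the standard basis of the domain ...
hs_standard = np.sqrt(sum(np.linalg.norm(T[:, i]) ** 2 for i in range(4)))
# ... and in a rotated CONS given by the columns of an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
hs_rotated = np.sqrt(sum(np.linalg.norm(T @ Q[:, i]) ** 2 for i in range(4)))

op_norm = np.linalg.norm(T, 2)   # operator norm: largest singular value
```

Both basis choices give the same value, which also equals `np.linalg.norm(T, 'fro')`.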
For a Hilbert space 𝓕, a Borel measurable map F : Ω → 𝓕 from a measurable space
is called a random element in 𝓕. For a random element F in 𝓕 with E‖F‖ < ∞,
there exists a unique element E[F] ∈ 𝓕, called the expectation of F, such that

    ⟨E[F], g⟩_𝓕 = E[⟨F, g⟩_𝓕]   (∀g ∈ 𝓕)

holds. If random elements F and G in 𝓕 satisfy E[‖F‖²] < ∞ and E[‖G‖²] < ∞,
then ⟨F, G⟩_𝓕 is integrable. Moreover, if F and G are independent, we have

    E[⟨F, G⟩_𝓕] = ⟨E[F], E[G]⟩_𝓕.        (13)
It is easy to see that under the condition Eq. (1), the random element k_X(·, X) k_Y(·, Y) in
the direct product H_X ⊗ H_Y is integrable, i.e. E[‖k_X(·, X) k_Y(·, Y)‖_{H_X⊗H_Y}] < ∞.
Combining Lemma 1 in [8] and Eq. (13), we obtain the following lemma.

Lemma 3. The cross-covariance operator Σ_YX is Hilbert-Schmidt, and

    ‖Σ_YX‖²_HS = ‖ E_YX[ (k_X(·, X) − E_X[k_X(·, X)]) (k_Y(·, Y) − E_Y[k_Y(·, Y)]) ] ‖²_{H_X⊗H_Y}.
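For the empirical counterpart, the squared Hilbert-Schmidt norm has a simple Gram-matrix expression, (1/n²) tr(G_X G_Y) with centered Gram matrices G = HKH; this identity is standard but is my addition here, and the sanity check below uses linear kernels, where the operator reduces to the scalar empirical covariance.

```python
import numpy as np

def hs_norm_sq_cross_cov(Kx, Ky):
    """Squared HS norm of the empirical cross-covariance operator,
    computed as (1/n^2) tr(G_X G_Y) with centered Gram matrices."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Gx, Gy = H @ Kx @ H, H @ Ky @ H
    return float(np.trace(Gx @ Gy)) / n ** 2

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = x + rng.standard_normal(200)
# With linear kernels k(s, t) = s t the operator is the scalar empirical
# covariance, so the squared HS norm must equal cov_hat ** 2.
val = hs_norm_sq_cross_cov(np.outer(x, x), np.outer(y, y))
cov_hat = np.mean((x - x.mean()) * (y - y.mean()))
```

The linear-kernel check is exact up to floating-point error, which makes it a convenient unit test for the centering step.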
The law of large numbers implies lim_{n→∞} ⟨g, Σ̂_YX^{(n)} f⟩_{H_Y} = ⟨g, Σ_YX f⟩_{H_Y} in probability for each f
and g. The following lemma shows a much stronger uniform result.

Lemma 4.

    ‖Σ̂_YX^{(n)} − Σ_YX‖_HS = O_p(n^{−1/2})   (n → ∞).
Proof. Write for simplicity F = k_X(·, X) − E_X[k_X(·, X)], G = k_Y(·, Y) − E_Y[k_Y(·, Y)], F_i = k_X(·, X_i) − E_X[k_X(·, X)], and G_i = k_Y(·, Y_i) − E_Y[k_Y(·, Y)].
Then F, F_1, ..., F_n are i.i.d. random elements in H_X, and a similar property also
holds for G, G_1, ..., G_n. Lemma 3 and the same argument as its proof imply

    ‖Σ̂_YX^{(n)}‖²_HS = ‖ (1/n) Σ_{i=1}^n ( F_i − (1/n) Σ_{j=1}^n F_j )( G_i − (1/n) Σ_{j=1}^n G_j ) ‖²_{H_X⊗H_Y},

    ⟨Σ_YX, Σ̂_YX^{(n)}⟩_HS = ⟨ E[F G], (1/n) Σ_{i=1}^n ( F_i − (1/n) Σ_{j=1}^n F_j )( G_i − (1/n) Σ_{j=1}^n G_j ) ⟩_{H_X⊗H_Y}.

From these equations, we have

    ‖Σ̂_YX^{(n)} − Σ_YX‖²_HS = ‖ (1/n) Σ_{i=1}^n ( F_i − (1/n) Σ_j F_j )( G_i − (1/n) Σ_j G_j ) − E[F G] ‖²_{H_X⊗H_Y}
                        = ‖ (1/n)(1 − 1/n) Σ_{i=1}^n F_i G_i − (1/n²) Σ_{i=1}^n Σ_{j≠i} F_i G_j − E[F G] ‖²_{H_X⊗H_Y}.

Using E[F_i] = E[G_i] = 0 and E[F_i G_j F_k G_ℓ] = 0 for i ≠ j, {k, ℓ} ≠ {i, j}, we have

    E‖Σ̂_YX^{(n)} − Σ_YX‖²_HS = (1/n) E‖F G‖²_{H_X⊗H_Y} − (1/n) ‖E[F G]‖²_{H_X⊗H_Y} + O(1/n²).

The proof is completed by Chebyshev's inequality.
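A small simulation (my illustration) makes the O_p(n^{−1/2}) rate visible in the simplest special case: with linear kernels on the real line, the Hilbert-Schmidt distance in Lemma 4 reduces to |ĉov − cov|, so multiplying the sample size by 16 should shrink the typical error by about a factor of 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_hs_error(n, reps=200):
    """Median of |cov_hat - cov| over repeated samples; with linear
    kernels this is the HS distance of Lemma 4. Here cov(X, Y) = 1."""
    errs = []
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = x + rng.standard_normal(n)   # cov(X, Y) = var(X) = 1
        cov_hat = np.mean((x - x.mean()) * (y - y.mean()))
        errs.append(abs(cov_hat - 1.0))
    return float(np.median(errs))

e_small = median_hs_error(100)
e_large = median_hs_error(1600)
ratio = e_small / e_large   # expected to be around sqrt(1600 / 100) = 4
```

The observed ratio fluctuates around 4, consistent with the n^{−1/2} rate.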
The following two lemmas are essential parts of the proof of the main theorems.

Lemma 5. Let ε_n be a positive number such that ε_n → 0 (n → ∞). Then

    ‖V̂_YX^{(n)} − (Σ_YY + ε_n I)^{−1/2} Σ_YX (Σ_XX + ε_n I)^{−1/2}‖ = O_p(ε_n^{−3/2} n^{−1/2}).
Y X ? (?Y Y + ?n I)
Proof. The operator on the left hand side is equal to
(n)
(n) (n)
?1/2
?1/2
b
b (?
b
(?
? (?Y Y + ?n I)?1/2 ?
Y Y + ?n I)
YX
XX + ?n I)
b (n) ? ?Y X (?
b (n) + ?n I)?1/2
+ (?Y Y + ?n I)?1/2 ?
YX
XX
(n)
?1/2
b
+ (?Y Y + ?n I)
?Y X (?XX + ?n I)?1/2 ? (?XX + ?n I)?1/2 .
(14)
?3/2
?1/2
?1/2
?1/2
3/2
3/2
?3/2
From the equality A
?B
=A
B ?A
B
+ (A ? B)B
, the
first term in Eq. (14) is equal to
3
3
(n)
? 23 (n) (n)
(n)
? 12
? 21
b
b (n) 2 + ?
b (n) ? ?Y Y
b
b (?
b
(?
?Y2 Y ? ?
?
?
.
Y Y + ?n I)
YY
YY
Y Y + ?n I
YX
XX + ?n I)
?
(n)
(n)
(n)
(n)
?1/2
?1/2 b
?1/2
b
b
b
From k(?
k ? 1/ ?n , k(?
?Y X (?
k?1
Y Y + ?n I)
XX + ?n I)
Y Y + ?n I)
and Lemma 7, the norm of the above operator is upper-bounded by
3
(n)
3/2
1
b (n) k3/2 + 1 k?
b
?
, k?
YY
Y Y ? ?Y Y k.
?n
?n max k?Y Y k
A similar bound applies to the third term of Eq. (14), and the second term is
b (n) k. Thus, Lemma 4 completes the proof.
upper-bounded by ?1n k?Y X ? ?
YX
Lemma 6. Assume V_YX is compact. Then, for a sequence ε_n → 0,

    ‖(Σ_YY + ε_n I)^{−1/2} Σ_YX (Σ_XX + ε_n I)^{−1/2} − V_YX‖ → 0   (n → ∞).
Proof. It suffices to prove that ‖{(Σ_YY + ε_n I)^{−1/2} − Σ_YY^{−1/2}} Σ_YX (Σ_XX + ε_n I)^{−1/2}‖ and
‖Σ_YY^{−1/2} Σ_YX {(Σ_XX + ε_n I)^{−1/2} − Σ_XX^{−1/2}}‖ converge to zero. The former is equal to

    ‖ { (Σ_YY + ε_n I)^{−1/2} Σ_YY^{1/2} − I } V_YX Σ_XX^{1/2} (Σ_XX + ε_n I)^{−1/2} ‖ ≤ ‖ { (Σ_YY + ε_n I)^{−1/2} Σ_YY^{1/2} − I } V_YX ‖.        (15)

Note that R(V_YX) ⊂ R(Σ_YY), as remarked in Section 2.2. Let v = Σ_YY^{1/2} u be
an arbitrary element in R(V_YX) ∩ R(Σ_YY^{1/2}). We have

    ‖ { (Σ_YY + ε_n I)^{−1/2} Σ_YY^{1/2} − I } v ‖_{H_Y} = ‖ (Σ_YY + ε_n I)^{−1/2} Σ_YY^{1/2} { Σ_YY^{1/2} − (Σ_YY + ε_n I)^{1/2} } u ‖_{H_Y}
        ≤ ‖ Σ_YY^{1/2} − (Σ_YY + ε_n I)^{1/2} ‖ ‖u‖_{H_Y}.

Since (Σ_YY + ε_n I)^{1/2} → Σ_YY^{1/2} in norm, we obtain

    { (Σ_YY + ε_n I)^{−1/2} Σ_YY^{1/2} − I } v → 0   (n → ∞)        (16)

for all v ∈ R(V_YX) ∩ R(Σ_YY^{1/2}). Because V_YX is compact, Lemma 8 in the Appendix
shows Eq. (15) converges to zero. The convergence of the second norm is similar.
4.2 Proof of the main theorems
Proof of Thm. 1. This follows from Lemmas 5, 6, and Lemma 9 in the Appendix.

Proof of Thm. 2. We show only the convergence of f̂_n. W.l.o.g., we can assume φ̂_n → φ
in H_X. From ‖Σ_XX^{1/2}(f̂_n − f)‖²_{H_X} = ‖Σ_XX^{1/2} f̂_n‖²_{H_X} − 2⟨φ, Σ_XX^{1/2} f̂_n⟩_{H_X} + ‖φ‖²_{H_X}, it
suffices to show Σ_XX^{1/2} f̂_n converges to φ in probability. We have

    ‖Σ_XX^{1/2} f̂_n − φ‖_{H_X}
        ≤ ‖Σ_XX^{1/2} { (Σ̂_XX^{(n)} + ε_n I)^{−1/2} − (Σ_XX + ε_n I)^{−1/2} } φ̂_n‖_{H_X}
          + ‖Σ_XX^{1/2} (Σ_XX + ε_n I)^{−1/2} (φ̂_n − φ)‖_{H_X} + ‖Σ_XX^{1/2} (Σ_XX + ε_n I)^{−1/2} φ − φ‖_{H_X}.

Using the same argument as the bound on the first term in Eq. (14), the first term on
the R.H.S. of the above inequality is shown to converge to zero. The convergence of
the second term is obvious. Using the assumption φ ∈ R(Σ_XX), the same argument
as the proof of Eq. (16) applies to the third term, which completes the proof.
5 Concluding remarks
We have established the statistical convergence of kernel CCA and NOCCO, showing that the finite sample estimators of the nonlinear mappings converge to the
desired population functions. This convergence is proved in the RKHS norm for
NOCCO, and in the L2 norm for kernel CCA. These results give a theoretical justification for using the empirical estimates of NOCCO and kernel CCA in practice.
We have also derived a sufficient condition, n^{1/3} ε_n → ∞, for the decay of the
regularization coefficient ε_n, which ensures the convergence described above. As
[10] suggests, the order of the sufficient condition seems to depend on the function
norm used to determine convergence. An interesting consideration is whether the
order n^{1/3} ε_n → ∞ can be improved for convergence in the L² or RKHS norm.
Another question that remains to be addressed is when to use kernel CCA, COCO,
or NOCCO in practice. The answer probably depends on the statistical properties
of the data. It might consequently be helpful to determine the relation between the
spectral properties of the data distribution and the solutions of these methods.
Acknowledgements
This work is partially supported by KAKENHI 15700241 and Inamori Foundation.
References
[1] S. Akaho. A kernel method for canonical correlation analysis. Proc. Intern. Meeting on Psychometric Society (IMPS2001), 2001.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. American Mathematical Society, 69(3):337-404, 1950.
[3] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Machine Learning Research, 3:1-48, 2002.
[4] C. R. Baker. Joint measures and cross-covariance operators. Trans. American Mathematical Society, 186:273-289, 1973.
[5] N. Dunford and J. T. Schwartz. Linear Operators, Part II. Interscience, 1963.
[6] K. Fukumizu, F. R. Bach, and A. Gretton. Consistency of kernel canonical correlation. Research Memorandum 942, Institute of Statistical Mathematics, 2005.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Machine Learning Research, 5:73-99, 2004.
[8] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. Tech Report 140, Max-Planck-Institut für biologische Kybernetik, 2005.
[9] A. Gretton, A. Smola, O. Bousquet, R. Herbrich, B. Schölkopf, and N. Logothetis. Behaviour and convergence of the constrained covariance. Tech Report 128, Max-Planck-Institut für biologische Kybernetik, 2004.
[10] S. Leurgans, R. Moyeed, and B. Silverman. Canonical correlation analysis when the data are curves. J. Royal Statistical Society, Series B, 55(3):725-740, 1993.
[11] T. Melzer, M. Reiter, and H. Bischof. Nonlinear feature extraction using generalized canonical correlation analysis. Proc. Intern. Conf. Artificial Neural Networks (ICANN2001), 353-360, 2001.
A Lemmas used in the proofs
We list the lemmas used in Section 4. See [6] for the proofs.
Lemma 7. Suppose A and B are positive self-adjoint operators on a Hilbert space
such that 0 ≤ A ≤ MI and 0 ≤ B ≤ MI hold for a positive constant M. Then

    ‖A^{3/2} − B^{3/2}‖ ≤ 3 M^{1/2} ‖A − B‖.
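A quick random check of Lemma 7 in matrix form (my illustration; the exponent on M here is as reconstructed from a garbled original, so treat the constant as an assumption): draw symmetric matrices with spectra in [0, M] and compare the two sides of the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(d, m):
    """Random symmetric matrix with eigenvalues in [0, m]."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q @ np.diag(m * rng.uniform(size=d)) @ Q.T

def mat_pow(A, p):
    """Spectral power of a symmetric psd matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.clip(w, 0.0, None) ** p) @ V.T

M, d = 2.0, 5
worst = 0.0
for _ in range(200):
    A, B = random_psd(d, M), random_psd(d, M)
    lhs = np.linalg.norm(mat_pow(A, 1.5) - mat_pow(B, 1.5), 2)
    rhs = 3.0 * np.sqrt(M) * np.linalg.norm(A - B, 2)
    worst = max(worst, lhs / rhs)
```

In random trials the ratio stays well below 1, consistent with the stated bound (the scalar Lipschitz constant of t^{3/2} on [0, M] is (3/2) M^{1/2}, half the constant used here).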
Lemma 8. Let H_1 and H_2 be Hilbert spaces, and H_0 be a dense linear subspace of
H_2. Suppose A_n and A are bounded operators on H_2, and B is a compact operator
from H_1 to H_2 such that A_n u → Au for all u ∈ H_0, and sup_n ‖A_n‖ ≤ M for some
M > 0. Then A_n B converges to AB in norm.
Lemma 9. Let A be a compact positive operator on a Hilbert space H, and A_n (n ∈ ℕ)
be bounded positive operators on H such that A_n converges to A in norm. Assume
the eigenspace of A corresponding to the largest eigenvalue is one-dimensional and
spanned by a unit eigenvector φ, and the maximum of the spectrum of A_n is attained
by a unit eigenvector φ_n. Then we have |⟨φ_n, φ⟩_H| → 1 as n → ∞.
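A matrix illustration of Lemma 9 (my example): perturb a positive diagonal matrix whose top eigenvalue is simple; as A_n approaches A in norm, the top eigenvectors align.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([4.0, 3.0, 2.0, 1.5, 1.0])   # positive, simple top eigenvalue
E = rng.standard_normal((5, 5))
E = (E + E.T) / 2                        # symmetric perturbation

def top_eigvec(M):
    w, V = np.linalg.eigh(M)
    return V[:, -1]

phi = top_eigvec(A)
overlaps = []
for n in (10, 100, 1000):
    An = A + E / n                       # ||A_n - A|| -> 0
    overlaps.append(abs(top_eigvec(An) @ phi))
```

As the perturbation shrinks, |⟨φ_n, φ⟩| climbs toward 1, exactly the conclusion of the lemma.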
Logic and MRF Circuitry for Labeling
Occluding and Thinline Visual Contours
Eric Saund
Palo Alto Research Center
3333 Coyote Hill Rd.
Palo Alto, CA 94304
[email protected]
Abstract
This paper presents representation and logic for labeling contrast edges
and ridges in visual scenes in terms of both surface occlusion (border
ownership) and thinline objects. In natural scenes, thinline objects include sticks and wires, while in human graphical communication thinlines include connectors, dividers, and other abstract devices. Our analysis is directed at both natural and graphical domains. The basic problem
is to formulate the logic of the interactions among local image events,
specifically contrast edges, ridges, junctions, and alignment relations,
such as to encode the natural constraints among these events in visual
scenes. In a sparse heterogeneous Markov Random Field framework, we
define a set of interpretation nodes and energy/potential functions among
them. The minimum energy configuration found by Loopy Belief Propagation is shown to correspond to preferred human interpretation across
a wide range of prototypical examples including important illusory contour figures such as the Kanizsa Triangle, as well as more difficult examples. In practical terms, the approach delivers correct interpretations
of inherently ambiguous hand-drawn box-and-connector diagrams at low
computational cost.
1 Introduction
A great deal of attention has been paid to the curious phenomenon of illusory contours in
visual scenes [5]. The most famous example is the Kanizsa Triangle (Figure 1). Although
a number of explanations have been proposed, computational accounts have converged on
the understanding that illusory contours are an outcome of the more general problem of
labeling scene contours in terms of causal events such as surface overlap. Illusory contours are the visual system's way of expressing belief in an occlusion relation between
two surfaces having the same lightness and therefore lacking a visible contrast edge. The
phenomena are interesting in their revelation of interactions among multiple factors comprising the visual system?s prior assumptions about what constitutes likely interpretations
of ambiguous input.
Several computational models for this process have generated interpretations of Kanizsalike figures corresponding to human perception. Williams[9] formulated an integer-linear
Figure 1: a. Original Kanizsa Triangle. b. Solid surface version. c. Human preferred
interpretation. d, e. Other valid interpretations.
optimization problem with hard constrains originating from the topology of contours and
junctions, and soft constraints representing figural biases for non-accidental interpretations
and figural closure. Heitger and von der Heydt[2] implemented a series of nonlinear filtering operations that enacted interactions among line terminations and junctions to infer
modal completions corresponding to illusory contours. Geiger[1] used a dense Markov
Random Field to represent surface depths explicitly and propagated local evidence through
a diffusion process. Saund[6] enumerated possible generic and non-generic interpretations
of T- and L-junctions to set up an optimization problem solved by deterministic annealing.
Liu and Wang[4] set up a network of contours traversing the boundaries of segmented regions, which interact to propagate local information through an iterative updating scheme.
This paper expands this body of previous work in the following ways:
? The computational model is expressed in terms of a sparse heterogeneous Markov
Random Field whose solution is accessible to fast techniques such as Loopy Belief
Propagation.
? We introduce interpretations of thinlines in addition to solid surfaces, adding a
significant layer of richness and complexity.
? The model infers occlusion relations of surfaces depicted by line drawings of their
borders, as well as solid graphics depictions.
? We devise MRF energy functions that implement circuitry for sophisticated logical constraints of the domain.
The result is a formulation that is both fast and effective at correctly interpreting a greater
range of psychophysical and near-practical contour configuration examples than has heretofore been demonstrated. The model exposes aspects of fundamental ambiguity to be resolved by the incorporation of additional constraints and domain-specific knowledge.
2 Interpretation Nodes and Relations
2.1 Visible Contours and Contour Ends
Early vision studies commonly distinguish several models for visible contour creation and
measurement, including contrast edges, lines or ridges, ramps, color and texture edges, etc.
Let us idealize to consider only contrast edges and ridges (also known as ?bars?), measured at a single scale. We include in our domain of interest human-generated graphical
Figure 2: a. Sample image region. b. Spatial relation categories characterizing links in
the MRF among Contour End nodes: Corner, Near Alignment, Far Alignment, Lateral. c.
Resulting MRF including nodes of type Visible Contour, Contour End, Corner Tie, and
Corner Tie Mediator.
figures. Contrast edges arise from distinct regions or surfaces, while ridges may represent
either a boundary between regions or else a 'thinline', i.e. a physical or graphical object
whose shape is essentially defined by a one-dimensional path at our scale of measurement.
Examples of thinlines in photographic imagery include twigs, sidewalk cracks, and telephone wires, while in graphical images thinlines include separators, connectors, and arrow
shafts. Figure 7e shows a hand-drawn sketch in which some lines (measured as ridges) are
intended to define boxes and therefore represent region boundaries, while others are connectors between boxes. We take the contour interpretation problem to include the analysis
of this type of scene in addition to classical illusory contour figures.
For any input data, we may construct a Markov Random Field consisting of four types of
nodes derived from measured contrast edge and ridge contours. An interpretation is an
assignment of states to nodes. Local potentials and the potential matrices associated with
pairwise links between nodes encode constraints and biases among interpretation states
based on the spatial relations among the visible contours. Figure 2 illustrates MRF nodes
types and links for a simple example input image, as explained below.
Let us assume that contours defining region boundaries are assigned an occlusion direction,
equivalent to relative surface depth and hence boundary ownership. Figure 3 shows the possible mappings between visible image contours measured as contrast edges or ridges, and
their interpretation in terms of direction of surface overlap or else thinline object. Contrast
edges always correspond to surface occlusion, while ridges may represent either a surface
boundary or a thinline object. Correspondingly, the simplest MRF node type is the Visible
Contour node which has state dimension 3 corresponding to two possible overlap directions
and one thinline interpretation.
Most of the interesting evidence and interaction occurs at terminations and junctions of
visible contours. Contour End nodes are given the job of explaining why a smooth visible
edge or ridge contour has terminated visibility, and hence they will encode the bulk of the
modal (illusory) and amodal (occluded) completion information of a computed interpretation. Smooth visible contours may terminate in four ways:
Figure 3: Permissible mappings between visible edge and ridge contours and interpretations. Wedges indicate direction of surface overlap: white (FG) surface occludes shaded
(BG) surface.
1. The surface boundary contour or thinline object changes direction (turns a corner)
2. The contour becomes modal because the background surface lacks a visible edge
with the foreground surface.
3. The contour becomes amodal because it becomes occluded by another surface.
4. The contour simply terminates when an surface overlap meets the end of a fold,
or when a thin object or graphic stops.
Contour Ends therefore have 3x4 = 12 interpretation states as shown in Figure 4.
Figure 4: Contour End nodes have state dimension 12 indicating contour overlap
type/direction (overlap or thinline) and one of four explanations for termination of the
visible contour.
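The 3x4 factorization of the Contour End state space can be made concrete in a few lines of code. The state labels below are illustrative names chosen here for readability; they are not identifiers from the paper:

```python
from itertools import product

# Three visible-contour interpretations: two surface-overlap directions plus thinline.
contour_types = ["overlap_left_over_right", "overlap_right_over_left", "thinline"]

# Four explanations for why the visible contour terminates (per the list above).
terminations = ["corner_turn", "modal_completion", "amodal_occlusion", "simple_stop"]

# Each Contour End node carries one state from the Cartesian product.
contour_end_states = list(product(contour_types, terminations))
print(len(contour_end_states))  # 12 states, matching Figure 4
```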
Every Visible Contour node is linked to its two corresponding Contour End nodes through
energy matrices (or equivalently, potential matrices, using Potential psi = exp(-E)) representing simple compatibility among overlap direction/thinline interpretation states. Additional
links in the network are created based on spatial relations among Contour Ends as described
next.
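The energy/potential duality used throughout (psi = exp(-E)) is easy to sketch. The toy 3x3 matrix below uses made-up values, not the paper's actual energy terms; a large energy stands in for the prohibited "X" entries:

```python
import numpy as np

def energy_to_potential(E):
    """Convert a pairwise energy matrix to a potential matrix via psi = exp(-E)."""
    return np.exp(-E)

def potential_to_energy(psi, eps=1e-300):
    """Inverse mapping; clip to avoid log(0) for 'hard' zero potentials."""
    return -np.log(np.maximum(psi, eps))

# Toy energy matrix: 0 = preferred pairing, 50 ~ prohibited ("X" entries).
E = np.array([[0.0, 1.0, 50.0],
              [1.0, 0.0, 50.0],
              [50.0, 50.0, 0.0]])
psi = energy_to_potential(E)  # prohibited entries become vanishingly small potentials
```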
Figure 5: a. Corner Tie nodes have state dimension 6 indicating the causal relationship
between the Contour End nodes they link. b. Energy matrix linking the Left Contour End
of a pair of corner-relation Contour Ends to their Corner Tie. X indicates high energy
prohibiting the state combination. EA refers to a low penalty for Accidental Coincidence
of the Contour Ends. EDC refers to a (typically low) penalty of two Contour Ends failing
to meet the ideal geometrical constraints of meeting at a corner. The subscripts refer to
necessary Near-Alignment Relations on the Contour Ends. The energy matrix linking the
Right End Contour to the Corner Tie swaps the 5th and 6th columns.
2.2 Contour Ends Relation Links
Let us consider five classes of pairwise geometric relations among observed contour ends:
Corner, Near-Alignment, Far-Alignment, Lateral, and Unrelated. Mathematical expressions forming the bases for these relations may be engineered as measures of distance and
smooth continuation such as used by Saund [6]. The Corner relation depends only on
proximity; Near-Alignment depends on proximity and alignment; Far-Alignment omits the
proximity requirement.
Within this framework a further refinement distinguishes ridge Contour Ends from those
arising from contrast edges. Namely, ridge ends are permitted to form Lateral relation links
which correspond to potential modal contours. Contrast edge Contour Ends are excluded
from this link type because they terminate at junctions which distribute modal and amodal
completion roles to their participating Contour Ends. Contour End nodes from ridge contours may participate in Far-Alignment links but their local energies are set to preclude
them from taking states representing modal completions.
In this way the present model fixes the topology of related ends in the process of setting up
the Markov Graph. An important problem for future research is to formulate the Markov
Graph to include all plausible Contour End pairings and have the actual pairings sort themselves out at solution time.
Biases about preferred and less-preferred interpretations are represented through the terms
in the energy matrices linking related Contour Ends. In accordance with prior work, we
bias energy terms associated with curved Visible Contours and junctions of Contour Ends
in favor of convex object interpretations. Space limitations preclude presenting the energy
matrices in detail, but we discuss the main novel and significant considerations.
The simplest case is pairs of Contour Ends sharing a Near-Alignment or Far-Alignment
relation. These energy matrices are constructed to trade off priors regarding accidental
alignment versus amodal or modal invisible contour completion interpretations. For Con-
Figure 6: The Corner Tie Mediator node restricts border ownership of occluding contours
to physically consistent interpretations. The energy matrix shown in e links the Corner Tie
Mediator to the Left Corner Tie of a pair sharing a Contour End. X indicates high energy.
The energy matrix for the link to the Right Corner Tie swaps the second and third columns.
tour End pairs that are relatively near and well aligned, energy terms corresponding to
causally unrelated interpretations (CE states 0,1,2) are large, while terms corresponding to
amodal completion with compatible overlap/thinline property (CE states 6,7,8) are small.
Actual energy values for the matrices are assigned by straightforward formulas derived
from the Proximity and Smooth Continuation terms mentioned above. Per Kanizsa, modal
completion interpretations (CE states 3,4,5) are somewhat more expensive than amodal
interpretations, by a constant factor. Energy terms shift their relative weights in favor of
causally unrelated interpretations (CE corner states 0,1,2) as the Contour Ends become
more distant and less aligned.
Contour Ends sharing a Corner relation can be related in one of three ways: they can
be causally unrelated and unordered in depth; they can represent a turning of a surface
boundary or thinline object; they can represent overlap of one contour above the other. In
order to exploit the geometry of Contour Ends as local evidence, these alternatives must be
articulated and entered into the MRF node graph. To do this we therefore introduce a third
type of node, the Corner Tie node, possessing six states as illustrated in Figure 5a.
The energy matrix relating Contour End nodes and Corner Tie nodes is shown in Figure
5b. It contains low energy terms representing the Corner Tie's belief that the Contour End
termination is due to direction change (turning a corner). It also contains low energy terms
representing the conditions of one Contour End's owning surface overlapping the other
contour, i.e. the relative depth relation between these contours in the scene.
2.3 Constraints on Overlaps and Thinlines at Junctions
Physical considerations impose hard constraints on the interpretations of End Pairs meeting
at a junction. Consider the T-junction in Figure 6a. One preferred interpretation for a
T-junction is occlusion (6b). A less-preferred but possible interpretation is a change of
direction (corner) by one surface, with accidental alignment by another contour (6c). What
is impossible is for a surface boundary to bifurcate and 'belong' to both sides of the T (6d).
This type of constraint cannot be enforced by the purely pairwise Corner Tie node. We
therefore introduce a fourth node type, the Corner Tie Mediator. This node governs the
number of Corner Ties that any Contour End can claim to form a direction change (corner
turn) relation with. The energy matrix for the Corner Tie Mediator node is shown in Figure
6e: multiple Corner-Ties in the overlap direction-turn states (CT states 1 & 2) are excluded
(solid arrows). But note that the matrix contains a low energy term (dashed arrow) for
the formation of multiple direction-turn Corner-Ties provided they are in the Thinline state
(CT state 3); branching of thinline objects is physically permissible.
3 Experiments and Conclusion
Loopy Belief Propagation under the Max-Product algorithm seeks the MAP configuration which is equivalent to the minimum-energy assignment of states [8]. We have not
encountered a failure of LBP to converge, and it is quite rare to encounter a lower-energy
assignment of states than the algorithm delivers starting from an initial uniform distribution
over states. However, multiple stable fixed points can exist. For some ambiguous figures
such as Figure 7e in which qualitatively different interpretations have similar energies, one
may clamp one or more nodes to alternative states, leading to LBP solutions which persist
once the clamping is removed. This invites the exploration of N-best configuration solution
techniques [10].
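As a concrete illustration of the solver, here is a minimal max-product loopy belief propagation routine for a generic pairwise MRF expressed over potentials (psi = exp(-E)). It is a sketch of the standard algorithm, not the authors' code; the two-node demo at the bottom is exact because that graph is a tree:

```python
import numpy as np

def max_product_lbp(unary, pairwise, edges, n_iters=50):
    """Max-product loopy belief propagation on a pairwise MRF.
    unary:    dict node -> 1-D array of potentials psi_i(x_i)
    pairwise: dict (i, j) -> 2-D array psi_ij(x_i, x_j), one entry per edge
    edges:    list of undirected (i, j) tuples
    Returns a dict node -> MAP state index (approximate on loopy graphs)."""
    nbrs, msgs = {}, {}
    for i, j in edges:
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
        msgs[(i, j)] = np.ones_like(unary[j])
        msgs[(j, i)] = np.ones_like(unary[i])
    for _ in range(n_iters):
        new_msgs = {}
        for i, j in msgs:
            # Belief at i, excluding the message coming back from j.
            b = unary[i].copy()
            for k in nbrs[i]:
                if k != j:
                    b *= msgs[(k, i)]
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = (psi * b[:, None]).max(axis=0)   # max over x_i for each x_j
            new_msgs[(i, j)] = m / m.max()       # normalize for numerical stability
        msgs = new_msgs
    assignment = {}
    for i in unary:
        b = unary[i].copy()
        for k in nbrs.get(i, []):
            b *= msgs[(k, i)]
        assignment[i] = int(np.argmax(b))
    return assignment

# Two nodes with a strong "agree" pairwise potential: the MAP flips node 1
# to match node 0's confident local evidence.
unary = {0: np.array([0.9, 0.1]), 1: np.array([0.2, 0.8])}
pairwise = {(0, 1): np.array([[0.9, 0.1], [0.1, 0.9]])}
print(max_product_lbp(unary, pairwise, [(0, 1)]))  # {0: 0, 1: 0}
```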
Figure 7 demonstrates MAP assignments corresponding to preferred human interpretations
of the classic Kanizsa illusory contour figure and others containing both aligning L-junction
and ridge termination evidence for modal contours, amodal completions, and thinline objects. Note that the MRF correctly predicts that outline drawings of surface boundaries do
not induce illusory contours.
Figure 7g borrows from experiments by Szummer and Cowans[7] toward a practical application in line drawing interpretation, in which closed boxes define regions while connectors
remain interpreted as thinline objects. For this scene containing 369 nodes and 417 links,
the entire process of forming the MRF and performing 100 iterations of LBP takes less
than a second. The major pressures operating in these situations are a figural bias toward
interpreting closed paths as convex regions, and a preference to interpret ridge contours
participating in T- and X- junctions as thinline objects.
We have shown how explicit consideration of ridge features and thinline interpretations
brings new complexity to the logic of sorting out depth relations in visual scenes. This
investigation suggests that a sparse heterogeneous Markov Random Field approach may
provide a suitable basis for such models.
References
[1] Geiger, D., Kumaran, K, & Parida, L. (1996) Visual organization for figure/ground separation. in
Proc. IEEE CVPR pp. 155-160.
[2] Heitger, F., & von der Heydt, R. (1993) A Computational Model of Neural Contour Processing:
Figure-Ground Segregation and Illusory Contours. Proc. ICCV '93.
[3] Kanizsa, G. (1979) Organization in Vision, Praeger, New York.
[4] Liu, X., Wang, D. (2000) Perceptual Organization Based on Temporal Dynamics. in S.A. Solla,
T.K. Leen, K.-R. Muller (eds.), Advances in Neural Information Processing Systems 12, pp. 38-44.
MIT Press.
[5] Petry, S., & Meyer, G. (eds.) (1987) The Perception of Illusory Contours, Springer-Verlag, New
York.
[6] Saund, E. (1999) Perceptual Organization of Occluding Contours of Opaque Surfaces, CVIU V.
76, No. 1, pp. 70-82.
[7] Szummer, M., & Cowans, P. (2004) Incorporating Context and User Feedback in Pen-Based
Interfaces. AAAI TR FS-04-06 (Papers from the 2004 AAAI Fall Symposium.)
[8] Weiss, Y., and Freeman, W.T. (2001) On the optimality of solutions of the max-product belief
propagation algorithm in arbitrary graphs, IEEE Trans. Inf. Theory 47:2, pp. 723-735.
[9] Williams, L. (1990) Perceptual Organization of Occluding Contours. Proc. ICCV '90, pp. 639-649.
[10] Yanover, C. and Weiss, Y. (2003) Finding the M Most Probable Configurations Using Loopy Belief Propagation. in S. Thrun, L. Saul and B. Schölkopf, eds., Advances in Neural Information Processing Systems 16, MIT Press.
Analyzing Auditory Neurons by Learning
Distance Functions
Inna Weiner (1), Tomer Hertz (1,2), Israel Nelken (2,3), Daphna Weinshall (1,2)
(1) School of Computer Science and Engineering, (2) The Center for Neural Computation,
(3) Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel, 91904
weinerin,tomboy,[email protected], [email protected]
Abstract
We present a novel approach to the characterization of complex sensory
neurons. One of the main goals of characterizing sensory neurons is
to characterize dimensions in stimulus space to which the neurons are
highly sensitive (causing large gradients in the neural responses) or alternatively dimensions in stimulus space to which the neuronal response
are invariant (defining iso-response manifolds). We formulate this problem as that of learning a geometry on stimulus space that is compatible
with the neural responses: the distance between stimuli should be large
when the responses they evoke are very different, and small when the responses they evoke are similar. Here we show how to successfully train
such distance functions using rather limited amount of information. The
data consisted of the responses of neurons in primary auditory cortex
(A1) of anesthetized cats to 32 stimuli derived from natural sounds. For
each neuron, a subset of all pairs of stimuli was selected such that the
responses of the two stimuli in a pair were either very similar or very
dissimilar. The distance function was trained to fit these constraints. The
resulting distance functions generalized to predict the distances between
the responses of a test stimulus and the trained stimuli.
1 Introduction
A major challenge in auditory neuroscience is to understand how cortical neurons represent
the acoustic environment. Neural responses to complex sounds are idiosyncratic, and small
perturbations in the stimuli may give rise to large changes in the responses. Furthermore,
different neurons, even with similar frequency response areas, may respond very differently
to the same set of stimuli. The dominant approach to the functional characterization of
sensory neurons attempts to predict the response of the cortical neuron to a novel stimulus.
Prediction is usually estimated from a set of known responses of a given neuron to a set of
stimuli (sounds). The most popular approach computes the spectrotemporal receptive field
(STRF) of each neuron, and uses this linear model to predict neuronal responses. However,
STRFs have been recently shown to have low predictive power [10, 14].
In this paper we take a different approach to the characterization of auditory cortical neurons. Our approach attempts to learn the non-linear warping of stimulus space that is induced by the neuronal responses. This approach is motivated by our previous observations
[3] that different neurons impose different partitions of the stimulus space, which are not
necessarily simply related to the spectro-temporal structure of the stimuli. More specifically, we characterize a neuron by learning a pairwise distance function over the stimulus
domain that will be consistent with the similarities between the responses to different stimuli, see Section 2. Intuitively a good distance function would assign small values to pairs
of stimuli that elicit a similar neuronal response, and large values to pairs of stimuli that
elicit different neuronal responses.
This approach has a number of potential advantages: First, it allows us to aggregate information from a number of neurons, in order to learn a good distance function even when the
number of known stimuli responses per neuron is small, which is a typical concern in the
domain of neuronal characterization. Second, unlike most functional characterizations that
are limited to linear or weakly non-linear models, distance learning can approximate functions that are highly non-linear. Finally, we explicitly learn a distance function on stimulus
space; by examining the properties of such a function, it may be possible to determine the
stimulus features that most strongly influence the responses of a cortical neuron. While
this information is also implicitly incorporated into functional characterizations such as the
STRF, it is much more explicit in our new formulation.
In this paper we therefore focus on two questions: (1) Can we learn distance functions
over the stimulus domain for single cells using information extracted from their neuronal
responses? and (2) What is the predictive power of these cell-specific distance functions
when presented with novel stimuli? In order to address these questions we used extracellular recordings from 22 cells in the auditory cortex of cats in response to natural bird chirps
and some modified versions of these chirps [1]. To estimate the distance between responses,
we used a normalized distance measure between the peri-stimulus time histograms of the
responses to the different stimuli.
Our results, described in Section 4, show that we can learn compatible distance functions on
the stimulus domain with relatively low training errors. This result is interesting by itself as
a possible characterization of cortical auditory neurons, a goal which eluded many previous
studies [3]. Using cross validation, we measure the test error (or predictive power) of
our method, and report generalization power which is significantly higher than previously
reported for natural stimuli [10]. We then show that performance can be further improved
by learning a distance function using information from pairs of related neurons. Finally, we
show better generalization performance for wide-band stimuli as compared to narrow-band
stimuli. These latter two contributions may have some interesting biological implications
regarding the nature of the computations done by auditory cortical neurons.
Related work Recently, considerable attention has been focused on spectrotemporal receptive fields (STRFs) as characterizations of the function of auditory cortical neurons
[8, 4, 2, 11, 16]. The STRF model is appealing in several respects: it is a conceptually simple model that provides a linear description of the neuron's behavior. It can be
interpreted both as providing the neuron's most efficient stimulus (in the time-frequency
domain), and also as the spectro-temporal impulse response of the neuron [10, 12]. Finally,
domain), and also as the spectro-temporal impulse response of the neuron [10, 12]. Finally,
STRFs can be efficiently estimated using simple algebraic techniques.
However, while there were initial hopes that STRFs would uncover relatively complex
response properties of cortical neurons, several recent reports of large sets of STRFs of
cortical neurons concluded that most STRFs are somewhat too simple [5], and that STRFs
are typically rather sluggish in time, therefore missing the highly precise synchronization
of some cortical neurons [11]. Furthermore, when STRFs are used to predict neuronal
responses to natural stimuli they often fail to predict the correct responses [10, 6]. For
example, in Machens et al. only 11% of the response power could be predicted by STRFs
on average [10]. Similar results were also reported in [14], who found that STRF models
account for only 18-40% (on average) of the stimulus-related power in auditory cortical
neural responses to dynamic random chord stimuli. Various other studies have shown that
there are significant and relevant non-linearities in auditory cortical responses to natural
stimuli [13, 1, 9, 10]. Using natural sounds, Bar-Yosef et. al [1] have shown that auditory
neurons are extremely sensitive to small perturbations in the (natural) acoustic context.
Clearly, these non-linearities cannot be sufficiently explained using linear models such as
the STRF.
2 Formalizing the problem as a distance learning problem
Our approach is based on the idea of learning a cell-specific distance function over the space
of all possible stimuli, relying on partial information extracted from the neuronal responses
of the cell. The initial data consists of stimuli and the resulting neural responses. We use
the neuronal responses to identify pairs of stimuli to which the neuron responded similarly
and pairs to which the neuron responded very differently. These pairs can be formally
described by equivalence constraints. Equivalence constraints are relations between pairs
of datapoints, which indicate whether the points in the pair belong to the same category or
not. We term a constraint positive when they points are known to originate from the same
class, and negative belong to different classes. In this setting the goal of the algorithm is to
learn a distance function that attempts to comply with the equivalence constraints.
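One standard way to turn positive equivalence constraints into a distance function is a Mahalanobis metric in the style of Relevant Component Analysis (RCA), where groups of points known to belong together ("chunklets") define the directions of irrelevant variation. This is an illustrative stand-in, not necessarily the learning algorithm the authors used:

```python
import numpy as np

def rca_metric(X, chunklets, reg=1e-6):
    """Learn A for d(x, y) = (x - y)^T A (x - y) from positive constraints.
    X:         (n, d) array of stimulus feature vectors
    chunklets: lists of indices of stimuli known to evoke similar responses"""
    d = X.shape[1]
    C = np.zeros((d, d))
    n_pts = 0
    for idx in chunklets:
        Z = X[idx] - X[idx].mean(axis=0)  # within-chunklet scatter
        C += Z.T @ Z
        n_pts += len(idx)
    C /= n_pts
    # Directions with large within-chunklet variance are treated as
    # irrelevant and are down-weighted by the inverse covariance.
    return np.linalg.inv(C + reg * np.eye(d))

def pair_distance(x, y, A):
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(diff @ A @ diff)
```

With this metric, a feature dimension along which same-response stimuli vary freely contributes little to the learned distance, while dimensions that are constant within chunklets dominate it.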
This formalism allows us to combine information from a number of cells to improve
the resulting characterization. Specifically, we combine equivalence constraints gathered
from pairs of cells which have similar responses, and train a single distance function for
both cells. Our results demonstrate that this approach improves prediction results of the
'weaker' cell, and almost always improves the result of the 'stronger' cell in each pair.
Another interesting result of this formalism is the ability to classify stimuli based on the
responses of the total recorded cortical cell ensemble. For some stimuli, the predictive
performance based on the learned inter-stimuli distance was very good, whereas for other
stimuli it was rather poor. These differences were correlated with the acoustic structure of
the stimuli, partitioning them into narrowband and wideband stimuli.
3 Methods
Experimental setup Extracellular recordings were made in primary auditory cortex of
nine halothane-anesthetized cats. Anesthesia was induced by ketamine and xylazine and
maintained with halothane (0.25-1.5%) in 70% N2O using standard protocols authorized
by the committee for animal care and ethics of the Hebrew University - Hadassah Medical
School. Single neurons were recorded using metal microelectrodes and an online spike
sorter (MSD, alpha-omega). All neurons were well separated. Penetrations were performed
over the whole dorso-ventral extent of the appropriate frequency slab (between about 2 and
8 kHz). Stimuli were presented 20 times using sealed, calibrated earphones at 60-80 dB
SPL, at the preferred aurality of the neurons as determined using broad-band noise bursts.
Sounds were taken from the Cornell Laboratory of Ornithology and have been selected
as in [1]. Four stimuli, each of length 60-100 ms, consisted of a main tonal component
with frequency and amplitude modulation and of a background noise consisting of echoes
and unrelated components. Each of these stimuli was further modified by separating the
main tonal component from the noise, and by further separating the noise into echoes and
background. All possible combinations of these components were used here, in addition
to a stylized artificial version that lacked the amplitude modulation of the natural sound.
In total, 8 versions of each stimulus were used, and therefore each neuron had a dataset
consisting of 32 datapoints. For more detailed methods, see Bar-Yosef et al. [1].
Data representation We used the first 60 ms of each stimulus. Each stimulus was represented using the first d real Cepstral coefficients. The real Cepstrum of a signal x was
calculated by taking the natural logarithm of the magnitude of the Fourier transform of x and
then computing the inverse Fourier transform of the resulting sequence. In our experiments
we used the first 21-30 coefficients. Neuronal responses were represented by creating Peri-Stimulus Time Histograms (PSTHs) using 20 repetitions recorded for each stimulus. Response duration was 100 ms.
Obtaining equivalence constraints over stimuli pairs The distances between responses
were measured using a normalized χ² distance measure. All responses to both stimuli (40
responses in total) were superimposed to generate a single high-resolution PSTH. Then, this
PSTH was non-uniformly binned so that each bin contained at least 10 spikes. The same
bins were then used to generate the PSTHs of the responses to the two stimuli separately.
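The adaptive binning step can be sketched as follows (a minimal Python illustration; the function name and the greedy merging strategy are our assumptions — the paper only states that bins were chosen so that each holds at least 10 spikes):

```python
# Greedy re-binning of a high-resolution PSTH: merge consecutive fine bins
# until each coarse bin holds at least `min_spikes` spikes (10 in the paper).
# Illustrative sketch; names are ours, not the original implementation.

def rebin(counts, min_spikes=10):
    edges, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_spikes:
            edges.append((start, i + 1))  # half-open interval [start, i+1)
            acc, start = 0, i + 1
    if acc > 0 and edges:
        s, _ = edges.pop()                # fold any remainder into the last bin
        edges.append((s, len(counts)))
    return edges

print(rebin([3, 4, 5, 9, 1, 2, 8], min_spikes=10))  # [(0, 3), (3, 5), (5, 7)]
```

The same coarse bin edges would then be applied to the two individual response PSTHs before computing their distance.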
For similar responses, we would expect that on average each bin in these histograms would
contain 5 spikes. Formally, let N denote the number of bins in each histogram, and let r1_i, r2_i
denote the number of spikes in the i-th bin in each of the two histograms respectively. The
distance between pairs of histograms is given by:
χ²(r1, r2) = [Σ_{i=1}^{N} (r1_i − r2_i)² / ((r1_i + r2_i)/2)] / (N − 1).
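As a concrete sketch, the normalized χ² distance above can be computed directly from two binned spike-count histograms (minimal Python; the function name is ours, and we assume the shared re-binning has already been performed):

```python
# Normalized chi-square distance between two PSTHs sharing the same bins,
# following the formula above. Empty bins (zero spikes in both histograms)
# contribute nothing and are skipped to avoid division by zero.

def chi2_distance(r1, r2):
    n = len(r1)
    total = sum((a - b) ** 2 / ((a + b) / 2.0)
                for a, b in zip(r1, r2) if a + b > 0)
    return total / (n - 1)

print(chi2_distance([10, 12, 9], [10, 12, 9]))  # 0.0 (identical responses)
print(chi2_distance([20, 0], [0, 20]))          # 80.0 (disjoint responses)
```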
In order to identify pairs (or small groups) of similar responses, we computed the normalized χ² distance matrix over all pairs of responses, and used the complete-linkage algorithm
to cluster the responses into 8–12 clusters. All of the points in each cluster were marked
as similar to one another, thus providing positive equivalence constraints. In order to obtain
negative equivalence constraints, for each cluster ci we used the 2–3 furthest clusters from
it to define negative constraints. All pairs, composed of a point from cluster ci and another
point from these distant clusters, were used as negative constraints.
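The constraint-extraction step can be sketched as follows (illustrative Python; the function name, the representation of inter-cluster distances, and the toy data are our assumptions — in the paper the clusters come from complete-linkage clustering of the χ² response-distance matrix):

```python
from itertools import combinations

# Derive equivalence constraints from a clustering of responses.
# labels[i] is the cluster id of response i; cluster_dist[(a, b)] is a distance
# between clusters a < b; num_anti is the number of distant clusters used to
# define negative constraints (2-3 in the paper).

def extract_constraints(labels, cluster_dist, num_anti=2):
    clusters = {}
    for i, c in enumerate(labels):
        clusters.setdefault(c, []).append(i)
    positive, negative = [], []
    for c, members in clusters.items():
        # all pairs inside a cluster are positive constraints
        positive += list(combinations(members, 2))
        # pairs against the num_anti farthest clusters are negative constraints
        others = sorted((d for d in clusters if d != c),
                        key=lambda d: -cluster_dist[tuple(sorted((c, d)))])
        for d in others[:num_anti]:
            negative += [(i, j) for i in members for j in clusters[d]]
    return positive, negative

labels = [0, 0, 1, 1, 2, 2]
dist = {(0, 1): 1.0, (0, 2): 5.0, (1, 2): 4.0}
pos, neg = extract_constraints(labels, dist, num_anti=1)
```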
Distance learning method In this paper, we use the DistBoost algorithm [7], which is
a semi-supervised boosting learning algorithm that learns a distance function using unlabeled datapoints and equivalence constraints. The algorithm boosts weak learners which
are soft partitions of the input space, computed using the constrained Expectation-Maximization (cEM) algorithm [15]. The DistBoost algorithm, which is briefly summarized in Algorithm 1, has been previously used in several different applications and has been shown
to perform well [7, 17].
Evaluation methods In order to evaluate the quality of the learned distance function,
we measured the correlation between the distances computed by our distance learning algorithm and those induced by the χ² distance over the responses. For each stimulus we
measured the distances to all other stimuli using the learnt distance function. We then computed the rank-order (Spearman) correlation coefficient between these learnt distances in
the stimulus domain and the χ² distances between the appropriate responses. This procedure produced a single correlation coefficient for each of the 32 stimuli, and the average
correlation coefficient across all stimuli was used as the overall performance measure.
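The evaluation step can be sketched as a plain rank-order correlation between learned stimulus-domain distances and response-domain distances (minimal Python; assumes no tied values for simplicity, and all names are illustrative):

```python
# Spearman rank-order correlation: rank both distance lists, then compute the
# Pearson correlation of the ranks. No tie handling, for brevity.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx = my = (n - 1) / 2.0
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) ** 0.5
           * sum((b - my) ** 2 for b in ry) ** 0.5)
    return num / den

# Learned distances vs. chi-square response distances for one held-out stimulus:
rho = spearman([0.1, 0.4, 0.2, 0.9], [1.0, 3.0, 2.0, 7.0])  # monotone, so close to 1
```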
Parameter selection The following parameters of the DistBoost algorithm can be finetuned: (1) the input dimensionality d = 21-30, (2) the number of Gaussian models in
each weak learner M = 2-4, (3) the number of clusters used to extract equivalence constraints C = 8-12, and (4) the number of distant clusters used to define negative constraints
numAnti = 2-3. Optimal parameters were determined separately for each of the 22 cells,
based solely on the training data. Specifically, in the cross-validation testing we used a
validation paradigm: Using the 31 training stimuli, we removed an additional datapoint
and trained our algorithm on the remaining 30 points. We then validated its performance
using the left out datapoint. The optimal cell specific parameters were determined using
this approach.
Algorithm 1 The DistBoost Algorithm
Input:
Data points: (x1, ..., xn), xk ∈ X
A set of equivalence constraints: (xi1, xi2, yi), where yi ∈ {−1, 1}
Unlabeled pairs of points: (xi1, xi2, yi = ∗), implicitly defined by all unconstrained pairs of points
• Initialize W¹_{i1 i2} = 1/n² for i1, i2 = 1, ..., n (weights over pairs of points)
  w_k = 1/n for k = 1, ..., n (weights over data points)
• For t = 1, ..., T
1. Fit a constrained GMM (weak learner) on weighted data points in X using the equivalence constraints.
2. Generate a weak hypothesis h̃_t : X × X → [−1, 1] and define a weak distance function as h_t(xi, xj) = (1/2)(1 − h̃_t(xi, xj)) ∈ [0, 1]
3. Compute r_t = Σ_{(xi1, xi2, yi = ±1)} W^t_{i1 i2} yi h̃_t(xi1, xi2), only over labeled pairs. Accept the current hypothesis only if r_t > 0.
4. Choose the hypothesis weight α_t = (1/2) ln((1 + r_t)/(1 − r_t))
5. Update the weights of all points in X × X as follows:
   W^{t+1}_{i1 i2} = W^t_{i1 i2} exp(−α_t yi h̃_t(xi1, xi2))   if yi ∈ {−1, 1}
   W^{t+1}_{i1 i2} = W^t_{i1 i2} exp(−α_t)                     if yi = ∗
6. Normalize: W^{t+1}_{i1 i2} = W^{t+1}_{i1 i2} / Σ_{i1,i2=1}^{n} W^{t+1}_{i1 i2}
7. Translate the weights from X × X to X: w^{t+1}_k = Σ_j W^{t+1}_{kj}
Output: A final distance function D(xi, xj) = Σ_{t=1}^{T} α_t h_t(xi, xj)

4
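A minimal sketch of one boosting round (steps 3-7 of Algorithm 1), assuming the weak hypothesis has already been produced by the constrained GMM weak learner; the dictionary-based representation, the toy constraints, and the toy weak hypothesis are illustrative, not the original implementation:

```python
import math

def boosting_round(W, labeled, h_tilde):
    """W: weights over ALL pairs (i, j); labeled: {(i, j): y} with y in {-1, +1}."""
    # Step 3: weighted correlation r_t, computed over labeled pairs only.
    r = sum(W[p] * y * h_tilde(*p) for p, y in labeled.items())
    if r <= 0:
        return None, W                         # reject this weak hypothesis
    alpha = 0.5 * math.log((1 + r) / (1 - r))  # step 4: hypothesis weight
    # Step 5: labeled pairs use exp(-alpha*y*h_tilde); unlabeled pairs exp(-alpha).
    newW = {p: w * math.exp(-alpha * labeled[p] * h_tilde(*p)) if p in labeled
            else w * math.exp(-alpha)
            for p, w in W.items()}
    z = sum(newW.values())                     # step 6: normalize
    return alpha, {p: w / z for p, w in newW.items()}

def point_weights(W, n):
    """Step 7: translate pair weights back to point weights."""
    w = [0.0] * n
    for (i, _), v in W.items():
        w[i] += v
    return w

n = 3
W = {(i, j): 1.0 / n ** 2 for i in range(n) for j in range(n)}
labeled = {(0, 1): +1, (0, 2): -1}             # toy equivalence constraints
h_tilde = lambda i, j: 1.0 if j - i == 1 else -1.0
alpha, W = boosting_round(W, labeled, h_tilde)
```

Across T rounds, the accepted weak distance functions are combined with weights α_t to give the final D(xi, xj).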
Results
Cell-specific distance functions We begin our analysis with an evaluation of the fitting
power of the method, by training with the entire set of 32 stimuli (see Fig. 1). In general, almost all of the correlation values are positive and they are quite high. The average
correlation over all cells is 0.58 with ste = 0.023.
In order to evaluate the generalization potential of our approach, we used a Leave-One-Out (LOU) cross-validation paradigm. In each run, we removed a single stimulus from the
dataset, trained our algorithm on the remaining 31 stimuli, and then tested its performance
on the datapoint that was left out (see Fig. 3). In each histogram we plot the test correlations
of a single cell, obtained when using the LOU paradigm over all of the 32 stimuli. As can
be seen, on some cells our algorithm obtains correlations that are as high as 0.41, while
for other cells the average test correlation is less than 0.1. The average correlation over all
cells is 0.26 with ste = 0.019.
Not surprisingly, the train results (Fig. 1) are better than the test results (Fig. 3). Interestingly, however, we found that there was a significant correlation between the training
performance and the test performance (C = 0.57, p < 0.05; see Fig. 2, left).
Boosting the performance of weak cells In order to boost the performance of cells with
low average correlations, we constructed the following experiment: We clustered the responses of each cell, using the complete-linkage algorithm over the χ² distances with 4
clusters. We then used the F_{1/2} score, which evaluates how well two clustering partitions are
in agreement with one another (F_{1/2} = 2·P·R/(P + R), where P denotes precision and R denotes
recall). This measure was used to identify pairs of cells whose partition of the stimuli
was most similar to each other. In our experiment we took the four cells with the lowest
[Figure 1 panels (Cell 13, All cells, Cell 18): histogram axis data omitted.]
Figure 1: Left: Histogram of train rank-order correlations on the entire ensemble of cells. The
rank-order correlations were computed between the learnt distances and the distances between the
recorded responses for each single stimulus (N = 22 × 32). Center: train correlations for a "strong"
cell. Right: train correlations for a "weak" cell. Dotted lines represent average values.
[Figure 2 panels: left, Test correlation vs. Train correlation scatter; right, test correlations per Cell number (16, 20, 18, 14, 25, 38, 37, 19) for Original constraints vs. Intersection of constraints. Axis data omitted.]
Figure 2: Left: Train vs. test cell specific correlations. Each point marks the average correlation of a
single cell. The correlation between train and test is 0.57 with p = 0.05. The distribution of train and
test correlations is displayed as histograms on the top and on the right respectively. Right: Test rank-order correlations when training using constraints extracted from each cell separately, and when using
the intersection of the constraints extracted from a pair of cells. This procedure always improves the
performance of the weaker cell, and usually also improves the performance of the stronger cell.
performance (right column of Fig. 3), and for each of them used the F_{1/2} score to retrieve the
most similar cell. For each of these pairs, we trained our algorithm once more, using the
constraints obtained by intersecting the constraints derived from the two cells in the pair,
in the LOU paradigm. The results can be seen on the right plot in Fig 2. On all four cells,
this procedure improved LOU test results. Interestingly and counter-intuitively, when
training the better performing cell in each pair using the intersection of its constraints with
those from the poorly performing cell, results deteriorated only for one of the four better
performing cells.
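The pairwise agreement score used to match cells can be sketched as follows (illustrative Python; every pair of stimuli falling in the same cluster counts as a "link", and the precision and recall of one partition's links against the other's are combined as 2PR/(P+R)):

```python
from itertools import combinations

def pair_links(labels):
    """Set of index pairs that share a cluster under this partition."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

def f_half(labels_a, labels_b):
    a, b = pair_links(labels_a), pair_links(labels_b)
    tp = len(a & b)
    precision = tp / len(b)   # fraction of B's links also present in A
    recall = tp / len(a)      # fraction of A's links recovered by B
    return 2 * precision * recall / (precision + recall)

print(f_half([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0 (identical partitions)
```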
Stimulus classification The cross-validation results induced a partition of the stimulus
space into narrowband and wideband stimuli. We measured the predictability of each stimulus by averaging the LOU test results obtained for the stimulus across all cells (see Fig. 4).
Our analysis shows that wideband stimuli are more predictable than narrowband stimuli,
despite the fact that the neuronal responses to these two groups are not different as a whole.
Whereas the non-linearity in the interactions between narrowband and wideband stimuli
has already been noted before [9], here we further refine this observation by demonstrating
a significant difference between the behavior of narrow and wideband stimuli with respect
to the predictability of the similarity between their responses.
5
Discussion
In the standard approach to auditory modeling, a linear or weakly non-linear model is fitted
to the data, and neuronal properties are read from the resulting model. The usefulness of
this approach is limited however by the weak predictability of A1 responses when using
such models. In order to overcome this limitation, we reformulated the problem of char-
[Figure 3 panels: test-correlation histograms for Cells 49, 15, 52, 2, 25, 37, 38, 24, 19, 3, 54, 20, 13, 17, 16, 1, 36, 21, 51, 18, 14, 48, and for All cells. Axis data omitted.]
Figure 3: Histograms of cell specific test rank-order correlations for the 22 cells in the dataset. The
rank-order correlations compare the predicted distances to the distances between the recorded responses, measured on a single stimulus which was left out during the training stage. For visualization
purposes, cells are ordered (columns) by their average test correlation per stimulus in descending
order. Negative correlations are in yellow, positive in blue.
acterizing neuronal responses of highly non-linear neurons. We use the neural data as a
guide for training a highly non-linear distance function on stimulus space, which is compatible with the neural responses. The main result of this paper is the demonstration of the
feasibility of this approach.
Two further results underscore the usefulness of the new formulation. First, we demonstrated that we can improve the test performance of a distance function by using constraints
on the similarity or dissimilarity between stimuli derived from the responses of multiple
neurons. Whereas we expected this manipulation to improve the test performance of the
algorithm on the responses of neurons that were initially poorly predicted, we found that it
actually improved the performance of the algorithm also on neurons that were rather well
predicted, although we paired them with neurons that were poorly predicted. Thus, it is
possible that intersecting constraints derived from multiple neurons uncover regularities
that are hard to extract from individual neurons.
Second, it turned out that some stimuli consistently behaved better than others across the
neuronal population. This difference was correlated with the acoustic structure of the stimuli: those stimuli that contained the weak background component (wideband stimuli) were
generally predicted better. This result is surprising because the background component
is substantially weaker than the other acoustic components in the stimuli (by as much as
35-40 dB). It may mean that the relationship between physical structure (as characterized
by the Cepstral parameters) and the neuronal responses becomes simpler in the presence
of the background component, but is much more idiosyncratic when this component is absent. This result underscores the importance of interactions between narrow and wideband
stimuli for understanding the complexity of cortical processing.
The algorithm is fast enough to be used in near real-time. It can therefore be used to guide
real experiments. One major problem during an experiment is that of stimulus selection:
choosing the best set of stimuli for characterizing the responses of a neuron. The distance
functions trained here can be used to direct this process. For example, they can be used to
[Figure 4 panels: left, spectrograms of the Main, Natural, Echo, and Background stimulus versions (Frequency (kHz) vs. Time (ms)); right, A1 stimuli mean test rank-order correlation for Narrowband and Wideband stimuli. Axis data omitted.]
Figure 4: Left: spectrograms of input stimuli, which are four different versions of a single natural
bird chirp. Right: Stimuli specific correlation values averaged over the entire ensemble of cells. The
predictability of wideband stimuli is clearly better than that of the narrowband stimuli.
find surprising stimuli: either stimuli that are very different in terms of physical structure
but that would result in responses that are similar to those already measured, or stimuli that
are very similar to already tested stimuli but that are predicted to give rise to very different
responses.
References
[1] O. Bar-Yosef, Y. Rotman, and I. Nelken. Responses of Neurons in Cat Primary Auditory Cortex to Bird Chirps: Effects of Temporal and Spectral Context. J. Neurosci., 22(19):8619–8632, 2002.
[2] D. T. Blake and M. M. Merzenich. Changes of AI Receptive Fields With Sound Density. J Neurophysiol, 88(6):3409–3420, 2002.
[3] G. Chechik, A. Globerson, M.J. Anderson, E.D. Young, I. Nelken, and N. Tishby. Group redundancy measures reveal redundancy reduction in the auditory pathway. In NIPS, 2002.
[4] R. C. deCharms, D. T. Blake, and M. M. Merzenich. Optimizing Sound Features for Cortical Neurons. Science, 280(5368):1439–1444, 1998.
[5] D. A. Depireux, J. Z. Simon, D. J. Klein, and S. A. Shamma. Spectro-Temporal Response Field Characterization With Dynamic Ripples in Ferret Primary Auditory Cortex. J Neurophysiol, 85(3):1220–1234, 2001.
[6] J. J. Eggermont, P. M. Johannesma, and A. M. Aertsen. Reverse-correlation methods in auditory research. Q Rev Biophys., 16(3):341–414, 1983.
[7] T. Hertz, A. Bar-Hillel, and D. Weinshall. Boosting margin based distance functions for clustering. In ICML, 2004.
[8] N. Kowalski, D. A. Depireux, and S. A. Shamma. Analysis of dynamic spectra in ferret primary auditory cortex. I. Characteristics of single-unit responses to moving ripple spectra. J Neurophysiol, 76(5):3503–3523, 1996.
[9] L. Las, E. A. Stern, and I. Nelken. Representation of Tone in Fluctuating Maskers in the Ascending Auditory System. J. Neurosci., 25(6):1503–1513, 2005.
[10] C. K. Machens, M. S. Wehr, and A. M. Zador. Linearity of Cortical Receptive Fields Measured with Natural Sounds. J. Neurosci., 24(5):1089–1100, 2004.
[11] L. M. Miller, M. A. Escabi, H. L. Read, and C. E. Schreiner. Spectrotemporal Receptive Fields in the Lemniscal Auditory Thalamus and Cortex. J Neurophysiol, 87(1):516–527, 2002.
[12] I. Nelken. Processing of complex stimuli and natural scenes in the auditory cortex. Current Opinion in Neurobiology, 14(4):474–480, 2004.
[13] Y. Rotman, O. Bar-Yosef, and I. Nelken. Relating cluster and population responses to natural sounds and tonal stimuli in cat primary auditory cortex. Hearing Research, 152(1-2):110–127, 2001.
[14] M. Sahani and J. F. Linden. How linear are auditory cortical responses? In NIPS, 2003.
[15] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In NIPS, 2003.
[16] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-Temporal Receptive Fields of Nonlinear Auditory Neurons Obtained Using Natural Sounds. J. Neurosci., 20(6):2315–2331, 2000.
[17] C. Yanover and T. Hertz. Predicting protein-peptide binding affinity by learning peptide-peptide distance functions. In RECOMB, 2005.
Visual Encoding with Jittering Eyes
Michele Rucci*
Department of Cognitive and Neural Systems
Boston University
Boston, MA 02215
[email protected]
Abstract
Under natural viewing conditions, small movements of the eye and body
prevent the maintenance of a steady direction of gaze. It is known that
stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological self-motion of the
retinal image serves a visual purpose during the brief periods of natural
visual fixation. This study examines the impact of fixational instability
on the statistics of visual input to the retina and on the structure of neural
activity in the early visual system. Fixational instability introduces fluctuations in the retinal input signals that, in the presence of natural images,
lack spatial correlations. These input fluctuations strongly influence neural activity in a model of the LGN. They decorrelate cell responses, even
if the contrast sensitivity functions of simulated cells are not perfectly
tuned to counter-balance the power-law spectrum of natural images. A
decorrelation of neural activity has been proposed to be beneficial for
discarding statistical redundancies in the input signals. Fixational instability might, therefore, contribute to establishing efficient representations
of natural stimuli.
1
Introduction
Models of the visual system often examine steady-state levels of neural activity during
presentations of visual stimuli. It is difficult, however, to envision how such steady-states
could occur under natural viewing conditions, given that the projection of the visual scene
on the retina is never stationary. Indeed, the physiological instability of visual fixation
keeps the retinal image in permanent motion even during the brief periods in between
saccades.
Several sources cause this constant jittering of the eye. Fixational eye movements, of which
we are not aware, alternate small saccades with periods of drifts, even when subjects are
instructed to maintain steady fixation [8]. Following macroscopic redirection of gaze, other
small eye movements, such as corrective saccades and post-saccadic drifts, are likely to
occur. Furthermore, outside of the controlled conditions of a laboratory, when the head
is not constrained by a bite bar, movements of the body, as well as imperfections in the
vestibulo-ocular reflex, significantly amplify the motion of the retinal image. In the light of
* Webpage: www.cns.bu.edu/~rucci
this constant jitter, it is remarkable that the brain is capable of constructing a stable percept,
as fixational instability moves the stimulus by an amount that should be clearly visible (see,
for example, [7]).
Little is known about the purposes of fixational instability. It is often claimed that small
saccades are necessary to refresh neuronal responses and prevent the disappearance of a
stationary scene, a claim that has remained controversial given the brief durations of natural
visual fixation (reviewed in [16]). Yet, recent theoretical proposals [1, 11] have claimed
that fixational instability plays a more central role in the acquisition and neural encoding
of visual information than that of simply refreshing neural activity. Consistent with the
ideas of these proposals, neurophysiological investigations have shown that fixational eye
movements strongly influence the activity of neurons in several areas of the monkey's brain
[5, 14, 6]. Furthermore, modeling studies that simulated neural responses during free-viewing suggest that fixational instability profoundly affects the statistics of thalamic [13]
and thalamocortical activity [10].
This paper summarizes an alternative theory for the existence of fixational instability. Instead of regarding the jitter of visual fixation as necessary for refreshing neuronal responses,
it is argued that the self-motion of the retinal image is essential for properly structuring neural activity in the early visual system into a format that is suitable for processing at later
stages. It is proposed that fixational instability is part of a strategy of acquisition of visual
information that enables compact visual representations in the presence of natural visual
input.
2
Neural decorrelation and fixational instability
It is a long-standing proposal that an important function of early visual processing is the
removal of part of the redundancy that characterizes natural visual input [3]. Less redundant
signals enable more compact representations, in which the same amount of information can
be represented by smaller neuronal ensembles. While several methods exist for eliminating
input redundancies, a possible approach is the removal of pairwise correlations between
the intensity values of nearby pixels [2]. Elimination of these spatial correlations allows
efficient representations in which neuronal responses tend to be less statistically dependent.
According to the theory described in this paper, fixational instability contributes to decorrelating the responses of cells in the retina and the LGN during viewing of natural scenes.
This theory is based on two factors, which are described separately in the following sections. The first component, analyzed in Section 2.1, is the spatially uncorrelated input
signal that occurs when natural scenes are scanned by jittering eyes. The second factor
is an amplification of this spatially uncorrelated input, which is mediated by cell response
characteristics. Section 2.2 examines the interaction between the dynamics of fixational
instability and the temporal characteristics of neurons in the Lateral Geniculate Nucleus
(LGN), the main relay of visual information to the cortex.
2.1
Influence of fixational instability on visual input
To analyze the effect of fixational instability on the statistics of geniculate activity, it is
useful to approximate the input image in a neighborhood of a fixation point x0 by means
of its Taylor series:
I(x) ≈ I(x0) + ∇I(x0) · (x − x0)^T + o(|x − x0|²)    (1)
If the jittering produced by fixational instability is sufficiently small, high-order derivatives
can be neglected, and the input to a location x on the retina during visual fixation can be
approximated by its first-order expansion:
S(x, t) ≈ I(x) + ξ^T(t) · ∇I(x) = I(x) + Ĩ(x, t)    (2)
where ?(t) = [?x (t), ?y (t)] is the trajectory of the center of gaze during the period of
fixation, t is the time elapsed from fixation onset, I(x) is the visual input at t = 0, and
? t) = ?I(x) ?x (t) + ?I(x) ?y (t) is the dynamic fluctuation in the visual input produced
I(x,
?x
?y
by fixational instability.
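As a quick numerical check (a sketch using an invented smooth image, not the paper's stimuli), the first-order expansion in Eq. 2 can be compared against the exactly shifted input for a small gaze offset:

```python
import numpy as np

# Sketch: verify that S(x,t) ~ I(x) + xi(t) . grad I(x) (Eq. 2) is a good
# approximation of the exactly shifted image for a small displacement xi.
x = np.linspace(0, 2 * np.pi, 512)
X, Y = np.meshgrid(x, x)
I = np.sin(X) * np.cos(2 * Y)            # smooth synthetic "image"

xi = (0.01, -0.02)                        # small gaze offset (invented)
# Exact jittered input: I evaluated at the displaced location
S_exact = np.sin(X + xi[0]) * np.cos(2 * (Y + xi[1]))
# First-order approximation using the analytic gradients of I
dI_dx = np.cos(X) * np.cos(2 * Y)
dI_dy = -2 * np.sin(X) * np.sin(2 * Y)
S_approx = I + xi[0] * dI_dx + xi[1] * dI_dy

err = np.max(np.abs(S_exact - S_approx))
print(err)  # remainder is second order in |xi|, so much smaller than the shift itself
```

The residual error scales with $|\xi|^2$, which is the sense in which high-order derivatives can be neglected for small jitter.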
Eq. 2 allows an analytical estimation of the power spectrum of the signal entering the eye during the self-motion of the retinal image. Since, according to Eq. 2, the retinal input $S(x, t)$ can be approximated by the sum of two contributions, $I$ and $\tilde{I}$, its power spectrum $R_{SS}$ consists of three terms:

$$R_{SS}(u, w) \approx R_{II} + R_{\tilde{I}\tilde{I}} + 2R_{I\tilde{I}}$$

where $u$ and $w$ represent, respectively, spatial and temporal frequency.
Fixational instability can be modeled as an ergodic process with zero mean and uncorrelated components along the two axes, i.e., $\langle \xi \rangle_T = 0$ and $R_{\xi_x \xi_y}(t) = 0$. Although not necessary for the proposed theory, these assumptions simplify our statistical analysis, as $R_{I\tilde{I}}$ is zero, and the power spectrum of the visual input is given by:

$$R_{SS} \approx R_{II} + R_{\tilde{I}\tilde{I}} \qquad (3)$$
where $R_{II}$ is the power spectrum of the stimulus, and $R_{\tilde{I}\tilde{I}}$ depends on both the stimulus and fixational instability.

To determine $R_{\tilde{I}\tilde{I}}(u, w)$, it follows from Eq. 2 that

$$\tilde{I}(u, w) = i u_x I(u) \xi_x(w) + i u_y I(u) \xi_y(w)$$

and under the assumption of uncorrelated motion components, approximating the power spectrum via the finite Fourier Transform yields:

$$R_{\tilde{I}\tilde{I}}(u, w) = \lim_{T \to \infty} \frac{1}{T} \left\langle |\tilde{I}_T(u, w)|^2 \right\rangle_{\xi, I} = R_{\xi\xi}(w) R_{II}(u) |u|^2 \qquad (4)$$
where $\tilde{I}_T$ is the Fourier Transform of a signal of duration $T$, and we have assumed identical second-order statistics of retinal image motion along the two Cartesian axes. As shown in Fig. 1, the presence of the term $|u|^2$ in Eq. 4 compensates for the scaling invariance of natural images. That is, since for natural images $R_{II}(u) \propto u^{-2}$, the product $R_{II}(u)|u|^2$ whitens $R_{II}$ by producing a power spectrum $R_{\tilde{I}\tilde{I}}$ that remains virtually constant at all spatial frequencies.
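A one-line numerical illustration of this whitening (toy frequency axis, not data from the paper): for a $u^{-2}$ spectrum, the product $R_{II}(u)|u|^2$ is flat across spatial frequency.

```python
import numpy as np

# Sketch: for a scale-invariant spectrum R_II(u) ~ u^-2, the factor |u|^2
# in Eq. 4 exactly cancels the 1/u^2 falloff, leaving a flat spectrum
# (up to the stimulus-independent factor R_xixi(w), omitted here).
u = np.linspace(0.1, 10.0, 100)   # spatial frequency, arbitrary units
R_II = u ** -2.0                   # natural-image-like spectrum
R_dyn = R_II * u ** 2              # proportional to the whitened R in Eq. 4

print(R_II[0] / R_II[-1])  # R_II itself spans about four orders of magnitude
```

While $R_{II}$ varies by orders of magnitude over this range, `R_dyn` is constant, which is the content of the whitening claim.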
2.2 Influence of fixational instability on neural activity
This section analyzes the structure of correlated activity during fixational instability in a
model of the LGN. To delineate the important elements of the theory, we consider linear
approximations of geniculate responses provided by space-time separable kernels. This
assumption greatly simplifies the analysis of levels of correlation. Results are, however,
general, and the outcomes of simulations with space-time inseparable kernels and different
levels of rectification (the most prominent nonlinear behavior of parvocellular geniculate
neurons) can be found in [13, 10].
Mean instantaneous firing rates were estimated on the basis of the convolution between the input $I$ and the cell spatiotemporal kernel $h_\alpha$:

$$\alpha(t) = h_\alpha(x, t) * I(x, t) = \int_0^t \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_\alpha(x', y', t') \, I(x - x', y - y', t - t') \, dx' \, dy' \, dt'$$

where $h_\alpha(x, t) = g_\alpha(t) f_\alpha(x)$. Kernels were designed on the basis of data from neurophysiological recordings to replicate the responses of parvocellular ON-center cells in the LGN
[Figure 1 plot: power (log scale) versus spatial frequency (cycles/deg); curves $R_{II}$ and $R_{\tilde{I}\tilde{I}}$.]
Figure 1: Fixational instability introduces a spatially uncorrelated component in the visual input to the retina during viewing of natural scenes. The graph compares the power spectrum of natural images ($R_{II}$) to the dynamic power spectrum introduced by fixational instability ($R_{\tilde{I}\tilde{I}}$). The two curves represent radial averages evaluated over 15 pictures of natural scenes.
of the macaque. The spatial component $f_\alpha(x)$ was modeled by a standard difference of Gaussians [15]. The temporal kernel $g_\alpha(t)$ possessed a biphasic profile with a positive peak at 50 ms, a negative peak at 75 ms, and an overall duration of less than 200 ms [4].
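Because the kernel is space-time separable, $h_\alpha(x, t) = g_\alpha(t) f_\alpha(x)$, the convolution can be computed as spatial filtering followed by temporal filtering. A minimal 1-D-space sketch follows; all kernel parameters are invented for illustration and are not the fitted values from the recordings cited above.

```python
import numpy as np

# Sketch of a linear rate estimate with a separable kernel h(x,t)=g(t)f(x):
# filter each frame in space with f, then each spatial location in time with g.
t = np.arange(0.0, 200.0, 1.0)  # ms
# biphasic temporal profile: positive lobe near 50 ms, negative lobe near 75 ms
g = np.exp(-((t - 50) / 15) ** 2) - 0.9 * np.exp(-((t - 75) / 20) ** 2)

x = np.linspace(-2, 2, 101)     # deg
# difference-of-Gaussians spatial profile (center minus surround)
f = np.exp(-x**2 / (2 * 0.2**2)) - 0.5 * np.exp(-x**2 / (2 * 0.6**2))

I = np.random.default_rng(0).standard_normal((len(t), len(x)))  # input movie I(t, x)
spatial = np.array([np.convolve(frame, f, mode="same") for frame in I])
rate = np.array([np.convolve(spatial[:, j], g, mode="full")[: len(t)]
                 for j in range(len(x))]).T  # causal temporal filtering

print(rate.shape)
```

Separability keeps the cost at two 1-D convolutions per sample instead of one full spatiotemporal convolution.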
In this section, levels of correlation in the activity of pairs of geniculate neurons are summarized by the correlation pattern $c_{\alpha\alpha}(x)$:

$$c_{\alpha\alpha}(x) = \langle \alpha_y(t) \, \alpha_z(t) \rangle_{T, I} \qquad (5)$$

where $\alpha_y(t)$ and $\alpha_z(t)$ are the responses of cells with receptive fields centered at $y$ and $z$, and $x = y - z$ is the separation between receptive field centers. The average is evaluated over time $T$ and over a set of stimuli $I$.
With linear models, $c_{\alpha\alpha}(x)$ can be estimated on the basis of the input power spectrum $R_{SS}(u, w)$:

$$c_{\alpha\alpha}(x) = c_\alpha(x, t)\big|_{t=0} \quad \text{and} \quad c_\alpha(x, t) = \mathcal{F}^{-1}\{R_{\alpha\alpha}\} \qquad (6)$$

where $R_{\alpha\alpha} = |H_\alpha|^2 R_{SS}(u, w)$ is the power spectrum of LGN activity ($H_\alpha(u, w)$ is the spatiotemporal Fourier transform of the kernel $h_\alpha(x, t)$), and $\mathcal{F}^{-1}$ represents the inverse Fourier transform operator.
To evaluate $R_{\alpha\alpha}$, substitution of $R_{SS}$ from Eq. 3 and separation of spatial and temporal elements yield:

$$R_{\alpha\alpha} \approx |G_\alpha|^2 |F_\alpha|^2 R_{II} + |G_\alpha|^2 |F_\alpha|^2 R_{\tilde{I}\tilde{I}} = R^S_{\alpha\alpha} + R^D_{\alpha\alpha} \qquad (7)$$

where $F_\alpha(u)$ and $G_\alpha(w)$ represent the Fourier Transforms of the spatial and temporal kernels. Eq. 7 shows that, like the retinal input, the power spectrum of geniculate activity can also be approximated by the sum of two separate elements. Only $R^D_{\alpha\alpha}$ depends on fixational instability. The first term, $R^S_{\alpha\alpha}$, is determined by the power spectrum of the stimulus and the characteristics of geniculate cells but does not depend on the motion of the eye during the acquisition of visual information.
By substituting in Eq. 6 the expression of $R_{\alpha\alpha}$ from Eq. 7, we obtain

$$c_\alpha(x, t) \approx c^S_\alpha(x, t) + c^D_\alpha(x, t) \qquad (8)$$

where

$$c^S_\alpha(x, t) = \mathcal{F}^{-1}\{R^S_{\alpha\alpha}(u, w)\} \quad \text{and} \quad c^D_\alpha(x, t) = \mathcal{F}^{-1}\{R^D_{\alpha\alpha}(u, w)\}$$

Eq. 8 shows that fixational instability adds the term $c^D_\alpha$ to the pattern of correlated activity $c^S_\alpha$ that would be obtained with presentation of the same set of stimuli without the self-motion of the eye.
With presentation of pictures of natural scenes, $R_{II}(w) = 2\pi\delta(w)$, and the two input signals $R^S_{\alpha\alpha}$ and $R^D_{\alpha\alpha}$ provide, respectively, a static and a dynamic contribution to the spatiotemporal correlation of geniculate activity. The first term in Eq. 8 gives a correlation pattern:

$$\bar{c}^S_\alpha(x) = k_S \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u)\} \qquad (9)$$

where $k_S = |G_\alpha(0)|^2$.
By substituting $R_{\tilde{I}\tilde{I}}$ from Eq. 4, the second term in Eq. 8 gives a correlation pattern:

$$\bar{c}^D_\alpha(x) = k_D \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u) |u|^2\} \qquad (10)$$

where $k_D = \mathcal{F}_T^{-1}\{|G_\alpha(w)|^2 R_{\xi\xi}(w)\}\big|_{t=0}$ is a constant given by the temporal dynamics of cell response and fixational instability. $\mathcal{F}_T^{-1}$ and $\mathcal{F}_S^{-1}$ indicate the operations of inverse Fourier Transform in time and space.
To summarize, during the physiological instability of visual fixation, the structure of correlated activity in a linear model of the LGN is given by the superposition of two spatial terms, each of them weighted by a coefficient ($k_S$ and $k_D$) that depends on dynamics:

$$c_{\alpha\alpha}(x) = k_S \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u)\} + k_D \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u) |u|^2\} \qquad (11)$$
Whereas the stimulus contributes to the structure of correlated activity by means of the power spectrum $R^S_{II}$, the contribution introduced by fixational instability depends on $R_{\tilde{I}\tilde{I}}$, a signal that discards the broad correlations of natural images. Since most power in natural images is concentrated at low spatial frequencies, the uncorrelated fluctuations in the input signals generated by fixational instability have small amplitudes. That is, $R_{\tilde{I}\tilde{I}}$ provides less power than $R_{II}$. However, geniculate cells tend to respond more strongly to changing stimuli than stationary ones, and $k_D$ is larger than $k_S$. Therefore, the small input modulations introduced by fixational instability are amplified by the dynamics of geniculate cells.
Fig. 2 shows the structure of correlated activity in the model when images of natural scenes are examined in the presence of fixational instability. In this example, fixational instability was assumed to possess Gaussian temporal correlation, $R_{\xi\xi}(w)$, with standard deviation $\sigma_T = 22$ ms and amplitude $\sigma_S = 12$ arcmin. In addition to the total pattern of correlation given by Eq. 11, Fig. 2 also shows the patterns of correlation produced by the two components $\bar{c}^S_\alpha$ and $\bar{c}^D_\alpha$. Whereas $\bar{c}^S_\alpha$ was strongly influenced by the broad spatial correlations of natural images, $\bar{c}^D_\alpha$, due to its dependence on the whitened power spectrum $R_{\tilde{I}\tilde{I}}$, was determined exclusively by cell receptive fields. Due to the amplification factor $k_D$, $\bar{c}^D_\alpha$ provided a stronger contribution than $\bar{c}^S_\alpha$ and heavily influenced the global structure of correlated activity.

To examine the relative influence of the two terms $\bar{c}^S_\alpha$ and $\bar{c}^D_\alpha$ on the structure of correlated activity, Fig. 3 shows their ratio at zero separation, $\gamma_{DS} = \bar{c}^D_\alpha(0) / \bar{c}^S_\alpha(0)$, with
[Figure 2 plot: normalized correlation versus cell RF separation (deg.); curves Static, Dynamic, and Total.]
Figure 2: Patterns of correlation obtained from Eq. 11 when natural images are examined in the presence of fixational instability. The three curves represent the total level of correlation (Total), the correlation $\bar{c}^S_\alpha(x)$ that would be present if the same images were examined in the absence of fixational instability (Static), and the contribution $\bar{c}^D_\alpha(x)$ of fixational instability (Dynamic). Data are radial averages evaluated over pairs of cells with the same separation $\|x\|$ between their receptive fields.
presentation of natural images and for various parameters of fixational instability. Fig. 3(a) shows the effect of varying the spatial amplitude of the retinal jitter. In order to remain within the range of validity of the Taylor approximation in Eq. 2, only small amplitude values are considered. As shown by Fig. 3(a), the larger the instability of visual fixation, the larger the contribution of the dynamic term $\bar{c}^D_\alpha$ with respect to $\bar{c}^S_\alpha$. Except for very small values of $\sigma_S$, $\gamma_{DS}$ is larger than one, indicating that $\bar{c}^D_\alpha$ influences the structure of correlated activity more strongly than $\bar{c}^S_\alpha$. Fig. 3(b) shows the impact of varying $\sigma_T$, which defines the temporal window over which fixational jitter is correlated. Note that $\gamma_{DS}$ is a non-monotonic function of $\sigma_T$. For a range of $\sigma_T$ corresponding to intervals shorter than the typical duration of visual fixation, $\bar{c}^D_\alpha$ is significantly larger than $\bar{c}^S_\alpha$. Thus, fixational instability strongly influences correlated activity in the model when it moves the direction of gaze within a range of a few arcmin and is correlated over a fraction of the duration of visual fixation. This range of parameters is consistent with the instability of fixation observed in primates.
3 Conclusions
It has been proposed that neurons in the early visual system decorrelate their responses
to natural stimuli, an operation that is believed to be beneficial for the encoding of visual
information [2]. The original claim, which was based on psychophysical measurements of
human contrast sensitivity, relies on an inverse proportionality between the spatial response
characteristics of retinal and geniculate neurons and the structure of natural images. However, data from neurophysiological recordings have clearly shown that neurons in the retina
and the LGN respond significantly to low spatial frequencies, in a way that is not compatible with the requirements of Atick and Redlich's proposal. During natural viewing, input
signals to the retina depend not only on the stimulus, but also on the physiological instability of visual fixation. The results of this study show that when natural scenes are examined
[Figure 3 plots: ratio dynamic/static versus $\sigma_S$ (arcmin) in panel (a) and versus $\sigma_T$ (ms) in panel (b).]
Figure 3: Influence of the characteristics of fixational instability on the patterns of correlated activity during presentation of natural images. The two graphs show the ratio $\gamma_{DS}$ between the peaks of the two terms $\bar{c}^D_\alpha$ and $\bar{c}^S_\alpha$ in Eq. 8. Fixational instability was assumed to possess a Gaussian correlation with standard deviation $\sigma_T$ and amplitude $\sigma_S$. (a) Effect of varying $\sigma_S$ ($\sigma_T = 22$ ms). (b) Effect of varying $\sigma_T$ ($\sigma_S = 12$ arcmin).
with jittering eyes, as occurs under natural viewing conditions, fixational instability tends
to decorrelate cell responses even if the contrast sensitivity functions of individual neurons
do not counterbalance the power spectrum of visual input.
The theory described in this paper relies on two main elements. The first component is the presence of a spatially uncorrelated input signal during presentation of natural visual stimuli ($R_{\tilde{I}\tilde{I}}$ in Eq. 3). This input signal is a direct consequence of the scale invariance of natural images. It is a property of natural images that, although the intensity values of nearby pixels tend to be correlated, changes in intensity around pairs of pixels are uncorrelated. This property is not satisfied by an arbitrary image. In a spatial grating, for example, intensity changes at any two locations are highly correlated. During the instability of visual fixation, neurons receive input from the small regions of the visual field covered by the jittering of their receptive fields. In the presence of natural images, although the inputs to cells with nearby receptive fields are on average correlated, the fluctuations in these input signals produced by fixational instability are not correlated. Fixational instability appears to be tuned to the statistics of natural images, as it introduces a spatially uncorrelated signal only in the presence of visual input with a power spectrum that declines as $u^{-2}$ with spatial frequency.
The second element of the theory is the neuronal amplification of the spatially uncorrelated input signal introduced by the self-motion of the retinal image. This amplification originates from the interaction between the dynamics of fixational instability and the temporal sensitivity of geniculate units. Since $R_{\tilde{I}\tilde{I}}$ attenuates the low spatial frequencies of the stimulus, it tends to possess less power than $R_{II}$. However, in Eq. 11, the contributions of the two input signals are modulated by the multiplicative terms $k_S$ and $k_D$, which depend on the temporal characteristics of cell responses (both $k_S$ and $k_D$) and fixational instability ($k_D$ only). Since geniculate neurons respond more strongly to changing stimuli than to stationary ones, $k_D$ tends to be higher than $k_S$. Correspondingly, in a linear model of the LGN, units are highly sensitive to the uncorrelated fluctuations in the input signals produced by fixational instability.
The theory summarized in this study is consistent with the strong modulations of neural
responses observed during fixational eye movements [5, 14, 6], as well as with the results
of recent psychophysical experiments aimed at investigating perceptual influences of fixational instability [12, 9]. It should be observed that, since patterns of correlations were
evaluated via Fourier analysis, this study implicitly assumed a steady-state condition of
visual fixation. Further work is needed to extend the proposed theory in order to take into
account time-varying natural stimuli and the nonstationary regime produced by the occurrence of saccades.
Acknowledgments
The author thanks Antonino Casile and Gaelle Desbordes for many helpful discussions.
This material is based upon work supported by the National Institute of Health under Grant
EY15732-01 and the National Science Foundation under Grant CCF-0432104.
References
[1] E. Ahissar and A. Arieli. Figuring space by time. Neuron, 32(2):185–201, 2001.
[2] J. J. Atick and A. Redlich. What does the retina know about natural scenes? Neural Comp., 4:449–572, 1992.
[3] H. B. Barlow. The coding of sensory messages. In W. H. Thorpe and O. L. Zangwill, editors, Current Problems in Animal Behaviour, pages 331–360. Cambridge University Press, Cambridge, 1961.
[4] E. A. Benardete and E. Kaplan. Dynamics of primate P retinal ganglion cells: Responses to chromatic and achromatic stimuli. J. Physiol., 519(3):775–790, 1999.
[5] D. A. Leopold and N. K. Logothetis. Microsaccades differentially modulate neural activity in the striate and extrastriate visual cortex. Exp. Brain Res., 123:341–345, 1998.
[6] S. Martinez-Conde, S. L. Macknik, and D. H. Hubel. The function of bursts of spikes during visual fixation in the awake primate lateral geniculate nucleus and primary visual cortex. Proc. Natl. Acad. Sci. USA, 99(21):13920–13925, 2002.
[7] I. Murakami and P. Cavanagh. A jitter after-effect reveals motion-based stabilization of vision. Nature, 395(6704):798–801, 1998.
[8] F. Ratliff and L. A. Riggs. Involuntary motions of the eye during monocular fixation. J. Exp. Psychol., 40:687–701, 1950.
[9] M. Rucci and J. Beck. Effects of ISI and flash duration on the identification of briefly flashed stimuli. Spatial Vision, 18(2):259–274, 2005.
[10] M. Rucci and A. Casile. Decorrelation of neural activity during fixational instability: Possible implications for the refinement of V1 receptive fields. Visual Neurosci., 21:725–738, 2004.
[11] M. Rucci and A. Casile. Fixational instability and natural image statistics: Implications for early visual representations. Network: Computation in Neural Systems, 16(2-3):121–138, 2005.
[12] M. Rucci and G. Desbordes. Contributions of fixational eye movements to the discrimination of briefly presented stimuli. J. Vision, 3(11):852–864, 2003.
[13] M. Rucci, G. M. Edelman, and J. Wray. Modeling LGN responses during free-viewing: A possible role of microscopic eye movements in the refinement of cortical orientation selectivity. J. Neurosci., 20(12):4708–4720, 2000.
[14] D. M. Snodderly, I. Kagan, and M. Gur. Selective activation of visual cortex neurons by fixational eye movements: Implications for neural coding. Vis. Neurosci., 18:259–277, 2001.
[15] P. D. Spear, R. J. Moore, C. B. Y. Kim, J. T. Xue, and N. Tumosa. Effects of aging on the primate visual system: spatial and temporal processing by lateral geniculate neurons in young adult and old rhesus monkeys. J. Neurophysiol., 72:402–420, 1994.
[16] R. M. Steinman and J. Z. Levinson. The role of eye movements in the detection of contrast and spatial detail. In E. Kowler, editor, Eye Movements and their Role in Visual and Cognitive Processes, pages 115–212. Elsevier Science, 1990.
field:8 aware:1 never:1 identical:1 represents:1 broad:2 stimulus:20 simplify:1 few:1 retina:9 thorpe:1 national:2 individual:1 beck:1 antonino:1 cns:2 maintain:1 detection:1 message:1 highly:2 introduces:3 analyzed:1 light:1 natl:1 implication:3 capable:1 necessary:3 shorter:1 taylor:2 old:1 re:1 theoretical:1 modeling:2 deviation:2 spatiotemporal:3 xue:1 thanks:1 peak:3 sensitivity:4 bu:2 standing:1 gaze:4 central:1 satisfied:1 cognitive:2 tz:1 derivative:1 murakami:1 account:1 retinal:13 summarized:2 coding:2 coefficient:1 permanent:1 onset:1 depends:4 vi:1 later:1 multiplicative:1 analyze:1 characterizes:1 thalamic:1 contribution:8 characteristic:6 percept:1 ensemble:1 yield:2 identification:1 produced:6 wray:1 trajectory:1 comp:1 influenced:2 acquisition:3 frequency:7 ocular:1 whitens:1 static:5 lim:1 amplitude:5 appears:1 higher:1 response:18 decorrelating:1 evaluated:4 delineate:1 strongly:7 furthermore:2 stage:1 atick:2 correlation:20 d:4 nonlinear:1 lack:1 defines:1 michele:1 usa:1 effect:7 validity:1 normalized:1 barlow:1 ccf:1 spatially:6 entering:1 moore:1 laboratory:1 flashed:1 during:21 self:5 steady:5 m:6 prominent:1 motion:12 image:28 instantaneous:1 extend:1 measurement:1 cambridge:2 stable:1 cortex:4 add:1 recent:2 discard:1 claimed:2 selectivity:1 analyzes:1 determine:1 period:4 redundant:1 signal:17 levinson:1 believed:1 long:1 post:1 controlled:1 impact:2 maintenance:1 whitened:1 vision:3 kernel:7 represent:4 cell:18 receive:1 proposal:4 whereas:2 separately:1 addition:1 interval:1 redirection:1 benardete:1 source:1 macroscopic:1 posse:3 subject:1 tend:4 virtually:1 recording:2 nonstationary:1 presence:8 affect:1 perfectly:1 idea:1 regarding:1 simplifies:1 decline:1 t0:2 whether:1 expression:1 f:5 cause:1 useful:1 clear:1 covered:1 aimed:1 amount:2 transforms:1 concentrated:1 fixational:52 exist:1 stabilized:1 figuring:1 estimated:2 profoundly:1 redundancy:3 microsaccades:1 changing:2 prevent:2 v1:1 graph:2 fraction:1 sum:2 inverse:3 jitter:5 
respond:3 separation:5 summarizes:1 scaling:1 dy:1 activity:26 occur:2 scanned:1 awake:1 scene:10 ri:16 nearby:3 fourier:7 separable:1 format:1 department:1 according:2 alternate:1 arieli:1 kd:9 beneficial:2 smaller:1 remain:1 rucci:8 primate:4 rectification:1 monocular:1 remains:1 needed:1 know:1 serf:1 operation:2 cavanagh:1 occurrence:1 alternative:1 existence:1 original:1 approximating:1 casile:3 psychophysical:2 move:2 kagan:1 occurs:2 spike:1 strategy:1 saccadic:1 disappearance:1 receptive:7 dependence:1 unclear:1 striate:1 primary:1 microscopic:1 separate:1 simulated:2 lateral:3 sci:1 modeled:2 ratio:4 balance:1 difficult:1 negative:1 kaplan:1 ratliff:1 attenuates:1 rii:19 neuron:13 convolution:1 finite:1 possessed:1 head:1 arbitrary:1 drift:2 intensity:4 introduced:4 pair:3 leopold:1 elapsed:1 macaque:1 adult:1 bar:1 pattern:9 involuntary:1 regime:1 summarize:1 bite:1 rf:1 power:21 suitable:1 decorrelation:3 natural:37 counterbalance:1 brief:3 eye:18 picture:2 psychol:1 mediated:1 health:1 spear:1 removal:2 relative:1 law:1 remarkable:1 foundation:1 nucleus:2 controversial:1 consistent:3 vestibulo:1 editor:2 uncorrelated:11 cd:3 compatible:1 supported:1 thalamocortical:1 free:1 institute:1 correspondingly:1 curve:2 cortical:1 sensory:1 instructed:1 author:1 refinement:2 approximate:1 compact:2 implicitly:1 keep:1 deg:2 global:1 hubel:1 investigating:1 reveals:1 assumed:3 spectrum:17 reviewed:1 nature:1 contributes:2 expansion:1 constructing:1 conde:1 refreshing:2 main:2 neurosci:3 profile:1 martinez:1 body:2 neuronal:5 fig:7 redlich:2 perceptual:1 young:1 remained:1 discarding:1 physiological:4 essential:1 cartesian:1 boston:2 simply:1 likely:1 ganglion:1 neurophysiological:3 visual:47 u2:1 saccade:5 reflex:1 monotonic:1 relies:2 ma:1 modulate:1 presentation:6 flash:1 absence:1 change:2 determined:2 except:1 typical:1 total:4 invariance:2 indicating:1 dx0:1 modulated:1 evaluate:1 correlated:16 |
2,086 | 2,895 | Using ?epitomes? to model genetic diversity:
Rational design of HIV vaccine cocktails
Nebojsa Jojic, Vladimir Jojic, Brendan Frey, Chris Meek and David Heckerman
Microsoft Research
Abstract
We introduce a new model of genetic diversity which summarizes a large
input dataset into an epitome, a short sequence or a small set of short
sequences of probability distributions capturing many overlapping subsequences from the dataset. The epitome as a representation has already
been used in modeling real-valued signals, such as images and audio. The
discrete sequence model we introduce in this paper targets applications
in genetics, from multiple alignment to recombination and mutation inference. In our experiments, we concentrate on modeling the diversity of
HIV where the epitome emerges as a natural model for producing relatively small vaccines covering a large number of immune system targets
known as epitopes. Our experiments show that the epitome includes more
epitopes than other vaccine designs of similar length, including cocktails
of consensus strains, phylogenetic tree centers, and observed strains. We
also discuss epitome designs that take into account uncertainty about T-cell cross-reactivity and epitope presentation. In our experiments, we find
that vaccine optimization is fairly robust to these uncertainties.
1
Introduction
Within and across instances of a certain class of a natural signal, such as a facial image, a bird
song recording, or a certain type of a gene, we find many repeating fragments. The repeating
fragments can vary slightly and can have arbitrary (and usually unknown) sizes. For instance,
in cropped images of human faces, a small patch capturing an eye appears in an image twice
(with a symmetry transformation applied), and across different facial images many times, as
humans have a limited number of eye types. Another repeating structure across facial images
is the nose, which occupies a larger patch. In mammalian DNA sequences, we find repeating
regulatory elements within a single sequence, and repeating larger structures (genes, or gene
fragments) across species. Instead of defining size, variability and typical relative locations
of repeating fragments manually, in an application-driven way, the ?epitomic analysis? [5]
is an unsupervised approach to estimating repeating fragment models, and simultaneously
aligning the data to them. This is achieved by considering data in terms of randomly selected
overlapping fragments, or patches, of various sizes and mapping them onto an ?epitome,?
a learned structure which is considerably larger than any of the fragments, and yet much
smaller than the total size of the dataset.
We first introduced this model for image analysis [5], and it has since been used for video
and audio analysis [2, 6], as well. This paper introduces a new form of the epitome as
a sequence of multinomial distributions (Fig. 1), and describe its applications to HIV
diversity modeling and rational vaccine design. We show that the vaccines optimized using
our algorithms are likely to have broader predicted coverage of immune targets in HIV than
the previous rational designs.
The generating profile sequence
Data (colorcoded according to the posterior epitome mapping Q(T))
p(T)
e
Figure 1: The epitome (e) learned from data synthesized from the generating profile sequence (Section
5). A color coding in the epitome and data sequences is used to show the mapping between epitome
and data positions. A white color indicates that the letter was likely generated from the garbage
component of the epitome. The distribution p(T ) shows which 9mers from the epitome were more
likely to generate patches of the data.
2
Sequence epitome
The central part of Fig. 1 illustrates a small set of amino acid sequences X = {xij } of size
MN (with i indexing a sequence, and j indexing a letter within a sequence, and M = max i,
N = max j). The sequences share patterns (although sometimes with discrepancies in
isolated amino-acids) but one sequence may be similar to other sequences in different
regions. The sequences are generated synthetically by combining the pieces of the profile
sequence given in the first line of the figure, with occasional insertions of random sequence
fragments, as discussed in Section 5. Sequence variability in this synthetic example is
slightly higher than that found in the NEF protein of the human immunodeficiency virus
(HIV) [7], while the envelope proteins of the same virus exhibit more variability. Examples
of high genetic diversity can also be found in higher-level organisms, for example in the
regions coding for immune system?s pattern recognition molecules.
The last row in the figure illustrates an epitome optimized to represent the variability in the
sequences above. In general, the epitome is a smaller array E = {e_mn} of size M_e × N_e, where M_e N_e ≪ M N. In the figure, M_e = 1. An epitome can be parameterized in different ways, but in the figure, each epitome element e_mn is a multinomial distribution
with the probability of each letter represented by its height. The epitome's summarization
quality is defined by a simple generative model which considers the data X in terms of
shorter subsequences, XS . A subsequence XS is defined as an ordered subset of letters
from X taken from positions listed in the ordered index set S. For instance, the set S =
{(4, 8), (4, 9), (4, 10), (4, 11)} points to a contiguous patch of letters in the fourth sequence
XS = RQKK. Similarly, set S = {(6, 2), (6, 3), (6, 4), (6, 5), (6, 6)} points to the patch
XS = LDRQK in the sixth sequence. A number of such patches1 of various lengths can
be taken randomly (and with overlap). The quality of the epitome is then defined as the total
likelihood of these patches under the generative model which generates each patch from a
set of distributions E_T, where T is an ordered set of indices into the epitome. (In the figure, the epitome is defined on a circle, so that the index progression continues from N_e to 1; this reduces local-minima problems in the EM algorithm for epitome learning, as discussed in Sections 4 and 5.)
¹ In principle, noncontiguous patches can be taken as well, if the application so requires.
and the generative process is assumed to consist of the following two steps:

• Sample a patch $E_T$ from $E$ according to $p(T)$. To illustrate $p(T)$ in Fig. 1, we consider only the set of all 9-long contiguous patches. For such patches, which are sometimes called nine-mers, we can index different sets $T$ by their first elements and plot $p(T)$ as a curve with the domain $\{1, \ldots, N_e - 8\}$.

• Generate a patch $X_S$ from $E_T$ according to $p(X_S \mid E_T) = \prod_{k=1}^{|T|} e_{T(k)}(X_{S(k)})$, with $T(k)$ and $S(k)$ denoting the $k$-th element in the epitome and data patches.

Each execution of these two steps can, in principle, generate any pattern. The probability (likelihood) of generating a particular pattern indicated by $S$ is

$$p(X_S) = \sum_T p(X_S \mid E_T) \, p(T). \qquad (1)$$
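The two-step generative process and the marginal in Eq. 1 are straightforward to implement. The sketch below uses a toy alphabet and random multinomials, with a uniform p(T) over contiguous circular mappings; none of this is the paper's learned epitome.

```python
import numpy as np

# Sketch: likelihood of a patch X_S under a circular sequence epitome,
# p(X_S) = sum_T p(X_S | E_T) p(T), with contiguous mappings T indexed
# by their start position and wrap-around at the epitome boundary.
alphabet = "ACDE"
Ne = 6
rng = np.random.default_rng(1)
E = rng.dirichlet(np.ones(len(alphabet)), size=Ne)  # one multinomial per position
pT = np.full(Ne, 1.0 / Ne)                          # uniform prior over start positions

def patch_likelihood(patch):
    idx = [alphabet.index(c) for c in patch]
    total = 0.0
    for start in range(Ne):                         # circular epitome: wrap around
        like = 1.0
        for k, a in enumerate(idx):
            like *= E[(start + k) % Ne, a]
        total += pT[start] * like
    return total

p = patch_likelihood("ACD")
print(p)
```

Because each epitome position is a normalized multinomial and p(T) sums to one, summing `patch_likelihood` over all patches of a fixed length returns exactly one, which is a handy sanity check on any implementation.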
Given the epitome, we can perform inference in this model and compute the posterior
distribution over mappings T for a particular patch. For instance, for X_S = RQKK, the most probable mapping is T = {(1, 4), (1, 5), (1, 6), (1, 7)}.
algorithms for estimating the epitome distributions.
Our illustration points to possible applications of epitomes to multiple sequence alignment,
and therefore requires a short discussion on similarity to other biological sequence models
[3]. While the epitome is a fully probabilistic model and thus defines a precise cost function
for optimization, as was the case with HMM-based models, or dynamic programming
solutions to sequence alignment, the main novelty in our approach is the consideration
of both the data and the model parameters in terms of overlapping patches. This leads to
the alignment of different parts of the sequences to the joint representation without explicit
constraints on contiguity of the mappings or temporal models used in HMMs. Also, as we
discuss in the next section, our goal is diversity modeling, and not multiple alignment. The
epitome's robustness to the length, position, and variability of repeating sequence fragments
allows us to bypass both the task of optimal global alignment, and the problem of defining
the notion of global alignment. In addition, consideration of overlapping patches in a
biological sequence can be viewed as modeling independent binding processes, making the
patch independence assumption of our generative model biologically relevant. We illustrate
these properties of the epitome on the problem of HIV diversity modeling and rational
vaccine design.
3  HIV evolution and rational vaccine design
Recent work on the rational design of HIV vaccines has turned to cocktail approaches
with the intention of protecting a person against many possible variants of the HIV virus.
One of the potential difficulties with cocktail design is vaccine size. Vaccines with a large
number of nucleotides or amino acids are expensive to manufacture and more difficult to
deliver. In this section, we will show that epitome modeling can overcome this limitation
by providing a means for generating smaller vaccines representing a wide diversity of HIV
in an immunologically relevant way. We focus on the problem of constructing an optimal
cellular vaccine in terms of its coverage of MHC-I epitopes, short contiguous patterns of
8-11 aminoacids in HIV proteins [8].
Major histocompatibility complex (MHC) molecules are responsible for presentation of
short segments of internal proteins, called "epitopes," on the surface of a cell. These
peptides (protein segments) can then be observed from outside the cell by killer T-cells,
which normally react only to foreign peptides, instructing the cell to self-destruct. The killer
cells and their offspring have the opportunity to bind to multiple infected cells, and so their
first binding to a particular foreign epitope is used to accelerate an immune reaction to other
infected cells exposing the same epitope. Such responses are called memory responses
and can persist for a long time after the infection has been cleared, providing longer-term
immunity to the disease. The goal of vaccine design is to create artificial means to produce
such immunological memory of a particular virus without the danger of developing the
disease.
In the case of a less variable virus, the vaccination may be possible by delivering a foreign
protein similar to the viral protein into a patient?s cells, triggering the immune response.
However, HIV is capable of assuming many different forms, and immunization against a
single strain is largely expected to be insufficient. In fact, without appropriate optimization,
the number of different proteins needed to cover the viral diversity would be too large for
the known vaccine delivery mechanisms. It is well known that epitopes within and across
the strains in a population overlap [7]. The epitome model naturally exploits this overlap to
construct a vaccine that can prime the immune system to attack as many potential epitopes
as possible. For instance, if the sequences in Fig 1 were HIV fragments from different
strains of the virus, then the epitome would contain many potential epitopes of lengths 8-11
from these sequences. Furthermore, the context of the captured epitopes in the epitome is
similar to the context in the epitomized sequences, which increases the chances of equivalent
presentation of the epitome and data epitopes.
MHC molecules are encoded within the most diverse region of the human genome. This
gives our species a diversity advantage in numerous clashes with viruses. Each individual
has a slightly different set of MHC molecules which bind to different motifs in the proteins
expressed and cleaved in the cell. Due to the limitation in MHC binding, each person?s
cells are capable of presenting only a small number of epitopes from the invading virus, but
an entire human population attacks a diverse set of epitopes. The MHC molecule selects
the protein fragments for presentation through a binding process which is loosely motif-specific. There are several other processes that precede or follow the MHC binding, and
the combination of all of these processes can be characterized either by the concentration
of presented epitopes, or by the combination of the binding energies involved in these
processes². Some of these processes can be influenced by a context of the epitope (short
amino acid fragments in the regions on either side of the epitope).
Another issue to be considered in HIV evolution and vaccine design is the T-cell cross
reactivity: The killer cells primed with one epitope may be capable of binding to other related
epitopes, and therefore a small set of priming epitopes may induce a broader immunity. As
in the case of MHC binding, the likelihood of priming a T-cell, as well as cross-reaction
with a different epitope, can be linked to the binding energies.
The epitome model maps directly to these immunity variables. If the epitome content is to
be delivered to a cell in the vaccination phase, then each patch ET indexed by data index set
T corresponds either to an epitope or to a longer contiguous patch (e.g. 12 amino acids or
more) containing both an epitope and its context that influences presentation. The prior p(T )
reflects the probability of presentation of the epitome fragments, and should reflect processes
involved in presentation, including MHC binding. The presented epitome fragments ET in
different patients' cells may prime T-cells capable of cross-reacting with some of the epitopes
XS presented by the cells infected by one of the known strains in the dataset
X. The cross-reaction distribution corresponds to the epitome distribution p(XS |ET ).
Vaccination is successful if the vaccine primes the immune system to attack targets found in
the known circulating strains. A natural criterion to optimize is the similarity between the
distribution over the epitopes learned by the immune systems of patients vaccinated with
the epitome (taking into account the cross-reactivity) and the distribution over the epitopes
from circulating strains. Therefore, the vaccine quality directly depends on the likelihood of
the designated epitopes p(XS ) under the epitome. To see this, consider directly optimizing
the KL divergence between the distribution pd (Xs ) over epitopes found in the data and the
distribution over the targets for which the T-cells are primed according to p(Xs ). This KL
distance differs from the log likelihood of all the data patches weighted by pd (Xs ),
log p({XS}d) = Σ_S pd(XS) log Σ_T p(XS | ET) p(T),    (2)
only by a constant (the entropy of pd (Xs )). The distribution pd (Xs ) can serve as the
indicator of epitopes and be equal to either zero or a constant for all patches, and then
the above weighted likelihood is equivalent to the total likelihood of selected patches. This
² The probabilities of physical events are often modeled as having an exponential relationship with the energy changes.
distribution can also reflect the probability of presentation of epitopes XS , or the uncertainty
of the experiment or the prediction algorithm used to predict which parts of the circulating
strains correspond to MHC epitopes.
While the epitome can serve as a diversity model and be used to construct evolutionary
models and peptides for experimental epitope discovery, it can also serve as an actual
immunogen (the pattern containing the immunologically important message to the cell) in
a vaccine. The most general version of the epitome as a sequence of multinomial distributions
could be relevant for sequence classification, recombination modeling, and design of peptides for binding assays. In some of these applications, the distribution p(XS | ET) may
have a semantics different than cross-reactivity, and could for instance represent mutations
dependent on the immune type of the host, or the subtype of the virus. On the other hand,
when the epitome is used for immunogen design, then cross-reactivity p(XS |ET ) can be
conveniently captured by constraining each distribution emn to have probabilities for the
twenty amino acids from the set {ε/19, 1 − ε}. The mode of the epitome can then be used as a
deterministic vaccine immunogen³, and the probability of cross-reaction will then directly
depend on the number of letters in XS that are different from the mode of ET.
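Under this constraint, p(XS | ET) collapses to a function of the Hamming distance between the observed patch and the epitome's mode. A minimal sketch follows; the ε value is illustrative, and SLYNTVATL is a well-known HIV gag epitope used here only as a familiar example string:

```python
# Epsilon-constrained emission: each epitome site puts mass 1-eps on its modal
# amino acid and eps/19 on each of the other 19 amino acids, so the
# cross-reaction probability depends only on the Hamming distance d between
# the observed patch and the epitome mode.
def cross_reaction_prob(patch, mode, eps):
    assert len(patch) == len(mode)
    d = sum(a != b for a, b in zip(patch, mode))   # mismatched positions
    return (1 - eps) ** (len(patch) - d) * (eps / 19.0) ** d

mode = "SLYNTVATL"                                  # a well-known HIV gag epitope
print(cross_reaction_prob("SLYNTVATL", mode, 0.05)) # exact match: (1-eps)^9
print(cross_reaction_prob("SLFNTVATL", mode, 0.05)) # one substitution
```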
While the epitome model components are mapped here to the elements of the interaction
between HIV and the immune system of the host, other applications in biology would
probably be based on a different semantics for the epitome components. We would expect
that the epitome would map to biological sequence analysis problems more naturally than to
image and audio modeling tasks, where the issue of the partition function arises. Epitome as
a generative model over-generates - generated patches overlap, and so each data element is
generated multiple times. In the image applications, we have avoided this problem through
constraints on the posterior distributions, while the traditional approach would be to deal
with the partition function (perhaps through sampling). However, the strains of a virus
are observed by the immune system through overlapping patches, independently sampled
from the viral proteins by biological processes. This fits epitome as a vaccination model.
More generally, epitome is compatible with the evolutionary forces that act independently
on overlapping patches of a biological sequence.
4  Epitome learning
Since epitomes can have multiple applications, we provide a general discussion of optimization of all parameters of the epitome, although in some applications, some of the parameters
may be known a priori. As a unified optimization criterion we use the free energy [9] of the
model (2),
F({XS}d | E) = Σ_S pd(XS) Σ_T q(T|S) log [ q(T|S) / ( p(XS | ET) p(T) ) ],    (3)
where q(T|S) is a variational distribution, satisfying
− log p({XS}d | E) = min_q F({XS}d | E).    (4)
The model can be learned by iteratively reducing F , varying in each iteration either q or the
model parameters. When modeling biological sequences, the free energy may be associated
with real physical events, such as molecular binding processes, where log probabilities
correspond to molecular binding energies.
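The relation in Eq. (4), that the free energy is minimized by the exact posterior, where it equals the negative pd-weighted log likelihood, can be checked numerically on a toy model. The likelihood table and weights below are made-up numbers, not from the paper:

```python
import math

# Toy check of Eqs. (3)-(4): F is minimized at the exact posterior q(T|S),
# where it equals the negative pd-weighted log likelihood of the patches.
lik = {                       # p(X_S | E_T) for two patches S, two mappings T
    ("S1", "T1"): 0.8, ("S1", "T2"): 0.2,
    ("S2", "T1"): 0.1, ("S2", "T2"): 0.6,
}
p_T = {"T1": 0.5, "T2": 0.5}
pd = {"S1": 0.5, "S2": 0.5}   # weights pd(X_S)

def free_energy(q):
    """Eq. (3): sum_S pd(X_S) sum_T q(T|S) log[q(T|S) / (p(X_S|E_T) p(T))]."""
    return sum(pd[S] * sum(q[S][T] * math.log(q[S][T] / (lik[S, T] * p_T[T]))
                           for T in p_T)
               for S in pd)

# Exact posterior q(T|S) proportional to p(X_S|E_T) p(T)
posterior = {}
for S in pd:
    Z = sum(lik[S, T] * p_T[T] for T in p_T)
    posterior[S] = {T: lik[S, T] * p_T[T] / Z for T in p_T}

neg_loglik = -sum(pd[S] * math.log(sum(lik[S, T] * p_T[T] for T in p_T)) for S in pd)
uniform_q = {S: {T: 0.5 for T in p_T} for S in pd}
print(free_energy(posterior), neg_loglik, free_energy(uniform_q))
```

Any other q (here, the uniform one) gives a strictly larger free energy, which is what licenses coordinate descent on q and the model parameters.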
Setting to zero the derivatives of F with respect to the q distributions, the distribution p(T ),
and the distributions e_m(·) for all positions m, we obtain the EM algorithm [5]:
• For each XS, compute the posterior distribution over patches q(T|S):
q(T|S) = p(XS | ET) p(T) / Σ_{T′} p(XS | ET′) p(T′).    (5)
³ To our knowledge, there is no effective way of delivering the epitome as a distribution over proteins or fragments into the cell.
• Using these q distributions, update the profile sequence:
e_m(ℓ) = ( Σ_S pd(XS) Σ_k Σ_{T : T(k)=m} q(T|S) [X_{S(k)} = ℓ] ) / ( Σ_S pd(XS) Σ_k Σ_{T : T(k)=m} q(T|S) ),    (6)
where [·] is the indicator function ([true] = 1; [false] = 0). If desired, also update p(T):
p(T) = ( Σ_S pd(XS) q(T|S) ) / ( Σ_S pd(XS) ).    (7)
The E step assigns a responsibility for S to each possible epitome patch. The M step reestimates the epitome multinomials using these responsibilities. As mentioned, this step
can re-estimate the usage probabilities of patches in the epitome, or this distribution can be
kept constant. It is often useful to construct the index sets T such that they wrap around
from one end to another. Such circular topologies can deter the EM algorithm from settling
in a poor local maximum of log likelihood. It is also sometimes useful to include a garbage
component (a component that generates patches containing random letters) in the model.
In general, the EM algorithm is prone to problems of local maxima. For example, if we
allowed the epitome to be longer, then some of the sites with two equally likely letters
could be split into two separate regions of the epitome (and in some applications, such
as vaccine optimization, this is preferred, as the epitomes need to become deterministic). Epitomes situated at different local maxima, however, often define similar probability
distributions p({XS }|E), and can be used for various inference tasks such as sequence
recognition/classification, noise removal, and context-dependent mutation prediction.
Of course, there are optimization algorithms other than EM that can learn a profile sequence
by minimizing the free energy, E = arg min_E min_q F({XS}d | E). In some situations,
such as vaccine design, it is desirable to produce deterministic epitomes (containing point-mass probability distributions). Such profile sequences can be obtained by annealing the
parameter ε that controls the amount of probability allowed to be distributed to the letters
different from the most likely letter ℓ_m = arg max_ℓ e_m(ℓ):
E = lim_{ε→0} arg min_E min_q F({XS}d | E).    (8)
Finally, in cases when the probability mass is uniformly spread over the letters other than
the modes of the epitome distributions, i.e., emn(ℓ) ∈ {ε/19, 1 − ε}, the myopic optimization
is a faster way of creating epitomes of high fragment (epitope) coverage than EM with
multiple initializations. The myopic optimization consists of iteratively increasing the length
of the epitome by appending a patch (possibly with overlap) from the data which maximally
reduces the free energy. The process stops once the desired length is achieved (rather than
when the entire set of patches is included, as in the superstring problem).
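The coverage variant of this myopic procedure can be sketched as a greedy loop: repeatedly append, with maximal suffix/prefix overlap, the uncovered data k-mer whose merge adds the most covered k-mers. The alphabet, k, and the toy strains are assumptions for illustration; the paper's version scores candidates by free-energy reduction and stops at a target length rather than at full coverage:

```python
# Greedy "myopic" sketch: grow the epitome string by merging in, at each step,
# the uncovered target k-mer that adds the most covered k-mers.
K = 4
strains = ["ABCDEF", "XBCDEY", "ABCDEY"]
targets = {s[i:i + K] for s in strains for i in range(len(s) - K + 1)}

def covered(epi):
    return {epi[i:i + K] for i in range(len(epi) - K + 1)} & targets

def merge(epi, patch):
    # append patch after the longest overlap; ov == 0 always matches
    for ov in range(min(len(epi), len(patch)), -1, -1):
        if epi.endswith(patch[:ov]):
            return epi + patch[ov:]

epitome = ""
while covered(epitome) != targets:
    gain = lambda p: len(covered(merge(epitome, p)))
    epitome = merge(epitome, max(sorted(targets - covered(epitome)), key=gain))

print(epitome, len(epitome))
```

The resulting string covers every target k-mer while being shorter than the concatenated strain cocktail, which is the effect Fig. 2 measures at scale.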
5  Experiments
To illustrate the EM algorithm for epitome learning, we created the synthetic data shown (in
part) in Figure 1. The data, eighty sequences in all, were synthesized from the generating
profile sequence of length fifty shown on the top line of the figure. In particular, each data
sequence was created by extracting one to four (mean two) patches from the generating
sequence of length three to thirty (mean sixteen), sampling from these patches to produce
corresponding patches of amino acids in the data sequence, and then filling in the gaps in the
data sequence with amino acids sampled from a uniform distribution over amino acids. In
addition, five percent of the sites in the each data sequence were subsequently replaced with
an amino acid sampled from a uniform distribution. The resulting data sequences ranged
in length from 38 to 43; and on average 80% of aminoacids in each sequence come from
the generating sequence. Thus, the synthesized data roughly simulates genetic diversity
resulting from a combination of mutation, insertion, deletion, and recombination.
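The data-generation recipe above can be reproduced approximately in a few lines. The amino-acid alphabet is standard, but the fixed target length of 40 and the exact sampling choices are simplifications of the description, assumed for this sketch:

```python
import random

# Approximate sketch of the synthetic-data recipe in Section 5: extract one to
# four patches (length 3-30) from a generating sequence, fill the gaps with
# uniform amino acids, then apply 5% substitution noise.
AA = "ARNDCQEGHILKMFPSTWYV"          # the 20 amino acids
rng = random.Random(1)
generator = "".join(rng.choice(AA) for _ in range(50))  # generating sequence

def synth_sequence(target_len=40):
    parts = []
    for _ in range(rng.randint(1, 4)):               # one to four patches
        plen = min(rng.randint(3, 30), target_len - sum(map(len, parts)))
        if plen < 3:
            break
        start = rng.randrange(len(generator) - plen + 1)
        parts.append(generator[start:start + plen])
    seq = list("".join(parts))
    while len(seq) < target_len:                     # fill gaps uniformly
        seq.insert(rng.randrange(len(seq) + 1), rng.choice(AA))
    for i in range(len(seq)):                        # 5% substitution noise
        if rng.random() < 0.05:
            seq[i] = rng.choice(AA)
    return "".join(seq)

data = [synth_sequence() for _ in range(80)]
print(data[0])
```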
We learned an epitome model using the EM algorithm applied to all 9mer patches from
the data, equally weighted. We used a two-component epitome mixture, where the first
component is an (initially unknown) sequence of probability distributions, and the second component is a garbage component, useful for representing the random insertions and
mutations. Each site in the first component was initialized to a distribution slightly (and
randomly) perturbed from uniform. The length of this component was set to be slightly
longer than the original generating sequence. In previous experiments, we have found that
a longer length helps to prevent the EM algorithm from settling in a poor local maximum
of log likelihood, and it is subsequently possible to cut out unnecessary parts which can
be detected in the learned prior p(T ). Also, we used an epitome with a circular topology.
The first (non-garbage) component of the epitome learned after sixty iterations, shown in
Figure 1, closely resembles the generating sequence, even though the algorithm never saw this generating sequence during learning. (Roughly, the generating sequence starts near the end of the
epitome with the patch "LIC" coded in red, and wraps around to the patch "EHQ" coded
in yellow. The portion of the epitome between yellow and red is not responsible for many
patches, as reflected in the distribution p(T ).) The sixty iterations of EM are illustrated in
the video available at www.research.microsoft.com/~jojic/pEpitome.mpg. For each iteration, we show the first (non-garbage) component of the epitome E, the distribution p(T),
and the first ten sequences in the dataset, color-coded according to the mode of q(T |S), as
in Figure 1. The video illustrates how the EM algorithm simultaneously learns the epitome
model and aligns the data sequences.
When used for vaccine optimization, some epitome parameters can be preset based on
biological knowledge. In particular, in the experiments we report on 176 gag HIV proteins
from the WA cohort [8], we assume no cross-reactivity (i.e., we set ε = 0) and we consider
two different possibilities for the patch data distribution pd (XS ). The first parameter setting
we consider is that pd(XS) is uniform over all ten-amino-acid blocks found in the sequence
data. The advantage of the uniform data distribution is that we only need sequence data for
vaccine optimization, and not the epitope identities. The free energy criterion can be easily
shown to be proportional (with a negative constant) to the coverage - the percentage of all
10mers from the data covered by the epitome, where the 10mer is considered covered if it can
be found as a contiguous patch in the epitome's mode. Another advantage of this approach
is that it can not miss epitopes due to errors in prediction algorithms or experimental epitope
discovery, as long as sufficient coverage can be guaranteed for the given vaccine length.
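The coverage criterion can be sketched directly. A toy k = 3 is used in place of the paper's 10mers so the example stays small; counting k-mers with multiplicity corresponds to expected coverage under a uniform pd(XS):

```python
# Coverage as defined in Section 5: the fraction of all k-mers in the data
# that occur as contiguous patches in the epitome's mode.
def kmers(s, k):
    return [s[i:i + k] for i in range(len(s) - k + 1)]

def coverage(epitome_mode, strains, k):
    pool = [m for s in strains for m in kmers(s, k)]   # with multiplicity
    in_epi = set(kmers(epitome_mode, k))
    return sum(m in in_epi for m in pool) / len(pool)

strains = ["ABCDE", "ABFDE", "ABCDE"]
print(coverage("ABCDE", strains, 3))  # covers ABC, BCD, CDE but not ABF, BFD, FDE
```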
The second setting of the parameters pd(XS) we consider is based on the SYFPEITHI database
[10] of known epitopes. We trained pd (XS ) on this data using a decision tree model
to represent the probability that an observed 10mer contains a presentable epitope. The
advantage of this approach is that we can potentially focus our modeling power only on
immunologically important variability, as long as the known epitope dataset is sufficient to
capture properly the epitope distribution for at least the most frequent MHC-I molecules.
Thus, for a given epitome length, we may obtain more potent vaccines than using the first
parameter setting. Since ε = 0, the resulting optimization reduces to optimizing expected
epitope coverage, i.e., the sum of the probabilities pd(XS) of the patches covered by the epitome.
For both epitome settings, we epitomized the 176 gag proteins in the dataset, using the
myopic algorithm, and compared the expected epitope coverage of our vaccine candidates
with those of other designs, including cocktails of tree centers, consensus, and actual strains
(Fig. 2). Phylogenies were constructed using neighbor joining, as is used in Phylip [4].
Clusters were generated using a mixture model of independent multinomials [1]. Observed
sequences in the sequence cocktails were chosen at random. Both epitome models yield
better coverage and expected epitope coverage than the other designs at any fixed length.
Results are similar for the pol, nef, and env proteins. An interesting finding to note is that the
epitome optimized for coverage (using uniform distribution pd (XS )) provides essentially
equally good expected coverage as the epitome directly optimized for the expected coverage.
This is less surprising than it may seem - both true and predicted epitopes overlap in the
sequence data, and so epitomizing all 10mers leads to similar epitomes as optimizing for
coverage of the select few, but frequently overlapping epitopes. This is a direct consequence
of the epitome representation, which was found appealing in previous applications for the
same robustness to the number and sizes of the overlapping patches. It also indicates the
possibility that an effective vaccine can be optimized without precise knowledge of all HIV
epitopes.
[Figure 2 plot: expected coverage (%) versus vaccine length (aa); curves shown for the epitome optimized for expected coverage, the epitome optimized for coverage, and the consensus, COT, and strain cocktails.]
Figure 2: Expected coverage for 176 Perth gag proteins using candidate sequences of length ten. For
comparison, we show expected coverage for the epitome optimized to cover all 10mers.
6  Conclusions
We have introduced the epitome as a new model of genetic diversity, especially well suited
to highly variable biological sequences. We show that our model can be used to optimize HIV vaccines with larger predicted coverage of MHC-I epitopes than other constructs of similar lengths, so the epitome can be used to create vaccines that cover a large
fraction of HIV diversity. We also show that epitome optimization leads to good vaccines even when all subsequences of length 10 are considered epitopes. This suggests
that vaccines could be optimized directly from sequence data, which are technologically much easier to obtain than epitope data. Our analysis of cross-reactivity
provided similar empirical evidence of epitome robustness to cross-reactivity assumptions
(see www.research.microsoft.com/~jojic/HIVepitome.html for the full set of results).
References and Notes
[1] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. In Advances in Knowledge Discovery
and Data Mining, Fayyad, U., Piatesky-Shapiro, G., Smyth, P., and Uthurusamy, R., eds. (AAAI Press, 1995).
[2] V. Cheung, B. Frey, and N. Jojic. Video epitome. CVPR 2005.
[3] R. Durbin et al. Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids. Cambridge University
Press, 1998.
[4] J. Felsenstein. Phylip (phylogeny inference package) version 3.6, 2004.
[5] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In Proceedings of the Ninth International
Conference on Computer Vision, Nice (2003). Video available at http://www.robots.ox.ac.uk/~awf/iccv03videos/.
[6] A. Kapoor and S. Basu. The audio epitome: A new representation for modeling and classifying auditory phenomena. ICASSP
2004.
[7] B.T.M. Korber, C. Brander, B.F. Haynes, R. Koup, C. Kuiken, J.P. Moore, B.D. Walker, and D.I. Watkins. HIV Molecular
Immunology. Los Alamos National Laboratory, Theoretical Biology and Biophysics, Los Alamos, NM, 2002.
[8] C. Moore, M. John, I. James, F. Christiansen, C. Witt, and S. Mallal. Evidence of HIV-1 adaptation to HLA-restricted
immune responses at a population level. Science, 296:1439?1443, 2002.
[9] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in
graphical models, M. Jordan ed. (MIT Press,1999).
[10] H Rammensee, J Bachmann, N P Emmerich, O A Bachor, and S Stevanovic. SYFPEITHI: database for MHC ligands and
peptide motifs. Immunogenetics, 50(3-4):213?219, Nov 1999.
Generalization in Clustering with Unobserved
Features
Eyal Krupka and Naftali Tishby
School of Computer Science and Engineering,
Interdisciplinary Center for Neural Computation
The Hebrew University Jerusalem, 91904, Israel
{eyalkr,tishby}@cs.huji.ac.il
Abstract
We argue that when objects are characterized by many attributes, clustering them on the basis of a relatively small random subset of these
attributes can capture information on the unobserved attributes as well.
Moreover, we show that under mild technical conditions, clustering the
objects on the basis of such a random subset performs almost as well as
clustering with the full attribute set. We prove finite sample generalization theorems for this novel learning scheme that extend analogous
results from the supervised learning setting. The scheme is demonstrated
for collaborative filtering of users with movie ratings as attributes.
1  Introduction
Data clustering is unsupervised classification of objects into groups based on their similarity [1]. Often, it is desirable to have the clusters match some labels that are unknown
to the clustering algorithm. In this context, a good data clustering is expected to have homogeneous labels in each cluster, under some constraints on the number or complexity of
the clusters. This can be quantified by mutual information (see e.g. [2]) between the objects' cluster identity and their (unknown) labels, for a given complexity of clusters. Since
the clustering algorithm has no access to the labels, it is unclear how the algorithm can
optimize the quality of the clustering. Even worse, the clustering quality depends on the
specific choice of the unobserved labels. For example, a good document clustering with
respect to topics is very different from a clustering with respect to authors.
In our setting, instead of trying to cluster by some ?arbitrary? labels, we try to predict
unobserved features from observed ones. In this sense our target "labels" are yet other
features that "happened" to be unobserved. For example, when clustering fruits based on
their observed features, such as shape, color and size, the target of clustering is to match
unobserved features, such as nutritional value and toxicity.
In order to theoretically analyze and quantify this new learning scheme, we make the following assumptions. Consider an infinite set of features, and assume that we observe only
a random subset of n features, called observed features. The other features are called unobserved features. We assume that the random selection of features is done uniformly and
independently.
Table 1: Analogy with supervised learning

Training set: n randomly selected features (observed features)
Test set: Unobserved features
Learning algorithm: Cluster the instances into k clusters
Hypothesis class: All possible partitions of m instances into k clusters
Min generalization error: Max expected information on unobserved features
ERM: Observed Information Maximization (OIM)
Good generalization: Mean observed and unobserved information are similar
The clustering algorithm has access only to the observed features of m instances. After the
clustering, one of the unobserved features is randomly and uniformly selected to be a target
label, i.e. clustering performance is measured with respect to this feature. Obviously, the
clustering algorithm cannot be directly optimized for this specific feature.
The question is whether we can optimize the expected performance on the unobserved
feature, based on the observed features alone. The expectation is over the random selection
of the target feature. In other words, can we find clusters that match as many unobserved
features as possible? Perhaps surprisingly, for large enough number of observed features,
the answer is yes. We show that for any clustering algorithm, the average performance of
the clustering with respect to the observed and unobserved features, is similar. Hence we
can indirectly optimize clustering performance with respect to the unobserved features, in
analogy to generalization in supervised learning. These results are universal and do not
require any additional assumptions such as an underlying model or a distribution that created
the instances.
In order to quantify these results, we define two terms: the average observed information and the expected unobserved information. Let T be the variable which represents the
cluster for each instance, and {X1, ..., XL} the set of random variables which denotes the
features. The average observed information, denoted by Iob, is the average mutual information between T and each of the observed features. In other words, if the observed features
are {X1, ..., Xn} then Iob = (1/n) Σ_{j=1}^{n} I(T; Xj).
denoted by Iun , is the expected value of the mutual information between T and a randomly
selected unobserved feature, i.e. Ej {I(T ; Xj )}. Note that whereas Iob can be measured
directly, this paper deals with the question of how to infer and maximize Iun .
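To make the two quantities concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that computes Iob and Iun from empirical joint distributions on a toy data set; the cluster assignment `labels` and the feature matrices are hypothetical:

```python
import numpy as np

def mutual_info(t, x):
    """Empirical mutual information I(T; X) in nats between two discrete sequences."""
    n = len(t)
    joint, pt, px = {}, {}, {}
    for ti, xi in zip(t, x):
        joint[(ti, xi)] = joint.get((ti, xi), 0) + 1
        pt[ti] = pt.get(ti, 0) + 1
        px[xi] = px.get(xi, 0) + 1
    mi = 0.0
    for (ti, xi), c in joint.items():
        p = c / n
        mi += p * np.log(p / ((pt[ti] / n) * (px[xi] / n)))
    return mi

# toy data: 6 instances, a hypothetical cluster assignment, observed and unobserved features
labels = np.array([0, 0, 0, 1, 1, 1])
X_obs = np.array([[0, 1], [0, 1], [0, 0], [1, 0], [1, 0], [1, 1]])  # observed features
X_un = np.array([[1], [1], [1], [0], [0], [0]])                     # unobserved features

I_ob = np.mean([mutual_info(labels, X_obs[:, j]) for j in range(X_obs.shape[1])])
I_un = np.mean([mutual_info(labels, X_un[:, j]) for j in range(X_un.shape[1])])
print(I_ob, I_un)
```

Here the single unobserved feature happens to align perfectly with the clusters, so Iun reaches its maximum of log 2 nats while Iob averages over one perfectly aligned and one weakly aligned observed feature.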
Our main results consist of two theorems. The first is a generalization theorem. It gives
an upper bound on the probability of large difference between Iob and Iun for all possible
clusterings. It also states a uniform convergence in probability of |Iob ? Iun | as the number of observed features increases. Conceptually, the observed mean information, Iob , is
analogous to the training error in standard supervised learning [3], whereas the unobserved
information, Iun , is similar to the generalization error.
The second theorem states that under a constraint on the number of clusters, and a large enough
number of observed features, one can achieve nearly the best possible performance, in
terms of Iun . Analogous to the principle of Empirical Risk Minimization (ERM) in statistical learning theory [3], this is done by maximizing Iob .
Table 1 summarizes the correspondence of our setting to that of supervised learning. The
key difference is that in supervised learning, the set of features is fixed and the training
instances (samples) are assumed to be randomly drawn from some distribution. In our
setting, the set of instances is fixed, but the set of observed features is assumed to be
randomly selected.
Our new theorems are evaluated empirically in section 3, on a data set of movie ratings.
This empirical test also suggests one future research direction: use the framework suggested in this paper for collaborative filtering. Our main point in this paper, however, is the
new conceptual framework and not a specific algorithm or experimental performance.
Related work The idea of an information tradeoff between complexity and information
on target variables is similar to the idea of the information bottleneck [4]. But unlike the
bottleneck method, here we are trying to maximize information on unobserved variables,
using finite samples.
In the framework of learning with labeled and unlabeled data [5], a fundamental issue is the
link between the marginal distribution P (x) over examples x and the conditional P (y|x)
for the label y [6]. From this point of view our approach assumes that y is a feature in itself.
2
Mathematical Formulation and Analysis
Consider a set of discrete random variables {X1, ..., XL}, where L is very large (L → ∞).
We randomly, uniformly and independently select n << L variables from this set.
These variables are the observed features and their indexes are denoted by {q1, ..., qn}. The
remaining L − n variables are the unobserved features. A clustering algorithm has access
only to the observed features over m instances {x[1], ..., x[m]}. The algorithm assigns a
cluster label ti ∈ {1, ..., k} for each instance x[i], where k is the number of clusters. Let T
denote the cluster label assigned by the algorithm.
Shannon's mutual information between two variables is a function of their joint distribution, defined as

I(T; Xj) = Σ_{t,xj} P(t, xj) log [ P(t, xj) / (P(t) P(xj)) ].

Since we are dealing with a finite number of samples, m, the distribution P is taken as the empirical joint distribution
of (T, Xj), for every j. For a random j, this empirical mutual information is a random
variable on its own.
The average observed information, Iob, is now defined as Iob = (1/n) Σ_{i=1}^{n} I(T; X_{qi}). In
general, Iob is higher when clusters are more coherent, i.e. elements within each cluster
have many similar attributes. The expected unobserved information, Iun, is defined as
Iun = E_j{I(T; Xj)}. We can assume that the unobserved feature is with high probability
from the unobserved set. Equivalently, Iun can be the mean mutual information between
the clusters and each of the unobserved features, Iun = (1/(L − n)) Σ_{j ∉ {q1,...,qn}} I(T; Xj).
The goal of the clustering algorithm is to find cluster labels {t1, ..., tm} that maximize
Iun, subject to a constraint on their complexity - henceforth considered as the number of
clusters (k ≤ D) for simplicity, where D is an integer bound.
Before discussing how to maximize Iun , we consider first the problem of estimating it.
Similar to the generalization error in supervised learning, Iun cannot be estimated directly
in the learning algorithm, but we may be able to bound the difference between the observed
information Iob - our "training error" - and Iun - the "generalization error". To obtain generalization, this bound should be uniform over all possible clusterings with high probability over the randomly selected features. The following lemma argues that such uniform
convergence in probability of Iob to Iun always occurs.
Lemma 1 With the definitions above,

Pr{ sup_{t1,...,tm} |Iob − Iun| > ε } ≤ 2 e^{−2nε²/(log k)² + m log k}   ∀ε > 0
where the probability is over the random selection of the observed features.
Proof: For fixed cluster labels, {t1, ..., tm}, and a random feature j, the mutual information I(T; Xj) is a function of the random variable j, and hence I(T; Xj) is a random
variable in itself. Iob is the average of n such independent random variables and Iun is its
expected value. Clearly, for all j, 0 ≤ I(T; Xj) ≤ log k. Using Hoeffding's inequality [7],
Pr{|Iob − Iun| > ε} ≤ 2 e^{−2nε²/(log k)²}. Since there are at most k^m possible partitions,
the union bound is sufficient to prove the lemma.
Note that for any ε > 0, the probability that |Iob − Iun| > ε goes to zero as n → ∞. The
convergence rate of Iob to Iun is bounded by O(log n / √n). As expected, this upper bound
decreases as the number of clusters, k, decreases.
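As a quick numerical illustration of how the Lemma 1 bound behaves (my own check, with hypothetical values of m, k and ε), one can ask how large n must be before the bound becomes non-vacuous:

```python
import math

def lemma1_bound(n, m, k, eps):
    """Upper bound from Lemma 1 on Pr{ sup |Iob - Iun| > eps }
    (information measured in nats, so 0 <= I <= log k)."""
    return 2.0 * math.exp(-2.0 * n * eps**2 / math.log(k)**2 + m * math.log(k))

# hypothetical sizes: the bound is vacuous unless n >> m, so double n
# until the failure probability drops below 0.05
m, k, eps = 100, 4, 0.2
n = 1
while lemma1_bound(n, m, k, eps) >= 0.05:
    n *= 2
print(n, lemma1_bound(n, m, k, eps))
```

This matches the remark below: unlike standard supervised bounds, the required "training size" here is the number of observed features n, and it grows with the number of instances m.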
Unlike the standard bounds in supervised learning, this bound increases with the number
of instances (m), and decreases with increasing number of observed features (n). This
is because in our scheme the training size is not the number of instances, but rather the
number of observed features (See Table 1). However, in the next theorem we obtain an
upper bound that is independent of m, and hence is tighter for large m.
Theorem 1 (Generalization Theorem) With the definitions above,

Pr{ sup_{t1,...,tm} |Iob − Iun| > ε } ≤ 8(log k) e^{−nε²/(8(log k)²) + (4k max_j |Xj| / ε) log k − log ε}   ∀ε > 0
where |Xj | denotes the alphabet size of Xj (i.e. the number of different values it can
obtain). Again, the probability is over the random selection of the observed features.
The convergence rate here is bounded by O(log n / n^{1/3}). However, for relatively large n
one can use the bound in Lemma 1, which converges faster.
A detailed proof of theorem 1 can be found in [8]. Here we provide the outline of the proof.
Proof outline: From the given m instances and any given cluster labels {t1, ..., tm}, draw
uniformly and independently m′ instances (repeats allowed) and denote their indexes by
{i1, ..., im′}. We can estimate I(T; Xj) from the empirical distribution of (T, Xj) over
the m′ instances. This distribution is denoted by P̂(t, xj) and the corresponding mutual
information is denoted by I_P̂(T; Xj). Theorem 1 is built up from the following upper
bounds, which are independent of m but depend on the choice of m′. The first bound is on
E|I(T; Xj) − I_P̂(T; Xj)|, where the expectation is over random selection of the m′
instances. From this bound we derive upper bounds on |Iob − E(Îob)| and |Iun − E(Îun)|,
where Îob, Îun are the estimated values of Iob, Iun based on the subset of m′ instances.
The last required bound is on the probability that sup_{t1,...,tm} |E(Îob) − E(Îun)| > ε1,
for any ε1 > 0. This bound is obtained from Lemma 1. The choice of m′ is independent of
m. Its value should be large enough for the estimations Îob, Îun to be accurate, but not too
large, so as to limit the number of possible clusterings over the m′ instances.
We now describe the above mentioned upper bounds in more detail. Using Paninski [9]
(Proposition 1) it is easy to show that the bias between I(T; Xj) and its maximum likelihood estimate, based on P̂(t, xj), is bounded as follows:

E_{i1,...,im′} |I(T; Xj) − I_P̂(T; Xj)| ≤ log(1 + (k|Xj| − 1)/m′) ≤ k|Xj|/m′   (1)
From this equation we obtain,
|Iob − E_{i1,...,im′}(Îob)|, |Iun − E_{i1,...,im′}(Îun)| ≤ k max_j |Xj| / m′   (2)
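Before continuing the proof, here is a rough Monte Carlo illustration of the bias bound (1) (my own check, not from the paper): resample m′ instances with repeats from a small empirical population and compare the average plug-in bias to the k|Xj|/m′ bound.

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_mi(t, x):
    """Plug-in (maximum likelihood) estimate of I(T; X) in nats."""
    mi = 0.0
    for tv in np.unique(t):
        for xv in np.unique(x):
            p = np.mean((t == tv) & (x == xv))
            if p > 0:
                mi += p * np.log(p / (np.mean(t == tv) * np.mean(x == xv)))
    return mi

# empirical population with T and X independent, so the population value is 0
t_full = np.array([0, 0, 0, 0, 1, 1, 1, 1])
x_full = np.array([0, 1, 0, 1, 0, 1, 0, 1])
true_I = plugin_mi(t_full, x_full)

k, alphabet, m_prime = 2, 2, 20   # k clusters, |Xj| values, m' resampled instances
estimates = []
for _ in range(2000):
    idx = rng.integers(0, len(t_full), m_prime)   # m' draws with repeats
    estimates.append(plugin_mi(t_full[idx], x_full[idx]))
bias = np.mean(estimates) - true_I
bound = k * alphabet / m_prime
print(bias, bound)
```

The plug-in estimate systematically overshoots the population value, but by far less than the k|Xj|/m′ bound, which is loose by design.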
Using Lemma 1 we have an upper bound on the probability that sup_{t1,...,tm} |Îob − Îun| > ε
over the random selection of features, as a function of m′. However, the upper bound
we need is on the probability that sup_{t1,...,tm} |E(Îob) − E(Îun)| > ε1. Note that the
expectations E(Îob), E(Îun) are taken over random selection of the subset of m′ instances,
for a set of features that were randomly selected once. In order to link these two
probabilities, we need the following lemma.
Lemma 2 Consider a function f of two independent random variables (Y, Z). We assume
that f(y, z) ≤ c, ∀y, z, where c is some constant. If Pr{f(Y, Z) > ε′} ≤ δ, then

Pr_Z{ E_y(f(y, Z)) ≥ α } ≤ ((c − ε′) / (α − ε′)) δ   ∀α > ε′
The proof of this lemma is rather standard and is given in [8]. From Lemmas 1 and 2 it is
easy to show that

Pr{ E_{i1,...,im′}( sup_{t1,...,tm} |Îob − Îun| ) > ε1 } ≤ (4 log k / ε1) e^{−nε1²/(2(log k)²) + m′ log k}   (3)
Lemma 2 is used, where Z represents the random selection of features, Y represents the
random selection of m′ instances, f(y, z) = sup_{t1,...,tm} |Îob − Îun|, c = log k, and
ε′ = ε1/2. From eqs. 2 and 3 it can be shown that
Pr{ sup_{t1,...,tm} |Iob − Iun| > ε1 + 2k max_j |Xj| / m′ } ≤ (4 log k / ε1) e^{−nε1²/(2(log k)²) + m′ log k}

By selecting ε1 = ε/2, m′ = 4k max_j |Xj| / ε, we obtain Theorem 1.
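The closing substitution can be verified mechanically: with ε1 = ε/2 and m′ = 4k max_j |Xj|/ε, the deviation ε1 + 2k max_j |Xj|/m′ collapses to exactly ε. A one-line check with hypothetical values:

```python
k, max_alphabet, eps = 5, 6, 0.1         # hypothetical k, max_j |Xj|, and epsilon
eps1 = eps / 2.0                         # the proof sets eps_1 = eps / 2
m_prime = 4.0 * k * max_alphabet / eps   # and m' = 4 k max_j |Xj| / eps

# deviation actually bounded: eps_1 + 2 k max_j |Xj| / m', which collapses to eps
slack = eps1 + 2.0 * k * max_alphabet / m_prime
print(slack)
```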
Note that the selection of m′ depends on k max_j |Xj|. This reflects the fact that in order
to accurately estimate I(T; Xj), we need a number of instances, m′, which is much larger
than the product of the alphabet sizes of T and Xj.
We can now return to the problem of specifying a clustering that maximizes Iun, using only
the observed features. For reference, we will first define Iun of the best possible clusters.

Definition 1 Maximally achievable unobserved information: Let I*un,D be the maximum
value of Iun that can be achieved by any clustering {t1, ..., tm}, subject to the constraint
k ≤ D, for some constant D:

I*un,D = sup_{{t1,...,tm}: k ≤ D} Iun

The clustering that achieves this value is called the best clustering. The average observed
information of this clustering is denoted by I*ob,D.
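On toy sizes, I*un,D can be computed exactly by enumerating all labelings of the m instances into at most D clusters; a brute-force sketch (my own illustration, hypothetical data):

```python
import itertools
import numpy as np

def mean_info(labels, X):
    """Mean empirical mutual information (nats) between labels and the columns of X."""
    labels = np.asarray(labels)
    total = 0.0
    for j in range(X.shape[1]):
        x = X[:, j]
        mi = 0.0
        for tv in set(labels.tolist()):
            for xv in set(x.tolist()):
                p = np.mean((labels == tv) & (x == xv))
                if p > 0:
                    mi += p * np.log(p / (np.mean(labels == tv) * np.mean(x == xv)))
        total += mi
    return total / X.shape[1]

def best_unobserved_info(X_unobs, m, D):
    """I*_{un,D}: supremum of the mean information over every labeling of
    the m instances into at most D clusters (feasible only for tiny m)."""
    return max(mean_info(labels, X_unobs)
               for labels in itertools.product(range(D), repeat=m))

X_unobs = np.array([[1, 1], [1, 1], [0, 1], [0, 0]])  # 4 instances, 2 unobserved features
best = best_unobserved_info(X_unobs, m=4, D=2)
print(best)
```

The D^m enumeration is exactly the hypothesis class of Table 1, which is why the brute force is only a conceptual reference, not an algorithm.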
Definition 2 Observed information maximization algorithm: Let IobMax be any clustering algorithm that, based on the values of observed features alone, selects the cluster labels
{t1, ..., tm} having the maximum possible value of Iob, subject to the constraint k ≤ D.
Let Îob,D be the average observed information achieved by the IobMax algorithm. Let Îun,D
be the expected unobserved information achieved by the IobMax algorithm.
The next theorem states that IobMax not only maximizes Iob , but also Iun .
Theorem 2 With the definitions above,

Pr{ Îun,D ≤ I*un,D − ε } ≤ 8(log k) e^{−nε²/(32(log k)²) + (8k max_j |Xj| / ε) log k − log(ε/2)}   ∀ε > 0   (4)
where the probability is over the random selection of the observed features.
Proof: We now define a bad clustering as a clustering whose expected unobserved information satisfies Iun ≤ I*un,D − ε. Using Theorem 1, the probability that |Iob − Iun| > ε/2
for any of the clusterings is upper bounded by the right term of equation 4. If for all clusterings |Iob − Iun| ≤ ε/2, then surely I*ob,D ≥ I*un,D − ε/2 (see Definition 1) and Iob of
all bad clusterings satisfies Iob ≤ I*un,D − ε/2. Hence the probability that a bad clustering
has a higher average observed information than the best clustering is upper bounded as in
Theorem 2.
As a result of this theorem, when n is large enough, even an algorithm that knows the values
of all the features (observed and unobserved) cannot find a clustering with the same complexity (k) which is significantly better than the clustering found by the IobMax algorithm.
3
Empirical Evaluation
In this section we describe an experimental evaluation of the generalization properties of
the IobMax algorithm for a finite, large number of features. We examine the difference
between Iob and Iun as a function of the number of observed features and the number of
clusters used. We also compare the value of Iun achieved by the IobMax algorithm to the
maximum achievable I*un,D (see Definition 1).
Our evaluation uses a data set typically used for collaborative filtering. Collaborative filtering refers to methods of making predictions about a user's preferences by collecting
preferences of many users. For example, collaborative filtering for movie ratings could
make predictions about ratings of movies by a user, given a partial list of ratings from this
user and many other users. Clustering methods are used for collaborative filtering by clustering
users based on the similarity of their ratings (see e.g. [10]).
In our setting, each user is described as a vector of movie ratings. The rating of each movie
is regarded as a feature. We cluster users based on the set of observed features, i.e. rated
movies. In our context, the goal of the clustering is to maximize the information between
the clusters and unobserved features, i.e. movies that have not yet been rated by any of the
users. By Theorem 2, given large enough number of rated movies, we can achieve the best
possible clustering of users with respect to unseen movies. In this region, no additional
information (such as user age, taste, rating of more movies) beyond the observed features
can improve Iun by more than some small ?.
The purpose of this section is not to suggest a new algorithm for collaborative filtering or
compare it to other methods, but simply to illustrate our new theorems on empirical data.
Dataset. We used MovieLens (www.movielens.umn.edu), which is a movie rating data
set. It was collected and distributed by GroupLens Research at the University of Minnesota. It
contains approximately 1 million ratings for 3900 movies by 6040 users. Ratings are on
a scale of 1 to 5. We used only a subset consisting of 2400 movies by 4000 users. In our
setting, each instance is a vector of ratings (x1 , ..., x2400 ) by specific user. Each movie is
viewed as a feature, where the rating is the value of the feature.
Experimental Setup. We randomly split the 2400 movies into two groups, denoted by
"A" and "B", of 1200 movies (features) each. We used a subset of the movies from group
"A" as observed features and all movies from group "B" as the unobserved features. The
experiment was repeated with 10 random splits and the results averaged. We estimated Iun
by the mean information between the clusters and ratings of movies from group "B".
[Plots omitted: panels (a) "2 Clusters" and (b) "6 Clusters" show Iob, Iun and I*un versus the number of observed features (movies) (n); panel (c) "Fixed n (1200)" shows them versus the number of clusters (k).]

Figure 1: Iob, Iun and I*un per number of training movies and clusters. In (a) and (b) the
number of movies is variable, and the number of clusters is fixed. In (c) the number of
observed movies is fixed (1200), and the number of clusters is variable. The overall mean
information is low, since the rating matrix is sparse.
Handling Missing Values. In this data set, most of the values are missing (not rated). We
handle this by defining the feature variable as 1,2,...,5 for the ratings and 0 for missing
value. We maximize the mutual information based on the empirical distribution of values
that are present, and weight it by the probability of presence for this feature. Hence,
Iob = (1/n) Σ_{j=1}^{n} P(Xj ≠ 0) I(T; Xj | Xj ≠ 0) and Iun = E_j{P(Xj ≠ 0) I(T; Xj | Xj ≠ 0)}. The
weighting prevents "overfitting" to movies with few ratings. Since the observed features
were selected at random, the statistics of missing values of the observed and unobserved
features are the same. Hence, all theorems are applicable to these definitions of Iob and Iun
as well.
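The weighted quantities can be computed directly from a ratings matrix; the following sketch (my own illustration, with a hypothetical toy matrix and cluster assignment) treats 0 as missing, as described above:

```python
import numpy as np

def weighted_feature_info(t, x):
    """P(X != 0) * I(T; X | X != 0): mutual information computed only over
    instances where the feature is present (0 = missing), weighted by the
    empirical presence probability."""
    present = x != 0
    p_present = present.mean()
    if p_present == 0.0:
        return 0.0
    t, x = t[present], x[present]
    mi = 0.0
    for tv in np.unique(t):
        for xv in np.unique(x):
            p = np.mean((t == tv) & (x == xv))
            if p > 0:
                mi += p * np.log(p / (np.mean(t == tv) * np.mean(x == xv)))
    return p_present * mi

# hypothetical ratings matrix: rows = users, columns = movies, 0 = not rated
ratings = np.array([[5, 0, 1],
                    [4, 0, 2],
                    [1, 3, 0],
                    [2, 4, 0]])
labels = np.array([0, 0, 1, 1])   # hypothetical cluster assignment
I_ob = np.mean([weighted_feature_info(labels, ratings[:, j])
                for j in range(ratings.shape[1])])
print(I_ob)
```

Movies rated by only one cluster contribute little here: their conditional information is computed on few instances and is then down-weighted by their low presence probability.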
Greedy IobMax Algorithm
We cluster the users using a simple greedy clustering algorithm. The input to the algorithm
is all users, represented solely by the observed features. Since this algorithm can only find
a local maximum of Iob , we ran the algorithm 10 times (each used a different random
initialization) and selected the results that had a maximum value of Iob . More details about
this algorithm can be found in [8].
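The paper defers the details of the greedy procedure to [8]; the sketch below is one plausible coordinate-ascent variant (my own reconstruction, not necessarily the authors' exact algorithm): repeatedly move each instance to the cluster assignment that maximizes Iob.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_observed_info(labels, X):
    """Iob: mean empirical mutual information (nats) between cluster labels
    and each observed feature (column of X)."""
    total = 0.0
    for j in range(X.shape[1]):
        x = X[:, j]
        mi = 0.0
        for tv in np.unique(labels):
            for xv in np.unique(x):
                p = np.mean((labels == tv) & (x == xv))
                if p > 0:
                    mi += p * np.log(p / (np.mean(labels == tv) * np.mean(x == xv)))
        total += mi
    return total / X.shape[1]

def greedy_iobmax(X, k, n_sweeps=10):
    """Coordinate ascent on Iob: for each instance in turn, keep the cluster
    label that yields the highest Iob; stop when a full sweep changes nothing."""
    m = X.shape[0]
    labels = rng.integers(0, k, size=m)
    for _ in range(n_sweeps):
        changed = False
        for i in range(m):
            old = labels[i]
            scores = []
            for c in range(k):
                labels[i] = c
                scores.append(avg_observed_info(labels, X))
            labels[i] = int(np.argmax(scores))
            if labels[i] != old:
                changed = True
        if not changed:
            break
    return labels

# two natural groups of users over 3 binary observed features
X = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 0],
              [0, 0, 1], [0, 0, 1], [0, 1, 1]])
labels = greedy_iobmax(X, k=2)
print(labels, avg_observed_info(labels, X))
```

As in the paper, such a procedure only reaches a local maximum of Iob, which is why the authors restart it from several random initializations and keep the best run.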
In order to estimate I*un,D (see Definition 1), we also ran the same algorithm, where all the
features are available to the algorithm (i.e. also features from group "B"). The algorithm
finds clusters that maximize the mean mutual information on features from group "B".
Results
The results are shown in Figure 1. As n increases, Iob decreases and Iun increases, until
they converge to each other. For small n, the clustering "overfits" to the observed features.
This is similar to training and test errors in supervised learning. For large n, Iun approaches
I*un,D, which means the IobMax algorithm found nearly the best possible clustering, as
expected from Theorem 2. As the number of clusters increases, both Iob and Iun
increase, but the difference between them also increases.
4
Discussion and Summary
We introduce a new learning paradigm: clustering based on observed features that generalizes to unobserved features. Our results are summarized by two theorems that tell us
how, without knowing the value of the unobserved features, one can estimate and maximize
information between the clusters and the unobserved features.
The key assumption that enables us to prove the theorems is the random independent selection of the observed features. Another interpretation of the generalization theorem, without
using this assumption, might be combinatorial. The difference between the observed and
unobserved information is large only for a small portion of all possible partitions into observed and unobserved features. This means that almost any arbitrary partition generalizes
well.
The importance of clustering which preserves information on unobserved features is that
it enables us to learn new - previously unobserved - attributes from a small number of
examples. Suppose that after clustering fruits based on their observed features, we eat a
chinaberry1 and thus, we ?observe? (by getting sick), the previously unobserved attribute of
toxicity. Assuming that in each cluster, all fruits have similar unobserved attributes, we can
conclude that all fruits in the same cluster, i.e. all chinaberries, are likely to be poisonous.
We can even relate the IobMax principle to cognitive clustering in sensory information
processing. In general, a symbolic representation (e.g. assigning object names in language)
may be based on a similar principle - find a representation (clusters) that contain significant
information on as many observed features as possible, while still remaining simple. Such
representations are expected to contain information on other rarely viewed salient features.
Acknowledgments
We thank Amir Globerson, Ran Bachrach, Amir Navot, Oren Shriki, Avner Dor and Ilan
Sutskover for helpful discussions. We also thank the GroupLens Research Group at the
University of Minnesota for use of the MovieLens data set. Our work is partly supported
by grant from the Israeli Academy of Science.
References
[1] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys,
31(3):264?323, September 1999.
[2] T. M. Cover and J. A. Thomas. Elements Of Information Theory. Wiley Interscience, 1991.
[3] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[4] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. Proc. 37th Allerton
Conf. on Communication and Computation, 1999.
[5] M. Seeger. Learning with labeled and unlabeled data. Technical report, University of Edinburgh,
2002.
[6] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In NIPS,
2003.
[7] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the
American Statistical Association, 58:13?30, 1963.
[8] E. Krupka and N. Tishby. Generalization in clustering with unobserved features. Technical
report, Hebrew University, 2005. http://www.cs.huji.ac.il/~tishby/nips2005tr.pdf.
[9] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1101?
1253, 2003.
[10] B. Marlin. Collaborative filtering: A machine learning perspective. Master?s thesis, University
of Toronto, 2004.
1
Chinaberries are the fruits of the Melia azedarach tree, and are poisonous.
| 2896 |@word mild:1 achievable:2 q1:1 contains:1 selecting:1 document:1 yet:2 assigning:1 partition:4 shape:1 enables:2 alone:2 greedy:2 selected:8 amir:2 toronto:1 preference:2 allerton:1 mathematical:1 prove:3 interscience:1 introduce:1 theoretically:1 expected:13 examine:1 increasing:1 estimating:1 moreover:1 bounded:6 maximizes:2 israel:1 flynn:1 unobserved:42 marlin:1 every:1 collecting:1 ti:1 grant:1 t1:14 before:1 engineering:1 local:1 limit:1 krupka:2 solely:1 approximately:1 might:1 initialization:1 quantified:1 suggests:1 specifying:1 averaged:1 acknowledgment:1 globerson:1 union:1 empirical:8 universal:1 significantly:1 murty:1 word:2 refers:1 suggest:1 symbolic:1 cannot:3 unlabeled:2 selection:12 context:2 risk:1 optimize:3 www:2 demonstrated:1 center:1 maximizing:1 missing:4 jerusalem:1 go:1 independently:3 survey:1 bachrach:1 simplicity:1 assigns:1 regarded:1 toxicity:2 handle:1 analogous:3 target:5 suppose:1 user:17 homogeneous:1 us:1 hypothesis:1 element:2 labeled:3 observed:51 capture:1 region:1 decrease:4 ran:3 mentioned:1 complexity:5 depend:1 basis:2 joint:2 represented:1 alphabet:2 jain:1 describe:2 tell:1 iun:57 whose:1 larger:1 statistic:1 unseen:1 itself:2 ip:3 obviously:1 product:1 achieve:2 academy:1 getting:1 convergence:4 cluster:43 object:5 derive:1 illustrate:1 ac:2 measured:2 school:1 eq:1 c:2 quantify:2 direction:1 attribute:9 require:1 generalization:14 proposition:1 tighter:1 im:5 considered:1 predict:1 shriki:1 achieves:1 purpose:1 estimation:3 proc:1 applicable:1 label:15 combinatorial:1 grouplens:2 reflects:1 minimization:1 clearly:1 always:1 mation:1 rather:2 pn:3 ej:3 jaakkola:1 ax:2 likelihood:1 seeger:1 sense:1 helpful:1 typically:1 i1:5 selects:1 infor:1 issue:1 classification:1 overall:1 denoted:7 mutual:11 marginal:1 once:1 having:1 represents:3 unsupervised:1 nearly:2 future:1 report:2 few:1 randomly:9 preserve:1 maxj:5 consisting:1 dor:1 n1:2 evaluation:3 umn:1 accurate:1 partial:1 tree:1 instance:21 cover:1 
maximization:2 subset:7 uniform:3 tishby:5 too:1 answer:1 fundamental:1 huji:2 interdisciplinary:1 again:1 thesis:1 hoeffding:2 henceforth:1 worse:1 cognitive:1 conf:1 american:1 return:1 ilan:1 summarized:1 depends:2 tion:1 try:1 view:1 eyal:1 analyze:1 sup:9 overfits:1 portion:1 collaborative:8 il:2 yes:1 conceptually:1 accurately:1 definition:9 proof:6 iob:42 dataset:1 color:1 higher:2 supervised:9 maximally:1 formulation:1 done:3 evaluated:1 until:1 quality:2 perhaps:1 name:1 contain:2 hence:6 assigned:1 regularization:1 deal:1 naftali:1 xqi:1 trying:2 pdf:1 outline:2 performs:1 argues:1 novel:1 empirically:1 million:1 association:1 interpretation:1 significant:1 language:1 had:1 minnesota:2 access:3 similarity:2 underling:1 sick:1 own:1 perspective:1 inequality:2 discussing:1 additional:2 ey:1 surely:1 converge:2 maximize:8 paradigm:1 full:1 desirable:1 infer:1 technical:3 match:3 faster:1 characterized:1 prediction:2 expectation:3 achieved:4 oren:1 whereas:2 unlike:2 subject:3 integer:1 presence:1 split:2 enough:5 easy:2 xj:41 idea:2 tm:14 knowing:1 tradeoff:1 bottleneck:3 whether:1 detailed:1 http:1 happened:1 estimated:3 per:1 discrete:1 group:8 key:2 salient:1 drawn:1 sum:1 master:1 extends:1 almost:2 draw:1 ob:10 summarizes:1 bound:19 correspondence:1 constraint:5 min:1 eat:1 relatively:2 making:1 avner:1 pr:8 erm:2 taken:1 equation:2 previously:2 know:1 available:1 generalizes:2 observe:2 indirectly:1 thomas:1 assumes:1 remaining:2 clustering:52 denotes:2 build:1 question:2 occurs:1 bialek:1 unclear:1 september:1 link:2 thank:2 topic:1 argue:1 collected:1 assuming:1 index:2 hebrew:2 equivalently:1 setup:1 relate:1 unknown:2 upper:10 finite:4 oim:1 defining:1 communication:1 arbitrary:2 rating:18 required:1 optimized:1 coherent:1 poisonous:2 nip:1 israeli:1 able:1 suggested:1 beyond:1 max:2 scheme:4 improve:1 movie:26 rated:4 created:1 review:1 taste:1 filtering:8 analogy:2 age:1 sufficient:1 fruit:5 principle:3 nutritional:1 summary:1 surprisingly:1 
repeat:1 last:1 supported:1 distribu:1 bias:1 sparse:1 distributed:1 edinburgh:1 xn:1 qn:2 sensory:1 author:1 dealing:1 overfitting:1 conceptual:1 assumed:2 conclude:1 navot:1 un:13 table:3 learn:1 main:2 allowed:1 repeated:1 x1:4 clus:1 wiley:2 pereira:1 xl:1 weighting:1 theorem:23 bad:3 specific:4 list:1 consist:1 vapnik:1 importance:1 entropy:1 paninski:2 simply:1 likely:1 prevents:1 partially:1 satisfies:2 acm:1 conditional:1 identity:1 goal:2 viewed:2 infinite:1 movielens:3 uniformly:4 lemma:11 called:3 partly:1 experimental:3 shannon:1 rarely:1 select:1 szummer:1 handling:1 |
Unbiased Estimator of Shape Parameter for
Spiking Irregularities under Changing
Environments
Keiji Miura
Kyoto University
JST PRESTO
Masato Okada
University of Tokyo
JST PRESTO
RIKEN BSI
Shun-ichi Amari
RIKEN BSI
Abstract
We considered a gamma distribution of interspike intervals as a statistical model for neuronal spike generation. The model parameters consist
of a time-dependent firing rate and a shape parameter that characterizes
spiking irregularities of individual neurons. Because the environment
changes with time, observed data are generated from the time-dependent
firing rate, which is an unknown function. A statistical model with an
unknown function is called a semiparametric model, which is one of the
unsolved problem in statistics and is generally very difficult to solve. We
used a novel method of estimating functions in information geometry to
estimate the shape parameter without estimating the unknown function.
We analytically obtained an optimal estimating function for the shape
parameter independent of the functional form of the firing rate. This
estimation is efficient without Fisher information loss and better than
maximum likelihood estimation.
1
Introduction
The firing patterns of cortical neurons look very noisy [1]. Consequently, probabilistic models are necessary to describe these patterns [2, 3, 4]. For example, Baker and
Lemon showed that the firing patterns recorded from motor areas can be explained using a
continuous-time rate-modulated gamma process [5]. Their model had a rate parameter, λ,
and a shape parameter, κ, that was related to spiking irregularity. λ was assumed to be a
function of time because it depended largely on the behavior of the monkey. κ was assumed
to be unique to individual neurons and constant over time.
The assumption that κ is unique to individual neurons is also supported by other studies
[6, 7, 8]. However, these indirect supports are not conclusive. Therefore, we need to accurately estimate κ to make the assumption more reliable. If the assumption is correct,
neurons may be identified by κ estimated from the spiking patterns, and κ may provide
useful information about the function of a neuron. In other words, it may be possible to
classify neurons according to functional firing patterns rather than static anatomical properties. Thus, it is very important to accurately estimate κ in the field of neuroscience.
In reality, however, it is very difficult to estimate all the parameters in the model from
the observed spike data. The reason for this is that the unknown function for the time-dependent firing rate, λ(t), has infinite degrees of freedom. This kind of estimation problem
is called the semiparametric model [9] and is one of the unsolved problems in statistics. Are
there any ingenious methods of estimating κ accurately to overcome this difficulty?
Ikeda pointed out that the problem we need to consider is the semiparametric model [10].
However, the problem remains unsolved. There is a method called estimating functions
[11, 12] for semiparametric problems, and a general theory has been developed [13, 14,
15] from the viewpoint of information geometry [16, 17, 18]. However, the method of
estimating functions cannot be applied to our problem in its original form.
In this paper, we consider the semiparametric model suggested by Ikeda instead of the
continuous-time rate-modulated gamma process. In this discrete-time rate-modulated
model, the firing rate varies for each interspike interval. This model is a mixture model
and can represent various types of interspike interval distributions by adjusting its weight
function. The model can be analyzed by using the method of estimating functions for
semiparametric models.
Various attempts have been made to solve semiparametric models. Neyman and Scott
pointed out that the maximum likelihood method does not generally provide a consistent
estimator when the number of parameters and observations are the same [19]. In fact, we
show that maximum likelihood estimation for our problem is biased. Ritov and Bickel
considered asymptotic attainability of information bound purely mathematically [20, 21].
However, their results were not practical for application to our problem. Amari and Kawanabe showed a practical method of estimating finite parameters of interest without estimating
an unknown function [15]. This is the method of estimating functions. If this method can
be applied, κ can be estimated consistently, independent of the functional form of the firing rate.
In this paper, we show that the model we consider here is of the "exponential form" defined
by Amari and Kawanabe [15]. However, an asymptotically unbiased estimating function
does not exist unless multiple observations are given for each firing rate, λ. We show that
if multiple observations are given, the method of estimating functions can be applied. In
that case, the estimating function of κ can be obtained analytically, and κ can be estimated
consistently, independent of the functional form of the firing rate. In general, estimation using
estimating functions is not efficient. However, for our problem, this method yielded an
optimal estimator in the sense of Fisher information [15]. That is, we obtained an efficient
estimator.
2 Simple case
We considered the following statistical model of interspike intervals proposed by Ikeda
[10]. Interspike intervals are generated by a gamma distribution whose mean firing rate
changes over time. The mean firing rate λ at each observation is determined randomly
according to an unknown probability distribution, k(λ). The model is described as

p(T; κ, k(λ)) = ∫ q(T; λ, κ) k(λ) dλ,  (1)

where

q(T; λ, κ) = (λκ)^κ / Γ(κ) · T^(κ−1) e^(−λκT)
           = e^{λ·(−κT) + (κ−1) log(T) − (−κ log(λκ) + log Γ(κ))}
           = e^{λ s(T,κ) + r(T,κ) − ψ(λ,κ)}.  (2)

Here, T denotes an interspike interval. We defined s, r, and ψ as

s(T, κ) = −κT,  (3)
r(T, κ) = (κ − 1) log(T), and  (4)
ψ(λ, κ) = −κ log(λκ) + log Γ(κ)  (5)

to demonstrate that the model is of the exponential form defined by Amari and Kawanabe
[15]. Note that this type of model is called a semiparametric model because it has both an
unknown finite parameter, κ, and an unknown function, k(λ).
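As a concrete illustration of the model in Eqs. (1)-(2), the sketch below draws interspike intervals from a rate-modulated gamma process: each interval's rate λ is first drawn from a hypothetical k(λ) (here a uniform density, an illustrative choice not taken from the paper), and the interval itself from a gamma distribution with shape κ and mean 1/λ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_isis(kappa, lambdas, rng):
    """One interval per rate in `lambdas`, drawn from q(T; lambda, kappa):
    a gamma with shape kappa and mean 1/lambda (variance 1/(kappa*lambda^2))."""
    lambdas = np.asarray(lambdas, dtype=float)
    return rng.gamma(shape=kappa, scale=1.0 / (lambdas * kappa))

# Hypothetical k(lambda): rates drawn uniformly from [5, 20] (spikes/s).
lams = rng.uniform(5.0, 20.0, size=100_000)
isis = sample_isis(4.0, lams, rng)
print(isis.mean())   # close to E[1/lambda] = log(4)/15 for this k(lambda)
```

Here κ controls the irregularity of the intervals while k(λ) controls the rate fluctuations; the estimation problem treated below is to recover κ without knowing k(λ).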
In this mixture model, {λ^(1), λ^(2), . . .} is an unknown sequence where λ is independently
and identically distributed according to a probability density function k(λ). Then, the l-th
observation T^(l) is distributed according to q(T^(l); λ^(l), κ). In effect, T is independently
and identically distributed according to p(T; κ, k(λ)).
An estimating function is a function of κ whose zero-crossing provides an estimate of κ,
analogous to the derivative with respect to κ of the log-likelihood function. Note that the
zero-crossings of the derivatives of the log-likelihood function with respect to the parameters
provide the maximum likelihood estimator.
Let us calculate the estimating function following Amari and Kawanabe [15] to estimate
κ without estimating k(λ). They showed that for the exponential form of mixture distributions, the estimating function, u^I, is given by the projection of the score function,
u = ∂_κ log p, as

u^I(T, κ) = u − E[u|s]
          = (∂_κ s − E[∂_κ s | s]) E_λ[λ|s] + ∂_κ r − E[∂_κ r | s]
          = ∂_κ r − E[∂_κ r | s],  (6)

where

E_λ[λ|s] = ∫ λ k(λ) exp(λs − ψ) dλ / ∫ k(λ) exp(λs − ψ) dλ.  (7)

The relation

E[∂_κ s | s] = s/κ = −T = ∂_κ s  (8)

holds because the numbers of random variables T and of components of s are the same. For the same reason,

E[∂_κ r | s] = log(T) = ∂_κ r.  (9)

Then,

u^I = 0.  (10)

This means that the set of estimating functions is empty. Therefore, we proved that
no asymptotically unbiased estimating function of κ exists for this model.
Two or more random variables may be needed. Let us consider the multivariate model
described as

p(T_1, ..., T_n; κ, k(λ_1, ..., λ_n)) = ∫ Π_{i=1}^n q(T_i; λ_i, κ) k(λ_1, ..., λ_n) dλ_1 ··· dλ_n.  (11)

Here, the numbers of random variables and of components of s are also the same, and the set of
estimating functions u^I again becomes empty.
This result can be understood intuitively as follows. When the mean, μ, and variance, σ,
of a normal distribution are estimated from a single observation, x, they are estimated as
μ = x and σ = 0. Similarly, λ and κ of a gamma distribution, q(T; λ, κ), are estimated
from a single observation, T, as λ = 1/T and κ = ∞, corresponding to zero variance. Two or
more observations are required to estimate κ. For the semiparametric model considered in
this section, only one observation is given for each λ. Two or more observations are needed
for each λ.
3 Cases with multiple observations for each λ

Next we consider the case where m observations are given for each λ^(l), which may
be distributed according to k(λ). Here, a consistent estimator of κ exists. Let {T} =
{T_1, . . . , T_m} be the m observations, which are generated from the same distribution specified by λ and κ. We have N such observations {T^(l)}, l = 1, . . . , N, with a common κ
and different λ^(l). Thus, {T_1^(l), . . . , T_m^(l)} are generated from the same firing rate λ^(l). Let
us take one {T}. The probability model can be written as

p({T}; κ, k(λ)) = ∫ Π_{i=1}^m q(T_i; λ, κ) k(λ) dλ,  (12)
where

Π_{i=1}^m q(T_i; λ, κ) = Π_{i=1}^m (λκ)^κ / Γ(κ) · T_i^(κ−1) e^(−λκT_i)
    = e^{λ(−κ Σ_{i=1}^m T_i) + (κ−1) Σ_{i=1}^m log(T_i) − (−mκ log(λκ) + m log Γ(κ))}
    = e^{λ s({T},κ) + r({T},κ) − ψ(λ,κ)}.  (13)

We defined s, r, and ψ as

s({T}, κ) = −κ Σ_{i=1}^m T_i,  (14)
r({T}, κ) = (κ − 1) Σ_{i=1}^m log(T_i), and  (15)
ψ(λ, κ) = −mκ log(λκ) + m log Γ(κ).  (16)
Then, the estimating function is given by

u^I({T}, κ) = u − E[u|s]
            = (∂_κ s − E[∂_κ s | s]) E_λ[λ|s] + ∂_κ r − E[∂_κ r | s]
            = ∂_κ r − E[∂_κ r | s]
            = Σ_{i=1}^m log(T_i) − m E[log(T_1) | s],  (17)

where we used

E[∂_κ s | s] = s/κ = ∂_κ s.  (18)

To calculate the conditional expectation of log T_1, let us use Bayes' theorem:

p(T|s) = p(T, s) / p(s).  (19)
By transforming the random variables (T_1, T_2, T_3, ..., T_m) into (s, T_2, T_3, ..., T_m), we have

p(s) = ∫ Π_{i=1}^m q(T_i; λ, κ) δ(s + κ Σ_{i=1}^m T_i) k(λ) dλ dT
     = [Π_{i=1}^{m−1} B(iκ, κ)] (−s)^{mκ−1} / Γ(κ)^m · ∫ λ^{mκ} e^{sλ} k(λ) dλ,  (20)

where the beta function is defined as

B(x, y) = Γ(x)Γ(y) / Γ(x + y) = (x − 1)!(y − 1)! / (x + y − 1)!.  (21)
Similarly, we have

E[log(T_1) | s] = ∫ log(T_1) Π_{i=1}^m q(T_i; λ, κ) δ(s + κ Σ_{i=1}^m T_i) k(λ) dλ dT · 1/p(s)
                = log(−s/κ) − φ(mκ) + φ(κ),  (22)

where the digamma function is defined as

φ(κ) = Γ′(κ) / Γ(κ).  (23)
Note that E[log(T_1) | s] does not depend on the unknown function, k(λ). Thus, we have

u^I({T}, κ) = Σ_{i=1}^m log(T_i) − m log(Σ_{i=1}^m T_i) + m φ(mκ) − m φ(κ).  (24)

The form of u^I can be understood as follows. If we rescale T as t = λT, we have E[t] = 1.
Then, we can show that u^I does not depend on λ, because

log(T) − E[log T | s] = log(t) − E[log t | s].  (25)

This implies that we can estimate κ without estimating λ. The method of estimating functions works here only for gamma distributions: it crucially depends on the fact that the estimating
function is invariant under scaling of T.
κ can be estimated consistently from N independent observations {T^(l)} =
{T_1^(l), . . . , T_m^(l)}, l = 1, . . . , N, as the value κ̂ that solves

Σ_{l=1}^N u^I({T^(l)}, κ̂) = 0.  (26)
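Equation (26) can be solved numerically with standard root finding. The sketch below implements the summed estimating function of Eq. (24) using SciPy's digamma and solves Σ_l u^I({T^(l)}, κ̂) = 0 by bracketed search; the bracket endpoints, the simulated k(λ), and all sample sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def estimate_kappa(blocks, lo=1e-3, hi=1e3):
    """Solve Eq. (26): find the kappa at which the summed estimating function
    u^I of Eq. (24) vanishes.  `blocks` has shape (N, m): N groups of m
    intervals, each group sharing one (unknown) firing rate."""
    blocks = np.asarray(blocks, dtype=float)
    N, m = blocks.shape
    # Data-dependent part of sum_l u^I: sum_i log T_i - m log(sum_i T_i), summed over l.
    data = np.log(blocks).sum() - m * np.log(blocks.sum(axis=1)).sum()
    total_uI = lambda k: data + N * m * (digamma(m * k) - digamma(k))
    return brentq(total_uI, lo, hi)

# Synthetic check: true kappa = 4, hidden rates from an arbitrary k(lambda).
rng = np.random.default_rng(1)
kappa_true, m, N = 4.0, 2, 5000
lams = rng.uniform(1.0, 10.0, size=N)
blocks = rng.gamma(kappa_true, 1.0 / (lams[:, None] * kappa_true), size=(N, m))
print(estimate_kappa(blocks))   # close to 4, whatever k(lambda) was
```

The summed u^I is strictly decreasing in κ (it tends to +∞ as κ → 0 and to a negative limit as κ → ∞ whenever the intervals within a block differ), so the bracketed root is unique.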
In fact, the expectation of u^I is 0 independent of k(λ):

E[u^I] = ∫ ( ∫ Π_{i=1}^m q(T_i; λ, κ) u^I dT ) k(λ) dλ
       = ∫ ( ∫ E_q[u^I | s] p(s) ds ) k(λ) dλ
       = ∫ ( ∫ E_q[log t − E[log t | s] | s] p(s) ds ) k(λ) dλ
       = 0,  (27)

where E_q denotes the expectation with respect to Π_{i=1}^m q(t_i; 1, κ).
u^I is an efficient estimating function [15, 21]. An efficient estimator is one whose
variance asymptotically attains the Cramér-Rao lower bound. Thus, there is no estimator
of κ whose mean-square estimation error is smaller than that given by u^I. As u^I does
not depend on k(λ), it is the optimal estimating function whatever k(λ) is, or whatever the
sequence λ^(1), . . . , λ^(N) is.
[Figure 1 appears here: estimated κ̂ (vertical axis, roughly 2 to 40) versus the number of observations (horizontal axis, log scale from 2 to 500), for maximum likelihood and the proposed method.]
Figure 1: Biases of κ̂ for maximum likelihood estimation and the proposed method for m = 2.
The dotted line represents the true value, κ = 4. The maximum likelihood estimate is
biased even when an infinite number of observations is given, while the estimating-function estimate
is asymptotically unbiased.
The maximum likelihood estimation for this problem is given by the zero of

u^MLE = Σ_{i=1}^m log(T_i) + m log(λ̂) + m log(κ) − m φ(κ),  (28)

where

1/λ̂ = (1/m) Σ_{i=1}^m T_i.  (29)

u^MLE is similar to u^I but differs in its constant term. As a result, the maximum likelihood estimator κ̂ is biased (Figure 1).
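The bias gap shown in Figure 1 can be reproduced in a small simulation: the sketch below solves both the estimating-function equation (26) and the maximum-likelihood condition of Eqs. (28)-(29) on the same synthetic data with m = 2 and true κ = 4. The choice of k(λ) and the sample sizes are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

rng = np.random.default_rng(2)
kappa_true, m, N = 4.0, 2, 500

lams = rng.uniform(1.0, 10.0, size=N)
blocks = rng.gamma(kappa_true, 1.0 / (lams[:, None] * kappa_true), size=(N, m))

sum_logT = np.log(blocks).sum()               # sum over blocks of sum_i log T_i
sum_logS = np.log(blocks.sum(axis=1)).sum()   # sum over blocks of log(sum_i T_i)

def total_uI(k):     # summed Eq. (24), set to zero in Eq. (26)
    return sum_logT - m * sum_logS + N * m * (digamma(m * k) - digamma(k))

def total_uMLE(k):   # summed Eq. (28), with Eq. (29) for lambda-hat
    lam_hat = 1.0 / blocks.mean(axis=1)
    return (sum_logT + m * np.log(lam_hat).sum()
            + N * m * np.log(k) - N * m * digamma(k))

print("estimating function:", brentq(total_uI, 1e-2, 1e3))    # near the true kappa = 4
print("maximum likelihood: ", brentq(total_uMLE, 1e-2, 1e3))  # well above 4 in this run
```

In this simulation the maximum likelihood root sits far above the true value even with many blocks, while the estimating-function root does not, matching the qualitative picture in Figure 1.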
So far, we have assumed that the firing rates for the m observations are the same. Instead, let
us consider a case where the firing rates have some fixed relation. For example, consider the case
where E_q[t_1] = 2 E_q[t_2]. The model can be written as

p(t_1, t_2; κ, k(λ)) = ∫ q(t_1; λ, κ) q(t_2; 2λ, κ) k(λ) dλ.  (30)

This model can be derived from Eq. (12) by rescaling as T_1 = t_1 and T_2 = 2t_2. Note that
q(2T; λ, κ) = q(T; 2λ, κ) because T always appears as λT in q(T; λ, κ). Thus, Eq. (12)
covers various kinds of models.
4 General case

Let us consider a general case where the firing rate changes stepwise. That is, {λ_1, . . . , λ_n}
is distributed according to k({λ}) = k(λ_1, . . . , λ_n), and m_a observations are given for each
λ_a. The model can be written as

p({T}; κ, k({λ})) = ∫ Π_{i_1=1}^{m_1} q(T_{i_1}^(1); λ_1, κ) Π_{i_2=1}^{m_2} q(T_{i_2}^(2); λ_2, κ) ··· Π_{i_n=1}^{m_n} q(T_{i_n}^(n); λ_n, κ) k({λ}) dλ_1 dλ_2 ··· dλ_n,  (31)
where

Π_{i_1=1}^{m_1} q(T_{i_1}^(1); λ_1, κ) Π_{i_2=1}^{m_2} q(T_{i_2}^(2); λ_2, κ) ··· Π_{i_n=1}^{m_n} q(T_{i_n}^(n); λ_n, κ)
  = exp( λ_1 (−κ Σ_{i_1=1}^{m_1} T_{i_1}^(1)) + λ_2 (−κ Σ_{i_2=1}^{m_2} T_{i_2}^(2)) + ··· + λ_n (−κ Σ_{i_n=1}^{m_n} T_{i_n}^(n))
       + (κ−1)( Σ_{i_1=1}^{m_1} log T_{i_1}^(1) + Σ_{i_2=1}^{m_2} log T_{i_2}^(2) + ··· + Σ_{i_n=1}^{m_n} log T_{i_n}^(n) )
       + Σ_{a=1}^n m_a κ log(λ_a) + Σ_{a=1}^n m_a κ log(κ) − Σ_{a=1}^n m_a log Γ(κ) ).  (32)

We defined s_a, r, and ψ as

s_a({T^(a)}, κ) = −κ Σ_{i_a=1}^{m_a} T_{i_a}^(a),  (33)
r({T}, κ) = (κ−1) Σ_{a=1}^n ( Σ_{i_a=1}^{m_a} log T_{i_a}^(a) ), and  (34)
ψ(κ, {λ}) = −Σ_{a=1}^n m_a κ log(λ_a) − Σ_{a=1}^n m_a κ log(κ) + Σ_{a=1}^n m_a log Γ(κ).  (35)

Then,

u^I({T}, κ) = u − E[u|s]
            = (∂_κ s − E[∂_κ s | s]) · E[λ|s] + ∂_κ r − E[∂_κ r | s]
            = ∂_κ r − E[∂_κ r | s]  (36)
            = Σ_{a=1}^n { Σ_{i_a=1}^{m_a} log T_{i_a}^(a) − m_a log( Σ_{i_a=1}^{m_a} T_{i_a}^(a) ) + m_a φ(m_a κ) − m_a φ(κ) }.
Thus, κ is estimated with equal weight for every observation. Note that the conditional
expectations can be calculated independently for each set of random variables. u^I yields
an efficient estimating function. As it does not depend on k({λ}), u^I is the optimal
estimating function for any k({λ}). There is no information loss. Note that k({λ}) can
include correlations among the λ_a's. Nevertheless, the result is very similar to that of the
previous section.
5 Summary and discussion

We estimated the shape parameter, κ, of the semiparametric model suggested by Ikeda
without estimating the firing rate, λ. The maximum likelihood estimator is not consistent
for this problem because the number of nuisance parameters, λ, increases with the number of
observations, T. We showed that Ikeda's model is of the exponential form defined by Amari
and Kawanabe [15] and can be analyzed by the method of estimating functions for semiparametric models. We found that an estimating function does not exist unless multiple
observations are given for each firing rate, λ. If multiple observations are given, the method
of estimating functions can be applied. In that case, the estimating function of κ can be obtained analytically, and κ can be estimated consistently, independent of the functional form
of the firing-rate distribution, k(λ). In general, estimation using estimating functions is not efficient. However, this
method provided an optimal estimator in the sense of Fisher information for our problem.
That is, we obtained an efficient estimator.
Acknowledgments
We are grateful to K. Ikeda for his helpful discussions. This work was supported in part by
grants from the Japan Society for the Promotion of Science (Nos. 14084212 and 16500093).
References
[1] G. R. Holt, W. R. Softky, C. Koch, and R. J. Douglas, Comparison of discharge variability in
vitro and in vivo in cat visual cortex neurons, J. Neurophysiol., Vol. 75, pp. 1806-14, 1996.
[2] H. C. Tuckwell, Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic
theories, Cambridge University Press, Cambridge, 1988.
[3] Y. Sakai, S. Funahashi, and S. Shinomoto, Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons, Neural Netw., Vol. 12, pp.
1181-1190, 1999.
[4] D. R. Cox and P. A. W. Lewis, The statistical analysis of series of events, Methuen, London,
1966.
[5] S. N. Baker and R. N. Lemon, Precise spatiotemporal repeating patterns in monkey primary
and supplementary motor areas occur at chance levels, J. Neurophysiol., Vol. 84, pp. 1770-80,
2000.
[6] S. Shinomoto, K. Shima, and J. Tanji, Differences in spiking patterns among cortical neurons,
Neural Comput., Vol. 15, pp. 2823-2842, 2003.
[7] S. Shinomoto, Y. Miyazaki, H. Tamura, and I. Fujita, Regional and laminar differences in in
vivo firing patterns of primate cortical neurons, J. Neurophysiol., in press.
[8] S. Shinomoto, K. Miura, and S. Koyama, A measure of local variation of inter-spike intervals,
Biosystems, Vol. 79, pp. 67-72, 2005.
[9] J. Pfanzagl, Estimation in semiparametric models, Springer-Verlag, Berlin, 1990.
[10] K. Ikeda, Information geometry of interspike intervals in spiking neurons, Neural Comput., in
press.
[11] V. P. Godambe, An optimum property of regular maximum likelihood estimation, Ann. Math.
Statist., Vol. 31, pp. 1208-1211, 1960.
[12] V. P. Godambe (ed.), Estimating functions, Oxford University Press, New York, 1991.
[13] S. Amari, Dual connections on the Hilbert bundles of statistical models, In C. T. J. Dodson
(ed.), Geometrization of statistical theory, pp. 123-152, University of Lancaster Department of
Mathematics, Lancaster, 1987.
[14] S. Amari and M. Kumon, Estimation in the presence of infinitely many nuisance parameters: geometry of estimating functions, Ann. Statist., Vol. 16, pp. 1044-1068, 1988.
[15] S. Amari and M. Kawanabe, Information geometry of estimating functions in semi-parametric
statistical models, Bernoulli, Vol. 3, pp. 29-54, 1997.
[16] H. Nagaoka and S. Amari, Differential geometry of smooth families of probability distributions,
Technical Report 82-7, University of Tokyo, 1982.
[17] S. Amari and H. Nagaoka, Methods of information geometry, American Mathematical Society,
Providence, RI, 2001.
[18] S. Amari, Information geometry on hierarchy of probability distributions, IEEE Transactions
on Information Theory, Vol. 47, pp. 1701-1711, 2001.
[19] J. Neyman and E. L. Scott, Consistent estimates based on partially consistent observations,
Econometrica, Vol. 16, pp. 1-32, 1948.
[20] Y. Ritov and P. J. Bickel, Achieving information bounds in non and semiparametric models,
Ann. Statist., Vol. 18, pp. 925-938, 1990.
[21] P. J. Bickel, C. A. J. Klaassen, Y. Ritov, and J. A. Wellner, Efficient and adaptive estimation for
semiparametric models, Johns Hopkins University Press, Baltimore, MD, 1993.
| 2897 |@word cox:1 crucially:1 series:1 score:1 attainability:1 written:3 ikeda:7 john:1 interspike:6 shape:6 motor:2 funahashi:1 provides:1 math:1 mathematical:1 beta:1 differential:1 inter:2 behavior:1 increasing:1 becomes:1 provided:1 estimating:38 baker:2 miyazaki:1 kind:2 monkey:2 developed:1 every:1 ti:19 um:2 whatever:2 grant:1 t1:16 understood:2 local:1 depended:1 oxford:1 firing:21 unique:2 practical:2 acknowledgment:1 timedependent:1 irregularity:3 area:2 projection:1 word:1 holt:1 regular:1 cannot:1 godambe:2 independently:3 methuen:1 m2:4 estimator:12 his:1 variation:1 analogous:1 discharge:1 hierarchy:1 crossing:2 observed:2 calculate:2 transforming:1 environment:2 ui:18 econometrica:1 depend:4 grateful:1 purely:1 neurophysiol:3 indirect:1 various:3 cat:1 riken:2 describe:1 london:1 lancaster:2 whose:4 supplementary:1 solve:2 amari:12 statistic:3 nagaoka:2 noisy:1 sequence:2 empty:2 optimum:1 sa:2 eq:7 solves:1 implies:1 tokyo:2 correct:1 stochastic:1 jst:2 shun:1 integrateand:1 mathematically:1 hold:1 koch:1 considered:4 cramer:1 normal:1 exp:3 bickel:3 estimation:13 promotion:1 kumon:1 always:1 rather:1 derived:1 consistently:4 bernoulli:1 likelihood:13 digamma:1 attains:1 sense:2 helpful:1 dependent:2 relation:2 reproduce:1 i1:4 fujita:1 among:2 dual:1 field:1 equal:1 represents:1 look:1 t2:7 report:1 randomly:1 gamma:6 individual:3 geometry:8 fire:1 attempt:1 freedom:1 interest:1 mixture:3 analyzed:2 bundle:1 necessary:1 unless:2 theoretical:1 biosystems:1 classify:1 rao:1 miura:2 shima:1 providence:1 varies:1 spatiotemporal:1 density:1 probabilistic:1 hopkins:1 recorded:1 american:1 derivative:2 rescaling:1 japan:1 tia:4 includes:1 depends:1 characterizes:1 bayes:1 vivo:2 square:1 variance:3 largely:1 t3:2 yield:2 accurately:3 ed:2 pp:12 unsolved:3 static:1 proved:1 adjusting:1 hilbert:1 appears:1 dt:3 ritov:3 correlation:1 d:2 nonlinear:1 effect:1 unbiased:4 true:1 analytically:3 bsi:2 tuckwell:1 i2:4 shinomoto:4 nuisance:2 demonstrate:1 tn:1 
for Graphical Models
Andrew W. Moore
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Brigham S. Anderson
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Calculations that quantify the dependencies between variables are vital
to many operations with graphical models, e.g., active learning and sensitivity analysis. Previously, pairwise information gain calculation has
involved a cost quadratic in network size. In this work, we show how
to perform a similar computation with cost linear in network size. The
loss function that allows this is of a form amenable to computation by
dynamic programming. The message-passing algorithm that results is
described and empirical results demonstrate large speedups without decrease in accuracy. In the cost-sensitive domains examined, superior accuracy is achieved.
1 Introduction
In a diagnosis problem, one wishes to select the best test (or observation) to make in order
to learn the most about a system of interest. Medical settings and disease diagnosis immediately come to mind, but sensor management (Krishnamurthy, 2002), sensitivity analysis
(Kjærulff & van der Gaag, 2000), and active learning (Anderson & Moore, 2005) all make
use of similar computations. These generally boil down to an all-pairs analysis between
observable variables (queries) and the variables of interest (targets.)
A common technique in the field of diagnosis is to compute the mutual information between
each query and target, then select the query that is expected to provide the most information
(Agostak & Weiss, 1999). Likewise, a sensitivity analysis between the query variable
and the target variables can be performed (Laskey, 1995; Kjærulff & van der Gaag, 2000).
However, both suffer from a quadratic blowup with respect to the number of queries and
targets.
In the current paper we present a loss function which can be used in a message-passing
framework to perform the all-pairs computation with cost linear in network size. We describe the loss function in Section 2, we describe a polynomial expression for networkwide expected loss in Section 3, and in Section 4 we present a message-passing scheme
to perform this computation efficiently for each node in the network. Section 5 shows the
empirical speedups and accuracy gains achieved by the algorithm.
1.1 Graphical Models
To simplify presentation, we will consider only Bayesian networks, but the results generalize to any graphical model. We also restrict the class of networks to those without
undirected loops, i.e., to polytrees, of which junction trees are a member. We have a Bayesian
network B, which is composed of an independence graph G and parameters for the CPT tables. The independence graph G = (X, E) is a directed acyclic graph (DAG) in which
X is a set of N discrete random variables {x_1, x_2, ..., x_N}, and the edges define the
independence relations. We will denote the marginal distribution of a single node, P(x|B),
by π_x, where (π_x)_i = P(x = i). We will omit conditioning on B for the remainder of the
paper. We indicate the number of states a node x can assume by |x|.
Additionally, each node x is assigned a cost matrix C_x, in which (C_x)_{ij} is the cost of believing x = j when in fact the true value is x* = i. A cost matrix of all zeros indicates that
one is not interested in the node's value. The cost matrix C is useful because inhomogeneous costs are a common feature of most realistic domains. This ubiquity results from the
fact that information almost always has a purpose, so that some variables are more relevant
than others, some states of a variable are more relevant than others, and confusion between
some pairs of states is more relevant than between other pairs.
For our task, we are given B, and wish to estimate P (X ) accurately by iteratively selecting
the next node to observe. Although typically only a subset of the nodes are queryable, we
will for the purposes of this paper assume that any node can be queried. How do we select
the most informative node to query? We must first define our objective function, which is
determined by our definition of error.
2 Risk Due to Uncertainty

The underlying error function for the information gain computation will be denoted
Error(P(X)||X*), which quantifies the loss associated with the current belief state P(X)
given the true values X*. There are several common candidates for this role: a log-loss
function, a log-loss function over marginals, and an expected 0-1 misclassification rate
(Kohavi & Wolpert, 1996). Constant factors have been omitted.

Error_log(P(X)||X*) = −log P(X*)  (1)
Error_mlog(P(X)||X*) = −Σ_{u∈X} log P(u*)  (2)
Error_01(P(X)||X*) = −Σ_{u∈X} P(u*)  (3)

where X is the set of nodes, and u* is the true value of node u. The error function of
Equation 1 will prove insufficient for our needs, as it cannot target individual node errors,
while the error function of Equation 2 results in an objective function that is quadratic in
cost to compute.
We will be exploring a more general form of Equation 3, which allows arbitrary weights to
be placed on different types of misclassification. For instance, we would like to specify
that misclassifying a node's state as 0 when it is actually 1 is different from misclassifying
it as 0 when it is actually in state 2. Different costs for each node can be specified with cost
matrices C_u for u ∈ X. The final error function is

Error(P(X)||X*) = Σ_{u∈X} Σ_{i=1}^{|u|} P(u = i) C_u[u*, i],  (4)
where C[i, j] is the (i, j)th element of the matrix C, and |u| is the number of states that
the node u can assume. The presence of the cost matrix C_u in Equation 4 constitutes a
significant advantage in real applications, which often need to specify inhomogeneous
costs.
There is a separate consideration, that of query cost, or cost(x), which is the cost incurred
by the action of observing x (e.g., the cost of a medical test.) If both the query cost and
the misclassification cost C are formulated in the same units, e.g., dollars, then they form
a coherent decision framework. The query costs will be omitted from this presentation for
clarity.
In general, one does not actually know the true values X* of the nodes, so one cannot
directly minimize the error function as described. Instead, the expected error, or risk, is
used:

Risk(P(X)) = Σ_x P(x) Error(P(X)||x),  (5)

which for the error function of Equation 4 reduces to

Risk(P(X)) = Σ_{u∈X} Σ_j Σ_k P(u = j) P(u = k) C_u[j, k]  (6)
           = Σ_{u∈X} π_u^T C_u π_u,  (7)

where (π_u)_i = P(u = i). This is the objective we will minimize. It quantifies: "On average, how much is our current ignorance going to cost us?" For comparison, note that the
log-loss function Error_log results in an entropy risk function Risk_log(P(X)) = H(X),
and the log-loss function over the marginals, Error_mlog, results in the risk function
Risk_mlog(P(X)) = Σ_{u∈X} H(u).
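Equation 7 is cheap to evaluate given the marginals and cost matrices. A minimal sketch with made-up numbers (the two nodes, their marginals, and their costs are all hypothetical):

```python
import numpy as np

def risk(marginals, costs):
    """Network-wide risk of Eq. 7: sum over nodes of pi_u^T C_u pi_u."""
    return sum(p @ C @ p for p, C in zip(marginals, costs))

# Two hypothetical nodes: a binary node with 0-1 misclassification cost,
# and a ternary "don't care" node whose all-zero cost matrix contributes nothing.
pi_a = np.array([0.8, 0.2])
pi_b = np.array([0.5, 0.3, 0.2])
C_a = np.array([[0.0, 1.0],
                [1.0, 0.0]])
C_b = np.zeros((3, 3))
print(risk([pi_a, pi_b], [C_a, C_b]))   # 2 * 0.8 * 0.2 = 0.32
```

The all-zero cost matrix on the second node illustrates how "don't care" variables drop out of the objective.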
Ultimately, we want to find the nodes that have the greatest effect on Risk(P(X)), so we
must condition Risk(P(X)) on the beliefs at each node. In other words, if we learned
that the true marginal probabilities of node x were π_x, what effect would that have on our
current risk; that is, what is Risk(P(X)|P(x) = π_x)? Discouragingly, however, any
change in π_x propagates to all the other beliefs in the network. It seems as if we must
perform several network evaluations for each node, a prohibitive cost for networks of any
appreciable size. However, we will show that in fact dynamic programming can perform
this computation for all nodes in only two passes through the network.
3 Risk Calculation

To clarify our objective, we wish to construct a function R_a(π) for each node a, where
R_a(π) = Risk(P(X)|P(a) = π). Suppose, for instance, that we learn that the value
of node a is equal to 3. Our P(X) is now constrained to have the marginal P(a) = π_a^0,
where (π_a^0)_3 = 1 and all other entries equal zero. If we had the function R_a in hand, we could
simply evaluate R_a(π_a^0) to immediately compute our new network-wide risk, which would
account for all the changes in beliefs at all the other nodes due to learning that a = 3. This
is exactly our objective; we would like to precompute R_a for all a ∈ X. Define

R_a(π) = Risk(P(X)|P(a) = π)  (8)
       = Σ_{u∈X} π_u^T C_u π_u |_{P(a)=π}.  (9)

This simply restates the risk definition of Equation 7 under the condition that P(a) = π.
As shown in the next theorem, the function R_a has a surprisingly simple form.

Theorem 3.1. For any node x, the function R_x(π) is a second-degree polynomial function
of the elements of π.

Proof. Define the matrix P_{u|v} for every pair of nodes (u, v), such that (P_{u|v})_{ij} =
P(u = i | v = j). Recall that the beliefs at node u have a strictly linear relationship to the beliefs
at node x, since

(π_u)_i = Σ_k P(u = i | x = k) P(x = k)  (10)

is equivalent to π_u = P_{u|x} π_x. Substituting P_{u|x} π_x for π_u in Equation 9 obtains

R_x(π) = Σ_{u∈X} π^T P_{u|x}^T C_u P_{u|x} π  (11)
       = π^T ( Σ_{u∈X} P_{u|x}^T C_u P_{u|x} ) π  (12)
       = π^T Γ_x π,  (13)

where Γ_x is an |x| × |x| matrix.

Note that the matrix Γ_x is sufficient to completely describe R_x, so we only need to consider
the computation of Γ_x for x ∈ X. From Equation 12, we see a simple equation for
computing these Γ_x directly (though expensively):

Γ_x = Σ_{u∈X} P_{u|x}^T C_u P_{u|x}.  (14)
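A small numeric check of Theorem 3.1 on a hypothetical two-node network a → b (all CPT and cost values invented): the quadratic form built from the direct sum over nodes reproduces the risk of Eq. 7 at the current marginal, and evaluating it at a point mass gives the residual risk that would remain after observing a.

```python
import numpy as np

# Hypothetical 2-node network a -> b, both binary; all numbers invented.
pi_a = np.array([0.6, 0.4])
P_ba = np.array([[0.9, 0.2],        # (P_{b|a})[i, j] = P(b = i | a = j)
                 [0.1, 0.8]])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # same 0-1 cost matrix on both nodes

# Gamma_a = C_a + P_{b|a}^T C_b P_{b|a}  (using P_{a|a} = I).
Gamma_a = C + P_ba.T @ C @ P_ba

# pi^T Gamma_a pi at the current marginal matches the direct risk of Eq. 7 ...
pi_b = P_ba @ pi_a
print(pi_a @ Gamma_a @ pi_a, pi_a @ C @ pi_a + pi_b @ C @ pi_b)   # both ~0.9512
# ... and at a point mass it gives the residual risk after observing a = 0.
print(np.array([1.0, 0.0]) @ Gamma_a @ np.array([1.0, 0.0]))      # ~0.18 = 2*0.9*0.1
```

The point-mass evaluations are exactly the quantities needed to score node a as a candidate query.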
Example #1
Given the 2-node network a → b, how do we calculate R_a(π), the total risk associated with
our beliefs about the value of node a? Our objective is thus to determine

R_a(π) = Risk(P(a, b)|P(a) = π)  (15)
       = π^T Γ_a π.  (16)

Equation 14 gives Γ_a as

Γ_a = P_{a|a}^T C_a P_{a|a} + P_{b|a}^T C_b P_{b|a}  (17)
    = C_a + P_{b|a}^T C_b P_{b|a},  (18)

with P_{a|a} = I by definition. The individual coefficients of Γ_a are thus

(Γ_a)_{ij} = (C_a)_{ij} + Σ_k Σ_l P(b = k | a = i) P(b = l | a = j) (C_b)_{kl}.  (19)

Now we can compute the relation between any marginal π at node a and our total network-wide risk via R_a(π). However, using Equation 14 to compute all the Γ would require
evaluating the entire network once per node. The function can, however, be decomposed
further, which enables much more efficient computation of Γ_x for x ∈ X.
3.1 Recursion

To create an efficient message-passing algorithm for computing Γ_x for all x ∈ X, we will
introduce Γ_x^W, where W is a subset of the network over which Risk(P(X)) is summed:

Γ_x^W = Σ_{u∈W} P_{u|x}^T C_u P_{u|x}.  (20)

This is otherwise identical to Equation 14. It implies, for instance, that Γ_x^x = C_x. More
importantly, these matrices can be usefully decomposed as follows.

Theorem 3.2. Γ_x^W = P_{y|x}^T Γ_y^W P_{y|x} if x and W are conditionally independent given y.

Proof. Note that P_{u|x} = P_{u|y} P_{y|x} for u ∈ W, since

(P_{u|y} P_{y|x})_{ij} = Σ_{k=1}^{|y|} P(u = i | y = k) P(y = k | x = j)  (21)
                       = P(u = i | x = j)  (22)
                       = (P_{u|x})_{ij}.  (23)

Step (21) is only true if x and u are conditionally independent given y. Substituting this
result into Equation 20, we conclude

Γ_x^W = Σ_{u∈W} P_{u|x}^T C_u P_{u|x}  (24)
      = Σ_{u∈W} P_{y|x}^T P_{u|y}^T Γ_u^u P_{u|y} P_{y|x}  (25)
      = P_{y|x}^T ( Σ_{u∈W} P_{u|y}^T Γ_u^u P_{u|y} ) P_{y|x}  (26)
      = P_{y|x}^T Γ_y^W P_{y|x}.  (27)
Example #2
Suppose we now have a 3-node network, a → b → c, and we are only interested in the
effect that node a has on the network-wide risk. Our objective is to compute

R_a(π) = Risk(P(a, b, c)|P(a) = π)  (28)
       = π^T Γ_a π,  (29)

where Γ_a is by definition

Γ_a = Γ_a^{abc}  (30)
    = C_a + P_{b|a}^T C_b P_{b|a} + P_{c|a}^T C_c P_{c|a}.  (31)

Using Theorem 3.2 and the fact that a is conditionally independent of c given b, we know

Γ_a^{abc} = Γ_a^a + P_{b|a}^T Γ_b^{bc} P_{b|a}  (32)
Γ_b^{bc} = Γ_b^b + P_{c|b}^T Γ_c^c P_{c|b}.  (33)

Substituting (33) into (32),

Γ_a = Γ_a^a + P_{b|a}^T ( Γ_b^b + P_{c|b}^T Γ_c^c P_{c|b} ) P_{b|a}  (34)
    = C_a + P_{b|a}^T ( C_b + P_{c|b}^T C_c P_{c|b} ) P_{b|a}.  (35)

Note that the coefficient matrix Γ_a is obtained from probabilities between neighboring nodes only,
without having to explicitly compute P_{c|a}.
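The agreement between the direct form (Eq. 31) and the nested form (Eq. 35) is easy to confirm numerically. The sketch below uses random CPTs and cost matrices on the chain a → b → c; all values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_cpt(rng, n, m):
    """Column-stochastic matrix: entry [i, j] = P(child = i | parent = j)."""
    M = rng.random((n, m))
    return M / M.sum(axis=0, keepdims=True)

P_ba, P_cb = random_cpt(rng, 2, 2), random_cpt(rng, 2, 2)
C_a, C_b, C_c = (rng.random((2, 2)) for _ in range(3))

# Direct form, Eq. 31, using P_{c|a} = P_{c|b} P_{b|a} on the chain:
P_ca = P_cb @ P_ba
direct = C_a + P_ba.T @ C_b @ P_ba + P_ca.T @ C_c @ P_ca

# Nested form, Eq. 35: fold c's cost into b, then b's total into a.
nested = C_a + P_ba.T @ (C_b + P_cb.T @ C_c @ P_cb) @ P_ba

print(np.allclose(direct, nested))   # True
```

The identity holds for any CPTs and costs, since the two forms differ only by associativity of the matrix products.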
4 Message Passing

We are now ready to define message passing. Messages are of two types, in-messages and
out-messages, denoted by λ and π, respectively. Out-messages π are passed from
parent to child, and in-messages λ are passed from child to parent. The messages from x to
y will be denoted λ_{xy} and π_{xy}. In the discrete case, λ_{xy} and π_{xy} will both be matrices
of size |y| × |y|. A message from y to x summarizes the effect, on the total risk, of the part of the network
from which x is d-separated by y. Messages relate to the Γ coefficients by the following
definition:

π_{yx} = Γ_x^{\y}  (36)
λ_{yx} = Γ_x^{\y},  (37)

where the (nonstandard) notation Γ_x^{\y} indicates the matrix Γ_x^V for which V is the set of all
the nodes in X that become unreachable from x if y is removed from the graph. In other words,
Γ_x^{\y} summarizes the effect, on the risk, of the part of the
network that x can only reach through y.
\y
Propagation: The message-passing scheme is organized to recursively compute the ?
matrices using Theorem 3.2. As can be seen from Equations 36 and 37, the two types of
messages are very similar in meaning. They differ only in that passing a message from a
parent to child automatically separates the child from the rest of the network the parent is
connected to, while a child-to-parent message does not necessarily separate the parent from
the rest of the network that the child is connected to (due to the ?explaining away? effect.)
The construction of the π-message involves a short sequence of basic linear algebra. The
π-message from x to child c is created from all other messages entering x except those
from c. The definition is

π_{xc} = P_{x|c}^T ( C_x + Σ_{u∈pa(x)} π_{ux} + Σ_{v∈ch(x)\c} λ_{vx} ) P_{x|c}.  (38)
?explaining away? effect, we must construct ?xu directly from the parents of x.
?
?
X
?cx ? Px|u +
?xu =PTx|u ?Cx +
c?ch(x)
X
w?pa(x)\u
?
PTw|u ?Cw +
X
?vw +
v?pa(w)
X
c?ch(w)\x
?
?cw ? Pw|u
(39)
Messages are constructed (or "sent") whenever all of the required incoming messages are
present and that particular message has not already been sent. For example, the out-message
π_xc can be sent only when messages from all the parents of x and all the children
of x (save c) are present. The overall effect of this constraint is a single leaves-inward
propagation followed by a single root-outward propagation.
Initialization and Termination: Initialization occurs naturally at any singly-connected
(leaf) node x, where the message is by definition C_x. Termination occurs when no more
messages meet the criteria for sending. Once all message propagation is finished, for each
node x the coefficients λ_x can be computed by a simple summation:

    λ_x = Σ_{c∈ch(x)} λ_cx + Σ_{u∈pa(x)} π_ux + C_x    (40)
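To make the scheduling rule concrete, here is a small illustrative sketch (not the paper's implementation; the node names, the set bookkeeping, and the function name `propagate` are our own). It fires a message from x to a neighbor n once messages from all other neighbors of x have arrived, which yields exactly the leaves-inward then root-outward sweep described above:

```python
# Sketch of the message-scheduling constraint: send x -> n only once all
# other neighbors of x have messaged x. Payloads (the matrix algebra of
# Eqs. 38-39) are omitted; only the firing order is modeled here.
from collections import defaultdict

def propagate(edges):
    """edges: list of (parent, child) pairs of a polytree."""
    nbrs = defaultdict(set)
    for p, c in edges:
        nbrs[p].add(c)
        nbrs[c].add(p)
    received = defaultdict(set)   # node -> set of neighbors heard from
    sent = set()                  # directed (src, dst) messages already sent
    progress = True
    while progress:
        progress = False
        for x in list(nbrs):
            for n in list(nbrs[x]):
                if (x, n) in sent:
                    continue
                # ready when every neighbor of x except n has messaged x
                if nbrs[x] - {n} <= received[x]:
                    sent.add((x, n))
                    received[n].add(x)
                    progress = True
    return sent
```

Leaves are ready immediately (they have no other neighbors), so the sweep starts there; on a tree, exactly two messages traverse each edge, one in each direction.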
[Figure 1: Comparison of execution times with synthetic polytrees; execution time in seconds vs. number of nodes (up to 1000) for the MI and Cost Prop algorithms.]
Propagation runs in time linear in the number of nodes once the initial local probabilities
are calculated. The local probabilities required are the matrices P for each parent-child
probability, e.g., P(child = j | parent = i), and for each pair (not set) of parents that share
a child, P(parent = j | coparent = i). These are all immediately available from a junction
tree, or they can be obtained with a run of belief propagation.

It is worth noting that the apparent complexity of the λ, π message propagation equations
is due to the Bayes Net representation. The equivalent factor graph equations (not shown)
are markedly more succinct.
5 Experiments
The performance of the message-passing algorithm (hereafter CostProp) was compared
with a standard information gain algorithm which uses mutual information (hereafter MI).
The error function used by MI is from Equation 2, where

    Error^mlog(P(X) || X*) = − Σ_{x∈X} log P(x*),

with a corresponding risk function Risk(P(X)) = Σ_{x∈X} H(x). This
corresponds to selecting the node x that has the highest summed mutual information with
each of the target nodes (in this case, the set of target nodes is X and the set of query nodes
is also X). The computational cost of MI grows quadratically as the product of the number
of queries and of targets.
In order to test the speed and relative accuracy of CostProp, we generated random polytrees
with varying numbers of trinary nodes. The CPT tables were randomly generated with a
slight bias towards lower-entropy probabilities. The code was written in Matlab using the
Bayes Net Toolbox (Murphy, 2005).
Speed: We generated polytrees of sizes ranging from 2 to 1000 nodes and ran the MI
algorithm, the CostProp algorithm, and a random-query algorithm on each. The two nonrandom algorithms were run using a junction tree, the build time of which was not included
in the reported run times of either algorithm. Even with the relatively slow Matlab code, the
speedup shown in Figure 1 is obvious. As expected, CostProp is many orders of magnitude
faster than the MI algorithm, and shows a qualitative difference in scaling properties.
Accuracy: Due to the slow running time of MI, the accuracy comparison was performed
on polytrees of size 20. For each run, a true assignment X* was generated from the tree but
was initially hidden from the algorithms. Each algorithm would then determine for itself
the best node to observe, receive the true value of that node, then select the next node to
observe, et cetera. The true error at each step was computed as the 0-1 error of Equation 3.
The reduction in error plotted against number of queries is shown in Figure 2.

[Figure 2: Performance on synthetic polytrees with symmetric costs; true cost vs. number of queries (5 to 20) for Random, MI, and Cost Prop.]

[Figure 3: Performance on synthetic polytrees with asymmetric costs; true cost vs. number of queries (5 to 20) for Random, MI, and Cost Prop.]

With uniform
cost matrices, performance of MI and CostProp are approximately equal on this task, but
both are better than random. We next made the cost matrices asymmetric by initializing
them such that confusing one pair of states was 100 times more costly than confusing the
other two pairs. The results of Figure 3 show that CostProp reduces error faster than MI,
presumably because it can accommodate the cost matrix information.
6 Discussion
We have described an all-pairs information gain calculation that scales linearly with network size. The objective function used has a polynomial form that allows for an efficient
message-passing algorithm. Empirical results demonstrate large speedups and even improved accuracy in cost-sensitive domains. Future work will explore other applications of
this method, including sensitivity analysis and active learning. Further research into other
uses for the belief polynomials will also be explored.
References
Agostak, J. M., & Weiss, J. (1999). Active Fusion for Diagnosis Guided by Mutual Information.
Proceedings of the 2nd International Conference on Information Fusion.
Anderson, B. S., & Moore, A. W. (2005). Active learning for hidden markov models: Objective
functions and algorithms. Proceedings of the 22nd International Conference on Machine Learning.
Kjærulff, U., & van der Gaag, L. (2000). Making sensitivity analysis computationally efficient.
Kohavi, R., & Wolpert, D. H. (1996). Bias Plus Variance Decomposition for Zero-One Loss Functions. Machine Learning : Proceedings of the Thirteenth International Conference. Morgan
Kaufmann.
Krishnamurthy, V. (2002). Algorithms for optimal scheduling and management of hidden markov
model sensors. IEEE Transactions on Signal Processing, 50, 1382?1397.
Laskey, K. B. (1995). Sensitivity Analysis for Probability Assessments in Bayesian Networks. IEEE
Transactions on Systems, Man, and Cybernetics.
Murphy, K. (2005). Bayes Net Toolbox for Matlab. U. C. Berkeley. http://www.ai.mit.edu/~murphyk/Software/BNT/bnt.html.
Xiaofei He1
Deng Cai2
Partha Niyogi1
Department of Computer Science, University of Chicago
{xiaofei, niyogi}@cs.uchicago.edu
2
Department of Computer Science, University of Illinois at Urbana-Champaign
[email protected]
1
Abstract
Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA),
Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 ? n2 image as a high
dimensional vector in Rn1 ?n2 , while an image represented in the plane
is intrinsically a matrix. In this paper, we propose a new algorithm called
Tensor Subspace Analysis (TSA). TSA considers an image as the second order tensor in Rn1 ? Rn2 , where Rn1 and Rn2 are two vector
spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by
TSA. TSA detects the intrinsic local geometrical structure of the tensor
space by learning a lower dimensional tensor subspace. We compare our
proposed approach with PCA, LDA and LPP methods on two standard
databases. Experimental results demonstrate that TSA achieves better
recognition rate, while being much more efficient.
1
Introduction
There is currently a great deal of interest in appearance-based approaches to face recognition [1], [5], [8]. When using appearance-based approaches, we usually represent an image
of size n1 × n2 pixels by a vector in R^{n1×n2}. Throughout this paper, we denote by face
space the set of all the face images. The face space is generally a low dimensional manifold embedded in the ambient space [6], [7], [10]. The typical linear algorithms for learning
such a face manifold for recognition include Principal Component Analysis (PCA), Linear
Discriminant Analysis (LDA) and Locality Preserving Projection (LPP) [4].
Most of previous works on statistical image analysis represent an image by a vector in
high-dimensional space. However, an image is intrinsically a matrix, or the second order tensor. The relationship between the rows vectors of the matrix and that between the
column vectors might be important for finding a projection, especially when the number
of training samples is small. Recently, multilinear algebra, the algebra of higher-order
tensors, was applied for analyzing the multifactor structure of image ensembles [9], [11],
[12]. Vasilescu and Terzopoulos have proposed a novel face representation algorithm called
Tensorface [9]. Tensorface represents the set of face images by a higher-order tensor and
extends Singular Value Decomposition (SVD) to higher-order tensor data. In this way, the
multiple factors related to expression, illumination and pose can be separated from different
dimensions of the tensor.
In this paper, we propose a new algorithm for image (human faces in particular) representation based on the considerations of multilinear algebra and differential geometry. We call
it Tensor Subspace Analysis (TSA). For an image of size n1 × n2, it is represented as a second-order tensor (or matrix) in the tensor space R^{n1} ⊗ R^{n2}. On the other hand, the face space is generally a submanifold embedded in R^{n1} ⊗ R^{n2}. Given some images sampled
from the face manifold, we can build an adjacency graph to model the local geometrical
structure of the manifold. TSA finds a projection that respects this graph structure. The
obtained tensor subspace provides an optimal linear approximation to the face manifold
in the sense of local isometry. Vasilescu shows how to extend SVD(PCA) to higher order
tensor data. We extend Laplacian based idea to tensor data.
It is worthwhile to highlight several aspects of the proposed approach here:
1. While traditional linear dimensionality reduction algorithms like PCA, LDA and
LPP find a map from Rn to Rl (l < n), TSA finds a map from Rn1 ? Rn2 to
Rl1 ? Rl2 (l1 < n1 , l2 < n2 ). This leads to structured dimensionality reduction.
2. TSA can be performed in either supervised, unsupervised, or semi-supervised
manner. When label information is available, it can be easily incorporated into
the graph structure. Also, by preserving neighborhood structure, TSA is less sensitive to noise and outliers.
3. The computation of TSA is very simple. It can be obtained by solving two eigenvector problems. The matrices in the eigen-problems are of size n1 ?n1 or n2 ?n2 ,
which are much smaller than the matrices of size n ? n (n = n1 ? n2 ) in PCA,
LDA and LPP. Therefore, TSA is much more computationally efficient in time
and storage. There are few parameters that are independently estimated, so performance in small data sets is very good.
4. TSA explicitly takes into account the manifold structure of the image space. The
local geometrical structure is modeled by an adjacency graph.
5. This paper is primarily focused on the second order tensors (or, matrices). However, the algorithm and analysis presented here can also be applied to higher order
tensors.
2
Tensor Subspace Analysis
In this section, we introduce a new algorithm called Tensor Subspace Analysis for learning a
tensor subspace which respects the geometrical and discriminative structures of the original
data space.
2.1
Laplacian based Dimensionality Reduction
Problems of dimensionality reduction have been widely considered. One general approach is based
on the graph Laplacian [2]. The objective function of the Laplacian eigenmap is as follows:

    min_f Σ_{ij} (f(x_i) − f(x_j))² S_ij
where S is a similarity matrix. These optimal functions are nonlinear but may be expensive
to compute.
A class of algorithms may be optimized by restricting the problem to more tractable families
of functions. One natural approach restricts to linear functions, giving rise to LPP [4]. In this
paper we will consider a more structured subset of linear functions that arises out of tensor
analysis. This provides greater computational benefits.
2.2
The Linear Dimensionality Reduction Problem in Tensor Space
The generic problem of linear dimensionality reduction in the second order tensor space
is the following. Given a set of data points X_1, ..., X_m in R^{n1} ⊗ R^{n2}, find two transformation matrices U of size n1 × l1 and V of size n2 × l2 that map these m points to a set of points Y_1, ..., Y_m ∈ R^{l1} ⊗ R^{l2} (l1 < n1, l2 < n2), such that Y_i "represents" X_i, where Y_i = U^T X_i V. Our method is of particular applicability in the special case where X_1, ..., X_m ∈ M and M is a nonlinear submanifold embedded in R^{n1} ⊗ R^{n2}.
2.3
Optimal Linear Embeddings
As we described previously, the face space is probably a nonlinear submanifold embedded
in the tensor space. One hopes then to estimate geometrical and topological properties of
the submanifold from random points (?scattered data?) lying on this unknown submanifold.
In this section, we consider the particular question of finding a linear subspace approximation to the submanifold in the sense of local isometry. Our method is fundamentally based
on LPP [4].
Given m data points X = {X_1, ..., X_m} sampled from the face submanifold M ⊂ R^{n1} ⊗ R^{n2}, one can build a nearest-neighbor graph G to model the local geometrical structure of
M. Let S be the weight matrix of G. A possible definition of S is as follows:

    S_ij = exp(−‖X_i − X_j‖² / t),  if X_i is among the k nearest neighbors of X_j,
                                     or X_j is among the k nearest neighbors of X_i;
    S_ij = 0,                        otherwise.                                        (1)

where t is a suitable constant. The function exp(−‖X_i − X_j‖²/t) is the so-called heat
kernel, which is intimately related to the manifold structure. ‖·‖ is the Frobenius norm of
a matrix, i.e. ‖A‖ = sqrt(Σ_i Σ_j a_ij²). When the label information is available, it can be easily
incorporated into the graph as follows:

    S_ij = exp(−‖X_i − X_j‖² / t),  if X_i and X_j share the same label;
    S_ij = 0,                        otherwise.                                        (2)
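As an informal illustration of Eq. (1) (a numpy sketch of our own; the function and parameter names are assumptions, not from the paper), the k-nearest-neighbor heat-kernel weights can be built as:

```python
# Sketch of the k-NN heat-kernel weight matrix of Eq. (1). Distances are
# squared Frobenius norms between image matrices; the final maximum
# symmetrizes the "X_i among k-NN of X_j, or vice versa" condition.
import numpy as np

def knn_heat_weights(X, k=5, t=1.0):
    """X: (m, n1, n2) stack of image matrices; returns the (m, m) matrix S."""
    m = X.shape[0]
    flat = X.reshape(m, -1)
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    S = np.zeros((m, m))
    for i in range(m):
        nn = np.argsort(d2[i])[1:k + 1]   # k nearest neighbors, excluding X_i itself
        S[i, nn] = np.exp(-d2[i, nn] / t)
    return np.maximum(S, S.T)
```

The supervised variant of Eq. (2) would simply replace the k-NN test with a same-label test.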
Let U and V be the transformation matrices. A reasonable transformation respecting the
graph structure can be obtained by solving the following objective function:

    min_{U,V} Σ_{ij} ‖U^T X_i V − U^T X_j V‖² S_ij    (3)
The objective function incurs a heavy penalty if neighboring points X_i and X_j are mapped
far apart. Therefore, minimizing it is an attempt to ensure that if X_i and X_j are "close"
then U^T X_i V and U^T X_j V are "close" as well. Let Y_i = U^T X_i V. Let D be a diagonal
2
matrix, Dii = j Sij . Since kAk = tr(AAT ), we see that:
1X
1X T
kU Xi V ? U T Xj V k2 Sij =
tr (Yi ? Yj )(Yi ? Yj )T Sij
2 ij
2 ij
1X
tr Yi YiT + Yj YjT ? Yi YjT ? Yj YiT Sij
2 ij
X
X
= tr
Dii Yi YiT ?
Sij Yi YjT
=
i
ij
= tr
X
Dii U T Xi V V T XiT U ?
i
= tr U T
X
Sij U T Xi V V T XjT U
ij
X
Dii Xi V V T XiT ?
i
X
ij
Sij Xi V V T XjT U
.
= tr U T (DV ? SV ) U
P
P
T
T
T
T
2
where DV =
i Dii Xi V V Xi and SV =
ij Sij Xi V V Xj . Similarly, kAk =
T
tr(A A), so we also have
1X T
kU Xi V ? U T Xj V k2 Sij
2 ij
1X
tr (Yi ? Yj )T (Yi ? Yj ) Sij
2 ij
=
1X
tr YiT Yi + YjT Yj ? YiT Yj ? YjT Yi Sij
2 ij
X
X
= tr
Dii YiT Yi ?
Sij YiT Yj
=
i
= tr V
T
ij
X
Dii XiT U U T Xi
?
i
X
ij
.
= tr V T (DU ? SU ) V
XiT U U T Xj V
P
P
where D_U = Σ_i D_ii X_i^T U U^T X_i and S_U = Σ_{ij} S_ij X_i^T U U^T X_j. Therefore, we should simultaneously minimize tr(U^T (D_V − S_V) U) and tr(V^T (D_U − S_U) V).
In addition to preserving the graph structure, we also aim at maximizing the global variance
on the manifold. Recall that the variance of a random variable x can be written as follows:

    var(x) = ∫_M (x − μ)² dP(x),    μ = ∫_M x dP(x)

where M is the data manifold, μ is the expected value of x, and dP is the probability
measure on the manifold. By spectral graph theory [3], dP can be discretely estimated by
the diagonal matrix D (D_ii = Σ_j S_ij) on the sample points. Let Y = U^T X V denote
the random variable in the tensor subspace and suppose the data points have a zero mean.
Thus, the weighted variance can be estimated as follows:

    var(Y) = Σ_i ‖Y_i‖² D_ii = Σ_i tr(Y_i^T Y_i) D_ii = Σ_i tr(V^T X_i^T U U^T X_i V) D_ii
           = tr( V^T ( Σ_i D_ii X_i^T U U^T X_i ) V ) = tr(V^T D_U V)
Similarly, ‖Y_i‖² = tr(Y_i Y_i^T), so we also have:

    var(Y) = Σ_i tr(Y_i Y_i^T) D_ii = tr( U^T ( Σ_i D_ii X_i V V^T X_i^T ) U ) = tr(U^T D_V U)

Finally, we get the following optimization problems:

    min_{U,V}  tr(U^T (D_V − S_V) U) / tr(U^T D_V U)    (4)

    min_{U,V}  tr(V^T (D_U − S_U) V) / tr(V^T D_U V)    (5)

The above two minimization problems (4) and (5) depend on each other, and hence cannot
be solved independently. In the following subsection, we describe a simple computational
method to solve these two optimization problems.
2.4
Computation
In this subsection, we discuss how to solve the optimization problems (4) and (5). It is easy
to see that the optimal U should be the generalized eigenvectors of (D_V − S_V, D_V) and the
optimal V should be the generalized eigenvectors of (D_U − S_U, D_U). However, it is difficult to compute the optimal U and V simultaneously since the matrices D_V, S_V, D_U, S_U
are not fixed. In this paper, we compute U and V iteratively as follows. We first fix U; then
V can be computed by solving the following generalized eigenvector problem:

    (D_U − S_U) v = λ D_U v    (6)

Once V is obtained, U can be updated by solving the following generalized eigenvector
problem:

    (D_V − S_V) u = λ D_V u    (7)

Thus, the optimal U and V can be obtained by iteratively computing the generalized eigenvectors of (6) and (7). In our experiments, U is initially set to the identity matrix. It is easy
to show that the matrices D_U, D_V, D_U − S_U, and D_V − S_V are all symmetric and positive
semi-definite.
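The alternating procedure of Eqs. (6) and (7) can be sketched as follows. This is a minimal illustration assuming numpy/scipy; the function name `tsa`, the dense loops, and the use of `scipy.linalg.eigh` for the generalized eigenproblem are our choices, not the authors' code:

```python
# Sketch of the alternating TSA iteration: fix U, solve Eq. (6) for V;
# fix V, solve Eq. (7) for U. Smallest generalized eigenvalues are kept,
# since (4) and (5) are minimized Rayleigh quotients.
import numpy as np
from scipy.linalg import eigh

def tsa(X, S, l1, l2, n_iter=3):
    """X: (m, n1, n2) stack of image matrices; S: (m, m) similarity matrix."""
    m, n1, n2 = X.shape
    d = S.sum(axis=1)                   # D_ii = sum_j S_ij
    U = np.eye(n1)                      # the paper initializes U to the identity
    V = np.eye(n2)
    for _ in range(n_iter):
        # Fix U: build D_U and S_U (n2 x n2), solve (D_U - S_U) v = lam D_U v.
        A = [Xi.T @ U for Xi in X]      # each A_i = X_i^T U
        D_U = sum(d[i] * A[i] @ A[i].T for i in range(m))
        S_U = sum(S[i, j] * A[i] @ A[j].T for i in range(m) for j in range(m))
        _, vecs = eigh(D_U - S_U, D_U)  # ascending generalized eigenvalues
        V = vecs[:, :l2]
        # Fix V: build D_V and S_V (n1 x n1), solve (D_V - S_V) u = lam D_V u.
        B = [Xi @ V for Xi in X]        # each B_i = X_i V
        D_V = sum(d[i] * B[i] @ B[i].T for i in range(m))
        S_V = sum(S[i, j] * B[i] @ B[j].T for i in range(m) for j in range(m))
        _, vecs = eigh(D_V - S_V, D_V)
        U = vecs[:, :l1]
    return U, V
```

The eigen-matrices here are only n1 × n1 and n2 × n2, which is the computational advantage over vectorized methods whose matrices are (n1·n2) × (n1·n2).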
3
Experimental Results
In this section, several experiments are carried out to show the efficiency and effectiveness
of our proposed algorithm for face recognition. We compare our algorithm with the Eigenface (PCA) [8], Fisherface (LDA) [1], and Laplacianface (LPP) [5] methods, three of the
most popular linear methods for face recognition.
Two face databases were used. The first one is the PIE (Pose, Illumination, and Experience)
database from CMU, and the second one is the ORL database. In all the experiments,
preprocessing to locate the faces was applied. Original images were normalized (in scale
and orientation) such that the two eyes were aligned at the same position. Then, the facial
areas were cropped into the final images for matching. The size of each cropped image in all
the experiments is 32 × 32 pixels, with 256 gray levels per pixel. No further preprocessing is
done. For the Eigenface, Fisherface, and Laplacianface methods, the image is represented
as a 1024-dimensional vector, while in our algorithm the image is represented as a (32 × 32)-dimensional matrix, or second-order tensor.
for classification for its simplicity.
In short, the recognition process has three steps. First, we calculate the face subspace from
the training set of face images; then the new face image to be identified is projected into
a d-dimensional subspace (PCA, LDA, or LPP) or a (d × d)-dimensional tensor subspace
(TSA); finally, the new face image is identified by the nearest-neighbor classifier. In our TSA
algorithm, the number of iterations is taken to be 3.
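The three-step pipeline can be sketched as below (an illustrative toy, not the experimental code; `project` and `nn_classify` are hypothetical helper names, and the identity matrices in the usage stand in for a learned subspace):

```python
# Sketch of the recognition pipeline: project images into the tensor
# subspace with U^T X_i V, then classify by 1-nearest-neighbor under
# the Frobenius norm.
import numpy as np

def project(X, U, V):
    """Map each image matrix X_i to its subspace representation U^T X_i V."""
    return np.array([U.T @ Xi @ V for Xi in X])

def nn_classify(Y_train, labels, Y_test):
    """1-NN in the (reduced) tensor space."""
    preds = []
    for Y in Y_test:
        dists = [np.linalg.norm(Y - Yt) for Yt in Y_train]
        preds.append(labels[int(np.argmin(dists))])
    return preds
```

With U and V from a TSA-style fit, training and test images are both pushed through `project` and then matched by `nn_classify`.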
3.1
Experiments on PIE Database
The CMU PIE face database contains 68 subjects with 41,368 face images as a whole. The
face images were captured by 13 synchronized cameras and 21 flashes, under varying pose,
illumination and expression. We choose the five near frontal poses (C05, C07, C09, C27,
[Figure 1: Error rate vs. dimensionality reduction on the PIE database. Panels: (a) 5 Train, (b) 10 Train, (c) 20 Train, (d) 30 Train; each plots error rate (%) against dims d (d × d for TSA) for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA), and Baseline.]
Table 1: Performance comparison on PIE database

                     5 Train                        10 Train
    Method           error    dim    time(s)        error    dim    time(s)
    Baseline         69.9%    1024   -              55.7%    1024   -
    Eigenfaces       69.9%    338    0.907          55.7%    654    5.297
    Fisherfaces      31.5%    67     1.843          22.4%    67     9.609
    Laplacianfaces   30.8%    67     2.375          21.1%    134    11.516
    TSA              27.9%    112    0.594          16.9%    132    2.063

                     20 Train                       30 Train
    Method           error    dim    time(s)        error    dim    time(s)
    Baseline         38.2%    1024   -              27.9%    1024   -
    Eigenfaces       38.1%    889    14.328         27.9%    990    15.453
    Fisherfaces      15.4%    67     35.828         7.77%    67     38.406
    Laplacianfaces   14.1%    146    39.172         7.13%    131    47.610
    TSA              9.64%    132    7.125          6.88%    122    15.688
C29) and use all the images under different illuminations and expressions, thus we get 170
images for each individual. For each individual, l(= 5, 10, 20, 30) images are randomly
selected for training and the rest are used for testing.
The training set is utilized to learn the subspace representation of the face manifold by using
Eigenface, Fisherface, Laplacianface and our algorithm. The testing images are projected
into the face subspace in which recognition is then performed. For each given l, we average
the results over 20 random splits. It would be important to note that the Laplacianface
algorithm and our algorithm share the same graph structure as defined in Eqn. (2).
Figure 1 shows the plots of error rate versus dimensionality reduction for the Eigenface,
Fisherface, Laplacianface, TSA and baseline methods. For the baseline method, the recognition is simply performed in the original 1024-dimensional image space without any dimensionality reduction. Note that, the upper bound of the dimensionality of Fisherface is
c − 1, where c is the number of individuals. For our TSA algorithm, we only show its performance in the (d × d)-dimensional tensor subspace, say, 1, 4, 9, etc. As can be seen, the
performance of the Eigenface, Fisherface, Laplacianface, and TSA algorithms varies with
the number of dimensions. We show the best results obtained by them in Table 1 and the
corresponding face subspaces are called optimal face subspace for each method.
It is found that our method outperforms the other four methods with different numbers
of training samples (5, 10, 20, 30) per individual. The Eigenface method performs the
worst. It does not obtain any improvement over the baseline method. The Fisherface and
Laplacianface methods perform comparably to each other. The dimensions of the optimal
subspaces are also given in Table 1.
As we have discussed, TSA can be implemented very efficiently. We show the running
time in seconds for each method in Table 1. As can be seen, TSA is much faster than the
[Figure 2: Error rate vs. dimensionality reduction on the ORL database. Panels: (a) 2 Train, (b) 3 Train, (c) 4 Train, (d) 5 Train; each plots error rate (%) against dims d for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA), and Baseline.]
Table 2: Performance comparison on ORL database

                     2 Train                        3 Train
    Method           error    dim    time           error    dim    time
    Baseline         30.2%    1024   -              22.4%    1024   -
    Eigenfaces       30.2%    79     38.13          22.3%    113    85.16
    Fisherfaces      25.2%    23     60.32          13.1%    39     119.69
    Laplacianfaces   22.2%    39     62.65          12.5%    39     136.25
    TSA              20.0%    102    65.00          10.7%    112    135.93

                     4 Train                        5 Train
    Method           error    dim    time           error    dim    time
    Baseline         16.0%    1024   -              11.7%    1024   -
    Eigenfaces       15.9%    122    141.72         11.6%    182    224.69
    Fisherfaces      9.17%    39     212.82         6.55%    39     355.63
    Laplacianfaces   8.54%    39     248.90         5.45%    40     410.78
    TSA              7.12%    102    201.40         4.75%    102    302.97
Eigenface, Fisherface and Laplacianface methods. All the algorithms were implemented in
Matlab 6.5 and run on a Intel P4 2.566GHz PC with 1GB memory.
3.2
Experiments on ORL Database
The ORL (Olivetti Research Laboratory) face database is used in this test. It consists of a
total of 400 face images, of a total of 40 people (10 samples per person). The images were
captured at different times and have different variations including expressions (open or
closed eyes, smiling or non-smiling) and facial details (glasses or no glasses). The images
were taken with a tolerance for some tilting and rotation of the face up to 20 degrees. For
each individual, l(= 2, 3, 4, 5) images are randomly selected for training and the rest are
used for testing.
The experimental design is the same as that in the last subsection. For each given l, we
average the results over 20 random splits. Figure 3.2 shows the plots of error rate versus
dimensionality reduction for the Eigenface, Fisherface, Laplacianface, TSA and baseline
methods. Note that, the presentation of the performance of the TSA algorithm is different
from that in the last subsection. Here, for a given d, we show its performance in the (d × d)-dimensional tensor subspace. The reason is for better comparison, since the Eigenface and
Laplacianface methods start to converge after 70 dimensions and there is no need to show
their performance after that. The best result obtained in the optimal subspace and the
running time (millisecond) of computing the eigenvectors for each method are shown in
Table 2.
As can be seen, our TSA algorithm performed the best in all the cases. The Fisherface
and Laplacianface methods performed comparatively to our method, while the Eigenface
method performed poorly.
4
Conclusions and Future Work
Tensor based face analysis (representation and recognition) is introduced in this paper in
order to detect the underlying nonlinear face manifold structure in the manner of tensor
subspace learning. The manifold structure is approximated by the adjacency graph computed from the data points. The optimal tensor subspace respecting the graph structure is
then obtained by solving an optimization problem. We call this Tensor Subspace Analysis
method.
Most traditional appearance-based face recognition methods (i.e., Eigenface, Fisherface,
and Laplacianface) consider an image as a vector in high-dimensional space. Such a representation ignores the spatial relationships between the pixels in the image. In our work, an
image is naturally represented as a matrix, or second-order tensor. Tensor representation
makes our algorithm much more computationally efficient than PCA, LDA, and LPP. Experimental results on PIE and ORL databases demonstrate the efficiency and effectiveness
of our method.
TSA is linear. Therefore, if the face manifold is highly nonlinear, it may fail to discover
the intrinsic geometrical structure. It remains unclear how to generalize our algorithm
to nonlinear case. Also, in our algorithm, the adjacency graph is induced from the local
geometry and class information. Different graph structures lead to different projections. It
remains unclear how to define the optimal graph structure in the sense of discrimination.
References
[1] P.N. Belhumeur, J.P. Hepanha, and D.J. Kriegman, ?Eigenfaces vs. fisherfaces: recognition
using class specific linear projection,?IEEE. Trans. Pattern Analysis and Machine Intelligence,
vol. 19, no. 7, pp. 711-720, July 1997.
[2] M. Belkin and P. Niyogi, ?Laplacian Eigenmaps and Spectral Techniques for Embedding and
Clustering ,? Advances in Neural Information Processing Systems 14, 2001.
[3] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number
92, 1997.
[4] X. He and P. Niyogi, ?Locality Preserving Projections,?Advance in Neural Information
Processing Systems 16, Vancouver, Canada, December 2003.
[5] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, ?Face Recognition using Laplacianfaces,?IEEE. Trans. Pattern Analysis and Machine Intelligence, vol. 27, No. 3, 2005.
[6] S. Roweis, and L. K. Saul, ?Nonlinear Dimensionality Reduction by Locally Linear Embedding,? Science, vol 290, 22 December 2000.
[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford, ?A Global Geometric Framework for Nonlinear Dimensionality Reduction,? Science, vol 290, 22 December 2000.
[8] M. Turk and A. Pentland, ?Eigenfaces for recognition,? Journal of Cognitive Neuroscience,
3(1):71-86, 1991.
[9] M. A. O. Vasilescu and D. Terzopoulos, ?Multilinear Subspace Analysis for Image Ensembles,?
IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[10] K. Q. Weinberger and L. K. Saul, ?Unsupervised Learning of Image Manifolds by SemiDefinite
Programming,? IEEE Conference on Computer Vision and Pattern Recognition, Washington,
DC, 2004.
[11] J. Yang, D. Zhang, A. Frangi, and J. Yang, ?Two-dimensional PCA: a new approach to
appearance-based face representation and recognition,?IEEE. Trans. Pattern Analysis and Machine Intelligence, vol. 26, No. 1, 2004.
[12] J. Ye, R. Janardan, Q. Li, ?Two-Dimensional Linear Discriminant Analysis ,? Advances in
Neural Information Processing Systems 17, 2004.
BASINS OF ATTRACTION FOR
ELECTRONIC NEURAL NETWORKS
C. M. Marcus
R. M. Westervelt
Division of Applied Sciences and Department of Physics
Harvard University, Cambridge, MA 02138
ABSTRACT
We have studied the basins of attraction for fixed point and
oscillatory attractors in an electronic analog neural network. Basin
measurement circuitry periodically opens the network feedback loop,
loads raster-scanned initial conditions and examines the resulting
attractor.
Plotting the basins for fixed points (memories), we show
that overloading an associative memory network leads to irregular
basin shapes. The network also includes analog time delay circuitry,
and we have shown that delay in symmetric networks can introduce
basins for oscillatory attractors. Conditions leading to oscillation
are related to the presence of frustration; reducing frustration by
diluting the connections can stabilize a delay network.
(1) - INTRODUCTION
The dynamical system formed from an interconnected network of
nonlinear neuron-like elements can perform useful parallel
computation1-5. Recent progress in controlling the dynamics has
focussed on algorithms for encoding the location of fixed points1,4
and on the stability of the flow to fixed points3,5-8. An equally
important aspect of the dynamics is the structure of the basins of
attraction, which describe the location of all points in initial
condition space which flow to a particular attractor10,22.
In a useful associative memory, an initial state should lead
reliably to the "closest" memory.
This requirement suggests that a
well-behaved basin of attraction should evenly surround its attractor
and have a smooth and regular shape. One dimensional basin maps
plotting "pull in" probability against Hamming distance from an
attractor do not reveal the shape of the basin in the high
dimensional space of initial states9,19. Recently, a numerical study
of a Hopfield network with discrete time and two-state neurons showed
rough and irregular basin shapes in a two dimensional Hamming space,
suggesting that the high dimensional basin has a complicated
structure 10 . It is not known how the basin shapes change with the
size of the network and the connection rule.
We have investigated the basins of attraction in a network with
continuous state dynamics by building an electronic neural network
with eight variable gain sigmoid neurons and a three level (+,0,-)
interconnection matrix.
We have also built circuitry that can map
out the basins of attraction in two dimensional slices of initial
state space (Fig .1) .
The network and the basin measurements are
described in section 2.
@ American Institute of Physics 1988
In section 3, we show that the network operates well as an
associative memory and can retrieve up to four memories (eight fixed
points) without developing spurious attractors, but that for storage
of three or more memories, the basin shapes become irregular.
In section 4, we consider the effects of time delay. Real network
components cannot switch infinitely fast or propagate signals
instantaneously, so that delay is an intrinsic part of any hardware
implementation of a neural network. We have included a controllable
CCD (charge coupled device) analog time delay in each neuron to
investigate how time delay affects the dynamics of a neural network.
We find that networks with symmetric interconnection matrices, which
are guaranteed to converge to fixed points for no delay, show
collective sustained oscillations when time delay is present. By
discovering which configurations are maximally unstable to
oscillation, and looking at how these configurations appear in
networks, we are able to show that by diluting the interconnection
matrix, one can reduce or eliminate the oscillations in neural
networks with time delay.
(2) - NETWORK AND BASIN MEASUREMENT
A block diagram of the network and basin measurement circuit is
shown in fig.1.
[Fig. 1: Block diagram of the network and basin measurement system, showing the digital comparator and oscillation detector, the desired memory, and the sigmoid amplifiers.]
The main feedback loop consists of non-linear amplifiers
("neurons", see fig.2) with capacitive inputs and a resistor matrix
allowing interconnection strengths of -1/R, 0, +1/R (R = 100 kΩ). In
all basin measurements, the input capacitance was 10 nF, giving a
time constant of 1 ms. A charge coupled device (CCD) analog time
delay11 was built into each neuron, providing an adjustable delay per
neuron over a range 0.4 - 8 ms.
[Fig. 2: Electronic neuron. Non-linear gain provided by feedback diodes. Inset: nonlinear behavior at several different values of gain.]
Analog switches allow the feedback path to be periodically
disconnected and each neuron input charged to an initial voltage. The
network is then reconnected and settles to the attractor associated
with that set of initial conditions. Two of the initial voltages are
raster scanned (on a time scale that is long compared to the load/run
switching time) with function generators that are also connected to
the X and Y axes of a storage scope.
The beam of the scope is
activated when the network settles into a desired attractor,
producing an image of the basin for that attractor in a two-dimensional slice of initial condition space. The "attractor of
interest" can be one of the 2^8 fixed points or an oscillatory
attractor.
A simple example of this technique is the case of three neurons
with symmetric non-inverting connection shown in fig.3.
[Fig. 3: Basin of attraction for three neurons with symmetric non-inverting coupling. Slices are in the plane of the initial voltages on neurons 1 and 2. The two fixed points have all neurons saturated positive or all negative. The data are photographs of the scope screen.]
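The raster-scan measurement can be mimicked numerically. The sketch below is an illustration under assumed parameters (a unit RC time constant, tanh neurons, and a gain of 15 are choices made here), not the authors' circuit: it integrates dv_i/dt = -v_i + Σ_j T_ij g(v_j) for the three-neuron non-inverting network of Fig. 3 and records which saturation pattern each grid point of initial conditions reaches.

```python
import math

def g(v, gain=15.0):
    """Sigmoid neuron transfer function (gain value is an assumption)."""
    return math.tanh(gain * v)

def settle(T, v0, dt=0.01, steps=2000):
    """Euler-integrate dv_i/dt = -v_i + sum_j T[i][j] g(v_j), time in RC units."""
    v = list(v0)
    n = len(v)
    for _ in range(steps):
        v = [v[i] + dt * (-v[i] + sum(T[i][j] * g(v[j]) for j in range(n)))
             for i in range(n)]
    return tuple(1 if x > 0 else -1 for x in v)  # saturation pattern reached

# Three neurons, symmetric non-inverting coupling, as in Fig. 3.
T = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

# Raster-scan the initial voltages of neurons 1 and 2;
# neuron 3 starts at a fixed +0.1 V (an arbitrary choice).
basin = {}
for i in range(-5, 6):
    for j in range(-5, 6):
        basin[(i, j)] = settle(T, (0.2 * i, 0.2 * j, 0.1))

print(sorted(set(basin.values())))  # the two all-saturated attractors
```

As in the photographs, every scanned initial condition ends in one of the two fixed points with all neurons saturated positive or all negative.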
(3) BASINS FOR FIXED POINTS - ASSOCIATIVE MEMORY
Two dimensional slices of the eight dimensional initial condition
space (for the full network) reveal important qualitative features
about the high dimensional basins.
Fig. 4 shows a typical slice for
a network programmed with three memories according to a clipped Hebb
rule1,12:
    T_ij = Z( Σ_{a=1}^{m} ξ_i^a ξ_j^a ),   T_ii = 0                    (1)

where ξ^a is an N-component memory vector of 1's and -1's, m is
the number of memories, and Z clips its argument to +1, 0, or -1
according to its sign (the k=0 case of the threshold function of
eq. 4b). The memories were chosen to be orthogonal (ξ^a · ξ^b = N δ_ab).
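Written out, eq. 1 takes only a few lines to check. The sketch below is illustrative (a high-gain sign update stands in for the analog dynamics): it builds the three-valued matrix from the three orthogonal memories of Fig. 4 and verifies that each memory and its inverse is a fixed point.

```python
def clipped_hebb(memories):
    """Clipped Hebb rule of Eq. 1: T_ij = sign(sum_a xi_i^a xi_j^a), T_ii = 0."""
    n = len(memories[0])
    T = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                s = sum(xi[i] * xi[j] for xi in memories)
                T[i][j] = (s > 0) - (s < 0)  # clip to +1, 0, -1
    return T

def sgn_update(T, state):
    """High-gain (saturated) update: each neuron takes the sign of its input."""
    n = len(state)
    out = []
    for i in range(n):
        h = sum(T[i][j] * state[j] for j in range(n))
        out.append(1 if h > 0 else -1 if h < 0 else state[i])
    return out

# The three orthogonal memories stored in the Fig. 4 experiment.
memories = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1],
            [1, 1, -1, -1, 1, 1, -1, -1]]
T = clipped_hebb(memories)

for xi in memories:
    assert sgn_update(T, xi) == xi        # each memory is a fixed point
    neg = [-x for x in xi]
    assert sgn_update(T, neg) == neg      # and so is its inverse
```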
[Fig. 4: A slice of initial condition space shows the basins of attraction for five of the six fixed points for three memories in an eight-neuron Hopfield net. The stored memories were (1,1,1,1,-1,-1,-1,-1), (1,-1,1,-1,1,-1,1,-1), and (1,1,-1,-1,1,1,-1,-1). Learning rule was clipped Hebb (Eq. 1). Neuron gain = 15.]
Because the Hebb rule (eq. 1) makes ξ^a and -ξ^a stable attractors, a
three-memory network will have six fixed point attractors. In fig.4,
the basins for five of these attractors are visible, each produced
with a different rastering pattern to make it distinctive. Several
characteristic features should be noted:
-- All initial conditions lead to one of the memories (or
inverses); no spurious attractors were seen for three or four
memories. This is interesting in light of the well documented
emergence of spurious attractors at m/N ≈ 15% in larger networks with
discrete time2,15.
-- The basins have smooth and continuous edges.
-- The shapes of the basins as seen in this slice are irregular.
Ideally, a slice with attractors at each of the corners should have
rectangular basins, one basin in each quadrant of the slice and the
location of the lines dividing quadrants determined by the initial
conditions on the other neurons (the "unseen" dimensions). With three
or more memories the actual basins do not resemble this ideal form.
(4) TIME DELAY, FRUSTRATION AND SUSTAINED OSCILLATION
Arguments defining conditions which guarantee convergence to
fixed points3,5,6 (based, for example, on the construction of a
Liapunov function) generally assume instantaneous communication
between elements of the network.
In any hardware implementation,
these assumptions break down due to the finite switching speed of
amplifiers and the charging time of long interconnect lines13. It is
the ratio of delay/RC which is important for stability, so keeping
this ratio small limits how fast a neural network chip can be
designed to run.
Time delay is also relevant to biological neural
nets where propagation and response times are comparable14,15.
Our particular interest in this section is how time delay can
lead to sustained oscillation in networks which are known to be
stable when there is no delay.
We therefore restrict our attention
to networks with symmetric interconnection matrices (Tij = Tji).
An obvious ingredient in producing oscillations in a delay
network is feedback, or stated another way, a graph representing the
connections in a network must contain loops.
The simplest oscillatory structure made of delay elements is the
ring oscillator (fig.5a).
Though not a symmetric configuration, the
ring oscillator illustrates an important point: the ring will
oscillate only when there is negative feedback at dc - that is, when
the product of interconnections around the loop is negative. Positive
feedback at dc (loop product of connections > 0) will lead to
saturation.
Observing various symmetric configurations (e.g. fig.5b) in the
delayed-neuron network, we find that a negative product of
connections around a loop is also a necessary condition for sustained
oscillation in symmetric circuits.
An important difference between
the ring (fig.5a) and the symmetric loop (fig.5b) is that the period
of oscillation for the ring is the total accumulated delay around the
ring - the larger the ring the longer the period. In contrast, for
those symmetric configurations which have oscillatory attractors, the
period of oscillation is roughly twice the delay, regardless of the
size of the configuration or the value of delay. This indicates that
for symmetric configurations the important feedback path is local,
not around the loop.
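The distinction between the two kinds of loop can be illustrated with a crude Euler integration of delay-differential network equations. This is a sketch only; the gain of 10, the delay of one RC, and the tanh nonlinearity are assumptions, not the CCD circuit's parameters. A frustrated all-inverting triangle started from a symmetric state keeps oscillating, while the unfrustrated all-non-inverting triangle saturates to a fixed point.

```python
import math

def simulate(T, delay, t_end=40.0, dt=0.01, gain=10.0, x0=0.1):
    """Euler-integrate dx_i/dt = -x_i + sum_j T_ij tanh(gain * x_j(t - delay)),
    time in units of RC, with constant history x_i(t <= 0) = x0."""
    n = len(T)
    lag = int(round(delay / dt))
    hist = [[x0] * n]                       # hist[k] approximates x(k * dt)
    for k in range(int(round(t_end / dt))):
        past = hist[max(0, k - lag)]        # delayed state (constant history)
        cur = hist[-1]
        nxt = [cur[i] + dt * (-cur[i] +
               sum(T[i][j] * math.tanh(gain * past[j]) for j in range(n)))
               for i in range(n)]
        hist.append(nxt)
    return hist

def late_swing(hist, frac=0.25):
    """Peak-to-peak excursion of neuron 0 over the last quarter of the run."""
    tail = [x[0] for x in hist[-int(len(hist) * frac):]]
    return max(tail) - min(tail)

frustrated   = [[0, -1, -1], [-1, 0, -1], [-1, -1, 0]]   # loop product < 0
unfrustrated = [[0,  1,  1], [ 1, 0,  1], [ 1,  1, 0]]   # loop product > 0

osc = late_swing(simulate(frustrated,   delay=1.0))
fix = late_swing(simulate(unfrustrated, delay=1.0))
print(osc, fix)  # large persistent swing vs. essentially none
```

Consistent with the text, the oscillation period in the frustrated case is of order twice the delay, set by the local delayed feedback rather than by the loop length.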
[Fig. 5: (a) A ring oscillator: needs negative feedback at dc to oscillate. (b) A symmetrically connected triangle. This configuration is "frustrated" (defined in text), and has both oscillatory and fixed point attractors when neurons have delay. Legend: time delay neurons connected by non-inverting (solid) and inverting (dotted) connections.]
Configurations with loop connection product < 0 are important in
the theory of spin glasses16, where such configurations are called
"frustrated." Frustration in magnetic (spin) systems gives a measure
of "serious" bond disorder (disorder that cannot be removed by a
change of variables) which can lead to a spin glass state16,17.
Recent results based on the similarity between spin glasses and
symmetric neural networks have shown that storage capacity limitations
can be understood in terms of this bond disorder18,19. Restating our
observation above: we only find stable oscillatory modes in symmetric
networks with delay when there is frustration. A similar result for a
sign-symmetric network (Tij, Tji both ≥ 0 or ≤ 0) with no delay is
described by Hirsch6.
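Frustration as used here reduces to a one-line test: a closed loop of ±1 connection signs is frustrated when their product is negative. A small helper, with illustrative edge lists:

```python
def loop_product(signs):
    """Product of the +/-1 connection signs around a closed loop."""
    p = 1
    for s in signs:
        p *= s
    return p

def is_frustrated_loop(signs):
    """A loop is frustrated when its sign product is negative (negative
    dc feedback, as for the ring oscillator of Fig. 5a)."""
    return loop_product(signs) < 0

assert is_frustrated_loop([+1, +1, -1])      # one inverting edge: frustrated
assert not is_frustrated_loop([+1, -1, -1])  # two inversions cancel
assert is_frustrated_loop([-1, -1, -1])      # all-inverting "winner-take-all"
```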
We can set up the basin measurement system (fig.1) to plot the
basin of attraction for the oscillatory mode. Fig.6 shows a slice of
the oscillatory basin for a frustrated triangle of delay neurons.
[Fig. 6: Basin for the oscillatory attractor (cross-hatched region) in a frustrated triangle of delay-neurons. Connections were all symmetric and inverting; other frustrated configurations (e.g. two non-inverting, one inverting, all symmetric) were similar. (6a): delay = 0.48RC; inset shows trajectories to the fixed point and to the oscillatory mode for two close-lying initial conditions. (6b): delay = 0.61RC; basin size increases.]
A fully connected feedback associative network with more than one
memory will contain frustration. As more memories are added, the
amount of frustration will increase until memory retrieval
disappears. But before this point of memory saturation is reached,
delay could cause an oscillatory basin to open.
In order to design
out this possibility, one must understand how frustration, delay and
global stability are related. A first step in determining the
stability of a delay network is to consider which small
configurations are most prone to oscillation, and then see how these
"dangerous" configurations show up in the network. As described
above, we only need to consider frustrated configurations.
A frustrated configuration of neurons can be sparsely connected,
as in a loop, or densely connected, with all neurons connected to all
others, forming what is called in graph theory a "clique."
Representing a network with inverting and non-inverting connections
as a signed graph (edges carry + and -), we define a frustrated clique
as a fully connected set of vertices (r vertices, r(r-1)/2 edges)
with all sets of three vertices in the clique forming frustrated
triangles. Some examples of frustrated loops and cliques are shown in
fig. 7.
Notice that neurons connected with all inverting symmetric
connections, a configuration that is useful as a "winner-take-all"
circuit, is a frustrated clique.
[Fig. 7: Examples of frustrated loops and frustrated cliques (fully connected, all triangles frustrated). In the graph representation, vertices (black dots) are neurons and undirected edges are symmetric connections; dotted edges are inverting and solid edges non-inverting.]
We find that delayed neurons connected in a frustrated loop
longer than three neurons do not show sustained oscillation for any
value of delay (tested up to delay = 8RC).
In contrast, when delayed
neurons are connected in any frustrated clique configuration, we do
find basins of attraction for sustained oscillation as well as fixed
point attractors, and that the larger the frustrated clique, the more
easily it oscillates in the following ways: (1) For a given value of
delay/RC, the size of the oscillatory basin increases with r, the
size of the frustrated clique (fig. 8). (2) The critical value of
delay at which the volume of the oscillatory basin goes to zero
decreases with increasing r (fig.9); for r=8 the critical delay is
already less than 1/30 RC.
[Fig. 8: Size of basin for the oscillatory mode increases with the size of the frustrated clique. The delay is 0.46RC per neuron in each picture. Slices are in the space of initial voltages on neurons 1 and 2, other initial voltages near zero.]

[Fig. 9: The critical value of delay where the oscillatory mode vanishes, measured by reducing delay until the system leaves the oscillatory attractor. Delay is plotted in units of the characteristic time R0·C, where R0 = (Σj 1/Rij)^-1 = 10^5 Ω/(r-1) and C = 10 nF, against the size of the frustrated clique (r); the critical delay decreases faster than 1/(r-1).]
Having identified frustrated cliques as the maximally unstable
configuration of time delay neurons, we now ask how many cliques of a
given size do we expect to find in a large network.
A set of r vertices (neurons) can be fully connected by r(r-1)/2
edges of two types (+ or -) to form 2^{r(r-1)/2} different cliques. Of
these, 2^{r-1} will be frustrated cliques. Fig. 10 shows all 2^{4-1} = 8
cases for r=4.
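The count 2^{r-1} can be verified by brute force: enumerate all 2^{r(r-1)/2} sign assignments on the edges of the complete graph K_r and keep those in which every triangle has a negative edge product. This is an illustrative check, not part of the original experiment.

```python
from itertools import combinations, product

def count_frustrated_cliques(r):
    """Count sign assignments on K_r's edges making every triangle frustrated."""
    edges = list(combinations(range(r), 2))
    triangles = list(combinations(range(r), 3))
    count = 0
    for signs in product((+1, -1), repeat=len(edges)):
        s = dict(zip(edges, signs))
        if all(s[(a, b)] * s[(a, c)] * s[(b, c)] < 0 for a, b, c in triangles):
            count += 1
    return count

for r in (3, 4, 5):
    print(r, count_frustrated_cliques(r))  # 2**(r-1): 4, 8, 16
```

The result also follows analytically: every fully frustrated assignment has the form s_ij = -x_i x_j for some x in {±1}^r, and x and -x give the same assignment, leaving 2^r / 2 = 2^{r-1} cases.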
[Fig. 10: All graphs of size r=4 that are frustrated cliques (fully connected, every triangle frustrated). Solid lines = positive edges, dashed lines = negative edges.]
For a randomly connected network, this result combined with
results from random graph theory20 gives an expected number of
frustrated cliques of size r in a network of size N, E_N(r):

    E_N(r) = C(N,r) c(r,p)                                   (2)

    c(r,p) = 2^{-(r-1)(r-2)/2} p^{r(r-1)/2}                  (3)

where C(N,r) is the binomial coefficient and c(r,p) is defined as the
concentration of frustrated cliques. p is the connectance of the
network, defined as the probability that any two neurons are
connected. Eq.3 is the special case where + and - edges (non-inverting, inverting connections) are equally probable. We have also
generalized this result to the case p(+) ≠ p(-).
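Eqs. 2 and 3 are straightforward to evaluate. The sketch below (with N and the dilution values chosen only for illustration) reproduces the factor-of-100 and factor-of-2000 suppressions quoted later in the text for r=5 and r=6 at 60% connectance.

```python
from math import comb

def concentration(r, p):
    """Eq. 3: fraction of r-subsets forming a frustrated clique, for
    unbiased +/- edges with connectance p."""
    return 2.0 ** (-(r - 1) * (r - 2) / 2) * p ** (r * (r - 1) / 2)

def expected_frustrated_cliques(N, r, p):
    """Eq. 2: E_N(r) = C(N, r) * c(r, p)."""
    return comb(N, r) * concentration(r, p)

# Suppression of frustrated cliques when diluting from p = 1.0 to p = 0.6.
for r in (3, 4, 5, 6):
    ratio = concentration(r, 1.0) / concentration(r, 0.6)
    print(r, expected_frustrated_cliques(8, r, 1.0), round(ratio, 1))
```

Since the ratio is (1/p)^{r(r-1)/2}, large cliques are suppressed much faster than small ones under dilution, which is the point made by Fig. 11.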
Fig.11 shows the dramatic reduction in the concentration of all
frustrated configurations in a diluted random network. For the
general case (p(+) ≠ p(-)) we find that the negative connections
affect the concentrations of frustrated cliques more strongly than
the positive connections,
as expected (frustration requires negatives, not positives; see fig. 10).
[Fig. 11: Concentration of frustrated cliques of size r = 3, 4, 5, 6 in an unbiased random network, from eq.3, plotted against connectance p. Concentrations decrease rapidly as the network is diluted, especially for large cliques (note: log scale).]
When the interconnections in a network are specified by a
learning rule rather than at random, the expected numbers of any
configuration will differ from the above results.
We have compared
the number of frustrated triangles in large three-valued (+1,0,-1)
Hebb interconnection matrices (N=100,300,600) to the expected number
in a random matrix of the same size and connectance. The Hebb matrix
was constructed according to the rule:
    T_ij = Z_k( Σ_{a=1}^{m} ξ_i^a ξ_j^a );  T_ii = 0                  (4a)

    Z_k(x) = +1 for x > k;  0 for -k ≤ x ≤ k;  -1 for x < -k          (4b)

where m is the number of memories, Z_k is a threshold function with
cutoff k, and ξ^a is a random string of 1's and -1's. The matrix
constructed by eq.4 is roughly unbiased (equal number of positive and
negative connections) and has a connectance p(k).
Fig.12 shows the ratio of
frustrated triangles in a diluted Hebb matrix to the expected number
in a random graph with the same connectance for different numbers of
memories stored in the Hebb matrix. At all values of connectance, the
Hebb matrix has fewer frustrated triangles than the random matrix by
a ratio that is decreased by diluting the matrix or storing fewer
memories. The curves do not seem to depend on the size of the matrix,
N.
This result suggests that diluting a Hebb matrix breaks up
frustration even more efficiently than diluting a random matrix.
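The comparison behind Fig. 12 can be reproduced in miniature. This is a rough re-implementation with assumed parameters (N = 100 rather than 300, m = 15 random memories, cutoff k = 0), counting only frustrated triangles: in a random unbiased signed graph half of the connected triangles would be frustrated, and the Hebb matrix comes in below that.

```python
import random
from itertools import combinations

def clipped_hebb_matrix(N, m, k=0, rng=random):
    """Eq. 4: T_ij = Z_k(sum_a xi_i^a xi_j^a), with random +/-1 memories."""
    xis = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(m)]
    T = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            s = sum(xi[i] * xi[j] for xi in xis)
            T[i][j] = T[j][i] = (1 if s > k else -1 if s < -k else 0)
    return T

def frustrated_triangle_stats(T):
    """Return (#frustrated triangles, #fully connected triangles, connectance)."""
    N = len(T)
    frustrated = connected = 0
    for a, b, c in combinations(range(N), 3):
        prod = T[a][b] * T[a][c] * T[b][c]
        if prod != 0:
            connected += 1
            if prod < 0:
                frustrated += 1
    present = sum(1 for i in range(N) for j in range(i + 1, N) if T[i][j] != 0)
    return frustrated, connected, present / (N * (N - 1) / 2)

random.seed(0)
T = clipped_hebb_matrix(N=100, m=15)
fr, conn, p = frustrated_triangle_stats(T)
# With m odd and k = 0 the matrix has no zero entries, so p = 1 here;
# the learned correlations leave fewer than half the triangles frustrated.
print(fr, conn, round(p, 2), round(fr / conn, 2))
```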
[Fig. 12: The number of frustrated triangles in a (+,0,-) Hebb rule matrix (300x300) divided by the expected number in a random signed graph with equal connectance, plotted against connectance for m = 15, 25, 40, 55, and 100 random memories in the Hebb matrix. The lines are guides to the eye.]
The sensitive dependence of frustration on connectance suggests
that oscillatory modes in a large neural network with delay can be
eliminated by diluting the interconnection matrix.
As an example,
consider a unbiased random network with delay = RC/10.
From fig.9,
only frustrated cliques of size r=5 or larger have oscillatory basins
for this value of delay; frustration in smaller configurations in the
network cannot lead to sustained oscillation in the network.
Diluting the connectance to 60% will reduce the concentration of
frustrated cliques with r=5 by a factor of over 100 and r=6 by a
factor of 2000.
The reduction would be even greater for a clipped
Hebb matrix.
Results from spin glass theory21 suggest that diluting a clipped
Hebb matrix can actually improve the storage capacity for moderated
dilution, with a maximum in the capacity at a connectance of 61%. To
the extent this treatment applies to an analog continuous-time
network, we should expect that by diluting connections, oscillatory
modes can be killed before memory capacity is compromised.
We have confirmed the stabilizing effect of dilution in our
network: For a fully connected eight neuron network programmed with
three orthogonal memories according to eq.1, adding a delay of 0.4RC
opens large basins for sustained oscillation.
By randomly diluting
the interconnections to p ≈ 0.85, we were able to close the
oscillatory basins and recover a useful associative memory.
SUMMARY
We have investigated the structure of fixed point and oscillatory
basins of attraction in an electronic network of eight non-linear
amplifiers with controllable time delay and a three value (+,0,-)
interconnection matrix.
For fixed point attractors, we find that the network performs
well as an associative memory - no spurious attractors were seen for
up to four stored memories - but for three or more memories, the
shapes of the basins of attraction became irregular.
A network which is stable with no delay can have basins for
oscillatory attractors when time delay is present. For symmetric
networks with time delay, we only observe sustained oscillation when
there is frustration. Frustrated cliques (fully connected
configurations with all triangles frustrated), and not loops, are
most prone to oscillation, and the larger the frustrated clique, the
more easily it oscillates. The number of these "dangerous"
configurations in a large network can be greatly reduced by diluting
the connections. We have demonstrated that a network with a large
basin for an oscillatory attractor can be stabilized by dilution.
ACKNOWLEDGEMENTS
We thank K.L.Babcock, S.W.Teitsworth, S.Strogatz and P.Horowitz for
useful discussions. One of us (C.M.M.) acknowledges support as an AT&T
Bell Laboratories Scholar.
This work was supported by JSEP contract
no. N00014-84-K-0465.
REFERENCES
1) J.S. Denker, Physica 22D, 216 (1986).
2) J.J. Hopfield, Proc. Nat. Acad. Sci. 79, 2554 (1982).
3) J.J. Hopfield, Proc. Nat. Acad. Sci. 81, 3088 (1984).
4) J.S. Denker, Ed., Neural Networks for Computing, AIP Conf. Proc. 151 (1986).
5) M.A. Cohen, S. Grossberg, IEEE Trans. SMC-13, 815 (1983).
6) M.W. Hirsch, Convergence in Neural Nets, IEEE Conf. on Neural Networks, 1987.
7) K.L. Babcock, R.M. Westervelt, Physica 23D, 464 (1986).
8) K.L. Babcock, R.M. Westervelt, Physica 28D, 305 (1987).
9) See, for example: D.B. Schwartz et al., Appl. Phys. Lett. 50(16), 1110 (1987); or M.A. Sivilotti et al., in Ref. 4, pg. 408.
10) J.D. Keeler in Ref. 4, pg. 259.
11) CCD analog delay: EG&G Reticon RD5106A.
12) D.O. Hebb, The Organization of Behavior (J. Wiley, N.Y., 1949).
13) Delay in VLSI discussed in: A. Mukherjee, Introduction to nMOS and CMOS VLSI System Design (Prentice Hall, N.J., 1985).
14) U. an der Heiden, J. Math. Biology, 345 (1979).
15) M.C. Mackey, U. an der Heiden, J. Math. Biology, 221 (1984).
16) Theory of spin glasses reviewed in: K. Binder, A.P. Young, Rev. Mod. Phys. 58(4), 801 (1986).
17) E. Fradkin, B.A. Huberman, S.H. Shenker, Phys. Rev. B 18(9), 4789 (1978).
18) D.J. Amit, H. Gutfreund, H. Sompolinsky, Ann. Phys. 173, 30 (1987) and references therein.
19) J.L. van Hemmen, I. Morgenstern, Editors, Heidelberg Colloquium on Glassy Dynamics, Lecture Notes in Physics 275 (Springer-Verlag, Heidelberg, 1987).
20) P. Erdős, A. Rényi, Publ. Math. Inst. Hung. Acad. Sci. 5, 17 (1960).
21) I. Morgenstern in Ref. 19, pg. 399; H. Sompolinsky in Ref. 19, pg. 485.
22) J. Guckenheimer, P. Holmes, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields (Springer, N.Y., 1983).
Sejnowski, Yuhas, Goldstein and Jenkins
Combining Visual and Acoustic Speech Signals
with a Neural Network Improves Intelligibility
T .J. Sejnowski
The Salk Institute
and
Department of Biology
The University of
California at San Diego
San Diego, CA 92037
B.P. Yuhas
M.H. Goldstein, Jr.
Department of Electrical
and Computer
Engineering
The Johns Hopkins
University
Baltimore, MD 21218
R.E. Jenkins
The Applied Physics
Laboratory
The Johns Hopkins
University
Laurel, MD 20707
ABSTRACT
Acoustic speech recognition degrades in the presence of noise. Compensatory information is available from the visual speech signals
around the speaker's mouth. Previous attempts at using these
visual speech signals to improve automatic speech recognition systems have combined the acoustic and visual speech information at a
symbolic level using heuristic rules. In this paper, we demonstrate
an alternative approach to fusing the visual and acoustic speech
information by training feedforward neural networks to map the
visual signal onto the corresponding short-term spectral amplitude
envelope (STSAE) of the acoustic signal. This information can
be directly combined with the degraded acoustic STSAE. Significant improvements are demonstrated in vowel recognition from
noise-degraded acoustic signals. These results are compared to the
performance of humans, as well as other pattern matching and estimation algorithms.
1 INTRODUCTION
Current automatic speech recognition systems rely almost exclusively on the acoustic speech signal, and as a consequence, these systems often perform poorly in noisy
environments. To compensate for noise-degradation of the acoustic signal, one can
either attempt to remove the noise from the acoustic signal or supplement the acoustic signal with other sources of speech information. One such source is the visible
movements of the mouth. For humans, visual speech signals can improve speech
perception when the acoustic signal is degraded by noise (Sumby and Pollack, 1954)
and can serve as a source of speech information when the acoustic signal is completely absent through lipreading. How can these visual speech signals be used to
improve the automatic recognition of speech?
One speech recognition system that has extensively used the visual speech signals
was developed by Eric Petajan (1987). For a limited vocabulary, Petajan demonstrated that the visual speech signals can be used to significantly improve automatic
speech recognition compared to the acoustic recognition alone. The system relied
upon a codebook of images that were used to translate incoming images into corresponding symbols. These symbol strings were then compared to stored sequences
representing different words in the vocabulary. This categorical treatment of speech
signals is required because of the computational limitations of currently available
digital serial hardware.
This paper proposes an alternative method for processing visual speech signals
based on analog computation in a distributed network architecture. By using many
interconnected processors working in parallel large amounts of data can be handled
concurrently. In addition to speeding up the computation, this approach does not
require segmentation in the early stages of processing; rather, analog signals from
the visual and auditory pathways flow through networks in real time and can be
combined directly.
Results are presented from a series of experiments that use neural networks to process the visual speech signals of two talkers. In these preliminary experiments,
the results are limited to static images of vowels. We demonstrate that these networks are able to extract speech information from the visual images, and that this
information can be used to improve automatic vowel recognition.
2 VISUAL AND ACOUSTIC SPEECH SIGNALS
The acoustic speech signal can be modeled as the response of the vocal tract filter to
a sound source (Fant, 1960). The resonances of the vocal tract are called formants.
They often appear as peaks in the short-term power spectrum, and are sufficient to
identify the individual vowels (Peterson and Barney, 1952). The overall shape of
the short-time spectra is important for general speech perception (Cole, 1980).
The configuration of the articulators define the shape of the vocal tract and the
corresponding resonance characteristics of the filter. While some of the articulators
are visible on the face of the speaker (e.g., the lips, teeth and sometimes the tip
of the tongue), others are not. The contribution of the visible articulators to the
acoustic signal results in speech sounds that are much more susceptible to acoustic
noise distortion than are the contributions from the hidden articulators (Petajan,
1987), and therefore, the visual speech signal tends to complement the acoustic
signal. For example, the visibly distinct speech sounds /b/ and /k/ are among
the first pairs to be confused when presented acoustically in the presence of noise.
Because of this complementary structure, the perception of speech in noise is greatly
improved when both speech signals are present. How and at what level are these
two speech signals being combined?
In previous attempts at using the visual speech signals, the information from the
visual signal was incorporated into the recognition system after the signals were
categorized (Petajan, 1987). In the approach taken here, visual signals will be used
to resolve ambiguities in the acoustic signal before either is categorized. By combining these two sources of information at an early stage of processing, it is possible
to reduce the number of erroneous decisions made and increase the amount of information passed to later stages of processing (Summerfield, 1987). The additional
information provided by the visual signal can serve to constrain the possible interpretations of an ambiguous acoustic signal, or it can serve as an alternative source
of speech information when the acoustical signal is heavily noise-corrupted. In either case, a massive amount of computation must be performed on the raw data.
New massively-parallel architectures based on neural networks and new training
procedures have made this approach feasible.
3 INTERPRETING THE VISUAL SIGNALS
In our approach, the visual signal was mapped directly into an acoustic representation closely related to the vocal tract's transfer function (Summerfield, 1987). This
representation allowed the visual signal to be fused with the acoustic signal prior
to any symbolic encoding.
The visual signals provide only a partial description of the vocal tract transfer
function and that description is usually ambiguous. For a given visual signal there
are many possible configurations of the full vocal tract, and consequently many
possible corresponding acoustic signals. The goal was to define a good estimate of
that acoustic signal from the visual signal and then use that estimate in conjunction
with any residual acoustic information.
The speech signals used in these experiments were obtained from a male speaker
who was video taped while seated facing the camera, under well-lit conditions. The
visual and acoustic signals were then transferred and stored on laser disc (Bernstein
and Eberhardt, 1986), which allowed the access of individual video frames and the
corresponding sound track. The NTSC video standard is based upon 30 frames per
second and words are preserved as a series of frames on the laser disc. A data set
was constructed of 12 examples of 9 different vowels (Yuhas et al., 1989).
A reduced area-of-interest in the image was automatically defined and centered
around the mouth. The resulting sub-image was sampled to produce a topographically accurate image of 20 x 25 pixels that would serve to represent the visual
speech signal. While not the most efficient encoding one could use, it is faithful
to the parallel approach to computation advocated here and represents what one
might observe through an array of sensors.
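The crop-and-subsample step described above can be sketched as a block-averaging operation. The 80 x 100 pixel crop size below is an assumption; the paper fixes only the final 20 x 25 output resolution.

```python
import numpy as np

def mouth_patch(frame, center, patch=(80, 100), out_shape=(20, 25)):
    """Crop a region around the mouth and block-average it down to a
    topographic 20 x 25 pixel image (crop size is assumed, not from the
    paper)."""
    r, c = center
    ph, pw = patch
    sub = frame[r - ph // 2:r + ph // 2, c - pw // 2:c + pw // 2]
    oh, ow = out_shape
    # Average each (ph/oh) x (pw/ow) block of pixels down to one output pixel.
    return sub.reshape(oh, ph // oh, ow, pw // ow).mean(axis=(1, 3))
```

Each output pixel is the mean intensity of a small block, which preserves the topographically accurate layout the paper emphasizes.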
Along with each video frame on the laser disc there is 33 ms of acoustic speech. The
representation chosen for the acoustic output structure was the short-time spectral
amplitude envelope (STSAE) of the acoustic signal, because it is essential to speech
recognition and also closely related to the vocal tract's transfer function. It can be
calculated from the short-term power spectrum of the acoustic signal. The speech
signal was sampled and cepstral analysis was used to produced a smooth envelope
of the original power spectrum that could be sampled at 32 frequencies.
Figure 1: Typical lip images presented to the network.
Three-layered feedforward networks with non-linear units were used to perform the
mapping. A lip image was presented across 500 input units, and an estimated
STSAE was produced across 32 output units. Networks with five hidden units
were found to provide the necessary bandwidth while minimizing the effects of
over-learning. The standard backpropagation technique was used to compute the
error gradients for training the network. However, instead of using a fixed-step
steepest-descent algorithm for updating the weights, the error gradient was used
in a conjugate-gradient algorithm. The weights were changed only after all of the
training patterns were presented.
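A minimal version of the 500-5-32 network can be sketched as below. Plain batch gradient descent stands in for the paper's conjugate-gradient weight updates, and constants such as the learning rate and initialization scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LipNet:
    """500-input, 5-hidden, 32-output feedforward network, as in the paper;
    trained here with plain gradient descent rather than conjugate gradients."""
    def __init__(self, n_in=500, n_hid=5, n_out=32):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.1, (n_hid, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = 1.0 / (1.0 + np.exp(-(X @ self.W1 + self.b1)))  # sigmoid
        return self.h @ self.W2 + self.b2                        # linear output

    def train_step(self, X, Y, lr=0.01):
        """One full-batch gradient step on mean-squared error; returns the
        loss before the update."""
        out = self.forward(X)
        err = out - Y
        n = len(X)
        dW2 = self.h.T @ err / n
        db2 = err.mean(0)
        dh = err @ self.W2.T * self.h * (1 - self.h)  # backprop through sigmoid
        dW1 = X.T @ dh / n
        db1 = dh.mean(0)
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        return np.mean(err ** 2)
```

As in the paper, weights are changed only after all training patterns have been presented (full-batch updates).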
4 INTEGRATING THE VISUAL AND ACOUSTIC SPEECH SIGNALS
To evaluate the spectral estimates, a feedforward network was trained to recognize
vowels from their STSAE's. With no noise present, the trained network could
correctly categorized 100% of the 54 STSAE's in its training set: thus serving as a
perfect recognizer for this data. The vowel recognizer was then presented with speech
information through two channels, as shown in Fig. 2. The path on the bottom
represents the information obtained from the acoustic signal, while the path on the
top provides information obtained from the corresponding visual speech signal.
To assess the performance of the recognizer in noise, clean spectral envelopes were
systematically degraded by noise and then presented to the recognizer. In this
particular condition, no visual input was given to the network. The noise was
introduced by adding a normalized random vector to the STSAE. Noise corrupted
vectors were produced at 3 dB intervals from -12 dB to 24 dB. At each step 6
different vectors were produced, and the performance reported was the average.
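One plausible implementation of this degradation procedure is sketched below; the exact normalization of the added noise vector is not specified in the paper, so scaling the noise to the target energy ratio is an assumption.

```python
import numpy as np

def degrade(envelope, snr_db, rng):
    """Add a random vector scaled so that the ratio of signal energy to
    noise energy equals snr_db (in dB)."""
    noise = rng.normal(size=envelope.shape)
    noise *= (np.linalg.norm(envelope) /
              (np.linalg.norm(noise) * 10.0 ** (snr_db / 20.0)))
    return envelope + noise

def degraded_set(envelope, rng):
    """Six noisy versions at each 3 dB step from -12 dB to 24 dB, as in the
    text."""
    return {snr: [degrade(envelope, snr, rng) for _ in range(6)]
            for snr in range(-12, 25, 3)}
```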
Fig. 3 shows the recognition rates as a function of the speech-to-noise ratio. At a
speech-to-noise ratio of -12 dB, the recognizer was operating at chance or 11.1%.
Next, a network trained to estimate the spectral envelopes from images was used
to provide an independent STSAE input into the recognizer (along the top of Fig.
2). This network was not trained on any of the data that was used in training the
vowel recognizer. The task remained to combine these two STSAE's.
[Figure 2 (diagram): the visual speech signal passes through a neural network to give an STSAE estimated from the visual signal; the acoustic speech signal, with added noise, gives the STSAE of the acoustic signal; the two combined STSAE's are fed to the recognizer.]
Figure 2: A vowel recognizer that integrates the acoustic and visual speech signals.
We considered three different ways of combining the estimates obtained from visual
signals with the noised degraded acoustic envelopes. The first approach was to
simply average the two envelopes, which proved to be less than optimal. The
recognizer was able to identify 55.6% of the STSAE's estimated from the visual signal,
but when the visual estimate was combined with the noise degraded acoustic signal
the recognizer was only capable of 35% at a S/N of -12 dB. Similarly, at very high
signal-to-noise ratios, the combined input produced poorer results than the acoustic
signal alone provided. To correct for this, the two inputs needed to be weighted
according to the relative amount of information available from each source. A
weighting factor was introduced which was a function of speech-to-noise:
a S_visual + (1 - a) S_acoustic        (1)
The optimal value for the parameter a was found empirically to vary linearly with
the speech-to-noise ratio in dB. The value for a ranged from approximately 0.8
at a S/N of -12 dB to 0.0 at 24 dB. The results obtained from using the a-weighted
average are shown in Fig. 3.
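The weighted combination of Eq. (1), with the empirically reported endpoints (a = 0.8 at -12 dB falling linearly to a = 0.0 at 24 dB), can be sketched as:

```python
import numpy as np

def fuse(s_visual, s_acoustic, snr_db):
    """Eq. (1): a * S_visual + (1 - a) * S_acoustic, with a falling
    linearly from 0.8 at a S/N of -12 dB to 0.0 at 24 dB."""
    a = np.clip(0.8 * (24.0 - snr_db) / 36.0, 0.0, 0.8)
    return a * s_visual + (1.0 - a) * s_acoustic
```

At low S/N the fused envelope leans on the visual estimate; at high S/N it reduces to the acoustic envelope alone, matching the behavior reported in the text.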
The third method used to fuse the two STSAE's was with a second-order neural
network (Rumelhart et al. 1986). Sigma-pi networks were trained to take in noisedegraded acoustic envelopes and estimated envelopes from the corresponding visual
speech signal. The networks were able to recreate the noise-free acoustic envelope
with greater accuracy than any of the other methods, as measured by mean squared
error. This increased accuracy did not however translate into improved recognition
rates.
[Figure 3 (plot): percent correct on the vertical axis (0-100%) against S/N in dB on the horizontal axis (-15 to 27 dB).]
Figure 3: The visual contribution to speech recognition in noise. The lower curve
shows the performance of the recognizer under varying signal-to-noise conditions
using only the acoustic channel. The top curve shows the final improvement when
the two channels were combined using the a weighted average.
5 COMPARING PERFORMANCE
The performance of the network was compared to more traditional signal-processing
techniques.
5.1 K-NEAREST NEIGHBORS
In this first comparison, an estimate of the STSAE was obtained using a k-nearest
neighbors approach. The images in the training set were stored along with their
corresponding STSAE calculated from the acoustic signal. These images served
as the data base of stored templates. Individual images from the test set were
correlated against all of the stored images and the closest k images were selected.
The acoustic STSAE corresponding to the k selected images were then averaged
to produce an estimate of the STSAE corresponding to the test image. Using this
procedure for various values of k, average MSE was calculated for the test set. This
procedure was then repeated with the test and training set reversed.
For values of k between 2 and 6 the k-nearest neighbor estimator was able to produce
STSAE estimates with approximately the same accuracy as the neural networks.
Those networks evaluated after 500 training epochs produced estimates with 9%
more error than the KNN approach, while those weights corresponding to the networks' best performance, as defined above, produced estimates with 5% less error.
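The KNN baseline described above amounts to a correlation lookup. A sketch is given below; zero-mean normalized correlation is an assumption, since the paper says only that images were "correlated" against the stored templates.

```python
import numpy as np

def knn_stsae(test_image, train_images, train_stsaes, k=4):
    """Average the STSAE's of the k stored images that correlate best with
    the test image."""
    def normalize(x):
        x = x.ravel() - x.mean()
        return x / (np.linalg.norm(x) + 1e-12)
    t = normalize(test_image)
    corrs = np.array([normalize(im) @ t for im in train_images])
    nearest = np.argsort(-corrs)[:k]   # indices of the k best matches
    return train_stsaes[nearest].mean(axis=0)
```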
5.1.1 PRINCIPAL COMPONENT ANALYSIS
A second method of comparison was to obtain an STSAE estimate using a combination of optimal linear techniques. The first step was to encode the images using
a Hotelling transform, which produces an optimal encoding of an image with respect to a least-mean-squared error. The encoded image y_i was computed from the normalized image x_i using
y_i = A (x_i - m_x)        (2)
where m_x was the mean image. A was a transformation matrix whose rows were
the five largest eigenvectors of the covariance matrix of the images. The vector y_i represents the image as do the hidden units of the neural network.
The second step was to find a mapping from the encoded image vector y_i to the corresponding short-term spectral envelope s_i using a linear least-squares fit. For the y_i's calculated above, a B was found that provided the best estimate of the desired s_i:
s_i = B y_i        (3)
If we think of the matrix A as corresponding to the weights from the input layer to the hidden units, then B maps the hidden units to the output units.
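The two-step optimal-linear baseline (Hotelling encoding A, then a least-squares map B) can be sketched as below. SVD is used in place of an explicit eigendecomposition of the covariance matrix; it yields the same principal directions.

```python
import numpy as np

def fit_hotelling_lsq(images, stsaes, n_components=5):
    """Fit A (rows = top principal directions of the image covariance) and
    B (least-squares map from the 5 encoding coefficients to the 32-point
    envelope)."""
    X = images.reshape(len(images), -1)
    m = X.mean(axis=0)
    Xc = X - m
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    A = Vt[:n_components]                # encoding: y_i = A (x_i - m)
    Y = Xc @ A.T
    # Least-squares fit of the envelopes from the encoded images.
    B, *_ = np.linalg.lstsq(Y, stsaes, rcond=None)
    return m, A, B

def predict_stsae(image, m, A, B):
    """Encode an image and map the coefficients to an envelope estimate."""
    return (A @ (image.ravel() - m)) @ B
```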
The networks trained to produce STSAE estimates were far superior to those obtained using the coefficients of A and B. This was true not only for the training
data from which A and B were calculated, but also for the test data set. When
compared to networks trained for 500 epochs, the networks produced estimates of
the STSAE's that were 46% better on the training set and 12% better on the test
set.
6 CONCLUSION
Humans are capable of combining information received through distinct sensory
channels with great speed and ease. The combined use of the visual and acoustic speech signals is just one example of integrating information across modalities.
Sumby and Pollack (1954) have shown that the relative improvement provided by
the visual signal varies with the signal-to-noise ratio of the acoustic signal. By
combining the speech information available from the two speech signals before categorizing, we obtained performance that was comparable to that demonstrated by
humans.
We have shown that visual and acoustic speech information can be effectively fused
without requiring categorical preprocessing. The low-level integration of the two
speech signals was particularly useful when the signal-to-noise ratio ranged from 3
dB to 15 dB, where the combined signals were recognized with a greater accuracy
than either of the two component signals alone. In contrast, an independent categorical decisions on each channel would have required additional information in the
form of ad hoc rules to produce the same level of performance.
Lip reading research has traditionally focused on the identification and evaluation
of visual features (Montgomery and Jackson, 1983). Reducing the original speech
signals to a finite set of predefined parameters or to discrete symbols can waste
a tremendous amount of information. For an automatic recognition system this
information may prove to be useful at a later stage of processing. In our approach,
speech information in the visual signal is accessed without requiring discrete feature
analysis or making categorical decisions.
This line of research has consequences for other problems, such as target identification based on multiple sensors. For example, the same problems arise in designing
systems that combine radar and infrared data. Mapping into a common representation using neural network models could also be applied to these problem domains.
The key insight is to combine this information at a stage prior to categorization.
Neural network learning procedures allow systems to be constructed for performing
the mappings as long as sufficient data are available to train the network.
Acknowledgements
This research was supported by grant AFOSR-86-0256 from the Air Force Office of
Scientific Research and by the Applied Physics Laboratory's IRAD.
References
Bernstein, L.E. and Eberhardt, S.P. (1986). Johns Hopkins Lipreading Corpus I-II,
Johns Hopkins University, Baltimore, MD
Cole, R.A. (1980). (Ed.) Perception and Production of Fluent Speech, Lawrence
Erlbaum Assoc, Publishers, Hillsdale, NJ
Fant, G. (1960). Acoustic Theory of Speech Production. Mouton & Co., Publishers,
The Hague, Netherlands
Montgomery, A. and Jackson, P.L. (1983). Physical Characteristics of the lips
underlying vowel lipreading. J. Acoust. Soc. Am. 73, 2134-2144.
Petajan, E.D. (1987). An improved Automatic Lipreading System To Enhance
Speech Recognition. Bell Laboratories Technical Report No. 11251-871012-111TM.
Peterson, G.E. and Barney, H.L. (1952). Control Methods Used in a Study of the
Vowels. J. Acoust. Soc. Am. 24, 175-184.
Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986). Learning internal representations by error propagation. In: D.E. Rumelhart and J.L. McClelland. (Eds.)
Parallel Distributed Processing in the Microstructure of Cognition: Vol 1. Foundations MIT Press, Cambridge, MA
Sumby, W.H. and Pollack, I. (1954). Visual Contribution to Speech Intelligibility
in Noise. J. Acoust. Soc. Am. 26, 212-215.
Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audiovisual speech perception. In: B. Dodd and R. Campbell (Eds.) Hearing by Eye:
The Psychology of Lip-Reading, Lawrence-Erlbaum Assoc, Hillsdale, NJ.
Yuhas, B.P., Goldstein, M.H. Jr. and Sejnowski, T.J. (1989). Integration of Acoustic and Visual Speech Signals Using Neural Networks. IEEE Comm Magazine 27,
November 65-71.
Nonparametric inference of prior probabilities
from Bayes-optimal behavior
Liam Paninski*
Department of Statistics, Columbia University
[email protected]; http://www.stat.columbia.edu/~liam
Abstract
We discuss a method for obtaining a subject's a priori beliefs from
his/her behavior in a psychophysics context, under the assumption that
the behavior is (nearly) optimal from a Bayesian perspective. The
method is nonparametric in the sense that we do not assume that the
prior belongs to any fixed class of distributions (e.g., Gaussian). Despite
this increased generality, the method is relatively simple to implement,
being based in the simplest case on a linear programming algorithm, and
more generally on a straightforward maximum likelihood or maximum
a posteriori formulation, which turns out to be a convex optimization
problem (with no non-global local maxima) in many important cases. In
addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated
coin-flipping setting; in particular, the method is able to precisely track
the evolution of the subject?s posterior distribution as more and more data
are observed. We close by briefly discussing an interesting connection to
recent models of neural population coding.
Introduction
Bayesian methods have become quite popular in psychophysics and neuroscience (1-5); in
particular, a recent trend has been to interpret observed biases in perception and/or behavior
as optimal, in a Bayesian (average) sense, under ecologically-determined prior distributions
on the stimuli or behavioral contexts under study. For example, (2) interpret visual motion
illusions in terms of a prior weighted towards slow, smooth movements of objects in space.
In an experimental context, it is clearly desirable to empirically obtain estimates of the
prior the subject is operating under; the idea would be to then compare these experimental
estimates of the subject's prior with the ecological prior he or she "should" have been
using. Conversely, such an approach would have the potential to establish that the subject
is not behaving Bayes-optimally under any prior, but rather is in fact using a different, non-Bayesian strategy. Such tools would also be quite useful in the context of studies of learning
and generalization, in which we would like to track the time course of a subject's adaptation
to an experimentally-chosen prior distribution (5). Such estimates of the subject's prior
have in the past been rather qualitative, and/or limited to simple parametric families (e.g., the width of a Gaussian may be fit to the experimental data, but the actual Gaussian identity of the prior is not examined systematically).
*We thank N. Daw, P. Hoyer, S. Inati, K. Koerding, I. Nemenman, E. Simoncelli, A. Stocker, and D. Wolpert for helpful suggestions, and in particular P. Dayan for pointing out the connection to neural population coding models. This work was supported by funding from the Howard Hughes Medical Institute, Gatsby Charitable Trust, and by a Royal Society International Fellowship.
We present a more quantitative method here. We first discuss the method in the general case
of an arbitrarily-chosen loss function (the "cost" which we assume the subject is attempting
to minimize, on average), then examine a few important special cases (e.g., mean-square
and mean-absolute error) in which the technique may be simplified somewhat. The algorithms for determining the subject's prior distributions turn out to be surprisingly quick and
easy to code: the basic idea is that each observed stimulus-response pair provides a set of
constraints on what the actual prior could be. In the simplest case, these constraints are
linear, and the resulting algorithm is simply a version of linear programming, for which
very efficient algorithms exist. More generally, the constraints are probabilistic, and we
discuss likelihood-based methods for combining these noisy constraints (and in particular
when the resulting maximum likelihood, or maximum a posteriori, problem can be solved
efficiently via ascent methods, without fear of getting trapped in non-global local maxima).
Finally, we discuss Bayesian methods for representing the uncertainty in our estimates.
We should point out that related problems have appeared in the statistics literature, particularly under the subject of elicitation of expert opinion (6-8); in the machine learning literature, most recently in the area of "inverse reinforcement learning" (9); and in
the economics/game theory literature on utility learning (10). The experimental economics literature in particular is quite vast (where the relevance to gambling, price setting,
etc. is discussed at length, particularly in settings in which "rational", expected utility-maximizing, behavior seems to break down); see, e.g., Wakker's recent bibliography
(www1.fee.uva.nl/creed/wakker/refs/rfrncs.htm) for further references. Finally, it is worth
noting that the question of determining a subject's (or more precisely, an opponent's) priors in a gambling context, in particular, in the binary case of whether or not an opponent
will accept a bet, given a fixed table of outcomes vs. payoffs, has received attention
going back to the foundations of decision theory, most prominently in the discussions of
de Finetti and Savage. Nevertheless, we are unaware of any previous application of similar techniques (both for estimating a subject's true prior and for analyzing the uncertainty
associated with these estimates) in the psychophysical or neuroscience literature.
General case
Our technique for determining the subject's prior is based on several assumptions (some of
which will be relaxed below). To begin, we assume that the subject is behaving optimally
in a Bayesian sense. To be precise, we have four ingredients: a prior distribution on some
hidden parameter θ; observed input (stimulus) data, dependent in some probabilistic way
on θ; the subject's corresponding output estimates of the underlying θ, given the input
data; and finally a loss function D(·, ·) that penalizes bad estimates for θ. The fundamental
assumption is that, on each trial i, the subject is choosing the estimate θ̂_i of the underlying
parameter, given data x_i, to minimize the posterior average error
∫ p(θ|x_i) D(θ̂_i, θ) dθ ∝ ∫ p(θ) p(x_i|θ) D(θ̂_i, θ) dθ ,    (1)
where p(θ) is the prior on hidden parameters (the unknown object the experimenter is trying
to estimate), and p(x_i|θ) is the likelihood of data x_i given θ. For example, in the visual
motion example, θ could be the true underlying velocity of an object moving through space,
the observed data x_i could be a short, noise-contaminated movie of the object's motion, and
the subject would be asked to estimate the true motion θ given the data x_i and any prior
conceptions, p(θ), of how one expects objects to move. Note that we have also implicitly
assumed, in this simplest case, that both the loss D(·, ·) and likelihood functions p(x_i|θ)
are known, both to the subject and to the experimenter (perhaps from a preceding set of
"learning" trials).
So how can the experimenter actually estimate p(θ), given the likelihoods p(x|θ), the loss
function D(·, ·), and some set of data {x_i} with corresponding estimates {θ̂_i} minimizing
the posterior expected loss (1)? This turns out to be a linear programming problem (11),
for which very efficient algorithms exist (e.g., "linprog.m" in Matlab). To see why, first
note that the right hand side of expression (1) is linear in the prior p(θ). Second, we have a
large collection of linear constraints on p(θ): we know that
p(θ) ≥ 0    ∀θ ,    (2)

∫ p(θ) dθ = 1 ,    (3)

∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ 0    ∀z ,    (4)
where (2-3) are satisfied by any proper prior distribution and (4) is the maximizer condition
(1) expressed in slightly different language. (See also (10), who noted the same linear
programming structure in an application to cost function estimation, rather than the prior
estimation examined here.)
The solution to the linear programming problem defined by (2-4) isn't necessarily unique; it
corresponds to an intersection of half-spaces, which is convex in general. To come up with
a unique solution, we could maximize a concave "regularizing" function on this convex
set; possible such functions include, e.g., the entropy of p(θ), or its negative mean-square
derivative (this function is strictly concave on the space of all functions whose integral is
held fixed, as is the case here given constraint (3)); more generally, if we have some prior
information on the form of the priors the subject might be using, and this information can
be expressed in the "energy" form
P[p(θ)] ∝ e^{q[p(θ)]} ,
for a concave functional q[·], we could use the log of this "prior on priors" P. An alternative
solution would be to modify constraint (4) to
∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ −ε    ∀z ,
where we can then adjust the slack variable ε until the constraint set shrinks to a single
point. This leads directly to another linear programming problem (where we want to make
the linear function ε as large as possible, under the above constraints). Note that for this
last approach to work (for the linear programming problem to have a solution) we need
to ensure that the set defined by the constraints (2-4) is compact; this basically means that
the constraint set (4) needs to be sufficiently rich, which, in turn, means that sufficient data
(or sufficiently strong prior constraints) are required. We will return to this point below.
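The linear program defined by (2)-(4), with the slack variable maximized as just described, can be sketched numerically by discretizing θ on a grid. The following sketch is illustrative only: the grid size, Gaussian likelihoods, squared-error loss, and simulated Bayes-optimal subject are toy assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative prior-estimation LP on a discretized parameter grid.
G = 25
theta = np.linspace(0.0, 1.0, G)          # grid for the hidden parameter
dth = theta[1] - theta[0]
true_prior = np.exp(-0.5 * ((theta - 0.3) / 0.1) ** 2)
true_prior /= true_prior.sum() * dth      # integrates to 1 on the grid

sigma_obs = 0.15
rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 1.0, size=30)    # observed stimuli x_i

def lik(x):
    # p(x | theta) on the grid (Gaussian observation noise)
    return np.exp(-0.5 * ((x - theta) / sigma_obs) ** 2)

# Simulate a Bayes-optimal subject: squared-error loss -> posterior mean.
theta_hat = np.empty_like(x_obs)
for i, x in enumerate(x_obs):
    post = lik(x) * true_prior
    post /= post.sum() * dth
    theta_hat[i] = np.sum(theta * post) * dth

# LP variables [p(theta_1..G), eps]; maximize eps subject to, for each (i, z):
#   sum_g p_g p(x_i|theta_g) [D(that_i,theta_g) - D(z,theta_g)] dth + eps <= 0
rows = []
for i, x in enumerate(x_obs):
    L = lik(x)
    for z in theta:
        w = L * ((theta_hat[i] - theta) ** 2 - (z - theta) ** 2) * dth
        rows.append(np.append(w, 1.0))
A_ub = np.array(rows)
b_ub = np.zeros(len(rows))
A_eq = np.append(np.full(G, dth), 0.0)[None, :]   # integral of p equals 1
c = np.append(np.zeros(G), -1.0)                  # maximize eps
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * G + [(None, None)])
p_hat = res.x[:G]                                 # estimated prior on the grid
```

Each (trial, candidate z) pair contributes one inequality row, so the constraint matrix has (trials × grid) rows; restricting z to a coarser set keeps the LP small, which is the sufficiency-of-constraints issue the text raises.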
Finally, what if our primary assumption is not met? That is, what if subjects are not quite
behaving optimally with respect to p(θ)? It is possible to detect this situation in the above
framework, for example if the slack variable ε above is found to be negative. However, a
different, more probabilistic viewpoint can be taken. Assume the value of the choice θ̂_i is
optimal under some "comparison" noise, that is,
∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ σ η_i(z)    ∀z ,
with η_i(z) a random variable of scale σ > 0 (assume η to be i.i.d. for now, although this
may be generalized). If we assume this decision noise η has a log-concave density (i.e.,
the log of the density is a concave function; e.g., Gaussian, or exponential), then so does
its integral (12), and the resulting maximum likelihood problem has no non-global local
maxima and is therefore solvable by ascent methods. To see this, write the log-likelihood
of (p, σ) given data {x_i, θ̂_i} as
L_{x_i, θ̂_i}(p, σ) = Σ log ∫_{−∞}^{u_i(z)} dP_η(η) ,

with the sum over the set of all the constraints in (4) and

u_i(z) ≡ (1/σ) ∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ .
L is the sum of concave functions in u_i, and hence is concave itself, and has no non-global
local maxima in these variables; since σ and p are linearly related through u_i (and (p, σ)
live in a convex set), L has no non-global local maxima in (p, σ), either. Once again, this
maximum likelihood problem may be regularized by prior information¹, maximizing the
a posteriori likelihood L(p) + q[p] instead of L(p); this problem is similarly tractable by
ascent methods, by the concavity of q[·] (note that this "soft-constraint" problem reduces
exactly to the "hard" constraint problem (4) as the noise σ → 0)².
Note that the estimated value of the noise scale σ plays a similar role to that of the slack
variable ε, above, with the difference that σ can be much more sensitive to the worst trial
(that is, the trial on which the subject behaves most suboptimally); we can use either of
these slack variables to go back and ask about how close to optimally the subjects were
actually performing: large values of σ, for example, imply sub-optimal performance. An
additional interesting idea is to use the computed value of η_i as a kind of outlier test; η_i large
implies the trial was particularly suboptimal.
Special cases
Maximum a posteriori estimation: The maximum a posteriori (MAP) estimator corresponds to the Hamming distance loss function,
D(i, j) = 1(i ≠ j);
this implies that the constraints (4) have the simple form
p(θ̂_i) − p(z) L(θ̂_i, z) ≥ 0 ,
with L(θ̂_i, z) defined as the largest observed likelihood ratio for θ̂_i and z, that is,
L(θ̂_i, z) ≡ max_{x_i} p(x_i | z) / p(x_i | θ̂_i) ,

1
Overfitting here is a symptom of the fact that in some cases (particularly when few data samples
have been observed) many priors (even highly implausible priors) can explain the observed data
fairly well; in this case, it is often quite useful to penalize these ?implausible? priors, thus effectively
regularizing our estimates. Similar observations have appeared in the context of medical applications
of Markov random field methods (13).
2
Another possible application of this regularization idea is as follows. We may incorporate improper priors, that is, priors which may not integrate to unity (such priors frequently arise in the
analysis of reparameterization-invariant decision procedures, for example), without any major conceptual modification in our analysis, simply by removing the normalization constraint (3). However,
a problem arises: the zero measure, p(θ) ≡ 0, will always trivially satisfy the remaining constraints
(2) and (4). This problem could potentially be ameliorated by introducing a convex regularizing term
(or equivalently, a log-concave prior) on the total mass ∫ p(θ) dθ.
with the maximum taken over all x_i which led to the estimate θ̂_i. This setup is perhaps
most appropriate for a two-alternative forced choice situation, where the problem is one of
classification or discrimination, not estimation.
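For a discrete parameter space, these MAP-case constraints are easy to check directly. The following toy sketch, in which the three-state likelihood table and the trials are invented for illustration, builds the likelihood-ratio matrix L(θ̂_i, z) and tests whether a candidate prior is consistent with the subject's reported estimates.

```python
import numpy as np

# Toy check of the MAP-case constraints p(theta_hat) - p(z) L(theta_hat, z) >= 0
# on a three-state parameter space (likelihood table and trials are invented).
lik = np.array([[0.7, 0.2, 0.1],    # p(x | theta): rows = observations x,
                [0.2, 0.6, 0.2],    # columns = parameter values theta
                [0.1, 0.2, 0.7]])

# Observed (x, reported MAP estimate) pairs for three trials.
trials = [(0, 0), (1, 1), (2, 2)]

# L(theta_hat, z): largest observed likelihood ratio over trials with that estimate.
L = np.zeros((3, 3))
for x, th in trials:
    L[th] = np.maximum(L[th], lik[x] / lik[x, th])

def consistent(p, tol=1e-12):
    # A prior p explains the data if each reported estimate is a posterior mode:
    # p(theta_hat) >= p(z) * L(theta_hat, z) for every observed theta_hat and all z.
    return all(np.all(p[th] + tol >= p * L[th]) for _, th in trials)

uniform = np.array([1 / 3, 1 / 3, 1 / 3])
skewed = np.array([0.05, 0.05, 0.90])
print(consistent(uniform), consistent(skewed))   # -> True False
```

The skewed prior is rejected because it puts so much mass on the third state that reporting the first state after the first observation could not have been a posterior mode.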
Mean-square and absolute-error regression: Our discussion assumes an even simpler
form when the loss function D(·, ·) is taken to be squared error, D(x, y) = (x − y)², or
absolute error, D(x, y) = |x − y|. In this case it is convenient to work with a slightly
different noise model than the classification noise discussed above; instead, we may model
the subject's responses as optimal plus estimation noise. For squared error, the optimal
θ̂_i is known to be uniquely defined as the conditional mean of θ given x_i. Thus we may
replace the collection of linear inequality constraints (4) with a much smaller set of linear
equalities (a single equality per trial, instead of a single inequality per trial per z):
∫ p(x_i|θ) (θ − θ̂_i) p(θ) dθ = σ η_i ;    (5)
the corresponding likelihood, again, has no non-global local maxima if η has a log-concave
density. In the simplest case of Gaussian η, the maximum likelihood problem may be
solved by standard nonnegative least-squares (e.g., "lsqnonneg" or "quadprog" in Matlab).
In the absolute error case, the optimal θ̂_i is given by the conditional median of θ given
x_i (although recall that the median is not necessarily unique here); thus, the inequality
constraints (4) may again be replaced by equalities which are linear in p(θ):
∫_{−∞}^{θ̂_i} p(θ) p(x_i|θ) dθ − ∫_{θ̂_i}^{∞} p(θ) p(x_i|θ) dθ = σ η_i ;
again, for Gaussian η this may be solved via standard nonnegative regression, albeit with a
different constraint matrix. In each case, η_i retains its utility as an outlier score.
A worked example: learning the fairness of a coin
In this section we will work through a concrete example, to show how to put the ideas
discussed above into practice. We take perhaps the simplest possible example, for clarity:
the subject observes some number N of independent, identically distributed coin flips, and
on each trial i tells us his/her probability of observing tails on the next trial, given that
t = t(i) tails were observed in the first i trials³. Here the likelihood functions p(x_i|θ)
take the standard binomial form p(t(i) | p_tails) = (i choose t) p_tails^t (1 − p_tails)^(i−t) (note that it is
reasonable to assume that these likelihoods are known to the subject, at least approximately,
due to the ubiquity of binomial data).
Under our assumptions, the subject's estimates p̂_tails,i are given as the posterior mean
of p_tails given the number of tails observed up to trial i. This puts us directly in the
mean-square framework discussed in equation (5); we assume Gaussian estimation noise η,
construct a regression matrix A of N rows, with the i-th row given by p(t(i) | p_tails)(p_tails −
p̂_tails,i). To regularize our estimates, we add a small square-difference penalty of the form
q[p(θ)] = ∫ |dp(θ)/dθ|² dθ. Finally, we estimate

p̂(θ) = argmin_{p ≥ 0; ∫₀¹ p(θ) dθ = 1} ||Ap||₂² + λ q[p] ,

for λ ≈ 10⁻⁷; this estimate is equivalent to MAP estimation under a (weak) Gaussian prior
on the function p(θ) (truncated so that p(θ) ≥ 0), and is computed using quadprog.m.
3
We note in passing that this simple binomial paradigm has potential applications to ideal-observer analysis of classical neuroscientific tasks (e.g., synaptic release detection, or photon counting in retina), in addition to potential applications in psychophysics.
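A rough numerical translation of this procedure might look as follows. The grid size, the simulated bimodal prior, and the use of scipy's SLSQP solver (in place of the paper's Matlab quadprog.m) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Sketch of the coin-fairness estimator (all settings are illustrative).
G = 41
p_grid = np.linspace(0.0, 1.0, G)        # candidate values of p_tails
dp_ = p_grid[1] - p_grid[0]

# Simulated subject with a bimodal prior (expects unfair coins).
prior = (np.exp(-0.5 * ((p_grid - 0.2) / 0.08) ** 2)
         + np.exp(-0.5 * ((p_grid - 0.8) / 0.08) ** 2))
prior /= prior.sum() * dp_

rng = np.random.default_rng(1)
N = 150
flips = rng.random(N) < 0.5              # a fair coin, p_tails = 0.5
rows, t = [], 0
for i in range(1, N + 1):
    t += int(flips[i - 1])
    lik = binom.pmf(t, i, p_grid)        # p(t tails in i flips | p_tails)
    post = lik * prior
    post /= post.sum() * dp_
    p_hat_i = np.sum(p_grid * post) * dp_        # subject reports posterior mean
    rows.append(lik * (p_grid - p_hat_i))        # i-th row of the matrix A
A = np.array(rows)

# Minimize ||A p||^2 + lam * integral |dp/dtheta|^2, with p >= 0, integral p = 1.
lam = 1e-7
D1 = np.diff(np.eye(G), axis=0) / dp_    # first-difference (derivative) operator
def objective(p):
    return np.sum((A @ p) ** 2) + lam * np.sum((D1 @ p) ** 2)
res = minimize(objective, x0=np.full(G, 1.0), method="SLSQP",
               bounds=[(0, None)] * G,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() * dp_ - 1.0}],
               options={"maxiter": 500, "ftol": 1e-12})
p_est = res.x                            # estimate of the subject's prior
```

Each simulated trial contributes one equality-style regression row, so the whole estimate is a small quadratic program over the discretized prior.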
Figure 1: Learning the fairness of a coin (numerical simulation). Top panel: True prior distribution
on coin fairness. The bimodal nature of this prior indicates that the subject expects coins to be
unfair (skewed towards heads, ptails < .5, or tails, ptails > .5) more often than fair (ptails =
.5). Second: Observed data. Open circles indicate the fraction of observed tails t = t(i) as a
function of trial number i (the maximum likelihood estimate, MLE, of the fairness and a minimal
sufficient statistic for this problem); + symbols indicate the subject?s estimate of the coin?s fairness,
assumed to correspond to the posterior mean of the fairness under the subject?s prior. Note the
systematic deviations of the subject's estimate from the MLE; these deviations shrink as i increases
and the strength of the prior relative to the likelihood term decreases. Third: Binomial likelihood
terms (i choose t) p_tails^t (1 − p_tails)^(i−t). Color of trace corresponds to trial number i, as indicated in the previous
panel (traces are normalized for clarity). Fourth: Estimate of prior given 150 trials. Black trace
indicates the true prior (as in top panel); red indicates the estimate ±1 posterior standard error (computed
via importance sampling). Bottom: Tracking the evolution of the posterior. Black traces indicate
the subject's true posterior after observing 0 (thin trace), 50 (medium trace), and 100 (thick trace)
sample coin flips; as more data are observed, the subject becomes more and more confident about
the true fairness of the coin (p = .5), and the posteriors match the likelihood terms (c.f. third panel)
more closely. Red traces indicate the estimated posterior given the full 150 or just the last 100 or 50
trials, respectively (errorbars omitted for visibility). Note that the procedure tracks the evolution of
the subject's posterior quite accurately, given relatively few trials.
To place Bayesian confidence intervals around our estimate, we sample from the corresponding (truncated) Gaussian posterior distribution on p(θ) (via importance sampling with
a suitably shifted, rescaled truncated Gaussian proposal density; similar methods are applicable more generally in the non-Gaussian case via the usual posterior approximation
techniques, e.g. Laplace approximation). Figs. 1-2 demonstrate the accuracy of the estimated p̂(θ); in particular, the bottom panels show that the method accurately tracks the
evolution of the model subjects' posteriors as an increasing amount of data are observed.
Figure 2: Learning an unfair coin (p_tails = .25). Conventions as in Fig. 1.
Connection to neural population coding
It is interesting to note a connection to the neural population coding model studied in (14)
(with more recent work reviewed in (15)). The basic idea is that neural populations encode
not just stimuli, but probability distributions over stimuli (where the distribution describes
the uncertainty in the state of the encoded object). Here the experimentally observed data
are neural firing rates, which provide constraints on the underlying encoded "prior" distribution in terms of the individual tuning function of each cell in the observed population.
The simplest model is as follows: the observed spikes n_i from the i-th cell are Poisson-distributed, with rate a nonlinear function of a linear functional of some prior distribution,

n_i ∼ Poiss( g( ∫ p(θ) f(x_i, θ) dθ ) ) ,
where the kernel f is considered as the cell's "tuning function"; the log-concavity of the
likelihood of p is preserved for any nonlinearity g that is convex and log-concave, a class
including the linear rectifiers, exponentials, and power-laws (and studied more extensively
in (16)). Alternately, a simplified model is often used, e.g.:

n_i ∼ q( ( n_i − ∫ p(θ) f(x_i, θ) dθ ) / σ ) ,
with q a log-concave density (typically Gaussian) to preserve the concavity of the log-likelihood; in this case, the scale σ of the noise does not vary with the mean firing rate,
as it does in the Poisson model. In both cases, the observed firing rates act as constraints
oriented linearly with respect to p; in the latter case, the noise scale σ sets the strength, or
confidence, of each such constraint (2, 3). Thus, under this framework, given the simultaneously recorded activity of many cells {n_i} and some model for the tuning functions
f(x_i, θ), we can infer p(θ) (and represent the uncertainty in these estimates) using methods quite similar to those developed above.
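Under the simplified linear-Gaussian model, inferring the encoded distribution from a vector of firing rates reduces to nonnegative least squares. A toy sketch, in which the tuning-curve shapes, grid, and noise level are all assumptions:

```python
import numpy as np
from scipy.optimize import nnls

# Decoding an encoded distribution from firing rates under the simplified
# linear-Gaussian population model (all settings are illustrative).
G, C = 50, 30                            # theta grid points, number of cells
theta = np.linspace(-1.0, 1.0, G)
dth = theta[1] - theta[0]
centers = np.linspace(-1.0, 1.0, C)
F = np.exp(-0.5 * ((theta[None, :] - centers[:, None]) / 0.2) ** 2)  # f_c(theta)

p_true = np.exp(-0.5 * ((theta - 0.3) / 0.15) ** 2)
p_true /= p_true.sum() * dth             # the encoded distribution

rng = np.random.default_rng(3)
rates = F @ p_true * dth + 0.01 * rng.standard_normal(C)  # n_c = integral + noise

# Gaussian noise of fixed scale -> maximum likelihood is nonnegative least squares.
p_dec, rnorm = nnls(F * dth, rates)
```

With fewer cells than grid points the problem is underdetermined, so in practice one would add a smoothness regularizer, exactly as in the coin example above.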
Directions
The obvious open avenue for future research (aside from application to experimental data)
is to relax the assumptions: that the likelihood and cost function are both known, and that
the data are observed directly (without any noise). It seems fair to conjecture that the
subject can learn the likelihood and cost functions given enough data, but one would like to
test this directly, e.g. by estimating D(·, ·) and p together, perhaps under restrictions on the
form of D(·, ·). As emphasized above, the utility estimation problem has received a great
deal of attention, and it is plausible to expect that the methods proposed here for estimation
of the prior might be combined with previously-studied methods for utility elicitation and
estimation. It is also interesting to consider these elicitation methods in the context of
experimental design (8, 17, 18), in which we might actively seek stimuli xi to maximally
constrain the possible form of the prior and/or cost function.
References
1. D. Knill, W. Richards, eds., Perception as Bayesian Inference (Cambridge University Press,
1996).
2. Y. Weiss, E. Simoncelli, E. Adelson, Nature Neuroscience 5, 598 (2002).
3. Y. Weiss, D. Fleet, Statistical Theories of the Cortex (MIT Press, 2002), chap. Velocity likelihoods in biological and machine vision, pp. 77?96.
4. D. Kersten, P. Mamassian, A. Yuille, Annual Review of Psychology 55, 271 (2004).
5. K. Koerding, D. Wolpert, Nature 427, 244 (2004).
6. R. Hogarth, Journal of the American Statistical Association 70, 271 (1975).
7. J. Oakley, A. O?Hagan, Biometrika under review (2003).
8. P. Garthwaite, J. Kadane, A. O?Hagan, Handbook of Statistics (2004), chap. Elicitation.
9. A. Ng, S. Russell, ICML-17 (2000).
10. J. Blythe, AAAI02 (2002).
11. G. Strang, Linear algebra and its applications (Harcourt Brace, New York, 1988).
12. Y. Rinott, Annals of Probability 4, 1020 (1976).
13. M. Henrion, et al., Why is diagnosis using belief networks insensitive to imprecision in probabilities?, Tech. Rep. SMI-96-0637, Stanford (1996).
14. R. Zemel, P. Dayan, A. Pouget, Neural Computation 10, 403 (1998).
15. A. Pouget, P. Dayan, R. Zemel, Annual Reviews of Neuroscience 26, 381 (2003).
16. L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
17. K. Chaloner, I. Verdinelli, Statistical Science 10, 273 (1995).
18. L. Paninski, Advances in Neural Information Processing Systems 16 (2003).
Is Early Vision Optimized for Extracting
Higher-order Dependencies?
Yan Karklin
[email protected]
Michael S. Lewicki*
[email protected]
Computer Science Department &
Center for the Neural Basis of Cognition
Carnegie Mellon University
Abstract
Linear implementations of the efficient coding hypothesis, such as independent component analysis (ICA) and sparse coding models, have provided functional explanations for properties of simple cells in V1 [1, 2].
These models, however, ignore the non-linear behavior of neurons and
fail to match individual and population properties of neural receptive
fields in subtle but important ways. Hierarchical models, including Gaussian Scale Mixtures [3, 4] and other generative statistical models [5, 6],
can capture higher-order regularities in natural images and explain nonlinear aspects of neural processing such as normalization and context effects [6,7]. Previously, it had been assumed that the lower level representation is independent of the hierarchy, and had been fixed when training
these models. Here we examine the optimal lower-level representations
derived in the context of a hierarchical model and find that the resulting
representations are strikingly different from those based on linear models. Unlike the the basis functions and filters learned by ICA or sparse
coding, these functions individually more closely resemble simple cell
receptive fields and collectively span a broad range of spatial scales. Our
work unifies several related approaches and observations about natural
image structure and suggests that hierarchical models might yield better
representations of image structure throughout the hierarchy.
1 Introduction
Efficient coding hypothesis has been proposed as a guiding computational principle for
the analysis of early visual system and motivates the search for good statistical models of
natural images. Early work revealed that image statistics are highly non-Gaussian [8, 9],
and models such as independent component analysis (ICA) and sparse coding have been
developed to capture these statistics to form efficient representations of natural images. It
has been suggested that these models explain the basic computational goal of early visual
cortex, as evidenced by the similarity between the learned parameters and the measured
receptive fields of simple cells in V1.
*To whom correspondence should be addressed
In fact, it is not clear exactly how well these methods predict the shapes of neural receptive fields. There has been no thorough characterization of ICA and sparse coding results
for different datasets, pre-processing methods, and specific learning algorithms employed,
although some of these factors clearly affect the resulting representation [10]. When ICA
or sparse coding is applied to natural images, the resulting basis functions resemble Gabor functions [1, 2] ? 2D sine waves modulated by Gaussian envelopes ? which also
accurately model the shapes of simple cell receptive fields [11]. Often, these results are
visualized in a transformed space, by taking the logarithm of the pixel intensities, sphering (whitening) the image space, or filtering the images to flatten their spectrum. When
analyzed in the original image space, the learned filters (the models? analogues of neural
receptive fields) do not exhibit the multi-scale properties of the visual system, as they tend
to cluster at high spatial frequencies [10, 12]. Neural receptive fields, on the other hand,
span a broad range of spatial scales, and exhibit distributions of spatial phase and other
parameters unmatched by ICA and SC results [13,14]. Therefore, as models of early visual
processing, these models fail to predict accurately either the individual or the population
properties of cortical visual neurons.
Linear efficient coding methods are also limited in the type of statistical structure they can
capture. Applied to natural images, their coefficients contain significant residual dependencies that cannot be accounted for by the linear form of the models. Several solutions
have been proposed, including multiplicative Gaussian Scale Mixtures [4] and generative
hierarchical models [5, 6]. These models capture some of the observed dependencies; but
their analysis so far has been focused on the higher-order structure learned by the model.
Meanwhile, the lower-level representation is either chosen a priori [4] or adapted separately, in the absence of the hierarchy [6] or with a fixed hierarchical structure specified in
advance [5].
Here we examine whether the optimal lower-level representation of natural images is different when trained in the context of such non-linear hierarchical models. We also illustrate
how the model not only describes sparse marginal densities and magnitude dependencies,
but captures a variety of joint density functions that are consistent with previous observations and theoretical conjectures. We show that learned lower-level representations are
strikingly different from those learned by the linear models: they are more multi-scale,
spanning a wide range of spatial scales and phases of the Gabor sinusoid relative to the
Gaussian envelope. Finally, we place these results in the context of whitening, gain control, and non-linear neural processing.
2 Fully adaptable scale mixture model
A simple and scalable model for natural image patches is a linear factor model, in which the
data x are assumed to be generated as a linear combination of basis functions with additive
noise
$$x = Au + \epsilon. \qquad (1)$$
Typically, the noise is assumed to be Gaussian with variance $\sigma^2$, thus
$$P(x \mid A, u) \propto \exp\left( -\frac{1}{2\sigma^2} \sum_i |x - Au|_i^2 \right). \qquad (2)$$
The coefficients u are assumed to be mutually independent, and often modeled with sparse
distributions (e.g. Laplacian) that reflect the non-Gaussian statistics of natural scenes [8,9],
$$P(u) = \prod_i P(u_i) \propto \exp\left( -\sum_i |u_i| \right). \qquad (3)$$
We can then adapt the basis functions A to maximize the expected log-likelihood of the data $L = \langle \log P(x \mid A) \rangle$ over the data ensemble, thereby learning a compact, efficient representation of structure in natural images. This is the model underlying the sparse coding
algorithm [2] and closely related to independent component analysis (ICA) [1].
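As a concrete illustration (not from the paper), sampling from the generative model of equations (1)-(3) can be sketched as follows; the basis `A` here is a random placeholder rather than a learned dictionary, and the Laplacian scale is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 64, 64                 # e.g. 8x8 image patches, complete basis

A = rng.normal(size=(n_pixels, n_basis))   # placeholder basis functions
u = rng.laplace(scale=1.0, size=n_basis)   # sparse (Laplacian) coefficients, eq. (3)
eps = rng.normal(scale=0.1, size=n_pixels) # isotropic Gaussian noise, eq. (2)

x = A @ u + eps                            # eq. (1): x = Au + noise
print(x.shape)
```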
An alternative to fixed sparse priors for u (3) is to use a Gaussian Scale Mixture (GSM)
model [3]. In these models, each observed coefficient $u_i$ is modeled as the product of a random Gaussian variable $y_i$ and a multiplier $\lambda_i$,
$$u_i = \sqrt{\lambda_i}\, y_i. \qquad (4)$$
Conditional on the value of the multiplier $\lambda_i$, the probability $P(u_i \mid \lambda_i)$ is Gaussian with variance $\lambda_i$, but the form of the marginal distribution
$$P(u_i) = \int \mathcal{N}(0, \lambda_i)\, P(\lambda_i)\, d\lambda_i \qquad (5)$$
depends on the probability function of $\lambda_i$ and can assume a variety of shapes, including sparse heavy-tailed functions that fit the observed distributions of wavelet and ICA coefficients [4]. This type of model can also account for the observed dependencies among coefficients $u$, for example, by expressing them as pair-wise dependencies among the multiplier variables $\lambda$ [4, 15].
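A small simulation makes the heavy-tailed marginal of equation (5) concrete. The log-normal multiplier prior used here is an assumption for illustration; the text leaves $P(\lambda)$ general.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# GSM sample, eq. (4): u_i = sqrt(lambda_i) * y_i
lam = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # assumed multiplier prior P(lambda)
y = rng.normal(size=n)
u = np.sqrt(lam) * y

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return float((z ** 4).mean() - 3.0)

# A Gaussian has excess kurtosis near 0; the GSM marginal is heavy-tailed (sparse).
gauss_k = excess_kurtosis(rng.normal(size=n))
gsm_k = excess_kurtosis(u)
print(gauss_k, gsm_k)
```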
A more general model, proposed in [6, 16], employs a hierarchical prior for P (u) with
adapted parameters tuned to the global patterns in higher-order dependencies. Specifically,
the logarithm of the variances of P (u) is assumed to be a linear function of the higher-order
random variables v,
$$\log \sigma_u^2 = Bv. \qquad (6)$$
Conditional on the higher-order variables, the joint distribution of coefficients is factorisable, as in GSM. In fact, if the conditional density $P(u \mid v)$ is Gaussian, this Hierarchical Scale Mixture (HSM) is equivalent to a GSM model, with $\lambda = \sigma_u^2$ and $P(u \mid \lambda) = P(u \mid v) = \mathcal{N}(0, \exp(Bv))$, with the added advantage of a more flexible representation of higher-order statistical regularities in B. Whereas previous GSM models of
natural images focused on modeling local relationships between coefficients of fixed linear
transforms, this general hierarchical formulation is fully adaptable, allowing us to recover
the optimal lower-level representation A, as well as the higher-order components B.
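The full generative process implied by equations (1), (4), and (6) can be sketched as follows; `A` and `B` are random placeholders standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_u, n_v = 64, 64, 10

A = rng.normal(size=(n_pixels, n_u))        # lower-level basis (placeholder, not learned)
B = rng.normal(scale=0.5, size=(n_u, n_v))  # higher-order components (placeholder)

v = rng.normal(size=n_v)                    # standard-normal higher-order variables
log_var = B @ v                             # eq. (6): log sigma_u^2 = Bv
u = rng.normal(size=n_u) * np.exp(0.5 * log_var)  # u | v ~ N(0, exp(Bv))
x = A @ u + rng.normal(scale=0.1, size=n_pixels)  # eq. (1): x = Au + noise
print(x.shape)
```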
Parameter estimation in the HSM involves adapting the model parameters A and B to maximize the data log-likelihood $L = \langle \log P(x \mid A, B) \rangle$. The gradient descent algorithm for the estimation of B has been previously described (see [6]). The optimal lower-level basis A is computed similarly to the sparse coding algorithm: the goal is to minimize the reconstruction error of the inferred MAP estimate $\hat{u}$. However, $\hat{u}$ is estimated not with a fixed sparsifying prior, but with a concurrently adapted hierarchical prior. If we assume a Gaussian conditional density $P(u \mid v)$ and a standard-normal prior $P(v)$, the MAP estimates are computed as
$$\{\hat{u}, \hat{v}\} = \arg\max_{u,v} P(u, v \mid x, A, B) \qquad (7)$$
$$= \arg\max_{u,v} P(x \mid A, B, u, v)\, P(u \mid v)\, P(v) \qquad (8)$$
$$= \arg\min_{u,v} \left( \frac{1}{2\sigma^2} \sum_i |x - Au|_i^2 + \sum_j \left[ \frac{u_j^2}{2 e^{[Bv]_j}} + \frac{[Bv]_j}{2} \right] + \sum_k \frac{v_k^2}{2} \right). \qquad (9)$$
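As a sanity check, the quantity minimized in equation (9) can be written out directly in NumPy. This is a minimal sketch with random placeholder parameters, not the trained model; the default noise variance of 0.1 matches the value used in the Methods section.

```python
import numpy as np

def hsm_neg_log_posterior(u, v, x, A, B, sigma2=0.1):
    """Negative log posterior of eq. (9), minimized over (u, v) for MAP inference."""
    bv = B @ v
    recon = np.sum((x - A @ u) ** 2) / (2.0 * sigma2)          # Gaussian likelihood term
    prior_u = np.sum(u ** 2 / (2.0 * np.exp(bv)) + bv / 2.0)   # u | v ~ N(0, exp(Bv))
    prior_v = np.sum(v ** 2) / 2.0                             # standard-normal prior on v
    return recon + prior_u + prior_v

# Smoke test with random placeholder parameters (not a trained model).
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 16))
B = rng.normal(size=(16, 4))
x = rng.normal(size=16)
val = hsm_neg_log_posterior(rng.normal(size=16), rng.normal(size=4), x, A, B)
print(val)
```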
Marginalizing over the latent higher-order variables in the hierarchical models leads to
sparse distributions similar to the Laplacian and other density functions assumed in ICA.
[Figure 1 panel labels: Gaussian ($q_u = 2$), Laplacian ($q_u = 1$), generalized Gaussian ($q_u = 0.7$); HSM with B = [0;0], [1;1], [2;2], [1;-1], [2;-2], [1;-2].]
Figure 1: This model can describe a variety of joint density functions for coefficients u.
Here we show example scatter plots and contour plots of some bivariate densities. Top row:
Gaussian, Laplacian, and generalized Gaussian densities of the form $p(u) \propto \exp(-|u|^q)$.
Middle and bottom row: Hierarchical Scale Mixtures with different sets of parameters B.
For illustration, in the hierarchical models the dimensionality of v is 1, and the matrix B
is simply a column vector. These densities are computed by marginalizing over the latent
variables v, here assumed to follow a standard normal distribution. Even with this simple
hierarchy, the model can generate sparse star-shaped (bottom row) or radially symmetric
(middle row) densities, as well as more complex non-symmetric densities (bottom right). In
higher dimensions, it is possible to describe more complex joint distributions, with different
marginals along different projections.
However, although the model distribution for individual coefficients is similar to the fixed
sparse priors of ICA and sparse coding, the model is fundamentally non-linear and might
yield a different lower-level representation; the coefficients u are no longer mutually independent, and the optimal set of basis functions must account for this.
Also, the shape of the joint marginal distribution in the space of all the coefficients is more
complex than the i.i.d. joint density of the linear models. Bi-variate joint distributions of
GSM coefficients can capture non-linear dependencies in wavelet coefficients [4]. In the
fully adaptable HSM, however, the joint density can take a variety of shapes that depend on
the learned parameters B (figure 1). Note that this model can produce sparse, star-shaped
distributions as in the linear models, or radially symmetric distributions that cannot be
described by the linear models. Such joint density profiles have been observed empirically
in the responses of phase-offset wavelet coefficients to natural images and have inspired
polar transformation and quadrature pair models [17] (as well as connections to phase-invariant neural responses). The model described here can capture these joint densities and
others, but rather than assume this structure a priori, it learns it automatically from the data.
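One quantitative probe of these joint densities (an illustration, not from the paper) is the correlation between coefficient magnitudes: for the bivariate HSM of figure 1, it flips sign between a shared-variance setting like B = [2;2] and an opposed-variance setting like B = [2;-2].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sample_u(B, n):
    """Draw u from the 2D HSM of figure 1: v ~ N(0,1), u | v ~ N(0, exp(Bv))."""
    v = rng.normal(size=n)
    log_var = np.outer(v, B)                 # [Bv] for each sample (B is a 2-vector)
    return rng.normal(size=(n, 2)) * np.exp(0.5 * log_var)

def magnitude_corr(u):
    m = np.abs(u)
    return float(np.corrcoef(m[:, 0], m[:, 1])[0, 1])

corr_pos = magnitude_corr(sample_u(np.array([2.0, 2.0]), n))    # shared variance
corr_neg = magnitude_corr(sample_u(np.array([2.0, -2.0]), n))   # opposed variances
print(round(corr_pos, 2), round(corr_neg, 2))
```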
3 Methods
To examine how the lower-level representation is affected by the hierarchical model structure, we compared A learned by the sparse coding algorithm [2] and the HSM described
above. The models were trained on 20 × 20 image patches sampled from 40 images of outdoor scenes in the Kyoto dataset [12]. We applied a low-pass radially symmetric filter to the
full images to eliminate high corner frequencies (artifacts of the square sampling lattice),
and removed the DC component from each image patch, but did no further pre-processing.
All the results and analyses are reported in the original data space. The noise variance $\sigma^2$ was
set to 0.1, and the basis functions were initialized to small random values and adapted on
stochastically sampled batches of 300 patches. We ran the algorithm for 10,000 iterations
with a step size of 0.1 (tapered for the last 1,000 iterations, once model parameters were
relatively unchanging).
The parameters of the hierarchical model were estimated in a similar fashion. Gradient descent on A and B was performed in parallel using the MAP estimates $\hat{u}$ and $\hat{v}$. The step size for adapting B was gradually increased from 0.0001 to 0.01, because emergence of the variance patterns requires some stabilization of the basis functions in A.
Because encoding in the sparse coding and in the hierarchical model is a non-linear process,
it is not possible to compare the inverse of A to physiological data. Instead, we estimated
the corresponding filters using reverse correlation to derive a linear approximation to a
non-linear system, which is also a common method for characterizing V1 simple cells. We
analyzed the resulting filters by fitting them with 2D Gabor functions, then examining the
distribution of their frequencies, phase, and orientation parameters.
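The reverse-correlation step can be illustrated with a toy simulation (not the actual stimuli or model used here): for white Gaussian inputs, the response-weighted average of the stimuli recovers a linear approximation to a nonlinear system's filter. The filter `h_true` and the tanh nonlinearity below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_pix = 50_000, 25

h_true = np.zeros(n_pix)
h_true[10:15] = 1.0                      # hypothetical underlying linear filter
X = rng.normal(size=(n_stim, n_pix))     # white Gaussian "stimuli"
r = np.tanh(X @ h_true)                  # nonlinear responses

# Reverse correlation: response-weighted average of the stimuli.
# For white Gaussian stimuli this recovers the linear filter up to scale.
h_est = (X.T @ r) / n_stim
h_est /= np.linalg.norm(h_est)

similarity = float(h_est @ (h_true / np.linalg.norm(h_true)))
print(similarity)   # close to 1 with this much data
```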
4 Results
The shapes of basis functions and filters obtained with sparse coding have been previously
analyzed and compared to neural receptive fields [10, 14]. However, some of the reported
results were in the whitened space or obtained by training on filtered images. In the original
space, sparse coding basis functions have very particular shapes: except for a few large, low
frequency functions, all are localized, odd-symmetric, and span only a single period of the
sinusoid (figure 2, top left). The estimated filters are similar but smaller (figure 2, bottom
left), with peak spatial frequencies clustered at higher frequencies (figure 3).
In the hierarchical model, the learned representation is strikingly different (figure 2, right
panels). Both the basis and the filters span a wider range of spatial scales, a result previously
unobserved for models trained on non-preprocessed images, and one that is more consistent
with physiological data [13, 14]. Also, the shapes of the basis functions are different ?
they more closely resemble Gabor functions, although they tend to be less smooth than the
sparse coding basis functions. Both SC- and HSM-derived filters are well fit with Gabor
functions.
We also compared the distributions of spatial phases for filters obtained with sparse coding
and the hierarchical model (figure 4). While sparse coding filters exhibit a strong tendency
for odd-symmetric phase profiles, the hierarchical model results in a much more uniform
distribution of spatial phases. Although some phase asymmetry has been observed in simple cell receptive fields, their phase properties tend to be much more uniform than sparse
coding filters [14].
In the hierarchical model, the higher-order representation B is also adapted to the statistical
structure of natural images. Although the choice of the prior density for v (e.g. sparse or
Gaussian) can determine the type of structure captured in B, we discovered that it does
not affect the nature of the lower-level representation. For the results reported here, we
assumed a Gaussian prior on v. Thus, as in other multi-variate Gaussian models, the precise
directions of B are not important; the learned vectors only serve to collectively describe
the volume of the space. In this case, they capture the principal components of the log-variances. Because we were interested specifically in the lower-level representation, we did
not analyze the matrix B in detail, though the principal components of this space seem to
[Figure 2 panels: SC basis functions, HSM basis functions, SC filters, HSM filters.]
Figure 2: The lower-level representations learned by sparse coding (SC) and the hierarchical scale model (HSM). Shown are subsets of the learned basis functions and the estimates
for the filters obtained with reverse correlation. These functions are displayed in the original image space.
[Figure 3 panels: SC filters, HSM filters (polar plots; angular axis 0°-180°, radial axis 0-0.5 cycles/pixel).]
Figure 3: Scatter plots of peak frequencies and orientations of the Gabor functions fitted
to the estimated filters. The units on the radial scale are cycles/pixel and the solid line is
the Nyquist limit. Although both SC and HSM filters exhibit predominantly high spatial
frequencies, the hierarchical model yields a representation that tiles the spatial frequency
space much more evenly.
[Figure 4 panels: SC phase, HSM phase (histograms over 0 to π/2); SC frequency, HSM frequency (histograms over 0.06-0.50 cycles/pixel).]
Figure 4: The distributions of phases and frequencies for Gabor functions fitted to sparse
coding (SC) and hierarchical scale model (HSM) filters. The phase units specify the phase
of the sinusoid in relation to the peak of the Gaussian envelope of the Gabor function; 0 is
even-symmetric, π/2 is odd-symmetric. The frequency axes are in cycles/pixel.
group co-localized lower-level basis functions and separately represent spatial contrast and
oriented image structure. As reported previously [6,16], with a sparse prior on v, the model
learns higher-order components that individually capture complex spatial, orientation, and
scale regularities in image data.
5 Discussion
We have demonstrated that adapting a general hierarchical model yields lower-level representations that are significantly different from those obtained using fixed priors and linear
generative models. The resulting basis functions and filters are multi-scale and more consistent with several observed characteristics of neural receptive fields.
It is interesting that the learned representations are similar to the results obtained when ICA
or sparse coding is applied to whitened images (i.e. with a flattened power spectrum). This
might be explained by the fact that whitening "spheres" the input space, normalizing the
scale of different directions in the space. The hierarchical model is performing a similar
scaling operation through the inference of higher-order variables v that scale the priors
on basis function coefficients u. Thus the model can rely on a generic "white" lower-level representation, while employing an adaptive mechanism for normalizing the space,
which accounts for non-stationary statistics on an image-by-image basis [6]. A related
phenomenon in neural processing is gain control, which might be one specific type of a
general adaptation process.
The flexibility of the hierarchical model allows us to learn a lower-level representation that
is optimal in the context of the hierarchy. Thus, we expect the learned parameters to define
a better statistical model for natural images than other approaches in which the lower-level
representation or the higher-order dependencies are fixed in advance. For example, the flexible marginal distributions, illustrated in figure 1, should be able to capture a wider range of
statistical structure in natural images. One way to quantify the benefit of an adapted lower-level representation is to apply the model to problems like image de-noising and filling-in
missing pixels. Related models have achieved state-of-the-art performance [15, 18], and
we are currently investigating whether the added flexibility of the model discussed here
confers additional advantages.
Finally, although the results presented here are more consistent with the observed properties of neural receptive fields, several discrepancies remain. For example, our results, as
well as those of other statistical models, fail to account for the prevalence of low spatial frequency receptive fields observed in V1. This could be a result of the specific choice of the
distribution assumed by the model, although the described hierarchical framework makes
few assumptions about the joint distribution of basis function coefficients. More likely, the
non-stationary statistics of the natural scenes play a role in determining the properties of
the learned representation. As suggested by previous results [10], different image data-sets
can lead to different parameters. This provides a strong motivation for training models
with an "over-complete" basis, in which the number of basis functions is greater than the
dimensionality of the input data [19]. In this case, different subsets of the basis functions
can adapt to optimally represent different image contexts, and the population properties of
such over-complete representations could be significantly different. It would be particularly interesting to investigate representations learned in these models in the context of a
hierarchical model.
References
[1] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[2] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive-field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[3] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society B, 36(1):99-102, 1974.
[4] M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky. Random cascades on wavelet trees and their use in analyzing and modeling natural images. Applied and Computational Harmonic Analysis, 11:89-123, 2001.
[5] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13:1527-1558, 2001.
[6] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17:397-423, 2005.
[7] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nat. Neurosci., 4:819-825, 2001.
[8] D. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[9] D. R. Ruderman and W. Bialek. Statistics of natural images: Scaling in the woods. Physical Review Letters, 73(6):814-818, 1994.
[10] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society, London B, 265:359-366, 1998.
[11] J. P. Jones and L. A. Palmer. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1233-1258, 1987.
[12] E. Doi and M. S. Lewicki. Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in Neural Information Processing Systems 18, 2004.
[13] R. L. De Valois, D. G. Albrecht, and L. G. Thorell. Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22:545-559, 1982.
[14] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455-463, 2002.
[15] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using Gaussian scale mixtures in the wavelet domain. IEEE Transactions on Image Processing, 12:1338-1351, 2003.
[16] Y. Karklin and M. S. Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483-499, 2003.
[17] C. Zetzsche and G. Krieger. Nonlinear neurons and higher-order statistics: New approaches to human vision and electronic image processing. In B. Rogowitz and T. V. Pappas, editors, Proc. SPIE on Human Vision and Electronic Imaging IV, volume 3644, pages 2-33, 1999.
[18] M. S. Lewicki and B. A. Olshausen. A probabilistic framework for the adaptation and comparison of image codes. Journal of the Optical Society of America A, 16(7):1587-1601, 1999.
[19] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23), 1997.
Yixin Chen
Department of CS
Univ. of New Orleans
[email protected]
Ya Zhang
Department of EECS
Univ. of Kansas
[email protected]
Xiang Ji
NEC-Labs America, Inc.
[email protected]
Abstract
We present a novel spectral clustering method that enables users to incorporate prior knowledge of the size of clusters into the clustering process.
The cost function, which is named size regularized cut (SRcut), is defined
as the sum of the inter-cluster similarity and a regularization term measuring the relative size of two clusters. Finding a partition of the data set
to minimize SRcut is proved to be NP-complete. An approximation algorithm is proposed to solve a relaxed version of the optimization problem
as an eigenvalue problem. Evaluations over different data sets demonstrate that the method is not sensitive to outliers and performs better than
normalized cut.
1
Introduction
In recent years, spectral clustering based on graph partitioning theories has emerged as
one of the most effective data clustering tools. These methods model the given data set
as a weighted undirected graph. Each data instance is represented as a node. Each edge
is assigned a weight describing the similarity between the two nodes connected by the
edge. Clustering is then accomplished by finding the best cuts of the graph that optimize
certain predefined cost functions. The optimization usually leads to the computation of the
top eigenvectors of certain graph affinity matrices, and the clustering result can be derived
from the obtained eigen-space [12, 6]. Many cost functions, such as the ratio cut [3],
average association [15], spectral k-means [19], normalized cut [15], min-max cut [7], and
a measure using conductance and cut [9] have been proposed along with the corresponding
eigen-systems for the data clustering purpose.
The above data clustering methods, as well as most other methods in the literature, bear a
common characteristic that manages to generate results maximizing the intra-cluster similarity, and/or minimizing the inter-cluster similarity. These approaches perform well in
some cases, but fail drastically when target data sets possess complex, extreme data distributions, and when the user has special needs for the data clustering task. For example, it
has been pointed out by several researchers that normalized cut sometimes displays sensitivity to outliers [7, 14]. Normalized cut tends to find a cluster consisting of a very small
number of points if those points are far away from the center of the data set [14].
There has been an abundance of prior work on embedding a user's prior knowledge of the
data set in the clustering process. Kernighan and Lin [11] applied a local search procedure
that maintained two equally sized clusters while trying to minimize the association between
the clusters. Wagstaff et al. [16] modified k-means method to deal with a priori knowledge
about must-link and cannot link constraints. Banerjee and Ghosh [2] proposed a method to
balance the size of the clusters by considering an explicit soft constraint. Xing et al. [17]
presented a method to learn a clustering metric over user specified samples. Yu and Shi [18]
introduced a method to include must-link grouping cues in normalized cut. Other related
works include leaving fraction of the points unclustered to avoid the effect of outliers [4]
and enforcing minimum cluster size constraint [10].
In this paper, we present a novel clustering method based on graph partitioning. The new
method enables users to incorporate prior knowledge of the expected size of clusters into
the clustering process. Specifically, the cost function of the new method is defined as the
sum of the inter-cluster similarity and a regularization term that measures the relative size
of two clusters. An ?optimal? partition corresponds to a tradeoff between the inter-cluster
similarity and the relative size of two clusters. We show that the size of the clusters generated by the optimal partition can be controlled by adjusting the weight on the regularization
term. We also prove that the optimization problem is NP-complete. So we present an approximation algorithm and demonstrate its performance using two document data sets.
2
Size regularized cut
We model a given data set using a weighted undirected graph G = G(V, E, W) where V,
E, and W denote the vertex set, edge set, and graph affinity matrix, respectively. Each
vertex $i \in V$ represents a data point, and each edge $(i, j) \in E$ is assigned a nonnegative
weight Wij to reflect the similarity between the data points i and j. A graph partitioning
method attempts to organize vertices into groups so that the intra-cluster similarity is high,
and/or the inter-cluster similarity is low. A simple way to quantify the cost for partitioning
vertices into two disjoint sets V1 and V2 is the cut size
$$\mathrm{cut}(V_1, V_2) = \sum_{i \in V_1,\, j \in V_2} W_{ij},$$
which can be viewed as the similarity or association between V1 and V2 . Finding a binary
partition of the graph that minimizes the cut size is known as the minimum cut problem.
There exist efficient algorithms for solving this problem. However, the minimum cut criterion favors grouping small sets of isolated nodes in the graph [15].
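A tiny brute-force example (illustrative, not from the paper) shows this bias: on a toy graph with one loosely attached node, the cheapest cut splits that node off on its own.

```python
import numpy as np
from itertools import combinations

# Three tightly connected nodes plus one weakly attached "outlier" node 3.
W = np.array([
    [0.00, 1.00, 1.00, 0.05],
    [1.00, 0.00, 1.00, 0.05],
    [1.00, 1.00, 0.00, 0.05],
    [0.05, 0.05, 0.05, 0.00],
])
nodes = range(4)

best = None
for size in (1, 2):                       # enumerate all nontrivial bipartitions
    for S in combinations(nodes, size):
        T = [i for i in nodes if i not in S]
        c = W[np.ix_(list(S), T)].sum()   # cut(S, T)
        if best is None or c < best[0]:
            best = (c, S)

print(best)   # the minimum cut isolates node 3
```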
To capture the need for more balanced clusters, it has been proposed to include the cluster
size information as a multiplicative penalty factor in the cost function, such as average
cut [3] and normalized cut [15]. Both cost functions can be uniformly written as [5]
$$\mathrm{cost}(V_1, V_2) = \mathrm{cut}(V_1, V_2)\left( \frac{1}{|V_1|_\pi} + \frac{1}{|V_2|_\pi} \right). \qquad (1)$$
Here, $\pi = [\pi_1, \cdots, \pi_N]^T$ is a weight vector where $\pi_i$ is a nonnegative weight associated with vertex i, and N is the total number of vertices in V. The penalty factor for an "unbalanced partition" is determined by $|V_j|_\pi$ (j = 1, 2), which is a weighted cardinality (or weighted size) of $V_j$, i.e.,
$$|V_j|_\pi = \sum_{i \in V_j} \pi_i. \qquad (2)$$
Dhillon [5] showed that if $\pi_i = 1$ for all i, the cost function (1) becomes average cut. If $\pi_i = \sum_j W_{ij}$, then (1) turns out to be normalized cut.
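A small numeric example (not from the paper) makes the two weightings concrete; the toy affinity matrix below has two tight three-node groups joined by one weak edge.

```python
import numpy as np

def cut(W, S, T):
    """Sum of edge weights between vertex sets S and T."""
    return W[np.ix_(S, T)].sum()

def cost_eq1(W, S, T, pi):
    """Eq. (1): cut(V1, V2) * (1/|V1|_pi + 1/|V2|_pi)."""
    return cut(W, S, T) * (1.0 / pi[S].sum() + 1.0 / pi[T].sum())

# Toy affinity matrix: two tight 3-node groups joined by one weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

S, T = [0, 1, 2], [3, 4, 5]
avg_cut = cost_eq1(W, S, T, np.ones(6))       # pi_i = 1      -> average cut
norm_cut = cost_eq1(W, S, T, W.sum(axis=1))   # pi_i = degree -> normalized cut
print(avg_cut, norm_cut)
```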
In contrast with minimum cut, average cut and normalized cut tend to generate more balanced clusters. However, due to the multiplicative nature of their cost functions, average
cut and normalized cut are still sensitive to outliers. This is because the cut value for separating outliers from the rest of the data points is usually close to zero, which makes
the multiplicative penalty factor void. To avoid this drawback of multiplicative
cost functions, we introduce an additive cost function for graph bi-partitioning. The cost
function is named size regularized cut (SRcut), and is defined as

SRcut(V1, V2) = cut(V1, V2) − α |V1|_π |V2|_π,   (3)

where |Vj|_π (j = 1, 2) is defined in (2), and π and α > 0 are given a priori. The last term in
(3), α|V1|_π|V2|_π, is the size regularization term, which can be interpreted as follows.
Since |V1|_π + |V2|_π = |V|_π = π^T e, where e is a vector of 1's, it is straightforward to
show that the inequality |V1|_π |V2|_π ≤ (π^T e / 2)² holds for arbitrary V1, V2 ⊆ V
satisfying V1 ∪ V2 = V and V1 ∩ V2 = ∅. In addition, equality holds if and only if

|V1|_π = |V2|_π = π^T e / 2.
Therefore, |V1|_π |V2|_π achieves its maximum value when the two clusters are of equal
weighted size. Consequently, minimizing SRcut is equivalent to minimizing the similarity between the two clusters and, at the same time, searching for a balanced partition. The
tradeoff between the inter-cluster similarity and the balance of the cut depends on the α
parameter, which needs to be determined from prior information on the size of the clusters. If
α = 0, minimum SRcut will assign all vertices to one cluster. At the other extreme, if α → ∞,
minimum SRcut will generate two clusters of equal size (if N is an even number). We
defer the discussion of the choice of α to Section 5.
In a spirit similar to that of (3), we can define the size regularized association (SRassoc) as

SRassoc(V1, V2) = Σ_{i=1,2} cut(Vi, Vi) + 2α |V1|_π |V2|_π,

where cut(Vi, Vi) measures the intra-cluster similarity. An important property of SRassoc
and SRcut is that they are naturally related:

SRcut(V1, V2) = ( cut(V, V) − SRassoc(V1, V2) ) / 2.

Hence, minimizing size regularized cut is in fact identical to maximizing size regularized
association. In other words, minimizing the size regularized inter-cluster similarity is equivalent to maximizing the size regularized intra-cluster similarity. In this paper, we will use
SRcut as the clustering criterion.
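The SRcut definition (3) and its relation to SRassoc can be verified directly; this is a minimal sketch on our own toy data, where the final identity mirrors the relation between SRcut and SRassoc:

```python
import numpy as np

def srcut(W, pi, V1, V2, alpha):
    """Size regularized cut (3): cut(V1, V2) - alpha * |V1|_pi * |V2|_pi."""
    return W[np.ix_(V1, V2)].sum() - alpha * pi[V1].sum() * pi[V2].sum()

def srassoc(W, pi, V1, V2, alpha):
    """Size regularized association: within-cluster weight + 2*alpha*|V1|_pi*|V2|_pi."""
    within = W[np.ix_(V1, V1)].sum() + W[np.ix_(V2, V2)].sum()
    return within + 2.0 * alpha * pi[V1].sum() * pi[V2].sum()

# Symmetric toy affinity matrix and vertex weights.
W = np.array([[0, 2, 1, 0],
              [2, 0, 0, 3],
              [1, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)
pi = np.array([1.0, 2.0, 1.0, 3.0])
V1, V2 = [0, 2], [1, 3]

lhs = srcut(W, pi, V1, V2, alpha=0.3)
rhs = 0.5 * (W.sum() - srassoc(W, pi, V1, V2, alpha=0.3))  # identity from the text
```

For a symmetric W, cut(V, V) = W.sum() splits into the within-cluster weight plus twice the cut, which is exactly why the identity holds.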
3 Size ratio monotonicity
Let V1 and V2 be a partition of V. The size ratio r = min(|V1|_π, |V2|_π) / max(|V1|_π, |V2|_π) defines the relative
size of the two clusters. It always lies in the interval [0, 1], and a larger value indicates a
more balanced partition. The following theorem shows that, by controlling the parameter α
in the SRcut cost function, one can control the balance of the optimal partition. In addition,
the size ratio increases monotonically with α.
Theorem 3.1 (Size Ratio Monotonicity) Let V1^i and V2^i be the clusters generated by
minimum SRcut with α = αi, and let the corresponding size ratio ri be defined as

ri = min(|V1^i|_π, |V2^i|_π) / max(|V1^i|_π, |V2^i|_π).

If α1 > α2 ≥ 0, then r1 ≥ r2.
Proof: Given the vertex weight vector π, let S be the collection of all distinct values that the
size regularization term in (3) can take, i.e.,

S = { S | V1 ∪ V2 = V, V1 ∩ V2 = ∅, S = |V1|_π |V2|_π }.

Clearly |S|, the number of elements of S, is less than or equal to 2^(N−1), where N is the size
of V. Hence we can write the elements of S in ascending order as

0 = S1 < S2 < · · · < S_|S| ≤ (π^T e / 2)².
Next, we define cut_i to be the minimal cut satisfying |V1|_π |V2|_π = S_i, i.e.,

cut_i = min { cut(V1, V2) : |V1|_π |V2|_π = S_i, V1 ∪ V2 = V, V1 ∩ V2 = ∅ };

then

min_{V1 ∪ V2 = V, V1 ∩ V2 = ∅} SRcut(V1, V2) = min_{i=1,...,|S|} ( cut_i − α S_i ).
If V1^2 and V2^2 are the clusters generated by minimum SRcut with α = α2, then
|V1^2|_π |V2^2|_π = S_{k*}, where k* = argmin_{i=1,...,|S|} (cut_i − α2 S_i). Therefore, for any
1 ≤ t < k*,

cut_{k*} − α2 S_{k*} ≤ cut_t − α2 S_t.   (4)

If α1 > α2, we have

(α2 − α1) S_{k*} < (α2 − α1) S_t.   (5)

Adding (4) and (5) gives cut_{k*} − α1 S_{k*} < cut_t − α1 S_t, which implies

k* ≤ argmin_{i=1,...,|S|} ( cut_i − α1 S_i ).   (6)
(6)
Now, let V11 and V21 be the clusters generated by the minimum SRcut with ? = ?1 , and
|V11 |? |V21 |? = Sj ? where j ? = argmini=1,???,|S| (cuti ? ?1 Si ). From (6) we have j ? ?
k ? , therefore Sj ? ? Sk? , or equivalently |V11 |? |V21 |? ? |V12 |? |V22 |? . Without loss of
|V|
generality, we can assume that |V11 |? ? |V21 |? and |V12 |? ? |V22 |? , therefore |V11 |? ? 2 ?
|V|
and |V12 |? ? 2 ? . Considering the fact that f (x) = x(|V|? ? x) is strictly monotonically
|V|
increasing as x ? 2 ? and f (|V11 |? ) ? f (|V12 |? ), we have |V11 |? ? |V12 |? . This leads to
r1 =
|V11 |?
|V21 |?
? r2 =
|V12 |?
|V22 |?
.
Unfortunately, minimizing size regularized cut for an arbitrary α is an NP-complete problem. This is proved in the following section.
4 Size regularized cut and graph bisection
The decision problem for minimum SRcut can be formulated as: given an undirected graph G(V, E, W) with weight vector π
and regularization parameter α, does a partition exist such that SRcut is less than a given cost? This decision problem is clearly in NP, because we can verify
the SRcut value of a given partition in polynomial time. Next we show
that graph bisection can be reduced, in polynomial time, to minimum SRcut. Since graph
bisection is a classical NP-complete problem [1], so is minimum SRcut.
Definition 4.1 (Graph Bisection) Given an undirected graph G = G(V, E, W) with an even
number of vertices, where W is the adjacency matrix, find a pair of disjoint subsets
V1, V2 ⊂ V of equal size with V1 ∪ V2 = V, such that the number of edges between
vertices in V1 and vertices in V2, i.e., cut(V1, V2), is minimal.
Theorem 4.2 (Reduction of Graph Bisection to SRcut) For any given undirected graph
G = G(V, E, W), where W is the adjacency matrix, finding the minimum bisection of G is
equivalent to finding a partition of G that minimizes the SRcut cost function with weights
π = e and regularization parameter α > d̄, where

d̄ = max_{i=1,...,N} Σ_{j=1,...,N} Wij.
Proof: Without loss of generality, we assume that N is even (if not, we can always add an
isolated vertex). Let cut_i be the minimal cut for which the size of the smaller subset is i, i.e.,

cut_i = min { cut(V1, V2) : min(|V1|, |V2|) = i, V1 ∪ V2 = V, V1 ∩ V2 = ∅ }.
Clearly, we have d̄ ≥ cut_{i+1} − cut_i for 0 ≤ i ≤ N/2 − 1. If 0 ≤ i ≤ N/2 − 1, then
N − 2i − 1 ≥ 1. Therefore, for any α > d̄, we have

α(N − 2i − 1) > d̄ ≥ cut_{i+1} − cut_i.

This implies that cut_i − αi(N − i) > cut_{i+1} − α(i + 1)(N − i − 1), or, equivalently,
min { cut(V1, V2) − α|V1||V2| : min(|V1|, |V2|) = i } > min { cut(V1, V2) − α|V1||V2| : min(|V1|, |V2|) = i + 1 }

for 0 ≤ i ≤ N/2 − 1, where both minimizations are over V1 ∪ V2 = V, V1 ∩ V2 = ∅. Hence, for any α > d̄, minimizing SRcut is identical to minimizing

cut(V1, V2) − α|V1||V2|

subject to |V1| = |V2| = N/2, V1 ∪ V2 = V, and V1 ∩ V2 = ∅, which is exactly
the graph bisection problem, since α|V1||V2| = αN²/4 is a constant.
5 An approximation algorithm for SRcut
Given a partition of the vertex set V into two sets V1 and V2, let x ∈ {−1, 1}^N be an indicator
vector such that xi = 1 if i ∈ V1 and xi = −1 if i ∈ V2. It is not difficult to show that

cut(V1, V2) = ((e + x)^T / 2) W ((e − x) / 2)   and   |V1|_π |V2|_π = ((e + x)^T / 2) ππ^T ((e − x) / 2).

We can therefore rewrite SRcut in (3) as a function of the indicator vector x:

SRcut(V1, V2) = ((e + x)^T / 2) (W − α ππ^T) ((e − x) / 2)
             = −(1/4) x^T (W − α ππ^T) x + (1/4) e^T (W − α ππ^T) e.   (7)

Given W, π, and α, we have

argmin_{x∈{−1,1}^N} SRcut(x) = argmax_{x∈{−1,1}^N} x^T (W − α ππ^T) x.
If we define a normalized indicator vector y = x/√N (i.e., ‖y‖ = 1), then minimum SRcut
can be found by solving the following discrete optimization problem

y = argmax_{y ∈ {−1/√N, 1/√N}^N} y^T (W − α ππ^T) y,   (8)

which is NP-complete. However, if we relax all the elements of the indicator vector y from
discrete values to real values and keep the unit length constraint on y, the above optimization problem can be solved easily: the solution is the eigenvector corresponding to the
largest eigenvalue of W − α ππ^T (referred to as the largest eigenvector).
Similar to other spectral graph partitioning techniques that use top eigenvectors to approximate "optimal" partitions, the largest eigenvector of W − α ππ^T provides a linear search
direction along which a splitting point can be found. We use a simple approach, checking each element of the largest eigenvector as a possible splitting point. The vertices whose
continuous indicators are greater than or equal to the splitting point are assigned to one
cluster; the remaining vertices are assigned to the other cluster. The corresponding SRcut
value is then computed. The final partition is determined by the splitting point with the
minimum SRcut value. The relaxed optimization problem provides a lower bound on the
optimal SRcut value, SRcut*. Let λ1 be the largest eigenvalue of W − α ππ^T. From (7)
and (8), it is straightforward to show that

SRcut* ≥ ( e^T (W − α ππ^T) e − N λ1 ) / 4.
The SRcut value of the partition generated by the largest eigenvector provides an upper
bound for SRcut*.
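The relaxation and the splitting-point sweep described above can be sketched as follows (our own illustration; `np.linalg.eigh` stands in for the power or Lanczos methods discussed later, and the function names are ours):

```python
import numpy as np

def srcut_value(W, pi, alpha, mask):
    """SRcut of the partition given by the boolean mask (True -> V1)."""
    V1, V2 = np.where(mask)[0], np.where(~mask)[0]
    return W[np.ix_(V1, V2)].sum() - alpha * pi[V1].sum() * pi[V2].sum()

def srcut_partition(W, pi, alpha):
    """Relaxation: take the largest eigenvector of W - alpha*pi*pi^T, then try
    every entry of the eigenvector as a splitting point and keep the best SRcut."""
    M = W - alpha * np.outer(pi, pi)
    _, vecs = np.linalg.eigh(M)
    y = vecs[:, -1]                       # eigenvector of the largest eigenvalue
    best_val, best_mask = np.inf, None
    for t in y:
        mask = y >= t
        if mask.all() or not mask.any():  # skip the trivial partition
            continue
        v = srcut_value(W, pi, alpha, mask)
        if v < best_val:
            best_val, best_mask = v, mask
    return best_val, best_mask

# Toy graph: two heavy pairs bridged by a light edge; alpha = 1 rewards balance.
W = np.array([[0, 5, 0, 0],
              [5, 0, 1, 0],
              [0, 1, 0, 5],
              [0, 0, 5, 0]], dtype=float)
best_val, best_mask = srcut_partition(W, np.ones(4), alpha=1.0)
```

On this example the sweep recovers the balanced split {0, 1} vs. {2, 3}, which is also the exhaustive minimum of SRcut over all seven nontrivial bipartitions.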
As implied by the SRcut cost function in (3), the partition of the dataset depends on the value
of α, which determines the tradeoff between the inter-cluster similarity and the balance of the
partition. Moreover, Theorem 3.1 indicates that as α increases, the size ratio of
the clusters generated by the optimal partition increases monotonically, i.e., the partition
becomes more balanced. Even though we do not have a counterpart of Theorem 3.1 for
the approximated partition derived above, our empirical study shows that, in general, the
size ratio of the approximated partition also increases with α. Therefore, we use the prior
information on the size of the clusters to select α. Specifically, we define the expected size
ratio R = min(s1, s2) / max(s1, s2), where s1 and s2 are the expected sizes of the two clusters
(known a priori). We then search for a value of α such that the resulting size ratio is
close to R. A simple one-dimensional search method based on bracketing and bisection
is implemented [13]. The pseudo code of the searching algorithm is given in Algorithm
1 along with the rest of the clustering procedure. The input of the algorithm is the graph
affinity matrix W, the weight vector π, the expected size ratio R, and α0 > 0 (the initial
value of α). The output is a partition of V. In our experiments, α0 is chosen to be 10 e^T W e / N².
If the expected size ratio R is unknown, one can estimate R by assuming that the data are
i.i.d. samples and that a sample belongs to the smaller cluster with probability p ≤ 0.5 (i.e.,
R = p/(1 − p)). It is not difficult to prove that p̂, the fraction of n randomly selected samples from the data set
that belong to the smaller cluster, is an unbiased estimator of p. Moreover, the distribution of p̂ can be well approximated by
a normal distribution with mean p and variance p(1 − p)/n when n is sufficiently large (say n >
30). Hence p̂ converges to p as n increases. This suggests a simple strategy for SRcut
with unknown R. One can manually examine n ≪ N randomly selected data instances
to get p̂ and the 95% confidence interval [p_low, p_high], from which one can evaluate the
interval [R_low, R_high] for R. Algorithm 1 is then applied to a number of evenly distributed
values of R within this interval to find the corresponding partitions. The final partition is chosen to
be the one with the minimum cut value, on the assumption that a "good" partition should have a
small cut.
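The sampling-based estimate of R can be sketched as follows (our own illustration; the function name and the z = 1.96 normal quantile for the 95% interval are our choices):

```python
import numpy as np

def size_ratio_interval(sample_labels, z=1.96):
    """Interval [R_low, R_high] for the expected size ratio R = p/(1-p),
    from a small hand-labeled sample (1 = smaller cluster, 0 = larger)."""
    s = np.asarray(sample_labels, dtype=float)
    n = len(s)
    p_hat = s.mean()                                  # unbiased estimate of p
    half = z * np.sqrt(p_hat * (1.0 - p_hat) / n)     # normal approximation
    p_low = max(p_hat - half, 0.0)
    p_high = min(p_hat + half, 0.5)                   # p <= 0.5 by definition
    return p_low / (1.0 - p_low), p_high / (1.0 - p_high)
```

Because R = p/(1 − p) is increasing in p, mapping the endpoints of the p interval gives the endpoints of the R interval.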
6 Time complexity
The time complexity of each iteration is determined by that of computing the largest eigenvector. Using the power method or the Lanczos method [8], the running time is O(MN²), where
M is the number of matrix-vector multiplications required and N is the number of vertices.
Hence the overall time complexity is O(KMN²), where K is the number of iterations in
the search over α. As with other spectral graph clustering methods, the time complexity of
SRcut can be significantly reduced if the affinity matrix W is sparse, i.e., the graph is only
Algorithm 1: Size Regularized Cut
 1  initialize αl to 2α0 and αh to α0/2
 2  REPEAT
 3      αl ← αl/2;  y ← largest eigenvector of W − αl ππ^T
 4      partition V using y and compute the size ratio r
 5  UNTIL (r < R)
 6  REPEAT
 7      αh ← 2αh;  y ← largest eigenvector of W − αh ππ^T
 8      partition V using y and compute the size ratio r
 9  UNTIL (r ≥ R)
10  REPEAT
11      α ← (αl + αh)/2;  y ← largest eigenvector of W − α ππ^T
12      partition V using y and compute the size ratio r
13      IF (r < R)
14          αl ← α
15      ELSE
16          αh ← α
17      END IF
18  UNTIL (|r − R| < 0.01R or αh − αl < 0.01α0)
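Algorithm 1 can be sketched compactly by abstracting the spectral partitioning step into a callable; this is our own illustration and assumes, as the text reports empirically, that the size ratio is nondecreasing in α:

```python
def search_alpha(size_ratio, R, alpha0, tol=0.01):
    """Bracketing-and-bisection search over alpha (Algorithm 1, steps 1-18).
    `size_ratio(alpha)` must run the spectral partition for that alpha and
    return its size ratio r; r is assumed nondecreasing in alpha."""
    a_lo, a_hi = 2.0 * alpha0, alpha0 / 2.0
    while True:                      # shrink a_lo until the partition is unbalanced
        a_lo /= 2.0
        if size_ratio(a_lo) < R:
            break
    while True:                      # grow a_hi until the partition is balanced enough
        a_hi *= 2.0
        if size_ratio(a_hi) >= R:
            break
    while True:                      # bisect between the two brackets
        a = 0.5 * (a_lo + a_hi)
        r = size_ratio(a)
        if abs(r - R) < tol * R or a_hi - a_lo < tol * alpha0:
            return a
        if r < R:
            a_lo = a
        else:
            a_hi = a
```

In practice `size_ratio` would wrap the eigenvector computation and splitting-point sweep; here any monotone surrogate can be used to exercise the search.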
locally connected. Although W − α ππ^T is in general not sparse, the time complexity of the
power method is still O(MN). This is because (W − α ππ^T)y can be evaluated as the
sum of Wy and −απ(π^T y), each requiring O(N) operations. Therefore, by enforcing the
sparsity, the overall time complexity of SRcut is O(KMN).
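The matrix-free product just described maps directly onto SciPy's `LinearOperator`; the sketch below (our own illustration) confirms that the operator's largest eigenvalue matches a dense computation:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, eigsh

def srcut_operator(W, pi, alpha):
    """Matrix-free view of W - alpha*pi*pi^T: each product is one sparse
    matvec W@v plus a rank-one correction, so the dense matrix is never formed."""
    n = W.shape[0]
    def matvec(v):
        v = np.ravel(v)
        return W @ v - alpha * pi * (pi @ v)
    return LinearOperator((n, n), matvec=matvec, dtype=float)

# Demo: a small sparse symmetric affinity matrix.
rng = np.random.default_rng(0)
B = rng.random((20, 20))
B = B + B.T
B[B < 1.2] = 0.0                     # sparsify while keeping symmetry
W = csr_matrix(B)
pi = np.ones(20)
lam_op = eigsh(srcut_operator(W, pi, 0.4), k=1, which='LA')[0][0]
```

Each matvec costs O(nnz(W) + N), which is the O(N) per-product cost claimed above for locally connected graphs.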
7 Experiments
We test the SRcut algorithm using two data sets, the Reuters-21578 document corpus and 20-Newsgroups. The Reuters-21578 data set contains 21578 documents that have been manually
assigned to 135 topics. In our experiments, we discarded documents with multiple category
labels, and removed the topic classes containing fewer than 5 documents. This leaves a data
set of 50 clusters with a total of 9102 documents. The 20-Newsgroups data set contains
about 20000 documents collected from 20 newsgroups, each corresponding to a distinct
topic. The number of news articles in each cluster is roughly the same. We pair each
cluster with every other cluster to form a data set, so that 190 test data sets are generated. Each
document is represented by a term-frequency vector using TF-IDF weights.
We use normalized mutual information as our evaluation metric. Normalized mutual
information always lies in the interval [0, 1], with a larger value indicating better performance. The simple sampling scheme described in Section 5 is used to estimate the expected size ratio. For the Reuters-21578 data set, 50 test runs were conducted, each on a
test set created by mixing 2 topics randomly selected from the data set. The performance
score in Table 1 was obtained by averaging the scores from the 50 test runs. The results for the 20-Newsgroups data set were obtained by averaging the scores from the 190 test data sets. Clearly,
SRcut outperforms normalized cut on both data sets, and performs significantly better than normalized cut on the 20-Newsgroups data set. In comparison with Reuters-21578,
many topic classes in the 20-Newsgroups data set contain outliers. The results suggest that
SRcut is less sensitive to outliers than normalized cut.
8 Conclusions
We proposed size regularized cut, a novel method that enables users to specify prior knowledge of the size of two clusters in spectral clustering. The SRcut cost function takes into
Table 1: Performance comparison for SRcut and Normalized Cut. The numbers shown are the
normalized mutual information. A larger value indicates a better performance.

Algorithms        Reuters-21578    20-Newsgroups
SRcut             0.7330           0.7315
Normalized Cut    0.7102           0.2531
account the inter-cluster similarity and the relative size of the two clusters. The "optimal" partition of the data set corresponds to a tradeoff between the inter-cluster similarity and the
balance of the partition. We proved that finding a partition with minimum SRcut is an NP-complete problem. We presented an approximation algorithm to solve a relaxed version of
the optimization problem. Evaluations on different data sets indicate that the method is
not sensitive to outliers and performs better than normalized cut. The SRcut model can be
easily adapted to multiple-cluster problems by applying the clustering method recursively/iteratively to the data. Since graph bisection can be reduced to SRcut, the proposed
approximation algorithm provides a new spectral technique for graph bisection. Comparing
SRcut with other graph bisection algorithms is therefore an interesting direction for future work.
References
[1] S. Arora, D. Karger, and M. Karpinski, "Polynomial Time Approximation Schemes for Dense Instances of NP-hard Problems," Proc. ACM Symp. on Theory of Computing, pp. 284-293, 1995.
[2] A. Banerjee and J. Ghosh, "On Scaling up Balanced Clustering Algorithms," Proc. SIAM Int'l Conf. on Data Mining, pp. 333-349, 2002.
[3] P. K. Chan, D. F. Schlag, and J. Y. Zien, "Spectral k-Way Ratio-Cut Partitioning and Clustering," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13:1088-1096, 1994.
[4] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan, "Algorithms for Facility Location Problems with Outliers," Proc. ACM-SIAM Symp. on Discrete Algorithms, pp. 642-651, 2001.
[5] I. S. Dhillon, "Co-clustering Documents and Words using Bipartite Spectral Graph Partitioning," Proc. ACM SIGKDD Conf. Knowledge Discovery and Data Mining, pp. 269-274, 2001.
[6] C. Ding, "Data Clustering: Principal Components, Hopfield and Self-Aggregation Networks," Proc. Int'l Joint Conf. on Artificial Intelligence, pp. 479-484, 2003.
[7] C. Ding, X. He, H. Zha, M. Gu, and H. Simon, "Spectral Min-Max Cut for Graph Partitioning and Data Clustering," Proc. IEEE Int'l Conf. Data Mining, pp. 107-114, 2001.
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Press, 1999.
[9] R. Kannan, S. Vempala, and A. Vetta, "On Clusterings - Good, Bad and Spectral," Proc. IEEE Symp. on Foundations of Computer Science, pp. 367-377, 2000.
[10] D. R. Karger and M. Minkoff, "Building Steiner Trees with Incomplete Global Knowledge," Proc. IEEE Symp. on Foundations of Computer Science, pp. 613-623, 2000.
[11] B. Kernighan and S. Lin, "An Efficient Heuristic Procedure for Partitioning Graphs," The Bell System Technical Journal, 49:291-307, 1970.
[12] A. Y. Ng, M. I. Jordan, and Y. Weiss, "On Spectral Clustering: Analysis and an Algorithm," Advances in Neural Information Processing Systems 14, pp. 849-856, 2001.
[13] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, second edition, Cambridge University Press, 1992.
[14] A. Rahimi and B. Recht, "Clustering with Normalized Cuts is Clustering with a Hyperplane," Statistical Learning in Computer Vision, 2004.
[15] J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," IEEE Trans. on Pattern Analysis and Machine Intelligence, 22:888-905, 2000.
[16] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrodl, "Constrained K-means Clustering with Background Knowledge," Proc. Int'l Conf. on Machine Learning, pp. 577-584, 2001.
[17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, "Distance Metric Learning, with Applications to Clustering with Side Information," Advances in Neural Information Processing Systems 15, pp. 505-512, 2003.
[18] X. Yu and J. Shi, "Segmentation Given Partial Grouping Constraints," IEEE Trans. on Pattern Analysis and Machine Intelligence, 26:173-183, 2004.
[19] H. Zha, X. He, C. Ding, H. Simon, and M. Gu, "Spectral Relaxation for K-means Clustering," Advances in Neural Information Processing Systems 14, pp. 1057-1064, 2001.
Gaussian Process Classification
Malte Kuss and Carl Edward Rasmussen
Max Planck Institute for Biological Cybernetics
Spemannstra?e 38, 72076 T?ubingen, Germany
{kuss,carl}@tuebingen.mpg.de
Abstract
Gaussian processes are attractive models for probabilistic classification
but unfortunately exact inference is analytically intractable. We compare Laplace?s method and Expectation Propagation (EP) focusing on
marginal likelihood estimates and predictive performance. We explain
theoretically and corroborate empirically that EP is superior to Laplace.
We also compare to a sophisticated MCMC scheme and show that EP is
surprisingly accurate.
In recent years models based on Gaussian process (GP) priors have attracted much attention in the machine learning community. Whereas inference in the GP regression model
with Gaussian noise can be done analytically, probabilistic classification using GPs is analytically intractable. Several approaches to approximate Bayesian inference have been
suggested, including Laplace?s approximation, Expectation Propagation (EP), variational
approximations and Markov chain Monte Carlo (MCMC) sampling, some of these in conjunction with generalisation bounds, online learning schemes and sparse approximations.
Despite the abundance of recent work on probabilistic GP classifiers, most experimental
studies provide only anecdotal evidence, and no clear picture has yet emerged, as to when
and why which algorithm should be preferred. Thus, from a practitioners point of view
probabilistic GP classification remains a jungle. In this paper, we set out to understand and
compare two of the most wide-spread approximations: Laplace?s method and Expectation
Propagation (EP). We also compare to a sophisticated, but computationally demanding
MCMC scheme to examine how close the approximations are to ground truth.
We examine two aspects of the approximation schemes: Firstly the accuracy of approximations to the marginal likelihood which is of central importance for model selection and
model comparison. In any practical application of GPs in classification (usually multiple)
parameters of the covariance function (hyperparameters) have to be handled. Bayesian
model selection provides a consistent framework for setting such parameters. Therefore, it
is essential to evaluate the accuracy of the marginal likelihood approximations as a function
of the hyperparameters, in order to assess the practical usefulness of the approach
Secondly, we need to assess the quality of the approximate probabilistic predictions. In the
past, the probabilistic nature of the GP predictions have not received much attention, the
focus being mostly on classification error rates. This unfortunate state of affairs is caused
primarily by typical benchmarking problems being considered outside of a realistic context. The ability of a classifier to produce class probabilities or confidences, have obvious
relevance in most areas of application, eg. medical diagnosis. We evaluate the predictive
distributions of the approximate methods, and compare to the MCMC gold standard.
1
The Gaussian Process Model for Binary Classification
Let y ? {?1, 1} denote the class label of an input x. Gaussian process classification (GPC)
is discriminative in modelling p(y|x) for given x by a Bernoulli distribution. The probability of success p(y = 1|x) is related to an unconstrained latent function f (x) which is
mapped to the unit interval by a sigmoid transformation, eg. the logit or the probit. For reasons of analytic convenience we exclusively use the probit model p(y = 1|x) = ?(f (x)),
where ? denotes the cumulative density function of the standard Normal distribution.
In the GPC model Bayesian inference is performed about the latent function f in the light
of observed data D = {(yi , xi )|i = 1, . . . , m}. Let fi = f (xi ) and f = [f1 , . . . , fm ]>
be shorthand for the values of the latent function and y = [y1 , . . . , ym ]> and X =
[x1 , . . . , xm ]> collect the class labels and inputs respectively. Given the latent function
the class labels are independent Bernoulli variables, so the joint likelihood factories:
p(y|f ) =
m
Y
p(yi |fi ) =
i=1
m
Y
?(yi fi ),
i=1
and depends on f only through its value at the observed inputs. We use a zero-mean
Gaussian process prior over the latent function f with a covariance function k(x, x0 |?),
which may depend on hyperparameters ? [1]. The functional form and parameters of
the covariance function encodes assumptions about the latent function, and adaptation of
these is part of the inference. The posterior distribution over latent function values f at the
observed X for given hyperparameters ? becomes:
Z
m
N (f |0, K) Y
p(f |D, ?) =
?(yi fi ), where p(D|?) = p(y|f )p(f |X, ?)df ,
p(D|?) i=1
denotes the marginal likelihood. Unfortunately neither the marginal likelihood, nor the
posterior itself, or predictions can be computed analytically, so approximations are needed.
2
Approximate Bayesian Inference
For the GPC model approximations are either based on a Gaussian approximation to the
posterior p(f |D, ?) ? q(f |D, ?) = N (f |m, A) or involve Markov chain Monte Carlo
(MCMC) sampling [2]. We compare Laplace?s method and Expectation Propagation (EP)
which are two alternative approaches to finding parameters m and A of the Gaussian
q(f |D, ?). Both methods also allow approximate evaluation of the marginal likelihood,
which is useful for ML-II hyperparameter optimisation.
Laplace?s approximation (LA) is found by making a second order Taylor approximation of
the (un-normalised) log posterior [3]. The mean m is placed at the mode (MAP) and the
covariance A equals the negative inverse Hessian of the log posterior density at m.
The EP approximation [4] also gives a Gaussian approximation to the posterior. The parameters m and A are found in an iterative scheme by matching the approximate marginal
moments of p(fi |D, ?) by the marginals of the approximation N (fi |mi , Aii ). Although
we cannot prove the convergence of EP, we conjecture that it always converges for GPC
with probit likelihood, and have never encountered an exception.
A key insight is that a Gaussian approximation to the GPC posterior is equivalent to a GP
approximation to the posterior distribution over latent functions. For a test input x? the
fi
1
0.16
0.14
0.8
0.6
0.1
fj
p(y|f)
p(f|y)
0.12
Likelihood p(y|f)
Prior p(f)
Posterior p(f|y)
Laplace q(f|y)
EP q(f|y)
0.08
0.4
0.06
0.04
0.2
0.02
0
?4
0
4
8
0
f
.
(a)
(b)
Figure 1: Panel (a) provides a one-dimensional illustration of the approximations. The
prior N(f|0, 5²) combined with the probit likelihood (y = 1) results in a skewed posterior.
The likelihood uses the right axis, all other curves use the left axis. Laplace's approximation
peaks at the posterior mode, but places far too much mass over negative values of f and
too little at large positive values. The EP approximation matches the first two posterior
moments, which results in a larger mean and a more accurate placement of probability mass
compared to Laplace's approximation. In Panel (b) we caricature a high dimensional zero-mean Gaussian prior as an ellipse. The gray shadow indicates that for a high dimensional
Gaussian most of the mass lies in a thin shell. For large latent signals (large entries in K),
the likelihood essentially cuts off regions which are incompatible with the training labels
(hatched area), leaving the upper right orthant as the posterior. The dot represents the mode
of the posterior, which remains close to the origin.
approximate predictive latent and class probabilities are:

q(f*|D, θ, x*) = N(μ*, σ*²),   and   q(y* = 1|D, x*) = Φ( μ* / √(1 + σ*²) ),

where μ* = k*^T K⁻¹ m and σ*² = k(x*, x*) − k*^T (K⁻¹ − K⁻¹ A K⁻¹) k*, and where the vector
k* = [k(x1, x*), ..., k(xm, x*)]^T collects the covariances between x* and the training inputs X.
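The predictive equations above translate directly into code; this sketch (ours, using naive inverses for clarity) returns μ*, σ*², and the class probability for one test input:

```python
import numpy as np
from scipy.stats import norm

def predict_class_prob(K, k_star, k_ss, m, A):
    """Predictive latent mean/variance and class probability for one test
    input, given the Gaussian approximation N(m, A) to the posterior."""
    Kinv = np.linalg.inv(K)
    mu = k_star @ Kinv @ m
    var = k_ss - k_star @ (Kinv - Kinv @ A @ Kinv) @ k_star
    return mu, var, norm.cdf(mu / np.sqrt(1.0 + var))
```

A sanity check: if the approximate posterior equals the prior (m = 0, A = K), the correction term vanishes, the predictive latent reduces to the prior, and the class probability is 0.5.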
MCMC sampling has the advantage that it becomes exact in the limit of long runs, and so
provides a gold standard against which to measure the two analytic methods described above.
Although MCMC methods can in principle be used to do inference over f and θ jointly [5],
we compare to methods using ML-II optimisation over θ; thus we use MCMC to integrate
over f only. Good marginal likelihood estimates are notoriously difficult to obtain; in our
experiments we use Annealed Importance Sampling (AIS) [6], combining several Thermodynamic Integration runs into a single (unbiased) estimate of the marginal likelihood.
Both analytic approximations have a computational complexity which is cubic, O(m^3), as is
common among non-sparse GP models due to inversions of m × m matrices. In our implementations LA and EP need similar running times, on the order of a few minutes for several
hundred data-points. Making AIS work efficiently requires some fine-tuning and a single
estimate of p(D|θ) can take several hours for data sets of a few hundred examples, but this
could conceivably be improved upon.
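To illustrate how AIS turns a sequence of tempered distributions into a single unbiased estimate, the following toy sketch anneals a one-dimensional standard normal into a narrower Gaussian whose normaliser ratio is known in closed form. All parameter values (number of chains, temperatures, Metropolis steps) are illustrative and unrelated to the settings used in the experiments:

```python
import math
import random

def ais_estimate(sigma, n_chains=400, n_temps=25, n_mh=5, seed=0):
    # Anneal from f_0(x) = exp(-x^2/2), with Z_0 = sqrt(2*pi), to
    # f_1(x) = exp(-x^2/(2*sigma^2)), with Z_1 = sigma*sqrt(2*pi).
    # The mean importance weight is an unbiased estimate of Z_1/Z_0 = sigma.
    rng = random.Random(seed)
    betas = [j / n_temps for j in range(n_temps + 1)]

    def log_f(x, beta):
        # Geometric path between the two unnormalised log densities.
        return -0.5 * x * x * ((1.0 - beta) + beta / sigma ** 2)

    total = 0.0
    for _ in range(n_chains):
        x = rng.gauss(0.0, 1.0)              # exact sample from p_0
        log_w = 0.0
        for j in range(1, len(betas)):
            log_w += log_f(x, betas[j]) - log_f(x, betas[j - 1])
            for _ in range(n_mh):            # Metropolis moves targeting p_{beta_j}
                prop = x + rng.gauss(0.0, 0.5)
                accept = math.exp(min(0.0, log_f(prop, betas[j]) - log_f(x, betas[j])))
                if rng.random() < accept:
                    x = prop
        total += math.exp(log_w)
    return total / n_chains
```

For GPC the analogous computation runs over the m-dimensional latent vector f rather than a scalar, which is why a single estimate of p(D|θ) can take hours.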
3 Structural Properties of the Posterior and its Approximations
Structural properties of the posterior can best be understood by examining its construction.
The prior is a correlated m-dimensional Gaussian N (f |0, K) centred at the origin. Each
likelihood term p(yi |fi ) softly truncates the half-space from the prior that is incompatible
with the observed label, see Figure 1. The resulting posterior is unimodal and skewed,
similar to a multivariate Gaussian truncated to the orthant containing y. The mode of
the posterior remains close to the origin, while the mass is placed in accordance with the
observed class labels. Additionally, high dimensional Gaussian distributions exhibit the
property that most probability mass is contained in a thin ellipsoidal shell, depending on
the covariance structure, away from the mean [7, ch. 29.2]. Intuitively this occurs since in
high dimensions the volume grows extremely rapidly with the radius. As an effect the mode
becomes less representative (typical) for the prior distribution as the dimension increases.
For the GPC posterior this property persists: the mode of the posterior distribution stays
relatively close to the origin, still being unrepresentative for the posterior distribution, while
the mean moves to the mass of the posterior making mean and mode differ significantly.
We cannot generally assume the posterior to be close to Gaussian, as in the often studied
limit of low-dimensional parametric models with large amounts of data. Therefore in GPC
we must be aware of making a Gaussian approximation to a non-Gaussian posterior. From
the properties of the posterior it can be expected that Laplace's method places m in the right
orthant but too close to the origin, such that the approximation will overlap with regions
having practically zero posterior mass. As an effect the amplitude of the approximate latent
posterior GP will be underestimated systematically, leading to overly cautious predictive
distributions. The EP approximation does not rely on a local expansion, but assumes that
the marginal distributions can be well approximated by Gaussians. This assumption will
be examined empirically below.
4 Experiments
In this section we compare and inspect approximations for GPC using various benchmark
data sets. The primary focus is not to optimise the absolute performance of GPC models
but to compare the relative accuracy of approximations and to validate the arguments given
in the previous section. In all experiments we use a covariance function of the form:

    k(x, x'|θ) = σ² exp(-½ ||x - x'||² / ℓ²),    (1)

such that θ = [σ, ℓ]. We refer to σ² as the signal variance and to ℓ as the characteristic
length-scale. Note that for many classification tasks it may be reasonable to use an individual length scale parameter for every input dimension (ARD) or a different kind of
covariance function. Nevertheless, for the sake of presentability we use the above covariance function and we believe the conclusions about the accuracy of approximations to be
independent of this choice, since it relies on arguments which are independent of the form
of the covariance function.
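Equation (1) in code form; the sketch operates on plain Python sequences and treats σ and ℓ as given hyperparameters:

```python
import math

def squared_exponential(x, x_prime, sigma, lengthscale):
    # k(x, x' | theta) = sigma^2 * exp(-0.5 * ||x - x'||^2 / lengthscale^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, x_prime))
    return sigma ** 2 * math.exp(-0.5 * sq_dist / lengthscale ** 2)
```

The ARD variant mentioned above would simply divide each squared coordinate difference by its own lengthscale inside the sum.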
As a measure of the accuracy of predictive probabilities we use the average information in
bits of the predictions about the test targets in excess of that of random guessing. Let
p* = p(y* = 1|D, θ, x*) be the model's prediction; then we average

    I(p*_i, y_i) = (y_i + 1)/2 log2(p*_i) + (1 - y_i)/2 log2(1 - p*_i) + H    (2)
over all test cases, where H is the entropy of the training labels. The error rate E is equal
to the percentage of erroneous class assignments if prediction is understood as a decision
problem with symmetric costs.
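A direct transcription of (2), averaged over a set of predictions. For self-containment the sketch computes H from the labels it is given, whereas the paper uses the entropy of the training labels; labels are coded as +1/-1:

```python
import math

def average_information(probs, labels):
    # probs[i] is the predicted p(y_i = 1); labels[i] is +1 or -1.
    n_pos = sum(1 for y in labels if y == 1)
    p_pos = n_pos / len(labels)
    entropy = 0.0
    for p in (p_pos, 1.0 - p_pos):
        if p > 0.0:
            entropy -= p * math.log2(p)
    total = 0.0
    for p, y in zip(probs, labels):
        total += (y + 1) / 2 * math.log2(p) + (1 - y) / 2 * math.log2(1 - p) + entropy
    return total / len(labels)
```

Random guessing (p = 1/2 everywhere, balanced labels) scores exactly zero bits, and over-confident wrong predictions are penalised heavily, which is what makes the measure sensitive to the quality of the whole predictive distribution rather than just the decision boundary.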
For the first set of experiments presented here the well-known USPS digits and the Ionosphere data set were used. A binary sub-problem from the USPS digits is defined by only
considering 3's vs. 5's (which is probably the hardest of the binary sub-problems) and dividing the data into 767 cases for training and 773 for testing. The Ionosphere data is split
into 200 training and 151 test cases. We do an exhaustive investigation on a fine regular
grid of values for the log hyperparameters. For each θ on the grid we compute the approximated log marginal likelihood by LA, EP and AIS. Additionally we compute the respective
predictive performance (2) on the test set. Results are shown in Figure 2.
[Figure 2: contour plots over log lengthscale log(ℓ) (horizontal axis) and log magnitude log(σ_f) (vertical axis); panels (1a)-(4c) show the log marginal likelihood and the information about test targets in bits for each method and data set.]
Figure 2: Comparison of marginal likelihood approximations and predictive performances
of different approximation techniques for USPS 3s vs. 5s (upper half) and the Ionosphere
data (lower half). The columns correspond to LA (a), EP (b), and MCMC (c). The rows
show estimates of the log marginal likelihood (rows 1 & 3) and the corresponding predictive
performance (2) on the test set (rows 2 & 4) respectively.
[Figure 3: panels (a) and (b) overlay normalised histograms of MCMC samples with the Laplace and EP approximations to p(f|D); panel (c) shows a histogram of samples of one marginal x_i.]
Figure 3: Panels (a) and (b) show two marginal distributions p(f_i|D, θ) from a GPC posterior and its approximations. The true posterior is approximated by a normalised histogram
of 9000 samples of f_i obtained by MCMC sampling. Panel (c) shows a histogram of samples of a marginal distribution of a truncated high-dimensional Gaussian. The line describes
a Gaussian with mean and variance estimated from the samples.
For all three approximation techniques we see an agreement between marginal likelihood
estimates and test performance, which justifies the use of ML-II parameter estimation. But
the shape of the contours and the values differ between the methods. The contours for
Laplace's method appear to be slanted compared to EP. The marginal likelihood estimates
of EP and AIS agree surprisingly well (footnote 1), given that the marginal likelihood
comes as a 767- or 200-dimensional integral respectively. The EP predictions contain as
much information about the test cases as the MCMC predictions and significantly more
than for LA. Note that for small signal variances (roughly ln(σ²) < 1) LA and EP give very
similar results.
A possible explanation is that for small signal variances the likelihood does not truncate
the prior but only down-weights the tail that disagrees with the observation. As an effect
the posterior will be less skewed and both approximations will lead to similar results.
For the USPS 3's vs. 5's we now inspect the marginal distributions p(f_i|D, θ) of single
latent function values under the posterior approximations for a given value of θ. We have
chosen the values ln(σ) = 3.35 and ln(ℓ) = 2.85, which are between the ML-II estimates of EP and LA. Hybrid MCMC was used to generate 9000 samples from the posterior
p(f|D, θ). For LA and EP the approximate marginals are q(f_i|D, θ) = N(f_i|m_i, A_ii)
where m and A are found by the respective approximation techniques.
In general we observe that the marginal distributions of MCMC samples agree very well
with the respective marginal distributions of the EP approximation. For Laplace's approximation we find the mean to be underestimated and the marginal distributions to overlap
with zero far more than the EP approximations. Figure (3a) displays the marginal distribution and its approximations for which the MCMC samples show maximal skewness.
Figure (3b) shows a typical example where the EP approximation agrees very well with the
MCMC samples. We show this particular example because under the EP approximation
p(y_i = 1|D, θ) < 0.1% but LA gives a wrong p(y_i = 1|D, θ) ≈ 18%.
In the experiment we saw that the marginal distributions of the posterior often agree very
Footnote 1: Note that the agreement between the two seems to be limited by the accuracy of the MCMC
runs, as judged by the regularity of the contour lines; the tolerance is less than one unit on a (natural)
log scale.
well with a Gaussian approximation. This seems to contradict the description given in the
previous section where we argued that the posterior is skewed by construction. In order to
inspect the marginals of a truncated high-dimensional multivariate Gaussian distribution
we made an additional synthetic experiment. We constructed a 767-dimensional Gaussian
N(x|0, C) with a covariance matrix having one eigenvalue of 100 with eigenvector 1, and
all other eigenvalues equal to 1. We then truncate this distribution such that all x_i ≥ 0. Note
that the mode of the truncated Gaussian is still at zero, whereas the mean moves towards
the remaining mass. Figure (3c) shows a normalised histogram of samples from a marginal
distribution of one x_i. The samples agree very well with a Gaussian approximation. In
the previous section we described the somewhat surprising property, that for a truncated
high-dimensional Gaussian, resembling the posterior, the mode (used by LA) may not be
particularly representative of the distribution. Although the marginal is also truncated, it
is still exceptionally well modelled by a Gaussian; however, the Laplace approximation
centred on the origin would be completely inappropriate.
In a second set of experiments we compare the predictive performance of LA and EP for
GPC on several well known benchmark problems. Each data set is randomly split into 10
folds of which one at a time is left out as a test set to measure the predictive performance
of a model trained (or selected) on the remaining nine folds. All performance measures are
averages over the 10 folds. For GPC we implement model selection by ML-II hyperparameter estimation, reporting results given the θ that maximised the respective approximate
marginal likelihoods p(D|θ).
In order to get a better picture of the absolute performance we also compare to results
obtained by C-SVM classification. The kernel we used is equivalent to the covariance
function (1) without the signal variance parameter. For each fold the parameters C and
ℓ are found in an inner loop of 5-fold cross-validation, in which the parameter grids are
refined until the performance stabilises. Predictive probabilities for test cases are obtained
by mapping the unthresholded output of the SVM to [0, 1] using a sigmoid function [8].
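The mapping of [8] is a two-parameter logistic, p(y = 1|f) = 1/(1 + exp(A·f + B)); A and B are normally fitted by maximising the likelihood on held-out data, so the default values in this sketch are purely illustrative:

```python
import math

def platt_probability(svm_output, a=-1.0, b=0.0):
    # Platt scaling: squash an unthresholded SVM output into [0, 1].
    return 1.0 / (1.0 + math.exp(a * svm_output + b))
```

Because the fitted sigmoid can be very steep, a misclassified test case far from the margin can receive a confidently wrong probability, which is visible in the information scores of Table 1.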
Results are summarised in Table 1. Comparing Laplace's method to EP, the latter proves
to be more accurate both in terms of error rate and information. While the error rates are
relatively similar, the predictive distribution obtained by EP proves to be more informative
about the test targets. Note that for GPC the error rate only depends on the sign of the
mean μ_* of the approximated posterior over latent functions and not on the entire posterior
predictive distribution. As is to be expected, the length of the mean vector ||m|| shows much
larger values for the EP approximations. Comparing EP and SVMs, the results are mixed.
For the Crabs data set all methods show the same error rate but the information content of
the predictive distributions differs dramatically. For some test cases the SVM predicts the
wrong class with large certainty.
5 Summary & Conclusions
Our experiments reveal serious differences between Laplace's method and EP when used in
GPC models. From the structural properties of the posterior we described why LA systematically underestimates the mean m. The resulting posterior GP over latent functions will
have too small amplitude, although the sign of the mean function will be mostly correct. As
an effect LA gives over-conservative predictive probabilities, and diminished information
about the test labels. This effect has been shown empirically on several real-world examples. Large resulting discrepancies in the actual posterior probabilities were found, even at
the training locations, which renders the predictive class probabilities produced under this
approximation grossly inaccurate. Note that the difference becomes less dramatic if we only
consider the classification error rates obtained by thresholding p* at 1/2. For this particular
task, we've seen that the sign of the latent function tends to be correct (at least at the training
locations).
                             Laplace                   EP                      SVM
Data Set        m    n    E%     I     ||m||     E%     I      ||m||       E%     I
Ionosphere    351   34    8.84  0.591    49.96   7.99  0.661     124.94    5.69  0.681
Wisconsin     683    9    3.21  0.804    62.62   3.21  0.805      84.95    3.21  0.795
Pima Indians  768    8   22.77  0.252    29.05  22.63  0.253      47.49   23.01  0.232
Crabs         200    7    2.0   0.682   112.34   2.0   0.908    2552.97    2.0   0.047
Sonar         208   60   15.36  0.439    26.86  13.85  0.537   15678.55   11.14  0.567
USPS 3 vs 5  1540  256    2.27  0.849   163.05   2.21  0.902   22011.70    2.01  0.918
Table 1: Results for benchmark data sets. The first three columns give the name of the data
set, number of observations m and dimension of inputs n. For Laplace's method and EP
the table reports the average error rate E%, the average information I (2) and the average
length ||m|| of the mean vector of the Gaussian approximation. For SVMs the error rate
and the average information about the test targets are reported. Note that for the Crabs data
set we use the sex (not the colour) of the crabs as class label.
The EP approximation has shown to give results very close to MCMC both in terms of
predictive distributions and marginal likelihood estimates. We have shown and explained
why the marginal distributions of the posterior can be well approximated by Gaussians.
Further, the marginal likelihood values obtained by LA and EP differ systematically which
will lead to different results of ML-II hyperparameter estimation. The discrepancies are
similar for different tasks. Using AIS we were able to show the accuracy of marginal
likelihood estimates, which to the best of our knowledge has never been done before.
In summary, we found that EP is the method of choice for approximate inference in binary GPC models, when the computational cost of MCMC is prohibitive. In contrast, the
Laplace approximation is so inaccurate that we advise against its use, especially when
predictive probabilities are to be taken seriously. Further experiments and a detailed description of the approximation schemes can be found in [2].
Acknowledgements  Both authors acknowledge support by the German Research Foundation (DFG) through grant RA 1030/1. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.
References
[1] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, NIPS 8, pages 514–520. MIT Press, 1996.
[2] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[3] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[4] T. P. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Department of Electrical Engineering and Computer Science, MIT, 2001.
[5] R. M. Neal. Regression and classification using Gaussian process priors. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 6, pages 475–501. Oxford University Press, 1998.
[6] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[7] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. CUP, 2003.
[8] J. C. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers, pages 61–73. The MIT Press, 2000.
Using Particle Filters for Dense Maps
Austin I. Eliazar
Ronald Parr
Department of Computer Science
Duke University
Durham, NC 27708
{eliazar,parr}@cs.duke.edu
Abstract
We present an improvement to the DP-SLAM algorithm for simultaneous localization and mapping (SLAM) that maintains multiple hypotheses about densely populated maps (one full map per particle in a particle filter) in time that is linear in all significant algorithm parameters
and takes constant (amortized) time per iteration. This means that the
asymptotic complexity of the algorithm is no greater than that of a pure
localization algorithm using a single map and the same number of particles. We also present a hierarchical extension of DP-SLAM that uses a
two level particle filter which models drift in the particle filtering process
itself. The hierarchical approach enables recovery from the inevitable
drift that results from using a finite number of particles in a particle filter
and permits the use of DP-SLAM in more challenging domains, while
maintaining linear time asymptotic complexity.
1 Introduction
The ability to construct and use a map of the environment is a critical enabling technology
for many important applications, such as search and rescue or extraterrestrial exploration.
Probabilistic approaches have proved successful at addressing the basic problem of localization using particle filters [6]. Expectation Maximization (EM) has been used successfully to address the problem of mapping [1] and Kalman filters [2, 10] have shown promise
on the combined problem of simultaneous localization and mapping (SLAM).
SLAM algorithms ought to produce accurate maps with bounded resource consumption per
sensor sweep. To the extent that it is possible, it is desirable to avoid explicit map correcting
actions, which are computationally intensive and would be symptomatic of accumulating
error in the map. One family of approaches to SLAM assumes relatively sparse, relatively
unambiguous landmarks and builds a Kalman filter over landmark positions [2, 9, 10].
Other approaches assume dense sensor data which individually are not very distinctive,
such as those available from a laser range finder [7, 8]. An advantage of the latter group is
that they are capable of producing detailed maps that can be used for path planning.
In earlier work, we presented an algorithm called DP-SLAM [4], which produced extremely accurate, densely populated maps by maintaining a joint distribution over robot
maps and poses using a particle filter. DP-SLAM uses novel data structures that exploit
shared structure between maps, permitting efficient use of many joint map/pose particles.
This gives DP-SLAM the ability to resolve map ambiguities automatically, as a natural part
of the particle filtering process, effectively obviating the explicit loop closing phase needed
for other approaches [7, 12].
A known limitation of particle filters is that they can require a very large number of particles to track systems with diffuse posterior distributions. This limitation strongly affected
earlier versions of DP-SLAM, which had a worst-case run time that scaled quadratically
with the number of particles. In this paper, we present a significant improvement to DPSLAM which reduces the run time to linear in the number of particles, giving multiple map
hypothesis SLAM the same asymptotic complexity per particle as localization with a single
map. The new algorithm also has a more straightforward analysis and implementation.
Unfortunately, even with linear time complexity, there exist domains which require infeasibly large numbers of particles for accurate mapping. The cumulative effect of very small
errors (resulting from sampling or discretization) can cause drift. To address the issue of
drift in a direct and principled manner, we propose a hierarchical particle filter method
which can specifically model and recover from small amounts of drift, while maintaining
particle diversity longer than in typical particle filters. The combined result is an algorithm
that can produce extraordinarily detailed maps of large domains at close to real time speeds.
2 Linear Time Algorithm
A DP-SLAM ancestry tree contains all of the current particles as leaves. The parent of
a given node represents the particle of the previous iteration from which that particle was
resampled. An ancestry tree is minimal if the following two properties hold:
1. A node is a leaf node if and only if it corresponds to a current generation particle.
2. All interior nodes have at least two children.
The first property is ensured by simply removing particles that are not resampled from the
ancestry tree. The second property is ensured by merging parents with only-child nodes. It
is easy to see that for a particle filter with P particles, the corresponding minimal ancestry
tree will have a branching factor of at least two and depth of no more than O(P ).
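The two minimality properties translate directly into a pruning pass run after each resampling step. The node class and the recursion below are an illustrative sketch, not DP-SLAM's actual implementation (which must also merge the map updates of spliced-out nodes):

```python
class ANode:
    # One node of the ancestry tree.
    def __init__(self, parent=None, is_current=False):
        self.parent = parent
        self.is_current = is_current     # True for a current-generation particle
        self.children = []
        if parent is not None:
            parent.children.append(self)

def prune(node):
    # Return the minimal subtree rooted at node, or None if no current
    # particle survives below it.
    node.children = [c for c in (prune(c) for c in node.children) if c is not None]
    if node.is_current:
        return node                      # current particles stay as leaves
    if not node.children:
        return None                      # Property 1: remove dead branches
    if len(node.children) == 1:
        only = node.children[0]          # Property 2: splice out only-child parents
        only.parent = node.parent
        return only
    return node
```

Running this after resampling keeps the tree at branching factor at least two, and hence at most O(P) nodes, which is what the complexity arguments below rely on.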
The complexity of maintaining a minimal ancestry tree will depend upon the manner in
which observations, and thus maps, are associated with nodes in the tree. DP-SLAM distributes this information in the following manner: All map updates for all nodes in the
ancestry tree are stored in a single global grid, while each node in the ancestry tree also
maintains a list of all grid squares updated by that node. The information contained in
these two data structures is integrated for efficient access at each cycle of the particle filter
through a new data structure called a map cache.
2.1 Core Data Structures
The DP-SLAM map is a global occupancy grid-like array. Each grid cell contains an observation vector with one entry for each ancestry tree node that has made an observation of
the grid cell. Each vector entry is an observation node containing the following fields:
opacity a data structure storing sufficient statistics for the current estimate of the opacity
of the grid cell to the laser range finder. See Eliazar and Parr [4] for details.
parent a pointer to a parent observation node for which this node is an update. (If an
ancestor of a current particle has seen this square already, then the opacity value
for this square is considered an update to the previous value stored by the ancestor.
However, both the update and the original observation are stored, since it may not
be the case that all successors of the ancestor have made updates to this square.)
anode a pointer to the ancestry tree node associated with the current opacity estimate.
In previous versions of DP-SLAM, this information was stored using a balanced tree. This
added significant overhead to the algorithm, both conceptual and computational, and is no
longer required in the current version.
The DP-SLAM ancestry tree is a basic tree data structure with pointers to parents and
children. Each node in the ancestry tree also contains an onodes vector, which contains
pointers to observation nodes in the grid cells updated by the ancestry tree node.
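Schematically, the grid cell entries and ancestry nodes cross-reference each other as below; opacity is reduced here to a single float rather than the sufficient statistics of [4], and the helper function is illustrative:

```python
class ObservationNode:
    # One entry of a grid cell's observation vector.
    def __init__(self, opacity, parent, anode):
        self.opacity = opacity    # current opacity estimate for this cell
        self.parent = parent      # ObservationNode this entry updates, or None
        self.anode = anode        # ancestry tree node that made the estimate

class AncestryNode:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.onodes = []          # observation nodes written by this particle

def record_observation(grid, row, col, opacity, anode, prev=None):
    # Append to the cell's observation vector and cross-link the entry from
    # the ancestry node, keeping both structures consistent.
    obs = ObservationNode(opacity, prev, anode)
    grid[row][col].append(obs)
    anode.onodes.append(obs)
    return obs
```

The onodes list is what lets a deleted or merged ancestry node find and clean up exactly the grid squares it touched, without scanning the whole map.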
2.2 Map cache
The main sacrifice that was made when originally designing DP-SLAM was that map accesses no longer took constant time, due to the need to search the observation vector at a
given grid square. The map cache provides a way of returning to this constant time access, by reconstructing a separate local map which is consistent with the history of map
updates for each particle. Each local map is only as large as the area currently observed,
and therefore is of a manageable size.
For a localization procedure using P particles and observing an area of A grid squares,
there is a total of O(AP ) map accesses. For the constant time accesses provided by the
map cache to be useful, the time complexity to build the map cache needs to be O(AP ).
This result can be achieved by constructing the cache in two passes.
The first pass is to iterate over all grid squares in the global map which could be within
sensor range of the robot. For each of these grid squares, the observation vector stores
all observations made of that grid square by any particle. This vector is traversed, and
for each observation, we update the corresponding local map with a pointer back to the
corresponding observation node. This creates a set of partial local maps that store pointers
to map updates, but no inherited map information. Since the size of the observation vector
can be no greater than the size of the ancestry tree, which has O(P ) nodes, the first pass
takes O(P ) time per grid square.
In the second pass we fill holes in the local maps by propagating inherited map information.
The entire ancestry tree is traced, depth first, and the local map is checked for each ancestor
node encountered. If the local map for the current ancestor node was not filled during the
first pass, then the hole is patched by inheritance from the ancestor node's parent. This will
fill any gaps in the local maps for grid squares that have been seen by any current particle.
As this pass is directly based on the size of the ancestry tree, it is also O(P ) per grid square.
Therefore, the total complexity of building the map cache is O(AP ).
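The two-pass construction can be sketched as follows. The class layout (dataclasses, dict-based local maps) is a simplified assumption; only the two-pass structure follows the text:

```python
# Two-pass map cache construction: pass 1 scatters stored observations
# into their owners' local maps, pass 2 fills holes by depth-first
# inheritance down the ancestry tree.
from dataclasses import dataclass, field

@dataclass(eq=False)
class AncestryNode:
    parent: "AncestryNode" = None
    children: list = field(default_factory=list)

@dataclass(eq=False)
class Observation:
    anode: AncestryNode

@dataclass
class GridCell:
    observations: list = field(default_factory=list)

def build_map_cache(observed_squares, global_map, root):
    """Return one local map (dict: square -> observation) per ancestry node."""
    nodes, stack = [], [root]
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    local = {n: {} for n in nodes}

    # Pass 1: the observation vector at each square has at most O(P)
    # entries, so this is O(P) per observed square.
    for sq in observed_squares:
        for obs in global_map[sq].observations:
            local[obs.anode][sq] = obs

    # Pass 2: depth-first over the ancestry tree; parents are visited
    # before children, so a hole can always be patched from the parent.
    def inherit(node):
        if node.parent is not None:
            for sq in observed_squares:
                if sq not in local[node] and sq in local[node.parent]:
                    local[node][sq] = local[node.parent][sq]
        for c in node.children:
            inherit(c)

    inherit(root)
    return local
```

After the cache is built, a particle's map lookup is a single dict access into its own local map, which is the constant-time behavior the section describes.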
For each particle, the algorithm constructs a grid of pointers to observation nodes. This
provides constant time access to the opacity values consistent with each particle's map.
Localization now becomes trivial with this representation: Laser scans are traced through
the corresponding local map, and the necessary opacity values are extracted via the pointers. With the constant time accesses afforded by the local maps, the total localization cost
in DP-SLAM is now O(AP ).
2.3 Updates and Deletions
When the observations associated with a new particle's sensor sweep are integrated into the
map, two basic steps are performed. First, a new observation is added to the observation
vector of each grid square which was visited by the particle's laser casts. Next, a pointer to
each new observation is added to this particle's onodes vector. The cost of this operation is
obviously no more than that of localization.
There are two situations which require deleting nodes from the ancestry tree. The first is
the simple case of removing a node from which the particle filter has not resampled. Each
ancestor node maintains a vector of pointers to all observations attributed to it. Therefore,
these entries can be removed from the observation vectors in the global grid in constant
time. Since there can be no more deletions than there are updates, this process has an
amortized cost of O(AP ).
The second case for deleting a node occurs when a node in the ancestry tree which has
an only child is merged with that child. This involves replacing the opacity value for the
parent with that of the child, and then removing that child's entry from the associated grid
cell's observation vector. Therefore, this process is identical to the first case, except that
each removal of an entry from the global map is preceded by a single update to the same
grid square. Since the observation vector at each grid square is not ordered, additions to
the vector can be done in constant time, and does not change the complexity from O(AP ).
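The constant-time deletion for the first case can be sketched as below. The classes are minimal stand-ins, and the list search would be replaced by a stored back-index (or doubly linked list) in a real implementation:

```python
# Case 1 deletion: remove an ancestry node from which the filter did not
# resample, unlinking each of its observations from the owning grid
# cell's unordered observation vector via swap-with-last.
from dataclasses import dataclass, field

@dataclass
class Cell:
    observations: list = field(default_factory=list)

@dataclass(eq=False)
class Obs:
    cell: Cell

@dataclass(eq=False)
class ANode:
    onodes: list = field(default_factory=list)

def remove_node(node):
    for obs in node.onodes:
        vec = obs.cell.observations
        i = vec.index(obs)   # O(1) in practice with a stored back-index
        vec[i] = vec[-1]     # unordered vector: swap with the last entry
        vec.pop()            # ...and shrink, all in constant time
    node.onodes.clear()
```

Because the observation vector carries no ordering, the swap-with-last trick is safe, which is exactly why the text notes that additions and removals do not change the O(AP) bound.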
3 Drift
A significant problem faced by current SLAM algorithms is that of drift. Small errors
can accumulate over several iterations, and while the resulting map may seem locally consistent, there could be large total errors, which become apparent after the robot closes a
large loop. In theory, drift can be avoided by some algorithms in situations where strong
linear Gaussian assumptions hold [10]. In practice, it is hard to avoid drift, either as a
consequence of violated assumptions or as a consequence of particle filtering. The best
algorithms can only extend the distance that the robot travels before experiencing drift. Errors come from (at least) three sources: insufficient particle coverage, coarse precision, and
resampling itself (particle depletion).
The first problem is a well known issue with particle filters. Given a finite number of
particles, there will be unsampled gaps in the particle coverage of the state space and the
proximity to the true state can be as coarse as the size of these gaps. This is exacerbated by
the fact that particle filters are often applied to high dimensional state spaces with Gaussian noise, making it impossible to cover unlikely (but still possible) events in the tails of
the distribution with high particle density. The second issue is coarse precision. This can occur
as a result of explicit discretization through an occupancy grid, or implicit discretization
through the use of a sensor with finite precision. Coarse precision can make minor perturbations in the state appear identical from the perspective of the sensors and the particle
weights. Finally, resampling itself can lead to drift by shifting a finite population of particles away from low probability regions of the state space. While this behavior of a particle
filter is typically viewed as a desirable reallocation of computational resources, it can shift
particles away from the true state in some cases.
The net effect of these errors can be the gradual accumulation of small errors resulting from
failure to sample, differentiate, or remember a state vector that is sufficiently close to the
true state. In practice, we have found that there exist large domains where high precision
mapping is essentially impossible with any reasonable number of particles.
4 Hierarchical SLAM
In the first part of the paper, we presented an approach to SLAM that reduced the asymptotic complexity per particle to that of pure localization. This is likely as low as can reasonably be expected and should allow the use of large numbers of particles for mapping.
However, the discussion of drift in the previous section underscores that the ability to use
large numbers of particles may not be sufficient, and we would like techniques that delay the onset of drift as long as possible. We therefore propose a hierarchical approach to
SLAM that is capable of recognizing, representing, and recovering from drift.
The basic idea is that the main sources of drift can be modeled as the cumulative effect
of a sequence of random events. Through experimentation, we can quantify the expected
amount of drift over a certain distance for a given algorithm, much in the same way that we
create a probabilistic motion model for the noise in the robot's odometry. Since the total
drift over a trajectory is assumed to be a summation of many small, largely independent
sources of error, it will be close to a Gaussian distribution.
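A quick Monte Carlo check illustrates why this Gaussian model is reasonable: if each short map segment contributes an independent error, the variances add, so the total drift spread grows as the square root of the number of segments. The per-segment sigma below is a made-up number standing in for the value one would measure experimentally for a given algorithm:

```python
# Monte Carlo: total drift = sum of many small independent per-segment
# errors, so its spread should match sqrt(segments) * step_sigma.
import math
import random

random.seed(0)
step_sigma = 0.5      # hypothetical drift per map segment (cm)
segments = 100        # segments completed over the trajectory

totals = [sum(random.gauss(0.0, step_sigma) for _ in range(segments))
          for _ in range(20000)]
mean = sum(totals) / len(totals)
spread = math.sqrt(sum((t - mean) ** 2 for t in totals) / len(totals))

predicted = step_sigma * math.sqrt(segments)   # 5.0 cm under these numbers
```

The empirical spread closely matches the prediction, which is what licenses treating the drift over a whole segment as one Gaussian perturbation in the high-level filter.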
If we view the act of completing a small map segment as a random process with noise, we
can then apply a higher level filter to the output of the map segment process in an attempt
to track the underlying state more accurately. There are two benefits to this approach.
First, it explicitly models and permits the correction of drift. Second, the coarser time
granularity of the high level process implies fewer resampling steps and fewer opportunities
for particle depletion. Thus, if we can model how much drift is expected to occur over a
small section of the robot?s trajectory, we can maintain this extra uncertainty longer, and
resolve inaccuracies or ambiguities in the map in a natural fashion.
There are some special properties of the SLAM problem that make it particularly well
suited to this approach. In the full generality of an arbitrary tracking problem, one should
view drift as a problem that affects entire trajectories through state space and the complete
belief state at any time. Sampling the space of drifts would then require sampling perturbations to the entire state vector. In this fully general case, the benefit of the hierarchical view
would be unclear, as the end result would be quite similar to adding additional noise to
the low level process. In SLAM, we can make two assumptions that simplify things. The
first is that the robot state vector is highly correlated with the remaining state variables,
and the second is that we have access to a low level mapping procedure with moderate
accuracy and local consistency. Under these assumptions, the effects of drift on low
level maps can be accurately approximated by perturbations to the endpoints of the robot
trajectory used to construct a low level map. By sampling drift only at endpoints, we will
fail to sample some of the internal structure that is possible in drifts, e.g., we will fail to
distinguish between a linear drift and a spiral pattern with the same endpoints. However,
the existence of significant, complicated drift patterns within a map segment would violate
our assumption of moderate accuracy and local consistency within our low level mapper.
To achieve a hierarchical approach to SLAM, we use a standard SLAM algorithm using
a small portion of the robot's trajectory as input for the low level mapping process. The
output is not only a distribution over maps, but also a distribution over robot trajectories.
We can treat the distribution over trajectories as a distribution over motions in the higher
level SLAM process, to which additional noise from drift is added. This allows us to use
the output from each of our small mapping efforts as the input for a new SLAM process,
working at a much higher level of time granularity.
For the high level SLAM process, we need to be careful to avoid double counting evidence. Each low level mapping process runs as an independent process initialized with an
empty map. The distribution over trajectories returned by the low level mapping process
incorporates the effects of the observations used by the low level mapper. To avoid double
counting, the high level SLAM process can only weigh the match between the new observations and the existing high level maps. In other words, all of the observations for a single
high level motion step (single low level trajectory) must be evaluated against the high level
map, before any of those observations are used to update the map. We summarize the high
level SLAM loop for each high level particle as follows:
1. Sample a high level SLAM state (high level map and robot state).
2. Perturb the sampled robot state by adding random drift.
3. Sample a low level trajectory from the distribution over trajectories returned by the low
level SLAM process.
4. Compute a high level weight by evaluating the trajectory and robot observations against the
sampled high level map, starting from the perturbed robot state.
5. Update the high level map based upon the new observations.
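The five steps above can be sketched in miniature. The one-dimensional map representation (a set of occupied cells), the scoring rule, and all numbers are illustrative stand-ins rather than the real DP-SLAM machinery; the point is the ordering, weighing against the old map (step 4) strictly before updating it (step 5):

```python
# Toy one-pass sketch of the high-level loop for each particle.
import random

def score(hmap, state, offsets):
    # Step 4: count how many new observations agree with the OLD map.
    return sum(1 for o in offsets if round(state + o) in hmap)

def update(hmap, state, offsets):
    # Step 5: only after weighing do the new observations enter the map.
    return hmap | {round(state + o) for o in offsets}

def high_level_step(particles, trajectories, offsets, drift_sigma, rng):
    out = []
    for _ in particles:
        hmap, state = rng.choice(particles)        # 1. sample a high-level state
        state += rng.gauss(0.0, drift_sigma)       # 2. perturb by modeled drift
        state += rng.choice(trajectories)          # 3. sample a low-level trajectory
        w = score(hmap, state, offsets)            # 4. weight vs. the old map
        out.append((update(hmap, state, offsets), state, w))
    return out

rng = random.Random(1)
particles = [({0, 1, 2}, 0.0), ({0, 1}, 0.0)]
stepped = high_level_step(particles, trajectories=[1.0],
                          offsets=[0, 1], drift_sigma=0.05, rng=rng)
```

Swapping steps 4 and 5 would let a particle score its observations against a map that already contains them, which is the double-counting the text warns against.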
In practice this can give a much greater improvement in accuracy over simply doubling
the resources allocated to a single level SLAM algorithm because the high level is able to
model and recover from errors much longer than would be otherwise possible with only
a single particle filter. In our implementation we used DP-SLAM at both levels of the
hierarchy to ensure a total computational complexity of O(AP ). However, there is reason
to believe that this approach could be applied to any other sampling-based SLAM method
just as effectively. We also implemented this idea with only one level of hierarchy, but
multiple levels could provide additional robustness. We felt that the size of the domains on
which we tested did not warrant any further levels.
5 Implementation and Empirical Results
Our description of the algorithm and complexity analysis assumes constant time updates to
the vectors storing information in the core DP-SLAM data structures. This can be achieved
in a straightforward manner using doubly linked lists, but a somewhat more complicated
implementation using adjustable arrays is dramatically more efficient in practice. A careful
implementation can also avoid caching maps for interior nodes of the ancestry tree.
As with previous versions of DP-SLAM, we generate many more particles than we keep at
each iteration. Evaluating a particle requires line tracing 181 laser casts. However, many
particles will have significantly lower probability than others and this can be discovered
before they are fully evaluated. Using a technique we call particle culling we use partial
scan information to identify and discard lower probability particles before they are evaluated fully. In practice, this leads to a large reduction in the number of laser casts that are
fully traced through the grid. Typically, less than one tenth of the particles generated are
resampled.
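The culling idea can be sketched as follows. The per-cast scores are precomputed here for clarity (a hypothetical interface); in DP-SLAM each cast is only traced while the particle is still alive, which is where the savings come from:

```python
# Particle culling sketch: evaluate the laser casts in chunks, dropping a
# particle as soon as its partial log-likelihood trails the best partial
# score by more than a margin.
def cull(per_cast_logs, chunk=20, margin=5.0):
    n_casts = len(next(iter(per_cast_logs.values())))
    totals = {p: 0.0 for p in per_cast_logs}
    alive = set(per_cast_logs)
    for start in range(0, n_casts, chunk):
        for p in alive:
            totals[p] += sum(per_cast_logs[p][start:start + chunk])
        best = max(totals[p] for p in alive)
        alive = {p for p in alive if totals[p] >= best - margin}
    return alive

# Hypothetical scores: a well-matched particle vs. a poor one, 181 casts.
scores = {"good": [-0.1] * 181, "bad": [-1.0] * 181}
survivors = cull(scores)
```

Here the poorly matching particle is discarded after the first chunk of 20 casts, so its remaining 161 casts are never traced.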
For a complex algorithm like DP-SLAM, asymptotic analysis may not always give a
complete picture of real world performance. Therefore, we provide a comparison of actual run times for each method on three different data logs. The particle counts provided are the minimum number of particles needed (at each level) to produce high-quality
maps reliably. The improved run time for the linear algorithm also reflects the benefits
of some improvements in our culling technique and a cleaner implementation permitted by the linear time algorithm. The quadratic code is simply too slow to run on the
Wean Hall data. Log files for these runs are available from the DP-SLAM web page:
http://www.cs.duke.edu/~parr/dpslam/. The results show a significant practical advantage for the linear code, and vast improvement, both in terms of time and number
of particles, for the hierarchical implementation.
Log          Quadratic             Linear                Hierarchical
             Particles   Minutes   Particles   Minutes   Particles (high/low)   Minutes
loop5            1500        55        1500        14       200/250                  12
loop25          11000      1345       11000       690       2000/3000               289
Wean Hall      120000       N/A      120000      2535       2000/3000               293
Finally, in Figure 1 we include sample output from the hierarchical mapper on the Wean
Hall data shown in our table. In this domain, the robot travels approximately 220m before
returning to its starting position. Each low level SLAM process was run for 75 time steps,
with an average motion of 12cm for each time step. The nonhierarchical approach can produce a very similar result, but requires at least 120,000 particles to do so reliably. (Smaller
numbers of particles produced maps with noticeable drifts and errors.) This extreme difference in particle counts and computation time demonstrates the great improvement that
can be realized with the hierarchical approach. (The Wean Hall dataset has been mapped
successfully before at low resolution using a non-hierarchical approach with run time per
iteration that grows with the number of iterations [8].)
Figure 1: CMU's Wean Hall at 4cm resolution, using hierarchical SLAM. Please zoom in
on the map using a software viewer to appreciate some of the fine detail.
6 Related Work
Other methods have attempted to preserve uncertainty for longer numbers of time steps.
One approach seeks to delay the resampling step for several iterations, so as to address
the total noise in a certain number of steps as one Gaussian with a larger variance [8]. In
general, look-ahead methods can "peek" at future observations to use the information from
later time steps to influence samples at a previous time step [3]. The HYMM approach [11]
combines different types of maps. Another way to interpret hierarchical SLAM is in terms
of a hierarchical hidden Markov model framework [5]. In a hierarchical HMM, each node
in the HMM has the potential to invoke sub-HMMs to produce a series of observations. The
main difference is that in hierarchical HMMs, there is assumed to be a single process that
can be represented in different ways. In our hierarchical SLAM approach, only the lowest
level models a physical process, while higher levels model the errors in lower levels.
7 Conclusions and Future Research
We have presented a SLAM algorithm which is the culmination of our efforts to make
multiple hypothesis mapping practical for densely populated maps. Our first algorithmic
accomplishment is to show that this requires no more effort, asymptotically, than pure localization using a particle filter. However, for mapping, the number of particles needed can
be large and can still grow to be unmanageable for large domains due to drift. We therefore
developed a method to improve the accuracy achieveable with a reasonable number of particles. This is accomplished through the use of a hierarchical particle filter. By allowing an
additional level of sampling on top of a series of small particle filters, we can successfully
maintain the necessary uncertainty to produce very accurate maps. This is due to the explicit modeling of the drift, a key process which differentiates this approach from previous
attempts to preserve uncertainty in particle filters.
The hierarchical approach to SLAM has been shown to be very useful in improving DP-SLAM performance. This would lead us to believe that similar improvements could also be
realized in applying this to other sampling based SLAM methods. SLAM is perhaps not the
only viable application of a hierarchical framework for particle filters. However, one of the
key aspects of SLAM is that the drift can easily be represented by a very low dimensional
descriptor. Other particle filter applications which have drift that must be modeled in many
more dimensions could benefit much less from this hierarchical approach.
The work of Hahnel et al. [8] has made progress in increasing efficiency and reducing
drift by using scan matching rather than pure sampling from a noisy proposal distribution.
Since much of the computation time used by DP-SLAM is spent evaluating bad particles, a
combination of DP-SLAM with scan matching could yield significant practical speedups.
Acknowledgments
This research was supported by SAIC, the Sloan foundation, and the NSF. The Wean Hall
data were graciously provided by Dirk Hahnel and Dieter Fox.
References
[1] W. Burgard, D. Fox, H. Jans, C. Matenar, and S. Thrun. Sonar-based mapping with mobile
robots using EM. In Proc. of the International Conference on Machine Learning, 1999.
[2] P. Cheeseman, P. Smith, and M. Self. Estimating uncertain spatial relationships in robotics. In
Autonomous Robot Vehicles, pages 167–193. Springer-Verlag, 1990.
[3] N. de Freitas, R. Dearden, F. Hutter, R. Morales-Menendez, J. Mutch, and D. Poole. Diagnosis
by a waiter and a Mars explorer. In IEEE Special Issue on Sequential State Estimation, pages
455–468, 2003.
[4] A. Eliazar and R. Parr. DP-SLAM 2.0. In IEEE International Conference on Robotics and
Automation (ICRA), 2004.
[5] Shai Fine, Yoram Singer, and Naftali Tishby. The hierarchical hidden markov model: Analysis
and applications. Machine Learning, 32(1):41–62, 1998.
[6] Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. Monte carlo localization:
Efficient position estimation for mobile robots. In AAAI-99, 1999.
[7] J. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In IEEE
International Symposium on Computational Intelligence in Robotics and Automation (ICRA),
pages 318–325, 2000.
Dirk Hahnel, Wolfram Burgard, Dieter Fox, and Sebastian Thrun. An efficient FastSLAM algorithm for generating maps of large-scale cyclic environments from raw laser range measurements. In Proceedings of the International Conference on Intelligent Robots and Systems, 2003.
[9] John H. Leonard and Hugh F. Durrant-Whyte. Mobile robot localization by tracking geometric
beacons. In IEEE Transactions on Robotics and Automation, pages 376–382. IEEE, June 1991.
[10] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In
IJCAI-03, Morgan Kaufmann, 2003, pages 1151–1156.
[11] J. Nieto, J. Guivant, and E. Nebot. The HYbrid Metric Maps (HYMMS): A novel map representation for denseSLAM. In IEEE International Conference on Robotics and Automation
(ICRA), 2004.
[12] S. Thrun. A probabilistic online mapping algorithm for teams of mobile robots. International
Journal of Robotics Research, 20(5):335–363, 2001.
Divergences, surrogate loss functions and
experimental design
XuanLong Nguyen
University of California
Berkeley, CA 94720
[email protected]
Martin J. Wainwright
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
University of California
Berkeley, CA 94720
[email protected]
Abstract
In this paper, we provide a general theorem that establishes a correspondence between surrogate loss functions in classification and the family
of f -divergences. Moreover, we provide constructive procedures for
determining the f -divergence induced by a given surrogate loss, and
conversely for finding all surrogate loss functions that realize a given
f -divergence. Next we introduce the notion of universal equivalence
among loss functions and corresponding f -divergences, and provide necessary and sufficient conditions for universal equivalence to hold. These
ideas have applications to classification problems that also involve a component of experiment design; in particular, we leverage our results to
prove consistency of a procedure for learning a classifier under decentralization requirements.
1 Introduction
A unifying theme in the recent literature on classification is the notion of a surrogate loss
function, a convex upper bound on the 0-1 loss. Many practical classification algorithms
can be formulated in terms of the minimization of surrogate loss functions; well-known
examples include the support vector machine (hinge loss) and Adaboost (exponential loss).
Significant progress has been made on the theoretical front by analyzing the general statistical consequences of using surrogate loss functions [e.g., 2, 10, 13].
These recent developments have an interesting historical antecedent. Working in the context of experimental design, researchers in the 1960s recast the (intractable) problem of
minimizing the probability of classification error in terms of the maximization of various
surrogate functions [e.g., 5, 8]. Examples of experimental design include the choice of a
quantizer as a preprocessor for a classifier [12], or the choice of a "signal set" for a radar
system [5]. The surrogate functions that were used included the Hellinger distance and various forms of KL divergence; maximization of these functions was proposed as a criterion
for the choice of a design. Theoretical support for this approach was provided by a classical
theorem on the comparison of experiments due to Blackwell [3]. An important outcome
of this line of work was the definition of a general family of "f-divergences" (also known
as "Ali-Silvey distances"), which includes Hellinger distance and KL divergence as special
cases [1, 4].
In broad terms, the goal of the current paper is to bring together these two literatures, in
particular by establishing a correspondence between the family of surrogate loss functions
and the family of f-divergences. Several specific goals motivate us in this regard: (1)
different f-divergences are related by various well-known inequalities [11], so that a correspondence between loss functions and f-divergences would allow these inequalities to
be harnessed in analyzing surrogate loss functions; (2) a correspondence could allow the
definition of interesting equivalence classes of losses or divergences; and (3) the problem
of experimental design, which motivated the classical research on f-divergences, provides
new venues for applying the loss function framework from machine learning. In particular,
one natural extension, and one which we explore towards the end of this paper, is in requiring consistency not only in the choice of an optimal discriminant function but also in
the choice of an optimal experiment design.
The main technical contribution of this paper is to state and prove a general theorem relating surrogate loss functions and f-divergences.¹ We show that the correspondence is quite
strong: any surrogate loss induces a corresponding f-divergence, and any f-divergence
satisfying certain conditions corresponds to a family of surrogate loss functions. Moreover,
exploiting tools from convex analysis, we provide a constructive procedure for finding loss
functions from f-divergences. We also introduce and analyze a notion of universal equivalence among loss functions (and corresponding f-divergences). Finally, we present an application of these ideas to the problem of proving consistency of classification algorithms
with an additional decentralization requirement.
2 Background and elementary results
Consider a covariate $X \in \mathcal{X}$, where $\mathcal{X}$ is a compact topological space, and a random variable $Y \in \mathcal{Y} := \{-1, +1\}$. The space $\mathcal{X} \times \mathcal{Y}$ is assumed to be endowed with a Borel regular probability measure $P$. In this paper, we consider a variant of the standard classification problem, in which the decision-maker, rather than having direct access to $X$, only observes some variable $Z \in \mathcal{Z}$ that is obtained via the conditional probability $Q(Z|X)$. The stochastic map $Q$ is referred to as an experiment in statistics; in the signal processing literature, where $Z$ is generally taken to be discrete, it is referred to as a quantizer. We let $\mathcal{Q}$ denote the space of all stochastic $Q$ and let $\mathcal{Q}_0$ denote its deterministic subset.

Given a fixed experiment $Q$, we can formulate a standard binary classification problem as one of finding a measurable function $\gamma \in \Gamma := \{\mathcal{Z} \to \mathbb{R}\}$ that minimizes the Bayes risk $P(Y \neq \mathrm{sign}(\gamma(Z)))$. Our focus is the broader question of determining both the classifier $\gamma \in \Gamma$, as well as the experiment choice $Q \in \mathcal{Q}$, so as to minimize the Bayes risk.
The Bayes risk corresponds to the expectation of the 0-1 loss. Given the non-convexity of this loss function, it is natural to consider a surrogate loss function $\phi$ that we optimize in place of the 0-1 loss. We refer to the quantity $R_\phi(\gamma, Q) := \mathbb{E}\,\phi(Y\gamma(Z))$ as the $\phi$-risk. For each fixed quantization rule $Q$, the optimal $\phi$-risk (as a function of $Q$) is defined as follows:
$$R_\phi(Q) := \inf_{\gamma \in \Gamma} R_\phi(\gamma, Q). \qquad (1)$$
Given priors $q = P(Y = -1)$ and $p = P(Y = 1)$, define nonnegative measures $\mu$ and $\pi$:
$$\mu(z) = P(Y = 1, Z = z) = p \int_x Q(z|x)\, dP(x|Y = 1)$$
$$\pi(z) = P(Y = -1, Z = z) = q \int_x Q(z|x)\, dP(x|Y = -1).$$
1 Proofs are omitted from this manuscript for lack of space; see the long version of the paper [7] for proofs of all of our results.
As a consequence of Lyapunov's theorem, the space of $\{(\mu, \pi)\}$ obtained by varying $Q \in \mathcal{Q}$ (or $\mathcal{Q}_0$) is both compact and convex (see [12] for details). For simplicity, we assume that the space $\mathcal{Q}$ of quantizers $Q$ is restricted such that both $\mu$ and $\pi$ are strictly positive measures.
One approach to choosing $Q$ is to define an f-divergence between $\mu$ and $\pi$; indeed this is the classical approach referred to earlier [e.g., 8]. Rather than following this route, however, we take an alternative path, setting up the problem in terms of $\phi$-risk and optimizing out the discriminant function $\gamma$. Note in particular that the $\phi$-risk can be represented in terms of the measures $\mu$ and $\pi$ as follows:
$$R_\phi(\gamma, Q) = \sum_z \phi(\gamma(z))\,\mu(z) + \phi(-\gamma(z))\,\pi(z). \qquad (2)$$
This representation allows us to compute the optimal value of $\gamma(z)$ for all $z \in \mathcal{Z}$, as well as the optimal $\phi$-risk for a fixed $Q$. We illustrate this calculation with several examples:

0-1 loss. If $\phi$ is the 0-1 loss, then $\gamma(z) = \mathrm{sign}(\mu(z) - \pi(z))$. Thus the optimal Bayes risk given a fixed $Q$ takes the form:
$$R_{\mathrm{bayes}}(Q) = \sum_{z \in \mathcal{Z}} \min\{\mu(z), \pi(z)\} = \frac{1}{2} - \frac{1}{2}\sum_{z \in \mathcal{Z}} |\mu(z) - \pi(z)| =: \frac{1}{2}\bigl(1 - V(\mu, \pi)\bigr),$$
where $V(\mu, \pi)$ denotes the variational distance between the two measures $\mu$ and $\pi$.
Hinge loss. Let $\phi_{\mathrm{hinge}}(y\gamma(z)) = (1 - y\gamma(z))_+$. In this case $\gamma(z) = \mathrm{sign}(\mu(z) - \pi(z))$ and the optimal risk takes the form:
$$R_{\mathrm{hinge}}(Q) = \sum_{z \in \mathcal{Z}} 2\min\{\mu(z), \pi(z)\} = 1 - \sum_{z \in \mathcal{Z}} |\mu(z) - \pi(z)| = 1 - V(\mu, \pi) = 2R_{\mathrm{bayes}}(Q).$$
Least squares loss. Letting $\phi_{\mathrm{sqr}}(y\gamma(z)) = (1 - y\gamma(z))^2$, we have $\gamma(z) = \frac{\mu(z) - \pi(z)}{\mu(z) + \pi(z)}$. The optimal risk takes the form:
$$R_{\mathrm{sqr}}(Q) = \sum_{z \in \mathcal{Z}} \frac{4\mu(z)\pi(z)}{\mu(z) + \pi(z)} = 1 - \sum_{z \in \mathcal{Z}} \frac{(\mu(z) - \pi(z))^2}{\mu(z) + \pi(z)} =: 1 - \Delta(\mu, \pi),$$
where $\Delta(\mu, \pi)$ denotes the triangular discrimination distance.
Logistic loss. Letting $\phi_{\mathrm{log}}(y\gamma(z)) := \log\bigl(1 + e^{-y\gamma(z)}\bigr)$, we have $\gamma(z) = \log\frac{\mu(z)}{\pi(z)}$. The optimal risk for logistic loss takes the form:
$$R_{\mathrm{log}}(Q) = \sum_{z \in \mathcal{Z}} \mu(z)\log\frac{\mu(z) + \pi(z)}{\mu(z)} + \pi(z)\log\frac{\mu(z) + \pi(z)}{\pi(z)} = \log 2 - KL\Bigl(\mu \,\Big\|\, \tfrac{\mu + \pi}{2}\Bigr) - KL\Bigl(\pi \,\Big\|\, \tfrac{\mu + \pi}{2}\Bigr) =: \log 2 - C(\mu, \pi),$$
where $C(U, V)$ denotes the capacitory discrimination distance.
Exponential loss. Letting $\phi_{\mathrm{exp}}(y\gamma(z)) = \exp(-y\gamma(z))$, we have $\gamma(z) = \frac{1}{2}\log\frac{\mu(z)}{\pi(z)}$. The optimal risk for exponential loss takes the form:
$$R_{\mathrm{exp}}(Q) = \sum_{z \in \mathcal{Z}} 2\sqrt{\mu(z)\pi(z)} = 1 - \sum_{z \in \mathcal{Z}} \bigl(\sqrt{\mu(z)} - \sqrt{\pi(z)}\bigr)^2 = 1 - 2h^2(\mu, \pi),$$
where $h(\mu, \pi)$ denotes the Hellinger distance between measures $\mu$ and $\pi$.
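The five closed-form risks above can be checked numerically. The sketch below, with arbitrary toy measures chosen purely for illustration, evaluates each formula and confirms the stated identities, e.g. $R_{\mathrm{hinge}} = 2R_{\mathrm{bayes}}$ and $R_{\mathrm{exp}} = 1 - 2h^2(\mu, \pi)$.

```python
import numpy as np

# Toy measures mu(z), pi(z) on a three-point Z; jointly they sum to one
# because they partition the mass of P(Y, Z).
mu = np.array([0.15, 0.05, 0.30])
pi = np.array([0.10, 0.25, 0.15])

R_bayes = np.minimum(mu, pi).sum()                       # 0-1 loss
V = np.abs(mu - pi).sum()                                # variational distance
R_hinge = (2 * np.minimum(mu, pi)).sum()                 # hinge loss
R_sqr = (4 * mu * pi / (mu + pi)).sum()                  # least squares
Delta = ((mu - pi) ** 2 / (mu + pi)).sum()               # triangular discrim.
R_log = (mu * np.log((mu + pi) / mu)
         + pi * np.log((mu + pi) / pi)).sum()            # logistic loss
R_exp = (2 * np.sqrt(mu * pi)).sum()                     # exponential loss
h2 = 0.5 * ((np.sqrt(mu) - np.sqrt(pi)) ** 2).sum()      # squared Hellinger
```

Each optimal risk is one minus the corresponding divergence, which previews the general correspondence developed in the next section.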
All of the distances given above (e.g., variational, Hellinger) are particular instances of f-divergences. This fact points to an interesting correspondence between optimized $\phi$-risks and f-divergences. How general is this correspondence?
3 The correspondence between loss functions and f-divergences

In order to resolve this question, we begin with precise definitions of f-divergences and surrogate loss functions. An f-divergence functional is defined as follows [1, 4]:

Definition 1. Given any continuous convex function $f : [0, +\infty) \to \mathbb{R} \cup \{+\infty\}$, the f-divergence between measures $\mu$ and $\pi$ is given by $I_f(\mu, \pi) := \sum_z \pi(z) f\bigl(\frac{\mu(z)}{\pi(z)}\bigr)$.
For instance, the variational distance is given by $f(u) = |u - 1|$, KL divergence by $f(u) = u\log u$, triangular discrimination by $f(u) = (u-1)^2/(u+1)$, and Hellinger distance by $f(u) = \frac{1}{2}(\sqrt{u} - 1)^2$.
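As a quick sanity check on Definition 1, the sketch below (toy measures assumed for illustration) evaluates $I_f$ for each of the four convex functions just listed and matches the result against a direct formula for the corresponding distance.

```python
import numpy as np

def I_f(f, mu, pi):
    """f-divergence I_f(mu, pi) = sum_z pi(z) f(mu(z)/pi(z))."""
    u = mu / pi
    return (pi * f(u)).sum()

f_var = lambda u: np.abs(u - 1.0)                # variational distance
f_kl = lambda u: u * np.log(u)                   # KL(mu || pi)
f_tri = lambda u: (u - 1.0) ** 2 / (u + 1.0)     # triangular discrimination
f_hel = lambda u: 0.5 * (np.sqrt(u) - 1.0) ** 2  # squared Hellinger

mu = np.array([0.15, 0.05, 0.30])
pi = np.array([0.10, 0.25, 0.15])
```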
Surrogate loss $\phi$. First, we require that any surrogate loss function $\phi$ is continuous and convex. Second, the function $\phi$ must be classification-calibrated [2], meaning that for any $a, b \geq 0$ and $a \neq b$, $\inf_{\alpha:\alpha(a-b)<0} \phi(\alpha)a + \phi(-\alpha)b > \inf_{\alpha \in \mathbb{R}} \phi(\alpha)a + \phi(-\alpha)b$. It can be shown [2] that in the convex case $\phi$ is classification-calibrated if and only if it is differentiable at 0 and $\phi'(0) < 0$. Lastly, let $\alpha^* = \inf\{\alpha : \phi(\alpha) = \inf\phi\}$. If $\alpha^* < +\infty$, then for any $\epsilon > 0$, we require that $\phi(\alpha^* - \epsilon) \geq \phi(\alpha^* + \epsilon)$. The interpretation of the last assumption is that one should penalize deviations away from $\alpha^*$ in the negative direction at least as strongly as deviations in the positive direction; this requirement is intuitively reasonable given the margin-based interpretation of $\gamma$.
From $\phi$-risk to f-divergence. We begin with a simple result that formalizes how any $\phi$-risk induces a corresponding f-divergence. More precisely, the following lemma proves that the optimal $\phi$-risk for a fixed $Q$ can be written as the negative of an f-divergence.

Lemma 2. For each fixed $Q$, let $\gamma_Q$ denote the optimal decision rule. The $\phi$-risk for $(Q, \gamma_Q)$ is an f-divergence between $\mu$ and $\pi$ for some convex function $f$:
$$R_\phi(Q) = -I_f(\mu, \pi). \qquad (3)$$
Proof. The optimal $\phi$-risk takes the form:
$$R_\phi(Q) = \sum_z \inf_\alpha \bigl(\phi(\alpha)\mu(z) + \phi(-\alpha)\pi(z)\bigr) = \sum_{z \in \mathcal{Z}} \pi(z)\,\inf_\alpha \Bigl(\phi(-\alpha) + \phi(\alpha)\frac{\mu(z)}{\pi(z)}\Bigr).$$
For each $z$ let $u = \frac{\mu(z)}{\pi(z)}$; then $\inf_\alpha(\phi(-\alpha) + \phi(\alpha)u)$ is a concave function of $u$ (since the pointwise minimum over a set of linear functions is concave). Thus, the claim follows by defining (for $u \in \mathbb{R}$)
$$f(u) := -\inf_\alpha \bigl(\phi(-\alpha) + \phi(\alpha)u\bigr). \qquad (4)$$
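Equation (4) can be probed numerically: a grid search over $\alpha$ approximates the infimum, and for the hinge loss the induced $f$ should agree with $f(u) = -2\min(u, 1)$, matching the variational-distance form computed for the hinge loss in Section 2. The grid bounds and tolerances below are illustrative assumptions.

```python
import numpy as np

def induced_f(phi, u, alphas):
    """Approximate f(u) = -inf_alpha (phi(-alpha) + phi(alpha) * u)
    by searching over a finite grid of alpha values."""
    return -np.min(phi(-alphas) + phi(alphas) * u)

phi_hinge = lambda a: np.maximum(0.0, 1.0 - a)
alphas = np.linspace(-5.0, 5.0, 20001)   # step 5e-4; contains +/-1 exactly

us = (0.1, 0.5, 1.0, 2.0, 7.0)
vals = [induced_f(phi_hinge, u, alphas) for u in us]
```

For the hinge loss the infimum is attained at a kink of the piecewise-linear objective, so the grid recovers it exactly.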
From f-divergence to $\phi$-risk. In the remainder of this section, we explore the converse of Lemma 2. Given a divergence $I_f(\mu, \pi)$ for some convex function $f$, does there exist a loss function $\phi$ for which $R_\phi(Q) = -I_f(\mu, \pi)$? In the following, we provide a precise characterization of the set of f-divergences that can be realized in this way, as well as a constructive procedure for determining all $\phi$ that realize a given f-divergence.
Our method requires the introduction of several intermediate functions. First, let us define, for each $\phi$, the inverse mapping $\phi^{-1}(\beta) := \inf\{\alpha : \phi(\alpha) \leq \beta\}$, where $\inf \emptyset := +\infty$. Using the function $\phi^{-1}$, we then define a new function $\Psi : \mathbb{R} \to \mathbb{R}$ by
$$\Psi(\beta) := \begin{cases} \phi(-\phi^{-1}(\beta)) & \text{if } \phi^{-1}(\beta) \in \mathbb{R}, \\ +\infty & \text{otherwise.} \end{cases} \qquad (5)$$
Note that the domain of $\Psi$ is $\mathrm{Dom}(\Psi) = \{\beta \in \mathbb{R} : \phi^{-1}(\beta) \in \mathbb{R}\}$. Define
$$\beta_1 := \inf\{\beta : \Psi(\beta) < +\infty\} \quad \text{and} \quad \beta_2 := \inf\{\beta : \Psi(\beta) = \inf\phi\}. \qquad (6)$$
It is simple to check that $\inf\Psi = \inf\phi = \phi(\alpha^*)$, and $\beta_1 = \phi(\alpha^*)$, $\beta_2 = \phi(-\alpha^*)$. Furthermore, $\Psi(\beta_2) = \phi(\alpha^*) = \beta_1$ and $\Psi(\beta_1) = \phi(-\alpha^*) = \beta_2$. With this set-up, the following lemma captures several important properties of $\Psi$:

Lemma 3.
(a) $\Psi$ is strictly decreasing in $(\beta_1, \beta_2)$. If $\phi$ is decreasing, then $\Psi$ is also decreasing in $(-\infty, +\infty)$. In addition, $\Psi(\beta) = +\infty$ for $\beta < \beta_1$.
(b) $\Psi$ is convex in $(-\infty, \beta_2]$. If $\phi$ is decreasing, then $\Psi$ is convex in $(-\infty, +\infty)$.
(c) $\Psi$ is lower semi-continuous, and continuous in its domain.
(d) There exists $u^* \in (\beta_1, \beta_2)$ such that $\Psi(u^*) = u^*$.
(e) There holds $\Psi(\Psi(\beta)) = \beta$ for all $\beta \in (\beta_1, \beta_2)$.
The connection between $\Psi$ and an f-divergence arises from the following fact. Given the definition (5) of $\Psi$, it is possible to show that
$$f(u) = \sup_{\beta \in \mathbb{R}} \bigl(-\beta u - \Psi(\beta)\bigr) = \Psi^*(-u), \qquad (7)$$
where $\Psi^*$ denotes the conjugate dual of the function $\Psi$. Hence, if $\Psi$ is a lower semicontinuous convex function, it is possible to recover $\Psi$ from $f$ by means of convex duality [9]: $\Psi(\beta) = f^*(-\beta)$. Thus, equation (5) provides a means for recovering a loss function $\phi$ from $\Psi$. Indeed, the following theorem provides a constructive procedure for finding all such $\phi$ when $\Psi$ satisfies the necessary conditions specified in Lemma 3:
Theorem 4. (a) Given a lower semicontinuous convex function $f : \mathbb{R} \to \mathbb{R}$, define:
$$\Psi(\beta) = f^*(-\beta). \qquad (8)$$
If $\Psi$ is a decreasing function satisfying the properties specified in parts (c), (d) and (e) of Lemma 3, then there exist convex continuous loss functions $\phi$ for which (3) and (4) hold.
(b) More precisely, all such functions $\phi$ are of the form: for any $\alpha \geq 0$,
$$\phi(\alpha) = \Psi(g(\alpha + u^*)) \quad \text{and} \quad \phi(-\alpha) = g(\alpha + u^*), \qquad (9)$$
where $u^*$ satisfies $\Psi(u^*) = u^*$ for some $u^* \in (\beta_1, \beta_2)$ and $g : [u^*, +\infty) \to \mathbb{R}$ is any increasing continuous convex function such that $g(u^*) = u^*$. Moreover, $g$ is differentiable at $u^*+$ and $g'(u^*+) > 0$.
One interesting consequence of Theorem 4 is that any realizable f-divergence can in fact be obtained from a fairly large set of $\phi$ loss functions. More precisely, examining the statement of Theorem 4(b) reveals that for $\alpha \leq 0$, we are free to choose a function $g$ that must satisfy only mild conditions; given a choice of $g$, then $\phi$ is specified for $\alpha > 0$ accordingly by equation (9). We describe below how the Hellinger distance, for instance, is realized not only by the exponential loss (as described earlier), but also by many other surrogate loss functions. Additional examples can be found in [7].
Illustrative examples. Consider Hellinger distance, which is an f-divergence2 with $f(u) = -2\sqrt{u}$. Augment the domain of $f$ with $f(u) = +\infty$ for $u < 0$. Following the prescription of Theorem 4(a), we first recover $\Psi$ from $f$:
$$\Psi(\beta) = f^*(-\beta) = \sup_{u \in \mathbb{R}} \bigl(-\beta u - f(u)\bigr) = \begin{cases} 1/\beta & \text{when } \beta > 0 \\ +\infty & \text{otherwise.} \end{cases}$$
Clearly, $u^* = 1$. Now if we choose $g(u) = e^{u-1}$, then we obtain the exponential loss $\phi(\alpha) = \exp(-\alpha)$. However, making the alternative choice $g(u) = u$, we obtain the function $\phi(\alpha) = 1/(\alpha + 1)$ and $\phi(-\alpha) = \alpha + 1$, which also realizes the Hellinger distance.
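Both claims in this example can be verified numerically: the conjugate $\Psi(\beta) = f^*(-\beta)$ for $f(u) = -2\sqrt{u}$ should equal $1/\beta$ on $\beta > 0$, and both candidate losses should induce the same $f(u) = -2\sqrt{u}$ through equation (4). The grids and tolerances are illustrative assumptions.

```python
import numpy as np

us = np.linspace(0.0, 50.0, 500001)      # grid over u >= 0 (f = +inf below 0)

def Psi(beta):
    """Psi(beta) = f*(-beta) = sup_u (-beta*u - f(u)) for f(u) = -2*sqrt(u)."""
    return np.max(-beta * us + 2.0 * np.sqrt(us))

# The two losses of the example, extended to all of R:
phi_exp = lambda a: np.exp(-a)                                  # g(u) = e^(u-1)
phi_rat = lambda a: np.where(a >= 0, 1.0 / (a + 1.0), 1.0 - a)  # g(u) = u

alphas = np.linspace(-20.0, 20.0, 400001)

def induced_f(phi, u):
    """Equation (4): f(u) = -inf_alpha (phi(-alpha) + phi(alpha)*u)."""
    return -np.min(phi(-alphas) + phi(alphas) * u)
```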
Recall that we have shown previously that the 0-1 loss induces the variational distance, which can be expressed as an f-divergence with $f_{\mathrm{var}}(u) = -2\min(u, 1)$ for $u \geq 0$. It is thus of particular interest to determine other loss functions that also lead to the variational distance. If we augment the function $f_{\mathrm{var}}$ by defining $f_{\mathrm{var}}(u) = +\infty$ for $u < 0$, then we can recover $\Psi$ from $f_{\mathrm{var}}$ as follows:
$$\Psi(\beta) = f_{\mathrm{var}}^*(-\beta) = \sup_{u \in \mathbb{R}} \bigl(-\beta u - f_{\mathrm{var}}(u)\bigr) = \begin{cases} (2 - \beta)_+ & \text{when } \beta \geq 0 \\ +\infty & \text{when } \beta < 0. \end{cases}$$
2 We consider the f-divergences for two convex functions $f_1$ and $f_2$ to be equivalent if $f_1$ and $f_2$ are related by a linear term, i.e., $f_1 = cf_2 + au + b$ for some constants $c > 0$, $a$, $b$, because then $I_{f_1}$ and $I_{f_2}$ differ by a constant.
Clearly $u^* = 1$. Choosing $g(u) = u$ leads to the hinge loss $\phi(\alpha) = (1 - \alpha)_+$, which is consistent with our earlier findings. Making the alternative choice $g(u) = e^{u-1}$ leads to a rather different loss, namely $\phi(\alpha) = (2 - e^{\alpha})_+$ for $\alpha \geq 0$ and $\phi(\alpha) = e^{-\alpha}$ for $\alpha < 0$, that also realizes the variational distance.
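As a check on this less familiar loss, the sketch below (grid-based, with illustrative tolerances) plugs it into equation (4) and confirms that it induces the same $f_{\mathrm{var}}(u) = -2\min(u, 1)$ as the hinge loss.

```python
import numpy as np

def phi_alt(a):
    """phi(a) = (2 - e^a)_+ for a >= 0 and e^(-a) for a < 0."""
    a = np.asarray(a, dtype=float)
    return np.where(a >= 0, np.maximum(0.0, 2.0 - np.exp(a)), np.exp(-a))

phi_hinge = lambda a: np.maximum(0.0, 1.0 - a)
alphas = np.linspace(-10.0, 10.0, 200001)

def induced_f(phi, u):
    # equation (4): f(u) = -inf_alpha (phi(-alpha) + phi(alpha)*u)
    return -np.min(phi(-alphas) + phi(alphas) * u)
```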
Using Theorem 4 it can be shown that an f-divergence is realizable by a margin-based surrogate loss if and only if it is symmetric [7]. Hence, the list of non-realizable f-divergences includes the KL divergence $KL(\mu\|\pi)$ (as well as $KL(\pi\|\mu)$). The symmetric KL divergence $KL(\mu\|\pi) + KL(\pi\|\mu)$ is a realizable f-divergence, and Theorem 4 allows us to construct all $\phi$ losses that realize it. One of them turns out to have the simple closed form $\phi(\alpha) = e^{-\alpha} - \alpha$, but obtaining it requires some non-trivial calculations [7].
4 On comparison of loss functions and quantization schemes
The previous section was devoted to the study of the correspondence between f-divergences and the optimal $\phi$-risk $R_\phi(Q)$ for a fixed experiment $Q$. Our ultimate goal, however, is that of choosing an optimal $Q$, a problem known as experimental design in the statistics literature [3]. One concrete application is the design of quantizers for performing decentralized detection [12, 6] in a sensor network.

In this section, we address the experiment design problem via the joint optimization of $\phi$-risk (or more precisely, its empirical version) over both the decision $\gamma$ and the choice of experiment $Q$ (hereafter referred to as a quantizer). This procedure raises a natural theoretical question: for what loss functions $\phi$ does such joint optimization lead to minimum Bayes risk? Note that the minimum here is taken over both the decision rule $\gamma$ and the space of experiments $\mathcal{Q}$, so that this question is not covered by standard consistency results [13, 10, 2]. Here we describe how the results of the previous section can be leveraged to resolve this issue of consistency.
4.1 Universal equivalence
The connection between f-divergences and 0-1 loss can be traced back to seminal work on the comparison of experiments [3]. Formally, we say that the quantization scheme $Q_1$ dominates $Q_2$ if $R_{\mathrm{bayes}}(Q_1) \leq R_{\mathrm{bayes}}(Q_2)$ for any prior probability $q \in (0, 1)$. We have the following theorem [3] (see also [7] for a short proof):

Theorem 5. $Q_1$ dominates $Q_2$ iff $I_f(\mu^{Q_1}, \pi^{Q_1}) \geq I_f(\mu^{Q_2}, \pi^{Q_2})$ for all convex functions $f$. The superscripts denote the dependence of $\mu$ and $\pi$ on the quantizer rules $Q_1, Q_2$.
Using Lemma 2, we can establish the following:

Corollary 6. $Q_1$ dominates $Q_2$ iff $R_\phi(Q_1) \leq R_\phi(Q_2)$ for any surrogate loss $\phi$.

One implication of Corollary 6 is that if $R_\phi(Q_1) \leq R_\phi(Q_2)$ for some loss function $\phi$, then $R_{\mathrm{bayes}}(Q_1) \leq R_{\mathrm{bayes}}(Q_2)$ for some set of prior probabilities on the labels $Y$. This fact justifies the use of a surrogate $\phi$-loss as a proxy for the 0-1 loss, at least for a certain subset of prior probabilities. Typically, however, the goal is to select the optimal experiment $Q$ for a pre-specified set of priors, in which context this implication is of limited use. We are thus motivated to consider a different method of determining which loss functions (or equivalently, f-divergences) lead to the same optimal experimental design as the 0-1 loss (respectively, the variational distance). More generally, we are interested in comparing two arbitrary loss functions $\phi_1$ and $\phi_2$, with corresponding divergences induced by $f_1$ and $f_2$ respectively:
Definition 7. The surrogate loss functions $\phi_1$ and $\phi_2$ are universally equivalent, denoted by $\phi_1 \stackrel{u}{\approx} \phi_2$ (and $f_1 \stackrel{u}{\approx} f_2$), if for any $P(X, Y)$ and quantization rules $Q_1, Q_2$, there holds:
$$R_{\phi_1}(Q_1) \leq R_{\phi_1}(Q_2) \iff R_{\phi_2}(Q_1) \leq R_{\phi_2}(Q_2). \qquad (10)$$
The following result provides necessary and sufficient conditions for universal equivalence:

Theorem 8. Suppose that $f_1$ and $f_2$ are differentiable a.e., convex functions that map $[0, +\infty)$ to $\mathbb{R}$. Then $f_1 \stackrel{u}{\approx} f_2$ if and only if $f_1(u) = cf_2(u) + au + b$ for some constants $a, b \in \mathbb{R}$ and $c > 0$.

If we restrict our attention to convex and differentiable a.e. functions $f$, then it follows that all f-divergences universally equivalent to the variational distance must have the form
$$f(u) = -c\min(u, 1) + au + b \quad \text{with } c > 0. \qquad (11)$$
As a consequence, the only $\phi$-loss functions universally equivalent to the 0-1 loss are those that induce an f-divergence of this form (11). One well-known example of such a function is the hinge loss; more generally, Theorem 4 allows us to construct all such $\phi$.
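Theorem 8 can be illustrated empirically: if $f_2 = cf_1 + au + b$ with $c > 0$, then for a fixed prior the two divergences differ by the positive factor $c$ plus a constant $ap + bq$, so they order every pair of experiments identically. The random-experiment generator below is an illustrative assumption standing in for different quantizers applied to the same $P(X, Y)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def I_f(f, mu, pi):
    return (pi * f(mu / pi)).sum()

f1 = lambda u: -2.0 * np.minimum(u, 1.0)      # variational-distance f
f2 = lambda u: 3.0 * f1(u) + 0.7 * u - 1.2    # universally equivalent to f1

def random_experiment(p=0.4, k=4):
    """A random (mu, pi) pair with fixed total masses p and 1 - p,
    mimicking different quantizers Q applied to the same P(X, Y)."""
    return p * rng.dirichlet(np.ones(k)), (1 - p) * rng.dirichlet(np.ones(k))

gaps = []
for _ in range(200):
    muA, piA = random_experiment()
    muB, piB = random_experiment()
    d1 = I_f(f1, muA, piA) - I_f(f1, muB, piB)
    d2 = I_f(f2, muA, piA) - I_f(f2, muB, piB)
    gaps.append((d1, d2))
```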
4.2 Consistency in experimental design
The notion of universal equivalence might appear quite restrictive because condition (10) must hold for any underlying probability measure $P(X, Y)$. However, this is precisely what we need when $P(X, Y)$ is unknown. Assume that the knowledge about $P(X, Y)$ comes from an empirical data sample $(x_i, y_i)_{i=1}^n$.

Consider any algorithm (such as that proposed by Nguyen et al. [6]) that involves choosing a classifier-quantizer pair $(\gamma, Q) \in \Gamma \times \mathcal{Q}$ by minimizing an empirical version of the $\phi$-risk:
$$\hat{R}_\phi(\gamma, Q) := \frac{1}{n}\sum_{i=1}^n \sum_z \phi(y_i\gamma(z))\, Q(z|x_i).$$
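For a finite covariate space and quantization alphabet, the empirical $\phi$-risk above reduces to an array computation. The sketch below is an illustrative implementation; the function name and the tiny dataset are assumptions, not part of the paper.

```python
import numpy as np

def empirical_phi_risk(phi, gamma, Q, xs, ys):
    """(1/n) * sum_i sum_z phi(y_i * gamma(z)) * Q(z | x_i).

    phi: vectorized loss; gamma: array gamma(z) over levels z;
    Q: array with Q[z, x] = Q(z | x); xs, ys: sample arrays.
    """
    margins = np.outer(ys, gamma)          # [i, z] -> y_i * gamma(z)
    weights = Q[:, xs].T                   # [i, z] -> Q(z | x_i)
    return (phi(margins) * weights).sum() / len(xs)

phi_hinge = lambda m: np.maximum(0.0, 1.0 - m)
gamma = np.array([1.0, -1.0])              # discriminant values per level
Q = np.eye(2)                              # deterministic quantizer z = x
risk = empirical_phi_risk(phi_hinge, gamma, Q,
                          xs=np.array([0, 1, 0]),
                          ys=np.array([-1, -1, 1]))
```

In the toy data only the first sample is misclassified with margin $-1$, contributing hinge loss $2$, so the empirical risk is $2/3$.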
More formally, suppose that $(C_n, D_n)$ is a sequence of increasing compact function classes such that $C_1 \subseteq C_2 \subseteq \cdots \subseteq \Gamma$ and $D_1 \subseteq D_2 \subseteq \cdots \subseteq \mathcal{Q}$. Let $(\gamma_n^*, Q_n^*)$ be an optimal solution to the minimization problem $\min_{(\gamma, Q) \in (C_n, D_n)} \hat{R}_\phi(\gamma, Q)$, and let $R^*_{\mathrm{bayes}}$ denote the minimum Bayes risk achieved over the space of decision rules $(\gamma, Q) \in (\Gamma, \mathcal{Q})$. We call $R_{\mathrm{bayes}}(\gamma_n^*, Q_n^*) - R^*_{\mathrm{bayes}}$ the Bayes error of our estimation procedure. We say that such a procedure is universally consistent if the Bayes error tends to 0 as $n \to \infty$, i.e., for any (unknown) Borel probability measure $P$ on $\mathcal{X} \times \mathcal{Y}$,
$$\lim_{n \to \infty} R_{\mathrm{bayes}}(\gamma_n^*, Q_n^*) - R^*_{\mathrm{bayes}} = 0 \quad \text{in probability.}$$
When the surrogate loss $\phi$ is universally equivalent to the 0-1 loss, we can prove that suitable learning procedures are indeed universally consistent. Our approach is based on the framework developed by various authors [13, 10, 2] for the case of ordinary classification, and uses the strategy of decomposing the Bayes error into a combination of (a) approximation error introduced by the bias of the function classes $C_n \subseteq \Gamma$: $E_0(C_n, D_n) = \inf_{(\gamma, Q) \in (C_n, D_n)} R_\phi(\gamma, Q) - R^*_\phi$, where $R^*_\phi := \inf_{(\gamma, Q) \in (\Gamma, \mathcal{Q})} R_\phi(\gamma, Q)$; and (b) estimation error introduced by the variance of using a finite sample size $n$: $E_1(C_n, D_n) = \mathbb{E}\sup_{(\gamma, Q) \in (C_n, D_n)} |\hat{R}_\phi(\gamma, Q) - R_\phi(\gamma, Q)|$, where the expectation is taken with respect to the (unknown) probability measure $P(X, Y)$.
Assumptions. Assume that the loss function $\phi$ is universally equivalent to the 0-1 loss. From Theorem 8, the corresponding f-divergence must be of the form $f(u) = -c\min(u, 1) + au + b$, for $a, b \in \mathbb{R}$ and $c > 0$. Finally, we also assume that $(a - b)(p - q) \geq 0$ and $\phi(0) \geq 0$.3 In addition, for each $n = 1, 2, \ldots$, suppose that $M_n := \sup_{y,z}\sup_{(\gamma, Q) \in (C_n, D_n)} |\phi(y\gamma(z))| < +\infty$.

3 These technical conditions are needed so that the approximation error due to varying $Q$ dominates the approximation error due to varying $\gamma$. Setting $a = b$ is sufficient.
The following lemma plays a key role in our proof: it links the excess $\phi$-risk to the Bayes error when performing joint minimization:

Lemma 9. For any $(\gamma, Q)$, we have $\frac{c}{2}\bigl(R_{\mathrm{bayes}}(\gamma, Q) - R^*_{\mathrm{bayes}}\bigr) \leq R_\phi(\gamma, Q) - R^*_\phi$.
Finally, we can relate the Bayes error to the approximation error and estimation error, and provide general conditions for universal consistency:

Theorem 10. (a) For any Borel probability measure $P$, with probability at least $1 - \delta$, there holds:
$$R_{\mathrm{bayes}}(\gamma_n^*, Q_n^*) - R^*_{\mathrm{bayes}} \leq \frac{2}{c}\Bigl(2E_1(C_n, D_n) + E_0(C_n, D_n) + 2M_n\sqrt{2\ln(2/\delta)/n}\Bigr).$$
(b) (Universal Consistency) If $\cup_{n=1}^\infty D_n$ is dense in $\mathcal{Q}$ and if $\cup_{n=1}^\infty C_n$ is dense in $\Gamma$ so that $\lim_{n\to\infty} E_0(C_n, D_n) = 0$, and if the sequence of function classes $(C_n, D_n)$ grows sufficiently slowly so that $\lim_{n\to\infty} E_1(C_n, D_n) = \lim_{n\to\infty} M_n\sqrt{\ln n/n} = 0$, then there holds $\lim_{n\to\infty} R_{\mathrm{bayes}}(\gamma_n^*, Q_n^*) - R^*_{\mathrm{bayes}} = 0$ in probability.
5 Conclusions
We have presented a general theoretical connection between surrogate loss functions and f-divergences. As illustrated by our application to decentralized detection, this connection can provide new domains of application for statistical learning theory. We also expect that this connection will provide new applications for f-divergences within learning theory; note in particular that bounds among f-divergences (of which many are known; see, e.g., [11]) induce corresponding bounds among loss functions.
References
[1] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. Royal Stat. Soc. Series B, 28:131-142, 1966.
[2] P. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 2005. To appear.
[3] D. Blackwell. Equivalent comparisons of experiments. Annals of Mathematical Statistics, 24(2):265-272, 1953.
[4] I. Csiszár. Information-type measures of difference of probability distributions and indirect observation. Studia Sci. Math. Hungar, 2:299-318, 1967.
[5] T. Kailath. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. on Communication Technology, 15(1):52-60, 1967.
[6] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Nonparametric decentralized detection using kernel methods. IEEE Transactions on Signal Processing, 53(11):4053-4066, 2005.
[7] X. Nguyen, M. J. Wainwright, and M. I. Jordan. On divergences, surrogate loss functions and decentralized detection. Technical Report 695, Department of Statistics, University of California at Berkeley, September 2005.
[8] H. V. Poor and J. B. Thomas. Applications of Ali-Silvey distance measures in the design of generalized quantizers for binary decision systems. IEEE Trans. on Communications, 25:893-900, 1977.
[9] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
[10] I. Steinwart. Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Info. Theory, 51:128-142, 2005.
[11] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Transactions on Information Theory, 46:1602-1609, 2000.
[12] J. Tsitsiklis. Extremal properties of likelihood-ratio quantizers. IEEE Trans. on Communication, 41(4):550-558, 1993.
[13] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56-134, 2004.
Correlated Topic Models
David M. Blei
Department of Computer Science
Princeton University
John D. Lafferty
School of Computer Science
Carnegie Mellon University
Abstract
Topic models, such as latent Dirichlet allocation (LDA), can be useful
tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document
arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation
even though, for example, a document about genetics is more likely to
also be about disease than x-ray astronomy. This limitation stems from
the use of the Dirichlet distribution to model the variability among the
topic proportions. In this paper we develop the correlated topic model
(CTM), where the topic proportions exhibit correlation via the logistic
normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed
articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data
sets.
1 Introduction
The availability and use of unstructured historical collections of documents is rapidly growing. As one example, JSTOR (www.jstor.org) is a not-for-profit organization that maintains a large online scholarly journal archive obtained by running an optical character recognition engine over the original printed journals. JSTOR indexes the resulting text and provides online access to the scanned images of the original content through keyword search.
This provides an extremely useful service to the scholarly community, with the collection
comprising nearly three million published articles in a variety of fields.
The sheer size of this unstructured and noisy archive naturally suggests opportunities for
the use of statistical modeling. For instance, a scholar in a narrow subdiscipline, searching
for a particular research article, would certainly be interested to learn that the topic of
that article is highly correlated with another topic that the researcher may not have known
about, and that is not explicitly contained in the article. Alerted to the existence of this new
related topic, the researcher could browse the collection in a topic-guided manner to begin
to investigate connections to a previously unrecognized body of work. Since the archive
comprises millions of articles spanning centuries of scholarly work, automated analysis is
essential.
Several statistical models have recently been developed for automatically extracting the
topical structure of large document collections. In technical terms, a topic model is a
generative probabilistic model that uses a small number of distributions over a vocabulary
to describe a document collection. When fit from data, these distributions often correspond
to intuitive notions of topicality. In this work, we build upon the latent Dirichlet allocation
(LDA) [4] model. LDA assumes that the words of each document arise from a mixture
of topics. The topics are shared by all documents in the collection; the topic proportions
are document-specific and randomly drawn from a Dirichlet distribution. LDA allows each
document to exhibit multiple topics with different proportions, and it can thus capture the
heterogeneity in grouped data that exhibit multiple latent patterns. Recent work has used
LDA in more complicated document models [9, 11, 7], and in a variety of settings such
as image processing [12], collaborative filtering [8], and the modeling of sequential data
and user profiles [6]. Similar models were independently developed for disability survey
data [5] and population genetics [10].
Our goal in this paper is to address a limitation of the topic models proposed to date: they fail to directly model correlation between topics. In many, indeed most, text corpora, it is natural to expect that subsets of the underlying latent topics will be highly correlated. In
a corpus of scientific articles, for instance, an article about genetics may be likely to also
be about health and disease, but unlikely to also be about x-ray astronomy. For the LDA
model, this limitation stems from the independence assumptions implicit in the Dirichlet
distribution on the topic proportions. Under a Dirichlet, the components of the proportions
vector are nearly independent; this leads to the strong and unrealistic modeling assumption
that the presence of one topic is not correlated with the presence of another.
In this paper we present the correlated topic model (CTM). The CTM uses an alternative, more flexible distribution for the topic proportions that allows for covariance structure
among the components. This gives a more realistic model of latent topic structure where
the presence of one latent topic may be correlated with the presence of another. In the
following sections we develop the technical aspects of this model, and then demonstrate its
potential for the applications envisioned above. We fit the model to a portion of the JSTOR
archive of the journal Science. We demonstrate that the model gives a better fit than LDA,
as measured by the accuracy of the predictive distributions over held out documents. Furthermore, we demonstrate qualitatively that the correlated topic model provides a natural
way of visualizing and exploring such an unstructured collection of textual data.
2
The Correlated Topic Model
The key to the correlated topic model we propose is the logistic normal distribution [1]. The
logistic normal is a distribution on the simplex that allows for a general pattern of variability
between the components by transforming a multivariate normal random variable. Consider
the natural parameterization of a K-dimensional multinomial distribution:
p(z \mid \eta) = \exp\{\eta^\top z - a(\eta)\}. \quad (1)
The random variable Z can take on K values; it can be represented by a K-vector with
exactly one component equal to one, denoting a value in {1, . . . , K}. The cumulant generating function of the distribution is
a(\eta) = \log \sum_{i=1}^{K} \exp\{\eta_i\}. \quad (2)
The mapping between the mean parameterization (i.e., the simplex) and the natural parameterization is given by
\eta_i = \log(\theta_i / \theta_K). \quad (3)
Notice that this is not the minimal exponential family representation of the multinomial
because multiple values of \eta can yield the same mean parameter.
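As a quick sanity check of the mapping in equation (3) and its non-minimality, the following sketch (assuming numpy; the helper names are ours, not from the paper) round-trips a simplex point through the natural parameterization and shows that shifting \eta by a constant leaves the mean parameter unchanged:

```python
import numpy as np

def natural_from_mean(theta):
    """Map a simplex point theta to natural parameters via eta_i = log(theta_i / theta_K)."""
    return np.log(theta / theta[-1])

def mean_from_natural(eta):
    """Map natural parameters back to the simplex (softmax)."""
    e = np.exp(eta - eta.max())          # subtract the max for numerical stability
    return e / e.sum()

theta = np.array([0.5, 0.3, 0.2])
eta = natural_from_mean(theta)           # the last component is always 0
assert np.allclose(mean_from_natural(eta), theta)

# Non-minimality: shifting eta by any constant yields the same mean parameter.
assert np.allclose(mean_from_natural(eta + 7.0), theta)
```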
[Figure 1, top: directed graphical model with nodes \mu, \Sigma \to \eta_d \to Z_{d,n} \to W_{d,n} \leftarrow \beta_k, and plates N, D, K.]
Figure 1: Top: Graphical model representation of the correlated topic model. The logistic
normal distribution, used to model the latent topic proportions of a document, can represent
correlations between topics that are impossible to capture using a single Dirichlet. Bottom:
Example densities of the logistic normal on the 2-simplex. From left: diagonal covariance
and nonzero-mean, negative correlation between components 1 and 2, positive correlation
between components 1 and 2.
The logistic normal distribution assumes that \eta is normally distributed and then mapped
to the simplex with the inverse of the mapping given in equation (3); that is,

f(\eta_i) = \exp\{\eta_i\} \Big/ \sum_j \exp\{\eta_j\}.

The logistic normal models correlations between components of the
simplicial random variable through the covariance matrix of the normal distribution. The
logistic normal was originally studied in the context of analyzing observed compositional
data such as the proportions of minerals in geological samples. In this work, we extend its
use to a hierarchical model where it describes the latent composition of topics associated
with each document.
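A minimal sketch of sampling from the logistic normal, assuming numpy (the function name is ours): draws from N(\mu, \Sigma) are pushed through the softmax, so correlation structure in \eta induces correlation among the simplex components, as in the negatively correlated example of Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_logistic_normal(mu, Sigma, n_samples, rng):
    """Draw points on the simplex by passing N(mu, Sigma) draws through the softmax."""
    eta = rng.multivariate_normal(mu, Sigma, size=n_samples)
    e = np.exp(eta - eta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Negative correlation between components 1 and 2 (cf. Figure 1, bottom).
mu = np.zeros(3)
Sigma = np.array([[1.0, -0.9, 0.0],
                  [-0.9, 1.0, 0.0],
                  [0.0,  0.0, 1.0]])
theta = sample_logistic_normal(mu, Sigma, 5000, rng)
assert np.allclose(theta.sum(axis=1), 1.0)          # valid simplex points
corr = np.corrcoef(theta[:, 0], theta[:, 1])[0, 1]  # inherited negative correlation
assert corr < 0
```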
Let \{\mu, \Sigma\} be a K-dimensional mean and covariance matrix, and let topics \beta_{1:K} be K
multinomials over a fixed word vocabulary. The correlated topic model assumes that an
N-word document arises from the following generative process:
1. Draw \eta \mid \{\mu, \Sigma\} \sim \mathcal{N}(\mu, \Sigma).
2. For n \in \{1, \dots, N\}:
   (a) Draw topic assignment Z_n \mid \eta from \mathrm{Mult}(f(\eta)).
   (b) Draw word W_n \mid \{z_n, \beta_{1:K}\} from \mathrm{Mult}(\beta_{z_n}).
This process is identical to the generative process of LDA except that the topic proportions
are drawn from a logistic normal rather than a Dirichlet. The model is shown as a directed
graphical model in Figure 1.
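The generative process above can be sketched directly, assuming numpy (the function name and toy dimensions are ours, not from the paper):

```python
import numpy as np

def generate_document(mu, Sigma, beta, N, rng):
    """Sample one N-word document from the CTM generative process:
    draw eta ~ N(mu, Sigma), map it to topic proportions with the softmax f,
    then draw each word's topic z_n and the word w_n itself."""
    eta = rng.multivariate_normal(mu, Sigma)
    e = np.exp(eta - eta.max())
    theta = e / e.sum()                          # f(eta): topic proportions
    z = rng.choice(len(theta), size=N, p=theta)  # topic assignments Z_n
    words = [rng.choice(beta.shape[1], p=beta[zn]) for zn in z]
    return words, z

K, V = 3, 20
rng = np.random.default_rng(3)
mu, Sigma = np.zeros(K), np.eye(K)
beta = rng.dirichlet(np.ones(V), size=K)         # K topic-word multinomials
words, z = generate_document(mu, Sigma, beta, N=50, rng=rng)
assert len(words) == 50 and all(0 <= w < V for w in words)
```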
The CTM is more expressive than LDA. The strong independence assumption imposed
by the Dirichlet in LDA is not realistic when analyzing document collections, where one
may find strong correlations between topics. The covariance matrix of the logistic normal
in the CTM is introduced to model such correlations. In Section 3, we illustrate how the
higher order structure given by the covariance can be used as an exploratory tool for better
understanding and navigating a large corpus of documents. Moreover, modeling correlation
can lead to better predictive distributions. In some settings, such as collaborative filtering,
the goal is to predict unseen items conditional on a set of observations. An LDA model
will predict words based on the latent topics that the observations suggest, but the CTM
has the ability to predict items associated with additional topics that are correlated with the
conditionally probable topics.
2.1
Posterior inference and parameter estimation
Posterior inference is the central challenge to using the CTM. The posterior distribution of
the latent variables conditional on a document, p(\eta, z_{1:N} \mid w_{1:N}), is intractable to compute;
once conditioned on some observations, the topic assignments z_{1:N} and log proportions
\eta are dependent. We make use of mean-field variational methods to efficiently obtain an
approximation of this posterior distribution.
In brief, the strategy employed by mean-field variational methods is to form a factorized
distribution of the latent variables, parameterized by free variables which are called the variational parameters. These parameters are fit so that the Kullback-Leibler (KL) divergence
between the approximate and true posterior is small. For many problems this optimization
problem is computationally manageable, while standard methods, such as Markov Chain
Monte Carlo, are impractical. The tradeoff is that variational methods do not come with
the same theoretical guarantees as simulation methods. See [13] for a modern review of
variational methods for statistical inference.
In graphical models composed of conjugate-exponential family pairs and mixtures, the
variational inference algorithm can be automatically derived from general principles [2,
14]. In the CTM, however, the logistic normal is not conjugate to the multinomial. We
will therefore derive a variational inference algorithm by taking into account the special
structure and distributions used by our model.
We begin by using Jensen's inequality to bound the log probability of a document:

\log p(w_{1:N} \mid \mu, \Sigma, \beta) \ge E_q[\log p(\eta \mid \mu, \Sigma)] + \sum_{n=1}^{N} \big( E_q[\log p(z_n \mid \eta)] + E_q[\log p(w_n \mid z_n, \beta)] \big) + H(q), \quad (4)

where the expectation is taken with respect to a variational distribution of the latent variables, and H(q) denotes the entropy of that distribution. We use a factorized distribution:
q(\eta_{1:K}, z_{1:N} \mid \lambda_{1:K}, \nu^2_{1:K}, \phi_{1:N}) = \prod_{i=1}^{K} q(\eta_i \mid \lambda_i, \nu_i^2) \prod_{n=1}^{N} q(z_n \mid \phi_n). \quad (5)
The variational distributions of the discrete variables z_{1:N} are specified by the K-dimensional multinomial parameters \phi_{1:N}. The variational distributions of the continuous
variables \eta_{1:K} are K independent univariate Gaussians \{\lambda_i, \nu_i\}. Since the variational parameters are fit using a single observed document w_{1:N}, there is no advantage in introducing a non-diagonal variational covariance matrix.
The nonconjugacy of the logistic normal leads to difficulty in computing the expected log
probability of a topic assignment:
E_q[\log p(z_n \mid \eta)] = E_q[\eta^\top z_n] - E_q\Big[\log \sum_{i=1}^{K} \exp\{\eta_i\}\Big]. \quad (6)
To preserve the lower bound on the log probability, we upper bound the log normalizer
with a Taylor expansion,

E_q\Big[\log \sum_{i=1}^{K} \exp\{\eta_i\}\Big] \le \zeta^{-1} \Big( \sum_{i=1}^{K} E_q[\exp\{\eta_i\}] \Big) - 1 + \log \zeta, \quad (7)

where we have introduced a new variational parameter \zeta. The expectation E_q[\exp\{\eta_i\}] is
the mean of a log normal distribution with mean and variance obtained from the variational
parameters \{\lambda_i, \nu_i^2\}; thus, E_q[\exp\{\eta_i\}] = \exp\{\lambda_i + \nu_i^2/2\} for i \in \{1, \dots, K\}.
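The two facts used here can be checked numerically, assuming numpy: the log-normal identity E_q[exp \eta_i] = exp\{\lambda_i + \nu_i^2/2\}, and the bound (7), which at the optimal \zeta of equation (14) (taking N = 1) reduces to Jensen's inequality E_q[log \sum_i exp \eta_i] \le log \sum_i E_q[exp \eta_i]:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.2, -1.0, 0.5])   # variational means lambda_i
nu2 = np.array([0.3, 0.1, 0.2])    # variational variances nu_i^2

# E_q[exp eta_i] for eta_i ~ N(lambda_i, nu_i^2) is the log-normal mean exp{lambda_i + nu_i^2/2}.
analytic = np.exp(lam + nu2 / 2)
eta = rng.normal(lam, np.sqrt(nu2), size=(200_000, 3))
assert np.allclose(np.exp(eta).mean(axis=0), analytic, rtol=0.02)

# The Taylor bound (7): E_q[log sum_i exp eta_i] <= zeta^{-1} sum_i E_q[exp eta_i] - 1 + log zeta.
lhs = np.log(np.exp(eta).sum(axis=1)).mean()     # Monte Carlo estimate of the left side
zeta = analytic.sum()                            # the optimal zeta from equation (14), N = 1
rhs = analytic.sum() / zeta - 1 + np.log(zeta)   # simplifies to log zeta
assert lhs <= rhs + 1e-3
```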
[Figure 2: the rendered topic graph; each node's five most probable phrases appeared here as raw word lists, e.g. {fossil record, birds, fossils, dinosaurs, fossil}, {climate, ocean, ice, changes, climate change}, {neurons, stimulus, motor, visual, cortical}, and {gene, disease, mutations, families, mutation}.]
Figure 2: A portion of the topic graph learned from 16,351 OCR articles from Science.
Each node represents a topic, and is labeled with the five most probable phrases from its
distribution (phrases are found by the "turbo topics" method [3]). The interested reader can
browse the full model at http://www.cs.cmu.edu/~lemur/science/.
Given a model \{\beta_{1:K}, \mu, \Sigma\} and a document w_{1:N}, the variational inference algorithm optimizes equation (4) with respect to the variational parameters \{\lambda_{1:K}, \nu_{1:K}, \phi_{1:N}, \zeta\}. We
use coordinate ascent, repeatedly optimizing with respect to each parameter while holding
the others fixed. In variational inference for LDA, each coordinate can be optimized analytically. However, iterative methods are required for the CTM when optimizing for \lambda_i and
\nu_i^2. The details are given in Appendix A.
Given a collection of documents, we carry out parameter estimation in the correlated topic
model by attempting to maximize the likelihood of a corpus of documents as a function
of the topics \beta_{1:K} and the multivariate Gaussian parameters \{\mu, \Sigma\}. We use variational
expectation-maximization (EM), where we maximize the bound on the log probability of a
collection given by summing equation (4) over the documents.
In the E-step, we maximize the bound with respect to the variational parameters by performing variational inference for each document. In the M-step, we maximize the bound
with respect to the model parameters. This is maximum likelihood estimation of the topics and multivariate Gaussian using expected sufficient statistics, where the expectation
is taken with respect to the variational distributions computed in the E-step. The E-step
and M-step are repeated until the bound on the likelihood converges. In the experiments
reported below, we run variational inference until the relative change in the probability
bound of equation (4) is less than 10^{-6}, and run variational EM until the relative change in
the likelihood bound is less than 10^{-5}.
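The nested stopping rules can be sketched as a generic loop (a schematic only; e_step and m_step are placeholders for the real per-document variational inference and parameter updates, and the toy demonstration is ours):

```python
def variational_em(e_step, m_step, init_model, em_tol=1e-5):
    """Schematic variational EM loop with the relative-change stopping rule
    described in the text. The inner 1e-6 convergence check on the per-document
    bound is assumed to live inside e_step."""
    model, old_bound = init_model, None
    while True:
        stats, bound = e_step(model)   # E-step: per-document variational inference
        model = m_step(stats)          # M-step: ML update from expected sufficient stats
        if old_bound is not None and abs(bound - old_bound) / abs(old_bound) < em_tol:
            return model, bound
        old_bound = bound

# Toy demonstration: a sequence of bounds that converges toward -100.
bounds = iter([-110.0, -101.0, -100.001, -100.0009])
model, final_bound = variational_em(lambda m: (None, next(bounds)),
                                    lambda s: "model", "init")
```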
3
Examples and Empirical Results: Modeling Science
In order to test and illustrate the correlated topic model, we estimated a 100-topic CTM
on 16,351 Science articles spanning 1990 to 1999. We constructed a graph of the latent topics and the connections among them by examining the most probable words from
each topic and the between-topic correlations. Part of this graph is illustrated in Figure 2. In this subgraph, there are three densely connected collections of topics: material
science, geology, and cell biology. Furthermore, an estimated CTM can be used to explore otherwise unstructured observed documents. In Figure 4, we list articles that are
assigned to the cognitive science topic and articles that are assigned to both the cognitive
science and visual neuroscience topics. The interested reader is invited to visit
http://www.cs.cmu.edu/~lemur/science/ to interactively explore this model, including the topics, their connections, and the articles that exhibit them.

[Figure 3: held-out log likelihood versus number of topics for CTM and LDA (left), and L(CTM) - L(LDA) versus number of topics (right); numeric axis ticks omitted.]
Figure 3: (L) The average held-out probability; CTM supports more topics than LDA. See
figure at right for the standard error of the difference. (R) The log odds ratio of the held-out
probability. Positive numbers indicate a better fit by the correlated topic model.
We compared the CTM to LDA by fitting a smaller collection of articles to models of varying numbers of topics. This collection contains the 1,452 documents from 1960; we used
a vocabulary of 5,612 words after pruning common function words and terms that occur
once in the collection. Using ten-fold cross validation, we computed the log probability of
the held-out data given a model estimated from the remaining data. A better model of the
document collection will assign higher probability to the held out data. To avoid comparing
bounds, we used importance sampling to compute the log probability of a document where
the fitted variational distribution is the proposal.
Figure 3 illustrates the average held out log probability for each model and the average
difference between them. The CTM provides a better fit than LDA and supports more
topics; the likelihood for LDA peaks near 30 topics while the likelihood for the CTM peaks
close to 90 topics. The means and standard errors of the difference in log-likelihood of the
models are shown at right; this indicates that the CTM always gives a better fit.
Another quantitative evaluation of the relative strengths of LDA and the CTM is how well
the models predict the remaining words after observing a portion of the document. Suppose we observe words w1:P from a document and are interested in which model provides
a better predictive distribution p(w | w1:P ) of the remaining words. To compare these distributions, we use perplexity, which can be thought of as the effective number of equally
likely words according to the model. Mathematically, the perplexity of a word distribution is defined as the inverse of the per-word geometric average of the probability of the
observations,

\mathrm{Perp}(\Phi) = \Big( \prod_{d=1}^{D} \prod_{i=P+1}^{N_d} p(w_i \mid \Phi, w_{1:P}) \Big)^{-1 \big/ \sum_{d=1}^{D} (N_d - P)},

where \Phi denotes the model parameters of an LDA or CTM model. Note that lower numbers
denote more predictive power.
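The perplexity above is just the exponentiated negative per-word average log probability, which a short helper makes explicit (assuming numpy; the function name is ours):

```python
import numpy as np

def perplexity(log_probs_per_doc):
    """Perplexity of held-out words: exp of minus the per-word average log
    probability, i.e. the inverse per-word geometric mean from the text.
    log_probs_per_doc: list of arrays of log p(w_i | model, w_{1:P}), one per document."""
    total_log_prob = sum(lp.sum() for lp in log_probs_per_doc)
    n_words = sum(len(lp) for lp in log_probs_per_doc)
    return np.exp(-total_log_prob / n_words)

# A model that assigns every held-out word probability 1/500 has perplexity 500,
# i.e. 500 effectively equally likely words.
lp = [np.full(10, np.log(1.0 / 500)), np.full(7, np.log(1.0 / 500))]
assert np.isclose(perplexity(lp), 500.0)
```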
The plot in Figure 4 compares the predictive perplexity under LDA and the CTM. When a
small number of words have been observed, there is less uncertainty about the remaining
words under the CTM than under LDA: the perplexity is reduced by nearly 200 words, or
roughly 10%. The reason is that after seeing a few words in one topic, the CTM uses topic
correlation to infer that words in a related topic may also be probable. In contrast, LDA
cannot predict the remaining words as well until a large portion of the document has been
observed, so that all of its topics are represented.

[Figure 4, left panel: top articles with {brain, memory, human, visual, cognitive}: (1) Separate Neural Bases of Two Fundamental Memory Processes in the Human Medial Temporal Lobe; (2) Inattentional Blindness Versus Inattentional Amnesia for Fixated but Ignored Words; (3) Making Memories: Brain Activity that Predicts How Well Visual Experience Will be Remembered; (4) The Learning of Categories: Parallel Brain Systems for Item Memory and Category Knowledge; (5) Brain Activation Modulated by Sentence Comprehension. Top articles with {brain, memory, human, visual, cognitive} and {computer, data, information, problem, systems}: (1) A Head for Figures; (2) Sources of Mathematical Thinking: Behavioral and Brain Imaging Evidence; (3) Natural Language Processing; (4) A Romance Blossoms Between Gray Matter and Silicon; (5) Computer Vision. Right panel: predictive perplexity versus percentage of observed words for CTM and LDA.]
Figure 4: (Left) Exploring a collection through its topics. (Right) Predictive perplexity for
partially observed held-out documents from the 1960 Science corpus.
Acknowledgments. Research supported in part by NSF grants IIS-0312814 and IIS-0427206 and by the DARPA CALO project.
References
[1] J. Aitchison. The statistical analysis of compositional data. Journal of the Royal
Statistical Society, Series B, 44(2):139-177, 1982.
[2] C. Bishop, D. Spiegelhalter, and J. Winn. VIBES: A variational inference engine for
Bayesian networks. In NIPS 15, pages 777-784. Cambridge, MA, 2003.
[3] D. Blei, J. Lafferty, C. Genovese, and L. Wasserman. Turbo topics. In progress, 2006.
[4] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine
Learning Research, 3:993-1022, January 2003.
[5] E. Erosheva. Grade of membership and latent structure models with application to
disability survey data. PhD thesis, Carnegie Mellon University, Department of Statistics, 2002.
[6] M. Girolami and A. Kaban. Simplicial mixtures of Markov chains: Distributed modelling of dynamic user profiles. In NIPS 16, pages 9-16, 2004.
[7] T. Griffiths, M. Steyvers, D. Blei, and J. Tenenbaum. Integrating topics and syntax.
In Advances in Neural Information Processing Systems 17, 2005.
[8] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis,
University of Toronto, 2004.
[9] A. McCallum, A. Corrada-Emmanuel, and X. Wang. The author-recipient-topic
model for topic and role discovery in social networks. 2004.
[10] J. Pritchard, M. Stephens, and P. Donnelly. Inference of population structure using
multilocus genotype data. Genetics, 155:945-959, June 2000.
[11] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for
authors and documents. In UAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487-494.
[12] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman. Discovering object
categories in image collections. Technical report, CSAIL, MIT, 2005.
[13] M. Wainwright and M. Jordan. A variational principle for graphical models. In New
Directions in Statistical Signal Processing, chapter 11. MIT Press, 2005.
[14] E. Xing, M. Jordan, and S. Russell. A generalized mean field algorithm for variational
inference in exponential families. In Proceedings of UAI, 2003.
A
Variational Inference
We describe a coordinate ascent optimization algorithm for the likelihood bound in equation (4) with respect to the variational parameters.

The first term of equation (4) is

E_q[\log p(\eta \mid \mu, \Sigma)] = \tfrac{1}{2} \log |\Sigma^{-1}| - \tfrac{K}{2} \log 2\pi - \tfrac{1}{2} E_q\big[(\eta - \mu)^\top \Sigma^{-1} (\eta - \mu)\big], \quad (8)

where

E_q\big[(\eta - \mu)^\top \Sigma^{-1} (\eta - \mu)\big] = \mathrm{Tr}(\mathrm{diag}(\nu^2) \Sigma^{-1}) + (\lambda - \mu)^\top \Sigma^{-1} (\lambda - \mu). \quad (9)

The second term of equation (4), using the additional bound in equation (7), is

E_q[\log p(z_n \mid \eta)] = \sum_{i=1}^{K} \lambda_i \phi_{n,i} - \zeta^{-1} \sum_{i=1}^{K} \exp\{\lambda_i + \nu_i^2/2\} + 1 - \log \zeta. \quad (10)

The third term of equation (4) is

E_q[\log p(w_n \mid z_n, \beta)] = \sum_{i=1}^{K} \phi_{n,i} \log \beta_{i, w_n}. \quad (11)

Finally, the fourth term is the entropy of the variational distribution:

\sum_{i=1}^{K} \tfrac{1}{2}(\log \nu_i^2 + \log 2\pi + 1) - \sum_{n=1}^{N} \sum_{i=1}^{K} \phi_{n,i} \log \phi_{n,i}. \quad (12)

We maximize the bound in equation (4) with respect to the variational parameters \lambda_{1:K},
\nu_{1:K}, \phi_{1:N}, and \zeta. We use a coordinate ascent algorithm, iteratively maximizing the bound
with respect to each parameter.

First, we maximize equation (4) with respect to \zeta, using the second bound in equation (7).
The derivative with respect to \zeta is

f'(\zeta) = N \Big( \zeta^{-2} \sum_{i=1}^{K} \exp\{\lambda_i + \nu_i^2/2\} - \zeta^{-1} \Big), \quad (13)

which has a maximum at

\hat{\zeta} = \sum_{i=1}^{K} \exp\{\lambda_i + \nu_i^2/2\}. \quad (14)

Second, we maximize with respect to \phi_n. This yields a maximum at

\hat{\phi}_{n,i} \propto \exp\{\lambda_i\} \beta_{i, w_n}, \quad i \in \{1, \dots, K\}. \quad (15)

Third, we maximize with respect to \lambda_i. Since equation (4) is not amenable to analytic
maximization, we use a conjugate gradient algorithm with derivative

dL/d\lambda = -\Sigma^{-1}(\lambda - \mu) + \sum_{n=1}^{N} \phi_{n, 1:K} - (N/\zeta) \exp\{\lambda + \nu^2/2\}. \quad (16)

Finally, we maximize with respect to \nu_i^2. Again, there is no analytic solution. We use
Newton's method for each coordinate, constrained such that \nu_i^2 > 0:

dL/d\nu_i^2 = -\Sigma^{-1}_{ii}/2 - (N/2\zeta) \exp\{\lambda_i + \nu_i^2/2\} + 1/(2\nu_i^2). \quad (17)

Iterating between these optimizations defines a coordinate ascent algorithm on equation (4).
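The two closed-form coordinate updates, equations (14) and (15), are simple enough to sketch (assuming numpy; the function names and toy dimensions are ours):

```python
import numpy as np

def update_zeta(lam, nu2):
    """Closed-form update from equation (14): zeta = sum_i exp{lambda_i + nu_i^2/2}."""
    return np.exp(lam + nu2 / 2).sum()

def update_phi(lam, beta, words):
    """Update from equation (15): phi_{n,i} proportional to exp{lambda_i} beta_{i,w_n},
    normalized over topics i for each word position n."""
    phi = np.exp(lam)[None, :] * beta[:, words].T   # shape (N, K)
    return phi / phi.sum(axis=1, keepdims=True)

K, V = 4, 9
rng = np.random.default_rng(2)
lam = rng.normal(size=K)                   # variational means lambda
nu2 = np.abs(rng.normal(size=K))           # variational variances nu^2
beta = rng.dirichlet(np.ones(V), size=K)   # K topics over a V-word vocabulary
words = np.array([0, 3, 3, 8])             # word indices of a toy document
zeta = update_zeta(lam, nu2)
phi = update_phi(lam, beta, words)
assert zeta > 0 and np.allclose(phi.sum(axis=1), 1.0)
```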