Temporal Coherence, Natural Image Sequences,
and the Visual Cortex
Jarmo Hurri and Aapo Hyvärinen
Neural Networks Research Centre
Helsinki University of Technology
P.O.Box 9800, 02015 HUT, Finland
{jarmo.hurri,aapo.hyvarinen}@hut.fi
Abstract
We show that two important properties of the primary visual cortex
emerge when the principle of temporal coherence is applied to natural
image sequences. The properties are simple-cell-like receptive fields and
complex-cell-like pooling of simple cell outputs, which emerge when
we apply two different approaches to temporal coherence. In the first
approach we extract receptive fields whose outputs are as temporally coherent as possible. This approach yields simple-cell-like receptive fields
(oriented, localized, multiscale). Thus, temporal coherence is an alternative to sparse coding in modeling the emergence of simple cell receptive
fields. The second approach is based on a two-layer statistical generative
model of natural image sequences. In addition to modeling the temporal
coherence of individual simple cells, this model includes inter-cell temporal dependencies. Estimation of this model from natural data yields
both simple-cell-like receptive fields, and complex-cell-like pooling of
simple cell outputs. In this completely unsupervised learning, both layers of the generative model are estimated simultaneously from scratch.
This is a significant improvement on earlier statistical models of early
vision, where only one layer has been learned and the other has been fixed
a priori.
1 Introduction
The functional role of simple and complex cells has puzzled scientists since their response
properties were first mapped by Hubel and Wiesel in the 1950s (see, e.g., [1]). The current
view of the functionality of sensory neural networks emphasizes learning and the relationship between the structure of the cells and the statistical properties of the information they
process (see, e.g., [2]). In 1996 a major advance was achieved when Olshausen and Field
showed that simple-cell-like receptive fields emerge when sparse coding is applied to natural image data [3]. Similar results were obtained with independent component analysis
shortly thereafter [4]. In the case of image data, independent component analysis is closely
related to sparse coding [5].
In this paper we show that a principle called temporal coherence [6, 7, 8, 9] leads to the
emergence of major properties of the primary visual cortex from natural image sequences.
Temporal coherence is based on the idea that when processing temporal input, the representation changes as little as possible over time. Several authors have demonstrated the
usefulness of this principle using simulated data (see, e.g., [6, 7]).
We apply the principle of temporal coherence to natural input, and at the level of early
vision, in two different ways. In the first approach we show that when the input consists
of natural image sequences, the maximization of temporal response strength correlation
of cell output leads to receptive fields which are similar to simple cell receptive fields.
These results show that temporal coherence is an alternative to sparse coding, in that they
both result in the emergence of simple-cell-like receptive fields from natural input data.
Whereas earlier research has focused on establishing a link between temporal coherence
and complex cells, our results demonstrate that such a connection exists even on the simple
cell level. We will also show how this approach can be interpreted as estimation of a linear
latent variable model in which the latent signals have varying variances.
In the second approach we use the principle of temporal coherence to formulate a two-layer
generative model of natural image sequences. In addition to single-cell temporal coherence,
this model also captures inter-cell temporal dependencies. We show that when this model
is estimated from natural image sequence data, the results include both simple-cell-like
receptive fields, and a complex-cell-like pooling of simple cell outputs. Whereas in earlier
research learning two-layer statistical models of early vision has required fixing one of the
layers beforehand, in our model both layers are learned simultaneously.
2 Simple-cell-like receptive fields are temporally coherent features
Our first approach to modeling temporal coherence in natural image sequences can be interpreted either as maximization of temporal coherence of cell outputs, or as estimation of
a latent variable model in which the underlying variables have certain kind of time structure. This situation is analogous to sparse coding, because measures of sparseness can also
be used to estimate linear generative models with non-Gaussian independent sources [5].
We first describe our measure of temporal coherence, and then provide the link to latent
variable models.
In this paper we restrict ourselves to consider linear spatial models of simple cells. Linear simple cell models are commonly used in studies concerning the connections between
visual input statistics and simple cell receptive fields [3, 4]. (Non-negative and spatiotemporal extensions of this basic framework are discussed in [10].) The linear spatial model
uses a set of spatial filters (vectors) w_1, ..., w_K to relate input to output. Let signal vector
x(t) denote the input of the system at time t. A vectorization of image patches can be done
by scanning images column-wise into vectors; for windows of size N × N this yields
vectors with dimension N^2. The output of the kth filter at time t, denoted by signal y_k(t),
is given by y_k(t) = w_k^T x(t). Let matrix W = [w_1 ··· w_K]^T denote a matrix with all the
filters as rows. Then the input-output relationship can be expressed in vector form by
y(t) = Wx(t),   (1)

where signal vector y(t) = [y_1(t) ··· y_K(t)]^T.
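As a concrete illustration, the input-output relation (1) is a single matrix multiplication. The sketch below (Python with NumPy; the patch size, filter count, and random data are arbitrary placeholders, not the values used in the paper) applies K filters to a sequence of vectorized patches:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                      # patch side length; patches are N x N
K = 3                      # number of filters
T = 500                    # number of time points

# Each image patch is scanned column-wise into a vector of dimension N^2.
x = rng.standard_normal((N * N, T))   # x(t): input signal vectors over time

# W holds the K spatial filters w_1, ..., w_K as rows.
W = rng.standard_normal((K, N * N))

# Input-output relationship of the linear simple cell model, y(t) = W x(t).
y = W @ x

print(y.shape)  # one output signal per filter: (K, T)
```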
Temporal response strength correlation, the objective function, is defined by

f(W) = Σ_{k=1}^{K} E_t{g(y_k(t)) g(y_k(t − Δt))},   (2)

where the nonlinearity g is strictly convex, even (rectifying), and differentiable. The symbol Δt denotes a delay in time. The nonlinearity g measures the strength (amplitude) of
the response of the filter, and emphasizes large responses over small ones (see [10] for
Figure 1: Illustration of nonstationarity of variance. (A) A temporally uncorrelated signal
y(t) with nonstationary variance. (B) Plot of y^2(t).
additional discussion). Examples of choices for this nonlinearity are g_1(u) = u^2, which
measures the energy of the response, and g_2(u) = ln cosh u, which is a robustified version of g_1. A set of filters which has a large temporal response strength correlation is such
that the same filters often respond strongly at consecutive time points, outputting large (either positive or negative) values. This means that the same filters will respond strongly
over short periods of time, thereby expressing temporal coherence of a population code. A
detailed discussion of the difference between temporal response strength correlation and
sparseness, including several control experiments, can be found in [10].
To keep the outputs of the filters bounded we enforce the unit variance constraint on each of
the output signals y_k(t). Additional constraints are needed to keep the filters from converging to the same solution; we force their outputs to be uncorrelated. A gradient projection
method can be used to maximize (2) under these constraints. The initial value of W is
selected randomly. See [10] for details.
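The objective and its constraints can also be sketched numerically. The following example (Python/NumPy; the toy signals are invented for illustration and the code only evaluates the objective, it does not perform the gradient projection optimization) computes the temporal response strength correlation (2) with g(u) = ln cosh u for unit-variance output signals:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(u):
    # Strictly convex, even nonlinearity g(u) = ln cosh u.
    return np.log(np.cosh(u))

def temporal_response_strength_correlation(y, dt=1):
    # Objective (2): sum over filters k of E_t{ g(y_k(t)) g(y_k(t - dt)) }.
    return np.sum(np.mean(g(y[:, dt:]) * g(y[:, :-dt]), axis=1))

# Toy output signals for K = 2 filters, normalized to unit variance
# as required by the constraint in the text.
y = rng.standard_normal((2, 1000))
y = y / y.std(axis=1, keepdims=True)

print(temporal_response_strength_correlation(y))
```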
The interpretation of maximization of objective function (2) as estimation of a generative
model is based on the concept of sources with nonstationary variances [11, 12]. The linear
generative model for x(t), the counterpart of equation (1), is similar to the one in [13, 3]:
x(t) = Ay(t).   (3)

Here A = [a_1 ··· a_K] denotes a matrix which relates the image patch x(t) to the activities
of the simple cells, so that each column a_k, k = 1, ..., K, gives the feature that is coded by
the corresponding simple cell. The dimension of x(t) is typically larger than the dimension
of y(t), so that (1) is generally not invertible but an underdetermined set of linear equations.
A one-to-one correspondence between W and A can be established by computing the
pseudoinverse solution A = W^T(WW^T)^{-1}.
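The pseudoinverse relation between W and A is a one-liner. The sketch below (NumPy, with arbitrary dimensions) verifies that WA = I, so the filters exactly recover y(t) from a patch generated as x(t) = Ay(t):

```python
import numpy as np

rng = np.random.default_rng(2)

K, D = 5, 16              # K filters, input dimension D = N^2 (K < D)
W = rng.standard_normal((K, D))

# Pseudoinverse solution A = W^T (W W^T)^{-1} relating filters to basis vectors.
A = W.T @ np.linalg.inv(W @ W.T)

# Sanity check: W A = I, so y = W x recovers y exactly from x = A y.
print(np.allclose(W @ A, np.eye(K)))  # True
```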
The nonstationarity of the variances of sources y(t) means that their variances change over
time, and the variance of a signal is correlated at nearby time points. An example of a signal
with nonstationary variance is shown in Figure 1. It can be shown [12] that optimization of
a cumulant-based criterion, similar to equation (2), can separate independent sources with
nonstationary variances. Thus, the maximization of the objective function can also be interpreted as estimation of generative models in which the activity levels of the sources vary
over time, and are temporally correlated over time. As was noted above, this situation is
analogous to the application of measures of sparseness to estimate linear generative models
with non-Gaussian sources.
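A signal with nonstationary variance, as in Figure 1, is easy to simulate. The following sketch (NumPy; the piecewise-constant variance envelope is an arbitrary illustrative choice) produces a temporally uncorrelated signal whose squared amplitude is nevertheless correlated at nearby time points:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000

# Variance envelope: constant over blocks of 100 samples, switching at random
# between a low and a high value (an invented choice for illustration).
sigma = np.repeat(rng.choice([0.5, 2.0], size=T // 100), 100)

# Temporally uncorrelated signal with nonstationary variance.
y = sigma * rng.standard_normal(T)

c_signal = np.corrcoef(y[1:], y[:-1])[0, 1]            # ~0: y(t) is uncorrelated
c_energy = np.corrcoef(y[1:] ** 2, y[:-1] ** 2)[0, 1]  # >0: the variance coheres
print(c_signal, c_energy)
```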
The algorithm was applied to natural image sequence data, which was sampled from a subset of image sequences used in [14]. The number of samples was 200,000, Δt was 40 ms,
and the sampled image patches were of size 16×16 pixels. Preprocessing consisted of temporal decorrelation, subtraction of local mean, and normalization [10], and dimensionality
reduction from 256 to 160 using principal component analysis [5] (this degree of reduction
retains 95% of signal energy).

Figure 2: Basis vectors estimated using the principle of temporal coherence. The
vectors were estimated from natural image sequences by optimizing temporal response
strength correlation (2) under unit energy and uncorrelatedness constraints (here nonlinearity g(u) = ln cosh u). The basis vectors have been ordered according to
E_t{g(y_k(t)) g(y_k(t − Δt))}, that is, according to their "contribution" to the final objective value (vectors with the largest values at top left).
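The PCA-based dimensionality reduction described above can be sketched as follows (NumPy; the synthetic patch data and its variance profile are placeholders, and the code simply keeps as many principal components as needed to retain 95% of signal energy):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for vectorized image patches: 1000 samples of dimension 256,
# with decaying variance per component so that the reduction is meaningful.
X = rng.standard_normal((1000, 256)) * np.linspace(2.0, 0.05, 256)

X = X - X.mean(axis=0)                      # subtract the mean
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]           # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the smallest number of components retaining 95% of signal energy.
energy = np.cumsum(eigvals) / np.sum(eigvals)
n_keep = int(np.searchsorted(energy, 0.95) + 1)
X_red = X @ eigvecs[:, :n_keep]

print(n_keep, X_red.shape)
```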
Figure 2 shows the basis vectors (columns of matrix A) which emerge when temporal
response strength correlation is maximized for this data. The basis vectors are oriented, localized, and have multiple scales. These are the main features of simple cell receptive fields
[1]. A quantitative analysis, showing that the resulting receptive fields are similar to those
obtained using sparse coding, can be found in [10], where the details of the experiments
are also described.
3 Inter-cell temporal dependencies yield simple cell output pooling
3.1 Model
Temporal response strength correlation, equation (2), measures the temporal coherence of
individual simple cells. In terms of the generative model described above, this means that
the nonstationary variances of different y_k(t)'s have no interdependencies. In this section
we add another layer to the generative model presented above to extend the theory to simple
cell interactions, and to the level of complex cells.
As in the generative model described at the end of the previous section, the output layer
of the model (see Figure 3) is linear, and maps signed cell responses to image features. But
in contrast to the previous section, or models used in independent component analysis [5]
or basic sparse coding [3], we do not assume that the components of y(t) are independent.
Instead, we model the dependencies between these components with a multivariate autoregressive model in the first layer of our model. Let abs(y(t)) = [|y_1(t)| ··· |y_K(t)|]^T, let
v(t) denote a driving noise signal, and let M denote a K × K matrix. Our model is a
multidimensional first-order autoregressive process, defined by

abs(y(t)) = M abs(y(t − Δt)) + v(t).   (4)

As in independent component analysis, we also need to fix the scale of the latent variables
by defining E_t{y_k^2(t)} = 1 for k = 1, ..., K.
Figure 3: The two layers of the generative model. Let abs(y(t)) = [|y_1(t)| ··· |y_K(t)|]^T
denote the amplitudes of simple cell responses. In the first layer, the driving noise signal
v(t) generates the amplitudes of simple cell responses via an autoregressive model. The
signs of the responses are generated randomly between the first and second layer to yield
signed responses y(t). In the second layer, natural video x(t) is generated linearly from
simple cell responses. In addition to the relations shown here, the generation of v(t) is affected by M abs(y(t − Δt)) to ensure non-negativity of abs(y(t)). See text for details.
There are dependencies between the driving noise v(t) and output strengths abs(y(t)),
caused by the non-negativity of abs(y(t)). To take these dependencies into account, we use the following formalism. Let u(t) denote a random vector with
components which are statistically independent of each other. We define
v(t) = max(−M abs(y(t − Δt)), u(t)), where, for vectors a and b,
max(a, b) = [max(a_1, b_1) ··· max(a_n, b_n)]^T. We assume that u(t) and abs(y(t)) are uncorrelated.
To make the generative model complete, a mechanism for generating the signs of cell responses y(t) must be included. We specify that the signs are generated randomly with
equal probability for plus or minus after the strengths of the responses have been generated. Note that one consequence of this is that the different y_k(t)'s are uncorrelated. In the
estimation of the model this uncorrelatedness property is used as a constraint. When this
is combined with the unit variance (scale) constraints described above, the resulting set of
constraints is the same as in the approach described in Section 2.
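The full generative process (the first-layer autoregression (4) with rectified driving noise, followed by random signs) can be simulated directly. In this sketch (NumPy), the coupling matrix M, the dimensions, and the noise distribution are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

K, T = 4, 3000
# Hypothetical nonnegative coupling matrix M with stable dynamics.
M = 0.3 * np.eye(K) + 0.1 * np.ones((K, K))

u = rng.standard_normal((K, T))   # independent driving noise components

amp = np.zeros((K, T))            # abs(y(t)): response amplitudes
for t in range(1, T):
    drive = M @ amp[:, t - 1]
    # v(t) = max(-M abs(y(t - dt)), u(t)) keeps abs(y(t)) non-negative.
    v = np.maximum(-drive, u[:, t])
    amp[:, t] = drive + v

# Signs are drawn with equal probability after the amplitudes are generated.
y = rng.choice([-1.0, 1.0], size=(K, T)) * amp

print(amp.min() >= 0.0)  # True: the rectification keeps amplitudes non-negative
```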
In equation (4), a large positive matrix element M(i, j), or M(j, i), indicates that there is
strong temporal coherence between the output strengths of cells i and j. Thinking in terms
of grouping temporally coherent cells together, matrix M can be thought of as containing
similarities (reciprocals of distances) between different cells. We will use this property in
the experimental section to derive a topography of simple cell receptive fields from M.
3.2 Estimation of the model
To estimate the model defined above we need to estimate both M and W (pseudoinverse
of A). We first show how to estimate M, given W. We then describe an objective function
which can be used to estimate W, given M. Each iteration of the estimation algorithm
consists of two steps. During the first step M is updated, and W is kept constant; during
the second step these roles are reversed.
First, regarding the estimation of M, consider a situation in which W is kept constant. It
can be shown that M can be estimated by using an approximative method of moments, and
that the estimate is given by

M ≈ β E_t{(abs(y(t)) − E_t{abs(y(t))}) (abs(y(t − Δt)) − E_t{abs(y(t))})^T}
    × [E_t{(abs(y(t)) − E_t{abs(y(t))}) (abs(y(t)) − E_t{abs(y(t))})^T}]^{-1},   (5)

where β > 1. Since this multiplier has a constant linear effect in the objective function
given below, its value does not change the optima, so we can set β = 1 in the optimization.
(Details are given in [15].) The resulting estimator is the same as the optimal least mean
squares linear predictor in the case of unconstrained v(t).
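A minimal numerical sketch of the moment estimator (5), with the multiplier set to 1 as the text allows, follows. The toy amplitude process and its coupling matrix are invented purely to check that lagged covariances recover M:

```python
import numpy as np

rng = np.random.default_rng(6)

def estimate_M(amp, dt=1):
    # Moment estimate (5) with multiplier 1: lagged covariance of the
    # amplitudes times the inverse of their instantaneous covariance.
    a0 = amp[:, :-dt] - amp.mean(axis=1, keepdims=True)
    a1 = amp[:, dt:] - amp.mean(axis=1, keepdims=True)
    C_lag = a1 @ a0.T / a0.shape[1]   # E{(a(t) - m)(a(t - dt) - m)^T}
    C_0 = a0 @ a0.T / a0.shape[1]     # E{(a(t) - m)(a(t) - m)^T}
    return C_lag @ np.linalg.inv(C_0)

# Toy amplitude process with known lag-1 coupling, for a sanity check.
K, T = 3, 50000
M_true = np.diag([0.6, 0.4, 0.5])
amp = np.zeros((K, T))
for t in range(1, T):
    amp[:, t] = M_true @ amp[:, t - 1] + np.abs(rng.standard_normal(K))

M_hat = estimate_M(amp)
print(np.round(M_hat, 2))
```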
The estimation of W is more complicated. A rigorous derivation of an objective function
based on well-known estimation principles is very difficult, because the statistics involved
are non-Gaussian, and the processes have difficult interdependencies. Therefore, instead
of deriving an objective function from first principles, we derived an objective function
heuristically, and verified through simulations that the objective function is capable of estimating the two-layer model. The objective function is a weighted sum of the covariances
of filter output strengths at times t − Δt and t, defined by

f(W, M) = Σ_{i=1}^{K} Σ_{j=1}^{K} M(i, j) cov{|y_i(t)|, |y_j(t − Δt)|}.   (6)
In the actual estimation algorithm, W is updated by employing a gradient projection approach to the optimization of (6) under the constraints. The initial value of W is selected
randomly.
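The objective (6) itself is straightforward to evaluate. The sketch below (NumPy; toy signals, with M fixed rather than estimated) computes the weighted sum of lagged amplitude covariances:

```python
import numpy as np

rng = np.random.default_rng(7)

def weighted_amplitude_covariance(y, M, dt=1):
    # Objective (6): sum_ij M(i, j) cov{ |y_i(t)|, |y_j(t - dt)| }.
    a1 = np.abs(y[:, dt:])
    a0 = np.abs(y[:, :-dt])
    a1 = a1 - a1.mean(axis=1, keepdims=True)
    a0 = a0 - a0.mean(axis=1, keepdims=True)
    cov = a1 @ a0.T / a0.shape[1]
    return np.sum(M * cov)

K, T = 3, 1000
y = rng.standard_normal((K, T))   # white toy signals: near-zero objective
M = np.ones((K, K))               # placeholder weighting matrix
print(weighted_amplitude_covariance(y, M))
```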
The fact that the algorithm described above is able to estimate the two-layer model has
been verified through extensive simulations (details can be found in [15]).
3.3 Experiments
The estimation algorithm was run on the same data set as in the previous experiment (see
Section 2). The extracted matrices A and M can be visualized simultaneously by using the
interpretation of M as a similarity matrix (see Section 3.1). Figure 4 illustrates the basis
vectors, that is, columns of A, laid out at spatial coordinates derived from M in a way
explained below. The resulting basis vectors are again oriented, localized and multiscale,
as in the previous experiment.
The two-dimensional coordinates of the basis vectors were determined from M using multidimensional scaling (see figure caption for details). The temporal coherence between the
outputs of two cells i and j is reflected in the distance between the corresponding receptive
fields: the larger the elements M(i, j) and M(j, i) are, the closer the receptive fields are
to each other. We can see that local topography emerges in the results: those basis vectors
which are close to each other seem to be mostly coding for similarly oriented features at
nearby spatial positions. This kind of grouping is characteristic of pooling of simple cell
outputs at complex cell level [1].1
Thus, the estimation of our two-layer model from natural image sequences yields both
simple-cell-like receptive fields, and grouping similar to the pooling of simple cell outputs.
Linear receptive fields emerge in the second layer (matrix A), and cell output grouping
emerges in the first layer (matrix M). Both of these layers are estimated simultaneously.
This is a significant improvement on earlier statistical models of early vision, because no a
priori fixing of either of these layers is needed.
4 Conclusions
We have shown in this paper that when the principle of temporal coherence is applied to natural image sequences, both simple-cell-like receptive fields, and complex-cell-like pooling
of simple cell outputs emerge. These results were obtained with two different approaches
¹Some global topography also emerges: those basis vectors which code for horizontal features
are on the left in the figure, while those that code for vertical features are on the right.
Figure 4: Results of estimating the two-layer generative model from natural image sequences. Basis vectors (columns of A) plotted at spatial coordinates given by applying
multidimensional scaling to M. Matrix M was first converted to a non-negative similarity
matrix M_s by subtracting min_{i,j} M(i, j) from each of its elements, and by setting each
of the diagonal elements to value 1. Multidimensional scaling was then applied to M_s
by interpreting entries M_s(i, j) and M_s(j, i) as similarity measures between cells i and j.
Some of the resulting coordinates were very close to each other, so tight cell clusters were
magnified for purposes of visual display. Details are given in [15].
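The layout procedure in the caption can be imitated with classical (Torgerson) multidimensional scaling. The sketch below (NumPy) is a generic stand-in, not the exact algorithm of [15]: it converts an invented similarity matrix into dissimilarities and embeds the cells in two dimensions, so that mutually similar cells land close together:

```python
import numpy as np

def classical_mds(S, dim=2):
    # Interpret the symmetric similarity matrix S as reciprocal closeness:
    # convert to dissimilarities, then apply classical (Torgerson) MDS.
    D = S.max() - (S + S.T) / 2.0
    np.fill_diagonal(D, 0.0)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # top-dim eigendirections
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy similarity matrix for 6 "cells" forming two tight clusters.
S = np.full((6, 6), 0.1)
S[:3, :3] = 0.9
S[3:, 3:] = 0.9
coords = classical_mds(S)
print(coords.shape)  # (6, 2)
```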
to temporal coherence. The first used temporally coherent simple cell outputs, and the
second was based on a temporal two-layer generative model of natural image sequences.
Simple-cell-like receptive fields emerge in both cases, and the output pooling emerges as a
local topographic property in the case of the two-layer generative model.
These results are important for two reasons. First, to our knowledge this is the first time
that localized and oriented receptive fields with different scales have been shown to emerge
from natural data using the principle of temporal coherence. In some models of invariant
visual representations [8, 16] simple cell receptive fields are obtained as by-products, but
learning is strongly modulated by complex cells, and the receptive fields seem to lack the
important properties of spatial localization and multiresolution. Second, in earlier research
on statistical models of early vision, learning two-layer models has required a priori fixing
of one of the layers. This is not needed in our two-layer model, because both layers emerge
simultaneously in a completely unsupervised manner from the natural input data.
References
[1] Stephen E. Palmer. Vision Science: Photons to Phenomenology. The MIT Press, 1999.
[2] Eero P. Simoncelli and Bruno A. Olshausen. Natural image statistics and neural representation.
Annual Review of Neuroscience, 24:1193-1216, 2001.
[3] Bruno A. Olshausen and David Field. Emergence of simple-cell receptive field properties by
learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.
[4] Anthony Bell and Terrence J. Sejnowski. The independent components of natural scenes are
edge filters. Vision Research, 37(23):3327-3338, 1997.
[5] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis. John Wiley
& Sons, 2001.
[6] Peter Földiák. Learning invariance from transformation sequences. Neural Computation,
3(2):194-200, 1991.
[7] James Stone. Learning visual parameters using spatiotemporal smoothness constraints. Neural
Computation, 8(7):1463-1492, 1996.
[8] Christoph Kayser, Wolfgang Einhäuser, Olaf Dümmer, Peter König, and Konrad Körding. Extracting slow subspaces from natural videos leads to complex cells. In Georg Dorffner, Horst
Bischof, and Kurt Hornik, editors, Artificial Neural Networks - ICANN 2001, volume 2130 of
Lecture Notes in Computer Science, pages 1075-1080. Springer, 2001.
[9] Laurenz Wiskott and Terrence J. Sejnowski. Slow feature analysis: Unsupervised learning of
invariances. Neural Computation, 14(4):715-770, 2002.
[10] Jarmo Hurri and Aapo Hyvärinen. Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation, 2003. In press.
[11] Kiyotoshi Matsuoka, Masahiro Ohya, and Mitsuru Kawamoto. A neural net for blind separation
of nonstationary signals. Neural Networks, 8(3):411-419, 1995.
[12] Aapo Hyvärinen. Blind source separation by nonstationarity of variance: A cumulant-based
approach. IEEE Transactions on Neural Networks, 12(6):1471-1474, 2001.
[13] Aapo Hyvärinen and Patrik O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413-2423, 2001.
[14] J. Hans van Hateren and Dan L. Ruderman. Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex.
Proceedings of the Royal Society of London B, 265(1412):2315-2320, 1998.
[15] Jarmo Hurri and Aapo Hyvärinen. A two-layer dynamic generative model of natural image
sequences. Submitted.
[16] Teuvo Kohonen, Samuel Kaski, and Harri Lappalainen. Self-organized formation of various
invariant-feature filters in the adaptive-subspace SOM. Neural Computation, 9(6):1321-1344,
1997.
A Statistical Mechanics Approach to
Approximate Analytical Bootstrap Averages
Dörthe Malzahn
Manfred Opper
Informatics and Mathematical Modelling, Technical University of Denmark,
R.-Petersens-Plads Building 321, DK-2800 Lyngby, Denmark
Neural Computing Research Group, School of Engineering and Applied Science,
Aston University, Birmingham B4 7ET, United Kingdom
[email protected]
[email protected]
Abstract
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap
averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results
with averages obtained by Monte-Carlo sampling.
1 Introduction
The application of tools from Statistical Mechanics to analyzing the average case performance of learning algorithms has a long tradition in the Neural Computing and Machine
Learning community [1, 2]. When data are generated from a highly symmetric distribution
and the dimension of the data space is large, methods of statistical mechanics of disordered systems allow for the computation of learning curves for a variety of interesting and
nontrivial models ranging from simple perceptrons to Support-vector Machines. Unfortunately, the specific power of this approach, which is able to give explicit distribution
dependent results represents also a major drawback for practical applications. In general,
data distributions are unknown and their replacement by simple model distributions might
only reveal some qualitative behavior of the true learning performance.
In this paper we suggest a novel application of the Statistical Mechanics techniques to
a topic within Machine Learning for which the distribution over data is well known and
controlled by the experimenter. It is given by the resampling of an existing dataset in the
so called bootstrap approach [3]. Creating bootstrap samples of the original dataset by
random resampling with replacement and retraining the statistical model on the bootstrap
sample is a widely applicable statistical technique. By replacing averages over the true
unknown distribution of data with suitable averages over the bootstrap samples one can
estimate various properties such as the bias, the variance and the generalization error of a
statistical model.
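As a concrete illustration of the Monte-Carlo procedure that the analytical approach aims to replace, the sketch below (Python/NumPy, with a toy least-squares regressor standing in for the statistical model) resamples a data set with replacement, retrains on each bootstrap sample, and averages the test error over the points left out of each sample:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy data set: a noisy linear relationship.
N = 100
x = rng.uniform(-1, 1, size=N)
y = 2.0 * x + 0.3 * rng.standard_normal(N)

def fit_predict(xs, ys, x_test):
    # Least-squares fit of a line (a stand-in for retraining the model).
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    w, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return w[0] * x_test + w[1]

# Monte-Carlo bootstrap: resample the data with replacement, retrain,
# and evaluate each model on the points left out of its bootstrap sample.
B = 200
errors = []
for _ in range(B):
    idx = rng.integers(0, N, size=N)          # bootstrap sample indices
    out = np.setdiff1d(np.arange(N), idx)     # points not in the sample
    if len(out) == 0:
        continue
    pred = fit_predict(x[idx], y[idx], x[out])
    errors.append(np.mean((pred - y[out]) ** 2))

boot_error = np.mean(errors)
print(boot_error)
```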
While in general bootstrap averages can be approximated by Monte-Carlo sampling, it is
useful to have also analytical approximations which avoid the time consuming retraining
of the model for each sample. Existing analytical approximations (based on asymptotic
techniques) such as the delta method and the saddle point method (see e.g.[5]) require
usually explicit analytical formulas for the estimators of the parameters for a trained model.
These may not be easily obtained for more complex models in Machine Learning. In this
paper, we discuss an application of the replica method of Statistical Physics [4] which
combined with a variational method [6] can produce approximate averages over the random
drawings of bootstrap samples. Explicit formulas for parameter estimates are avoided and
replaced by the implicit condition that such estimates are expectations with respect to a
certain Gibbs distribution to which the methods of Statistical Physics can be well applied.
We demonstrate the method for the case of regression with Gaussian processes (GP) (which
is a kernel method that has gained high popularity in the Machine Learning community in
recent years [7]) and compare our analytical results with results obtained by Monte-Carlo
sampling.
2 Basic setup and Gibbs distribution
We will keep the notation in this section fairly general, indicating that most of the theory
can be developed for a broader class of models. We assume that a fixed set of data
D = (z_1, ..., z_N) is modeled by a likelihood of the type

P(D | θ) = Π_{i=1}^{N} e^{−h(θ; z_i)},   (1)

where the "training error" h(θ; z_i) is parametrized by a parameter θ (which can be a finite or even
infinite dimensional object) which must be estimated from the data. We will later specialize
to supervised learning problems where each data point z_i = (x_i, y_i) consists of an input x_i (usually
a finite dimensional vector) and a real label y_i. In this case, θ stands for a function θ(x) which
models the outputs, or for the parameters (like the weights of a neural network)
which parameterize such functions. We will later apply our approach to the mean square
error given by

h(θ; z_i) = (1/2) (θ(x_i) − y_i)^2.   (2)
The first basic ingredient of our approach is the assumption that the estimator for the unknown "true" function can be represented as the mean with respect to a posterior distribution over all possible $\theta$'s. This avoids the problem of writing down explicit, complicated
formulas for estimators. To be precise, we assume that the statistical estimator $\hat\theta$ (which
is based on the training set $D$) can be represented as the expectation of $\theta$ with respect to
the measure

$$p(\theta \mid D) = \frac{1}{Z[D]}\, \mu(\theta) \prod_{i=1}^{N} e^{-h(\theta;\, z_i)}, \qquad (3)$$

which is constructed from a suitable prior distribution $\mu(\theta)$ and the likelihood (1). Here

$$Z[D] = \int d\mu(\theta) \prod_{i=1}^{N} e^{-h(\theta;\, z_i)} \qquad (4)$$

denotes a normalizing partition function. Our choice of (3) does not mean that we restrict
ourselves to Bayesian estimators. By introducing specific ("temperature"-like) parameters
in the prior and the likelihood, the measure (3) can be strongly concentrated at its mean,
such that maximum likelihood/MAP estimators can be included in our framework.
3 Bootstrap averages
We will explain our analytical approximation to resampling averages for the case of supervised learning problems. Suppose we are interested in, say, estimating the expected error on test
points which are not contained in the training set $D$ of size $N$, and we have no hold-out
data. We can then create artificial data sets $D_s$ by resampling (with replacement) $m$ data
points from the original set $D$, where each data point is taken with equal probability $1/N$.
Hence, some of the $z_i$'s will appear several times in the bootstrap sample and others not at
all. A proxy for the true average test error¹ can be obtained by retraining the model on each
bootstrap training set $D_s$, calculating the test error only on those points which are not contained in $D_s$, and finally averaging over many sets $D_s$. In practice, the case $m = N$ may be of
main importance, but we will also allow for estimating a larger part of the "learning curve"
by allowing for $m < N$ and $m > N$. We will not discuss the statistical properties of such
bootstrap estimates and their refinements (such as Efron's .632 estimate) in this paper, but
refer the reader to the standard literature [3, 5].
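A brute-force Monte-Carlo version of this procedure is easy to write down. The sketch below is illustrative only: it uses closed-form ridge regression as a stand-in for whatever model is being retrained, retrains on each bootstrap set, and scores each point only on those sets that left it out, in the spirit of the proxy just described.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    # Closed-form ridge estimate; stands in for "retraining the model".
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def oob_bootstrap_error(X, y, n_boot=200, seed=0):
    """Monte-Carlo estimate of the out-of-bag bootstrap test error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    err_sum = np.zeros(n)   # accumulated squared error per point
    err_cnt = np.zeros(n)   # number of sets from which the point was held out
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)   # points left out of the sample
        w = ridge_fit(X[idx], y[idx])
        err_sum[oob] += (y[oob] - X[oob] @ w) ** 2
        err_cnt[oob] += 1
    keep = err_cnt > 0
    return np.mean(err_sum[keep] / err_cnt[keep])

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
err = oob_bootstrap_error(X, y)
```

Each of the `n_boot` iterations requires a full retraining step, which is exactly the cost that the analytical approximation developed below avoids.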
For any given set $D$, we represent a bootstrap sample $D_s$ by the vector of "occupation
numbers" $s = (s_1, \ldots, s_N)$ with $\sum_{i=1}^N s_i = m$; $s_i$ is the number of times example $z_i$
appears in the set $D_s$. Denoting the expectation over random bootstrap samples by $E_s[\cdot]$,
Efron's estimator for the bootstrap generalization error is

$$\varepsilon(m, N) = \frac{1}{N} \sum_{i=1}^{N} \frac{E_s\!\left[\delta_{s_i,0}\, \big(y_i - \langle f(x_i)\rangle_s\big)^2\right]}{E_s[\delta_{s_i,0}]}, \qquad (5)$$

where we have specialized to the square error for testing and $\langle\cdot\rangle_s$ denotes the posterior
average (3) computed on the bootstrap training set $D_s$. Eq. (5) computes the average
bootstrap test error at each data point $z_i$. The Kronecker symbol, defined by $\delta_{a,b} = 1$ for
$a = b$ and $0$ else, guarantees that only realizations of bootstrap training sets which do not
contain the test point contribute. Introducing the abbreviation

$$H(f;\, s) = \sum_{j=1}^{N} s_j\, h(f;\, z_j) \qquad (6)$$

(which is a linear function of $s$), and using the definition of the estimator as an average
of $f$'s over the Gibbs distribution (3), the bootstrap estimate (5) can be rewritten as

$$E_s\!\left[\delta_{s_i,0}\, \big(y_i - \langle f(x_i)\rangle_s\big)^2\right]
= E_s\!\left[\delta_{s_i,0}\, Z^{-2}[D_s] \int\!\!\int d\mu(f^1)\, d\mu(f^2)\,
\big(y_i - f^1(x_i)\big)\big(y_i - f^2(x_i)\big)\, e^{-H(f^1;\, s) - H(f^2;\, s)}\right], \qquad (7)$$

which involves two copies (or replicas) $f^1$ and $f^2$ of the variable $f$. More complicated types
of test errors which are polynomials, or can be approximated by polynomials, in $\langle f(x_i)\rangle_s$ can be
rewritten in a similar way, involving more replicas of the variable $f$.
4 Analytical averages using the "replica trick"
For fixed $m$, the distribution of the $s_i$'s is multinomial. It is simpler (and does not make a big
difference when $N$ is sufficiently large) to work with a Poisson distribution for the
size of the set $D_s$, with $m$ as the mean number of data points in the sample. In this case we
get the simpler, factorizing joint distribution

$$P(s_1, \ldots, s_N) = \prod_{j=1}^{N} e^{-m/N}\, \frac{(m/N)^{s_j}}{s_j!} \qquad (8)$$

for the occupation numbers. With Eq. (8) follows $E_s[\delta_{s_j,0}] = e^{-m/N}$.

¹The average is over the unknown distribution of training data sets.
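The quality of this Poisson approximation is easy to check numerically. In the sketch below (illustrative, with arbitrarily chosen sizes), occupation numbers are drawn from the exact multinomial resampling scheme and the empirical frequency of $s_i = 0$ is compared with the Poisson prediction $e^{-m/N}$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N, m, trials = 500, 500, 2000

# Occupation numbers s_i: how often each of the N points appears in a
# size-m bootstrap sample.  These are exactly multinomial(m, 1/N each).
zero_frac = 0.0
for _ in range(trials):
    s = np.bincount(rng.integers(0, N, size=m), minlength=N)
    zero_frac += np.mean(s == 0)
zero_frac /= trials

# The factorizing Poisson approximation predicts P(s_i = 0) = exp(-m/N).
poisson_pred = math.exp(-m / N)
```

For $m = N$ this recovers the familiar fact that roughly a fraction $e^{-1} \approx 0.37$ of the data points are absent from each bootstrap sample.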
To enable the analytical average over the vector $s$ (which is the "quenched disorder" in
the language of Statistical Physics) it is necessary to introduce the auxiliary quantity

$$\Xi_n(i) = E_s\!\left[\delta_{s_i,0}\, Z^{n-2}[D_s] \int\!\!\int d\mu(f^1)\, d\mu(f^2)\,
\big(y_i - f^1(x_i)\big)\big(y_i - f^2(x_i)\big)\, e^{-H(f^1;\, s) - H(f^2;\, s)}\right] \qquad (9)$$

for real $n$, which allows us to write the desired bootstrap average as the limit $n \to 0$ of
$\Xi_n(i)$. The advantage of this definition is that for integers $n \geq 2$, $\Xi_n(i)$ can be represented
in terms of $n$ replicas of the original variable $f$, for which an explicit average over the $s_j$'s is
possible. At the end of all calculations an analytical continuation to arbitrary real $n$ and the
limit $n \to 0$ must be performed. Using the definition of the partition function (4), we get
for integer $n$

$$\Xi_n(i) = E_s\!\left[\delta_{s_i,0} \int \prod_{a=1}^{n} d\mu(f^a)\,
\big(y_i - f^1(x_i)\big)\big(y_i - f^2(x_i)\big)\, e^{-\sum_{a=1}^{n} H(f^a;\, s)}\right]. \qquad (10)$$
Exchanging the expectation over datasets with the expectation over the $s_j$'s and using the
explicit form of the distribution (8), we obtain

$$\lim_{n\to 0} \Xi_n(i) = e^{-m/N}\, \lim_{n\to 0}\,
\big\langle \big(y_i - f^1(x_i)\big)\big(y_i - f^2(x_i)\big) \big\rangle_n, \qquad (11)$$

where the factor $e^{-m/N}$ cancels against the denominator $E_s[\delta_{s_i,0}]$ of (5), and the brackets
$\langle\cdot\rangle_n$ denote an average with respect to a Gibbs measure for $n$ replicas (in which, strictly,
the term $j = i$ is absent from the sum in (13)) given by

$$p_n(f^1, \ldots, f^n) = \frac{1}{Z_n} \prod_{a=1}^{n} \mu(f^a)\; e^{-H_n(f^1, \ldots, f^n)}, \qquad (12)$$

where

$$H_n(f^1, \ldots, f^n) = \frac{m}{N} \sum_{j=1}^{N}
\left(1 - e^{-\sum_{a=1}^{n} h(f^a;\, z_j)}\right), \qquad (13)$$

and where the partition function $Z_n$ has been introduced for convenience to normalize the
measure for $n \neq 0$. In most nontrivial cases, averages with respect to the measure (12)
cannot be calculated exactly. Hence, we have to apply a sensible approximation. Our idea
is to use techniques which have been frequently applied to probabilistic models [10], such
as the variational approximation, the mean field approximation and the TAP approach. In
this paper, we restrict ourselves to a variational Gaussian approximation. More advanced
approximations will be given elsewhere.
5 Variational approximation

A method frequently used in Statistical Physics, which has also attracted considerable interest in the Machine Learning community, is the variational approximation [8]. Its goal is
to replace an intractable distribution like (12) by a different, sufficiently close distribution
from a tractable class, which we will write in the form

$$q(f^1, \ldots, f^n) = \frac{1}{Z_0} \prod_{a=1}^{n} \mu(f^a)\; e^{-H_0(f^1, \ldots, f^n)}. \qquad (14)$$

$q$ will be used in (11) instead of $p_n$ to approximate the average. $H_0$ will be chosen (see,
e.g., [10]) to minimize the relative entropy between $q$ and $p_n$, resulting in a minimization
of the variational free energy

$$F_{\mathrm{var}} = \langle H_n - H_0 \rangle_q - \ln Z_0, \qquad (15)$$

which is an upper bound to the true free energy $-\ln Z_n$ for any integer $n$. The brackets $\langle\cdot\rangle_q$
denote averages with respect to the variational distribution (14).
For our application to Gaussian process models, we will now specialize to Gaussian priors
$\mu$. For $H_0$, we choose the quadratic expression

$$H_0(f^1, \ldots, f^n) = \sum_{j=1}^{N} \left[\, \sum_{a=1}^{n} \gamma_a(x_j)\, f^a(x_j)
+ \frac{1}{2} \sum_{a,b=1}^{n} \Lambda_{ab}(x_j)\, f^a(x_j)\, f^b(x_j) \right] \qquad (16)$$

as a suitable trial Hamiltonian, leading to a Gaussian distribution (14). The functions
$\gamma_a(x_j)$ and $\Lambda_{ab}(x_j)$ are the variational parameters to be optimized. To continue the
optimal variational solutions to arbitrary real $n$, we assume that the parameters should be
replica symmetric, i.e. we set $\gamma_a(x_j) = \gamma(x_j)$ for all $a$, $\Lambda_{aa}(x_j) = \Lambda_d(x_j)$,
and $\Lambda_{ab}(x_j) = \Lambda(x_j)$ for $a \neq b$. The variational free energy can then be expressed
by the local moments ("order parameters" in the language of Statistical Physics)
$\bar{f}(x_j) = \langle f^a(x_j)\rangle$, $q(x_j) = \langle f^a(x_j)\, f^b(x_j)\rangle$ for $a \neq b$,
and $q_d(x_j) = \langle (f^a(x_j))^2\rangle$, which have the same replica symmetric structure. Since
each of the matrices (such as $\Lambda$) is assumed to have only two types of entries, it
is possible to obtain variational equations which contain the number of replicas $n$ explicitly as a simple parameter, for which the limit $n \to 0$ can be performed (see appendix). In
this limit, the limiting order parameters are found to have simple interpretations: $\bar{f}(x_j)$ and
$q(x_j) - \bar{f}^2(x_j)$ are the (approximate) mean and variance of the predictor $\langle f(x_j)\rangle_s$ with respect to
the average over bootstrap data sets, while $\chi(x_j) = q_d(x_j) - q(x_j)$ becomes the (approximate) bootstrap
averaged posterior variance.
6 Explicit results for regression with Gaussian processes

We consider a GP model for regression with the training energy given by Eq. (2). In this
case, the prior measure $\mu$ can simply be represented by an $N$-dimensional Gaussian
distribution for the vector $(f(x_1), \ldots, f(x_N))$, having zero mean and
covariance matrix $K = \big(K(x_i, x_j)\big)_{ij}$, where
$K(x, x')$ is the covariance kernel of the GP.

Using the limiting (for $n \to 0$) values of the order parameters, and approximating $p_n$ by $q$
in Eq. (11), the explicit result for the bootstrap mean square generalization error is found to
be

$$\varepsilon(m, N) \approx \frac{1}{N} \sum_{i=1}^{N}
\left[ \big(y_i - \bar{f}(x_i)\big)^2 + q(x_i) - \bar{f}^2(x_i) \right]. \qquad (17)$$
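The learning-curve behaviour that the theory approximates analytically can also be obtained by direct simulation. The sketch below is an illustrative toy setup (a one-dimensional GP with arbitrary kernel and noise settings, not the paper's Abalone experiment): it retrains a GP posterior mean on bootstrap samples of varying size $m$ and measures the squared error on the held-out points.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(X_tr, y_tr, X_te, noise=0.1):
    K = rbf_kernel(X_tr, X_tr) + noise * np.eye(len(X_tr))
    return rbf_kernel(X_te, X_tr) @ np.linalg.solve(K, y_tr)

def gp_bootstrap_error(X, y, m, n_boot=200, seed=0):
    """Average squared test error on out-of-sample points, over bootstrap
    training sets of size m drawn (with replacement) from (X, y)."""
    rng = np.random.default_rng(seed)
    N = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, N, size=m)
        oob = np.setdiff1d(np.arange(N), idx)
        if len(oob) == 0:
            continue
        mu = gp_posterior_mean(X[idx], y[idx], X[oob])
        errs.append(np.mean((y[oob] - mu) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=80)

# Larger bootstrap samples should, on average, give lower test error,
# tracing out a learning curve of the kind compared in Figure 1.
err_small = gp_bootstrap_error(X, y, m=10)
err_large = gp_bootstrap_error(X, y, m=80)
```

Every point on such a simulated curve costs `n_boot` GP refits; the variational result (17) produces the whole curve from one set of fixed-point equations.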
The entire analysis can be repeated for testing (keeping the training energy fixed) with a
general loss function of the type $g(y, f(x))$. The result is

$$\varepsilon_g(m, N) \approx \frac{1}{N} \sum_{i=1}^{N} \int Dz\;
g\!\left(y_i,\; \bar{f}(x_i) + z \sqrt{q(x_i) - \bar{f}^2(x_i)}\right), \qquad (18)$$

where $Dz$ denotes the standard Gaussian measure, reflecting the fact that within the variational Gaussian approximation the bootstrap fluctuations of the predictor at $x_i$ are Gaussian
with mean $\bar{f}(x_i)$ and variance $q(x_i) - \bar{f}^2(x_i)$.
[Figure 1 here: two panels plotting bootstrap test error against the size $m$ of the bootstrap sample (0 to 2000), each showing simulation (circles) and theory (lines), with the point $m = N$ marked.]

Figure 1: Average bootstrapped generalization error on Abalone data using square error
loss (left) and epsilon insensitive loss (right). Simulation (circles) and theory (lines) based
on the same data set $D$ with $N = 1000$ data points. The GP model uses an RBF kernel
on whitened inputs, with the data noise variance $\sigma^2$ set to a fixed value.
We have applied our theory to the Abalone data set [11], for which we have computed the
approximate bootstrapped generalization errors for the square error loss and the so-called
$\epsilon$-insensitive loss, which is defined by

$$g(y, h) = \begin{cases} 0 & \text{if } |y - h| \leq \epsilon, \\ |y - h| - \epsilon & \text{otherwise.} \end{cases} \qquad (19)$$

The bootstrap average from our
theory is obtained from Eq. (18). Figure 1 shows the generalization error measured by the
square error loss (Eq. (17), left panel) as well as the one measured by the $\epsilon$-insensitive loss
(right panel). Our theory (lines) is compared with simulations (circles) which were based
on Monte-Carlo sampling averages, computed using the same data set $D$ of size
$N = 1000$. The Monte-Carlo training sets $D_s$ of size $m$ are obtained by sampling from $D$
with replacement. We find a good agreement between theory and simulations in the
region where $m \leq N$. When we oversample the data set ($m > N$), however, the agreement is
not so good, and corrections to our variational Gaussian approximation would be required.
7
Figure 2 shows the bootstrap average of the posterior variance
over the
whole data set
,
$2$2$ , and compares our theory (line) with simulations (circles)
which were based on Monte-Carlo sampling averages. The overall approximation looks
better than for the bootstrap generalization error.
8
2 3 ,+ -
88 /
/
/
/
9
/9
9
/
/
$
,+ +
Finally, it is important to note that all displayed theoretical learning curves have been obtained computationally much faster than their respective simulated learning curves.
7 Outlook
The replica approach to bootstrap averages can be extended in a variety of different directions. Besides the average generalization error, one can compute its bootstrap sample
fluctuations by introducing more complicated replica expressions. It is also straightforward to apply the approach to more complex problems in supervised learning which are
related to Gaussian processes, such as GP classifiers or Support-vector Machines. Since
[Figure 2 here: bootstrap averaged posterior variance (log scale, roughly $10^{-2}$ to $10^{-1}$) against the size $m$ of the bootstrap sample (0 to 2000), showing simulation (circles) and theory (line) for $N = 1000$.]

Figure 2: Bootstrap averaged posterior variance for Abalone data. Simulation (circles) and
theory (line) based on the same data set $D$ with $N = 1000$ data points.
our method requires the solution of a set of variational equations of the size of the original
training set, we can expect that its computational complexity should be similar to the one
needed for making the actual predictions with the basic model. This will also apply to the
problem of very large datasets, where one may use a variety of well known sparse approximations (see e.g. [9] and references therein). It will also be important to assess the quality
of the approximation introduced by the variational method and compare it to alternative
approximation techniques in the computation of the replica average (11), such as the mean
field method and its more complex generalizations (see e.g. [10]).
Acknowledgement
We would like to thank Lars Kai Hansen for stimulating discussions. DM thanks the Copenhagen Image and Signal Processing Graduate School for financial support.
Appendix: Variational equations

For reference, we give the explicit form of the equations for the variational and order
parameters in the limit $n \to 0$. The derivations will be given elsewhere. We obtain

$$\bar{f}(x_i) = -\sum_{j=1}^{N} G_{ij}\, \gamma(x_j), \qquad (20)$$

$$\chi(x_i) = G_{ii}, \qquad (21)$$

where the matrix $G$ is given by

$$G = \big(K^{-1} + \hat\Lambda\big)^{-1} = K\,\big(I + \hat\Lambda K\big)^{-1}, \qquad (22)$$

$K$ is the kernel matrix, and $\hat\Lambda$ is the diagonal matrix with entries
$\Lambda_d(x_j) - \Lambda(x_j)$. The order parameter equations (20)-(22) must be solved together
with the variational equations, which follow from the stationarity of the free energy (15)
with respect to the variational parameters,

$$\frac{\partial F_{\mathrm{var}}}{\partial \gamma(x_j)} = 0, \qquad
\frac{\partial F_{\mathrm{var}}}{\partial \Lambda_d(x_j)} = 0, \qquad
\frac{\partial F_{\mathrm{var}}}{\partial \Lambda(x_j)} = 0, \qquad j = 1, \ldots, N. \qquad (23\text{--}25)$$

For the square error (2), these conditions express $\gamma(x_j)$, $\Lambda_d(x_j)$ and $\Lambda(x_j)$ through
$y_j$, the noise level $\sigma^2$ and the local moments $\bar{f}(x_j)$ and $\chi(x_j)$.
Combining Eqs. (22) and (23), a self-consistent matrix equation for $G$ is
obtained in which $\hat\Lambda$ depends on the diagonal elements $G_{jj}$. Its iterative solution (based
on a good initial guess for $G$) usually requires only a few iterations. The order
parameters $\bar{f}$ and $\chi$ can then be obtained subsequently using Eqs. (20), (21) with
(24), (25).
References

[1] A. Engel and C. Van den Broeck, Statistical Mechanics of Learning (Cambridge University Press, 2001).
[2] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing (Oxford Science Publications, 2001).
[3] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability 57 (Chapman & Hall, 1993).
[4] M. Mézard, G. Parisi, and M. A. Virasoro, Spin Glass Theory and Beyond, Lecture Notes in Physics 9 (World Scientific, 1987).
[5] J. Shao and D. Tu, The Jackknife and Bootstrap, Springer Series in Statistics (Springer-Verlag, 1995).
[6] D. Malzahn and M. Opper, A variational approach to learning curves, in NIPS 14, T. G. Dietterich, S. Becker, and Z. Ghahramani (eds.) (MIT Press, 2002).
[7] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics 118 (Springer, 1996).
[8] R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, 1965).
[9] L. Csató and M. Opper, Sparse Gaussian processes, Neural Computation 14(3), 641-668 (2002).
[10] M. Opper and D. Saad (eds.), Advanced Mean Field Methods: Theory and Practice (MIT Press, 2001).
[11] From http://www1.ics.uci.edu/ mlearn/MLSummary.html. The data set contains 4177 examples. We used a representative fraction (the fourth block of 1000 data points from the list).
Coefficients for Audio Signal Enhancement
Patrick J. Wolfe
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, UK
[email protected]
Simon J. Godsill
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, UK
[email protected]
Abstract
The Bayesian paradigm provides a natural and effective means of exploiting prior knowledge concerning the time-frequency structure of sound
signals such as speech and music, something which has often been overlooked in traditional audio signal processing approaches. Here, after constructing a Bayesian model and prior distributions capable of taking into
account the time-frequency characteristics of typical audio waveforms,
we apply Markov chain Monte Carlo methods in order to sample from the
resultant posterior distribution of interest. We present speech enhancement results which compare favourably in objective terms with standard
time-varying filtering techniques (and in several cases yield superior performance, both objectively and subjectively); moreover, in contrast to
such methods, our results are obtained without an assumption of prior
knowledge of the noise power.
1 Introduction
Natural sounds can be meaningfully represented as a superposition of translated and
frequency-modulated versions of simple functions (atoms). As a result, so-called timefrequency representations are ubiquitous in audio signal processing. The focus of this
paper is on signal enhancement via a regression in which time-frequency atoms form the
regressors. This choice is motivated by the notion that an atomic time-frequency decomposition is the most natural way to split an audio waveform into its constituent parts, such as
note attacks and steady pitches for music, voiced and unvoiced speech, and so on. Moreover, these features, along with prior knowledge concerning their generative mechanisms,
are most easily described jointly in time and frequency through the use of Gabor frames.
1.1 Gabor Frames
We begin by briefly reviewing the concept of Gabor systems; detailed results and proofs
may be found in, for example, [1]. Consider a function $g$ whose time-frequency support
Audio examples described in this paper, as well as Matlab code allowing for their reproduction,
may be found at the author's web page: http://www-sigproc.eng.cam.ac.uk/ pjw47.
is centred about the origin, and let $g_{m,n}$ denote a time-shifted (translation by $na$) and
frequency-shifted (modulation by $mb$) version thereof; such a collection of shifts defines a
sampling grid over the time-frequency plane. Then (roughly speaking) if $g$ is reasonably
well-behaved and the lattice $\{(na, mb)\}$ is sufficiently dense, the Gabor system $\{g_{m,n}\}$
provides a (possibly non-orthogonal, or even redundant) series expansion of any function in a
Hilbert space, and is thus said to generate a frame.

More formally, a Gabor frame $\{g_{m,n}\}$ is a dictionary of time-frequency shifted versions of
a single basic window function $g$, having the additional property that there exist constants
$0 < A \leq B < \infty$ (frame bounds) such that

$$A\, \|f\|^2 \;\leq\; \sum_{m,n} \big|\langle f, g_{m,n}\rangle\big|^2 \;\leq\; B\, \|f\|^2
\qquad \forall f \in \mathcal{H},$$

where $\mathcal{H}$ is the Hilbert space of functions of interest and $\langle\cdot,\cdot\rangle$ denotes the inner product.
This property can be understood as an approximate Plancherel formula, guaranteeing completeness of the set of building blocks in the function space. That is, any signal $f \in \mathcal{H}$ can
be represented as an absolutely convergent infinite series of the $g_{m,n}$, or in the finite case,
a linear combination thereof. Such a representation is given by the following formula:

$$f = \sum_{m,n} \langle f, \tilde{g}_{m,n}\rangle\, g_{m,n}, \qquad (1)$$

where $\{\tilde{g}_{m,n}\}$ is a dual frame for $\{g_{m,n}\}$. Dual frames exist for any frame; however,
the canonical dual frame, guaranteeing minimal (two-)norm coefficients in the expansion of (1), is given by $\tilde{g}_{m,n} = S^{-1} g_{m,n}$, where $S$ is the frame operator, defined by
$Sf = \sum_{m,n} \langle f, g_{m,n}\rangle\, g_{m,n}$.

The notion of a frame thus incorporates bases as well as certain redundant representations;
for example, an orthonormal basis is a tight frame ($A = B$) with $A = B = 1$; the union of
two orthonormal bases yields a tight frame with frame bounds $A = B = 2$. Importantly, a
key result in time-frequency theory (the Balian-Low theorem) implies that redundancy is
a necessary consequence of good time-frequency localisation.¹ However, even with redundancy, the frame operator may, in certain special cases, be diagonalised. If, furthermore,
the $g_{m,n}$ are normalised in such a case, then analysis and synthesis can take place using
the same window and inversion of the frame operator is avoided completely. Accordingly,
Daubechies et al. [2] term such cases "painless nonorthogonal expansions".
1.2 Short-Time Spectral Attenuation
The standard noise reduction method in engineering applications is actually such an expansion in disguise (see, e.g., [3]). In this method, known as short-time spectral attenuation,
a time-varying filter is applied to the frequency-domain transform of a noisy signal, using
the overlap-add method of short-time Fourier analysis and synthesis. The observed signal
y is first divided into overlapping segments through multiplication by a smooth, "sliding"
window function, which is non-zero only for a duration on the order of tens of milliseconds. The Fourier transform is then taken on each length-$L$ interval (possibly zero-padded
to length $M$), and the resultant vectors of spectral values $\{Y_k\}$
can be plotted side by side to yield a time-frequency representation known as the Gabor
transform, or sub-sampled short-time Fourier transform, the modulus of which is the well-known spectrogram. The coefficients of this transform are attenuated to some degree in
order to reduce the noise; as shown in Fig. 1, the individual short-time intervals $Y_k$ are then
inverse-transformed, multiplied by a smoothing window, and added together in an appropriate manner to form a time-domain signal reconstruction $\hat{x}$.
¹There is, however, an exception for real signals, which will be explored in more detail in §3.2.
Figure 1: Short-time spectral attenuation
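The analysis-synthesis chain of Fig. 1 can be sketched compactly. The example below is an illustrative sketch (not the authors' code) using a periodic Hanning window at 50% overlap; since these windows sum to one, passing the coefficients through unchanged reconstructs the signal exactly away from the edges, and any attenuation rule could be applied to `Y` in between the two steps.

```python
import numpy as np

def stft_frames(x, L=256):
    """Split x into 50%-overlapping, Hann-windowed frames and DFT them."""
    hop = L // 2
    w = 0.5 * (1.0 - np.cos(2 * np.pi * np.arange(L) / L))  # periodic Hann
    n_frames = (len(x) - L) // hop + 1
    frames = np.stack([x[k * hop : k * hop + L] * w for k in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def overlap_add(Y, L=256):
    """Inverse-DFT each (possibly attenuated) frame and overlap-add."""
    hop = L // 2
    frames = np.fft.irfft(Y, n=L, axis=1)
    out = np.zeros((len(frames) - 1) * hop + L)
    for k, f in enumerate(frames):
        out[k * hop : k * hop + L] += f
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=4096)
Y = stft_frames(x)        # analysis (attenuation of Y would go here)
x_rec = overlap_add(Y)    # synthesis with unit gain

# Periodic Hann windows at 50% overlap sum to one, so away from the
# signal edges the reconstruction is exact.
m = min(len(x), len(x_rec))
err = np.max(np.abs(x_rec[256 : m - 256] - x[256 : m - 256]))
```

Windowing only on the analysis side keeps the overlap-add condition simple; schemes that window on both sides (as with the tight, normalised window used later in this paper) require a window whose squared translates sum to a constant.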
This method of noise reduction, while being relatively fast and easily understood, exhibits
several shortcomings: in its most basic form it ignores dependencies between the time-domain data in adjacent short-time blocks, and it assumes knowledge of the noise variance.
Moreover, previous approaches in this vein have relied (either explicitly or implicitly) on
independence assumptions amongst the time-frequency coefficients; see, e.g., [4]. Thus,
with the aim of improving upon this popular class of audio noise reduction techniques, we
have used these approaches as a starting point from which to proceed with a fully Bayesian
analysis. As a step in this direction, we propose a Gabor regression model as follows.
2 Coefficient Shrinkage for Audio Signal Enhancement
2.1 Gabor Regression
Let $x \in \mathbb{R}^N$ denote a sampled audio waveform, the observation of which has been corrupted
by additive white Gaussian noise of variance $\sigma^2$, yielding the simple additive model $y =
x + d$. We consider regression in this case using a design matrix obtained from a Gabor
frame.²
In our particular case, this choice of regressors is motivated by a desire for constant absolute bandwidth, as opposed to, e.g., the constant relative bandwidth of wavelets. We do not
attempt to address here the relative merits of Gabor and wavelet frames per se; rather, we
simply note that the changing frequency content of natural sound signals carries much of
their information, and thus a time-frequency representation may well be more appropriate
than a time-scale one. Moreover, audio signal enhancement results with wavelets have been
for the most part disappointing (witness the dearth of literature in this area), whereas standard engineering practice has evolved to use time-varying filtering, which is inherently
Gabor analysis.
Although space does not permit a discussion of the relevance of Gabor-type transforms
to auditory perception (see, e.g., [5]), as a final consideration it is interesting to note that
Gabor's original formulations [6, 7] were motivated by psychoacoustic as well as information-theoretic considerations.
²Technically, we consider the ring of integers modulo $N$, under the assumption (without loss of generality) that the vector of sampled observations y has been extended to the required length in a proper way at its boundary before being periodically extended.
2.2 Bayesian Model

By the completeness property of Gabor frames, any $x \in \mathbb{R}^N$ can be represented as a linear
combination of the elements of the frame. Thus, one has the model

$$y = Gc + d,$$

where the columns of $G \in \mathbb{R}^{N \times M}$ form the Gabor synthesis atoms, and the elements of $c \in \mathbb{R}^M$
represent the respective synthesis coefficients. To complete this model we assume an independent, identically distributed Gaussian noise vector, conditionally Gaussian coefficients,
and inverted-Gamma conjugate priors:

$$d \mid \sigma^2 \sim \mathcal{N}(0, \sigma^2 I), \qquad
c \mid \sigma_c^2 \sim \mathcal{N}\big(0, \mathrm{diag}(\sigma_c^2)\big), \qquad
\sigma_{c_k}^2 \sim \mathcal{IG}(\alpha, \beta), \quad k = 1, \ldots, M, \qquad (2)$$

where $\mathrm{diag}(\sigma_c^2)$ denotes a diagonal matrix, the individual elements of which are assumed
to be distributed as in (2) above, and $\alpha$ and $\beta$ are hyperparameters. We note that it is
possible to obtain vague priors through the choice of these hyperparameters; alternatively,
one may wish to incorporate genuine prior knowledge about audio signal behaviour through
them. In §3.2, we consider the case in which frequency-dependent coefficient priors are
specified in order to exploit the time-frequency structure of natural sound signals.

The choice of an inverted-Gamma prior is justified by its flexibility; for instance,
in many audio enhancement applications one may be able to obtain a good estimate of
the noise variance, which may in turn be reflected in the choice of hyperparameters.
However, in order to demonstrate the performance of our model in the "worst-case"
scenario of little prior information, we assume here a diffuse prior
$p(\sigma^2) \propto 1/\sigma^2$ for $\sigma^2$.
2.3 Implementation
As a means of obtaining samples from the posterior distribution and hence the corresponding point estimates, we propose to sample from the posterior using Markov chain Monte
Carlo (MCMC) methods [8]. By design, all model parameters may be easily sampled from
their respective full conditional distributions, thus allowing the straightforward employment of a Gibbs sampler [9].
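Because all full conditionals are standard distributions, the sampler itself is short. The sketch below is an illustrative implementation under simplifying assumptions: a square orthonormal matrix stands in for the overcomplete Gabor frame, the hyperparameters alpha and beta are shared across coefficients, and all settings are arbitrary. It cycles through the conditionals for the coefficients, the coefficient variances and the noise variance, and averages the reconstructions after burn-in.

```python
import numpy as np

def gibbs_denoise(y, G, n_iter=500, burn=200, alpha=2.0, beta=0.1, seed=0):
    """Gibbs sampler for y = G c + d with conditionally Gaussian coefficients,
    inverted-Gamma coefficient variances, and a Jeffreys prior on the noise
    variance.  Returns the posterior-mean reconstruction and the last noise
    variance sample."""
    rng = np.random.default_rng(seed)
    N, M = G.shape
    v = np.ones(M)            # coefficient variances
    s2 = 1.0                  # noise variance
    x_sum = np.zeros(N)
    kept = 0
    GtG, Gty = G.T @ G, G.T @ y
    for it in range(n_iter):
        # c | rest ~ N(mu, Sigma), Sigma = (G'G/s2 + diag(1/v))^-1
        prec = GtG / s2 + np.diag(1.0 / v)
        Lch = np.linalg.cholesky(prec)
        mu = np.linalg.solve(prec, Gty / s2)
        c = mu + np.linalg.solve(Lch.T, rng.normal(size=M))
        # v_k | c_k ~ IG(alpha + 1/2, beta + c_k^2 / 2)
        v = 1.0 / rng.gamma(alpha + 0.5, 1.0 / (beta + 0.5 * c**2))
        # s2 | rest ~ IG(N/2, |y - Gc|^2 / 2)   (Jeffreys prior)
        r = y - G @ c
        s2 = 1.0 / rng.gamma(0.5 * N, 2.0 / (r @ r))
        if it >= burn:
            x_sum += G @ c
            kept += 1
    return x_sum / kept, s2

rng = np.random.default_rng(3)
N = 64
G = np.linalg.qr(rng.normal(size=(N, N)))[0]  # orthonormal stand-in frame
c_true = np.zeros(N)
c_true[:5] = rng.normal(scale=3.0, size=5)    # a few large coefficients
x = G @ c_true
y = x + 0.2 * rng.normal(size=N)
x_hat, s2_last = gibbs_denoise(y, G)
```

The shrinkage behaviour is visible directly: coefficients whose sampled variances stay small are pulled toward zero, removing noise, while the few large coefficients are left nearly untouched.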
In all of the experiments described herein, a tight, normalised Hanning window was employed as the Gabor window function, and a regular time-frequency lattice was constructed
to yield a redundancy of two (corresponding to the common practice of a 50% window
overlap in the overlap-add method). The arithmetic mean of the signal reconstructions
from 1000 iterations (following 1000 iterations of "burn-in", by which time the sampler
appeared to have reached a stationary regime in each case) was taken to be the final result.
As a further note, colour plots and representative audio examples may be found at the URL
specified on the title page of this paper.
While here we show results from random initialisations, with no attempt made to optimise
parameters, we note that in practice it may be most efficient to initialise the sampler with
the Gabor expansion of the noisy observation vector (such an initialisation will indeed
be possible without inversion of the frame operator in the cases we consider here, which
correspond to the overlap-add method described in # 1.2). It can also be expected that,
where possible, convergence may be speeded by starting the sampler in regions of likely
high posterior probability, via use of a preliminary noise reduction method to obtain a
robust coefficient initialisation.
3 Simulations
3.1 Coefficient Shrinkage in the Overcomplete Case
To test the noise reduction capabilities of the Gabor regression model, a speech signal of the
short utterance "sound check", sampled at 11.025 kHz, was artificially degraded with white
Gaussian noise to yield signal-to-noise ratios (SNR) between 0 and 20 dB. At each SNR, ten
runs of the sampler, at different random initialisations and using different pseudo-random
number sequences, were performed as specified above. By way of comparison, three standard methods of short-time spectral attenuation (the Wiener filter, magnitude spectral subtraction, and the Ephraim and Malah suppression rule (EMSR) [4]) were also tested on the
same data (noise variances were estimated from 5 seconds of the noise realisation in these
cases); the results are shown in Fig. 2, along with estimates of the noise variance averaged
over each of the ten runs.
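For reference, the first two comparison methods reduce to simple per-bin gain rules; their textbook forms are sketched below (illustrative only; the EMSR gain is more involved and is omitted here).

```python
import numpy as np

def wiener_gain(Y_mag2, noise_var):
    """Wiener suppression rule per bin: G = max(0, 1 - v / |Y|^2)."""
    return np.maximum(0.0, 1.0 - noise_var / np.maximum(Y_mag2, 1e-12))

def spectral_subtraction_gain(Y_mag, noise_std):
    """Magnitude spectral subtraction: G = max(0, 1 - s / |Y|)."""
    return np.maximum(0.0, 1.0 - noise_std / np.maximum(Y_mag, 1e-12))

# A bin well above the noise floor is passed nearly unchanged,
# while a bin at the noise floor is attenuated completely.
g_hi = wiener_gain(np.array([100.0]), 1.0)[0]
g_lo = wiener_gain(np.array([1.0]), 1.0)[0]
```

Applying such a gain independently in every time-frequency bin is what produces the "musical" residual noise discussed below: isolated bins randomly exceed the noise floor and survive as short tonal bursts.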
[Figure 2 here: two panels. Left (a): output SNR (dB) against input SNR (dB) for the Wiener filter rule, Gabor regression, magnitude spectral subtraction, and the Ephraim and Malah rule; gains and corresponding interpolants are shown, with the individual realisations from the ten sampler runs so closely spaced as to be indistinguishable. Right (b): true and estimated noise variances (each averaged over ten runs of the sampler), log $\sigma^2$ against input SNR (dB).]
Figure 2: Noise reduction results for the Gabor regression experiment of § 3.1
As it is able to outperform many of the short-time methods over a wide range of SNR (despite its relative disadvantage of not being given the estimated noise variance), and is also able to accurately estimate the noise variance over this range, the results of Fig. 2 would seem to indicate the appropriateness of the Gabor regression scheme for audio signal enhancement. However, listening tests reveal that the algorithm, while improving upon the shortcomings of standard approaches discussed in § 1.2, still suffers from the same "musical" residual noise. The EMSR, on the other hand, is known for its more colourless residual noise (although, as can be seen from Fig. 2, it tends to exhibit severe over-smoothing at higher SNR); we address this issue in the following section.
3.2 Coefficient Shrinkage Using Wilson Bases
In the case of a real signal, it is still possible to obtain good time-frequency localisation
without incurring the penalty of redundancy through the use of Wilson bases (also known
in the engineering literature as lapped transforms; see, e.g., [1]).
As an example of incorporating basic prior knowledge about audio signal structure in a relatively simple and straightforward manner, now consider letting the scale factor of (2) become an inverse function of frequency, so that elements of the inverted-Gamma-distributed coefficient variance vector c, although independent, are no longer identically distributed.
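Such a prior can be sketched in a few lines; the 1/(1 + k) decay of the inverted-Gamma scale across frequency bins below is a hypothetical illustrative choice, not the functional form used in the experiments:

```python
import random

random.seed(0)

def inverted_gamma(shape, scale):
    # If g ~ Gamma(shape, rate=scale) then 1/g ~ Inverted-Gamma(shape, scale).
    return 1.0 / random.gammavariate(shape, 1.0 / scale)

def freq_dependent_scales(n_bins, base_scale=1.0):
    # Scale chosen inversely proportional to (1 + bin index), so that
    # high-frequency coefficient variances are a priori smaller.
    return [base_scale / (1.0 + k) for k in range(n_bins)]

scales = freq_dependent_scales(4)
variances = [inverted_gamma(shape=2.0, scale=s) for s in scales]
print([round(s, 3) for s in scales])  # [1.0, 0.5, 0.333, 0.25]
```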
To test the effects of such a frequency-dependent prior in the context of a Wilson regression model (in comparison with the diffuse priors employed in § 3.1), the speech signal of the previous example was degraded with white Gaussian noise to yield an SNR of 10 dB. Once again, posterior mean estimates over the last 1000 iterations of a
2000-iteration Gibbs sampler run were taken as the final result. Figure 3 shows samples of
the noise variance parameter in this case. While both the diffuse and frequency-dependent
[Figure: sampled noise variance (scale ×10⁻⁴) versus Gibbs iteration (1 to 2000) for the identical-prior and frequency-dependent-prior cases, with the true noise variance value shown for reference.]
Figure 3: Noise variance samples for the two Wilson regression schemes of § 3.2
prior schemes yield an estimate close to the true noise variance, and indeed give similar
SNR gains of 3.07 and 2.85 dB, respectively, the corresponding restorations differ greatly
in their perceptual quality. Figure 4 shows spectrograms of the clean and noisy test signal,
as well as the resultant restorations; whereas Fig. 5 shows waveform and spectrogram plots
of the corresponding residuals (for greater clarity, colour plots are provided on-line).
It may be seen from Figs. 4 and 5 that the residual noise in the case of the frequency-dependent priors appears less coloured, and in fact this restoration suffers much less from the so-called "musical noise" artefact common to audio signal enhancement methods. It is well known that a "whiter-sounding" residual is perceptually preferable; in fact, some noise reduction methods have attempted this explicitly [10].
4 Discussion
Here we have presented a model for regression of audio signals, using elements of a Gabor
frame as a design matrix. Note that in alternative contexts, others have also considered
scale mixtures of normals as we do here (see, e.g., [11], [12]); in fact, the priors discussed
in [13] constitute special cases of those employed in the Gabor regression model. This
model may also be extended to include indicator variables, thus allowing one to perform
Bayesian model averaging [8], [9]. In this case it may be desirable to employ an even larger
[Figure: four spectrograms: the original speech signal, the degraded speech signal, and the posterior mean reconstructions for the identical-prior and frequency-dependent-prior cases.]
Figure 4: Spectrograms for the two Wilson regression schemes of § 3.2 in the case of
diffuse vs. frequency-dependent priors (grey scale is proportional to log-amplitude)
"dictionary" of regressors, in order to obtain the most parsimonious representation possible.³ Multi-resolution wavelet-like schemes are one of many possibilities; for an example
application in this vein we refer the reader to [14].
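The indicator-variable extension can be sketched as a spike-and-slab prior on each regressor: a binary indicator switches a coefficient in or out of the model, and averaging over posterior draws of the indicators performs Bayesian model averaging. The inclusion probability and slab width below are arbitrary illustrative values:

```python
import random

random.seed(1)

def sample_coefficient(p_active=0.2, slab_std=1.0):
    # Indicator variable: with probability p_active the regressor enters
    # the model and its coefficient is drawn from the Gaussian "slab";
    # otherwise the coefficient is pinned to zero (the "spike").
    active = random.random() < p_active
    coef = random.gauss(0.0, slab_std) if active else 0.0
    return active, coef

draws = [sample_coefficient() for _ in range(1000)]
frac_active = sum(a for a, _ in draws) / len(draws)
print(round(frac_active, 2))  # close to the prior inclusion probability 0.2
```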
The strength of such a fully Bayesian approach lies largely in its extensibility to allow for
more accurate signal and noise models; in this vein work is continuing on the development
of appropriate conditional prior structures for audio signals, including the formulation of
Markov random field models. The main weakness of this method at present lies in the
computational intensity inherent in the sampling scheme; a comparison to more recent and
sophisticated probabilistic methods (e.g., [15], [16]) is now in order to determine whether
the benefits to be gained from such an approach outweigh its computational drawbacks.
References
[1] Gröchenig, K. (2001). Foundations of Time-Frequency Analysis. Boston: Birkhäuser.
[2] Daubechies, I., Grossmann, A., and Meyer, Y. (1986). Painless nonorthogonal expansions. J. Math. Phys. 27, 1271–1283.
[3] Dörfler, M. (2001). Time-frequency analysis for music signals: a mathematical approach. J. New Mus. Res. 30, 3–12.
[4] Ephraim, Y. and Malah, D. (1984). Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Trans. Acoust., Speech, Signal Processing ASSP-32, 1109–1121.
³ It remains an open question as to whether the resultant variable selection problem would be amenable to approaches other than MCMC, for instance a perfect sampling scheme.
[Figure: waveform (signal amplitude versus sample number) and spectrogram (frequency in Hz versus time in s) plots of the residuals for the identical-prior and frequency-dependent-prior cases.]
Figure 5: Waveform and spectrogram plots of the Wilson regression residuals
[5] Wolfe, P. J. and Godsill, S. J. (2001). Perceptually motivated approaches to music restoration. J. New Mus. Res. 30, 83–92.
[6] Gabor, D. (1946). Theory of communication. J. IEE 93, 429–457.
[7] Gabor, D. (1947). Acoustical quanta and the theory of hearing. Nature 159, 591–594.
[8] Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer.
[9] Gilks, W. R., Richardson, S., and Spiegelhalter, D. J. (1996). Markov Chain Monte Carlo in Practice. London: Chapman & Hall.
[10] Ephraim, Y. and Van Trees, H. L. (1995). A signal subspace approach for speech enhancement. IEEE Trans. Speech Audio Processing 3, 251–266.
[11] Shepard, N. (1994). Partial non-Gaussian state space. Biometrika 81, 115–131.
[12] Godsill, S. J. and Rayner, P. J. W. (1998). Digital Audio Restoration: A Statistical Model Based Approach. Berlin: Springer-Verlag.
[13] Figueiredo, M. A. T. (2002). Adaptive sparseness using Jeffreys prior. In T. G. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14, pp. 697–704. Cambridge, MA: MIT Press.
[14] Wolfe, P. J., Dörfler, M., and Godsill, S. J. (2001). Multi-Gabor dictionaries for audio time-frequency analysis. In Proc. IEEE Worksh. App. Signal Processing Audio Acoust., pp. 43–46.
[15] Attias, H., Deng, L., Acero, A., and Platt, J. C. (2001). A new method for speech denoising and robust speech recognition using probabilistic models for clean speech and for noise. In Proc. Eurospeech 2001, vol. 3, pp. 1903–1906.
[16] Attias, H., Platt, J. C., Acero, A., and Deng, L. (2001). Speech denoising and dereverberation using probabilistic models. In T. Leen (ed.), Advances in Neural Information Processing Systems 13, pp. 758–764. Cambridge, MA: MIT Press.
Robust Novelty Detection with
Single-Class MPM
Gert R.G. Lanckriet
EECS, U.C. Berkeley
[email protected]
Laurent El Ghaoui
EECS, U.C. Berkeley
[email protected]
Michael I. Jordan
Computer Science and
Statistics, U.C. Berkeley
jordan@cs.berkeley.edu
Abstract
In this paper we consider the problem of novelty detection, presenting an algorithm that aims to find a minimal region in input space containing a fraction α of the probability mass underlying a data set. This algorithm, the "single-class minimax probability machine (MPM)", is built on a distribution-free methodology that minimizes the worst-case probability of a data point falling outside of a convex set, given only the mean and covariance matrix of the distribution and making no further distributional assumptions. We present a robust approach to estimating the mean and covariance matrix within the general two-class MPM setting, and show how this approach specializes to the single-class problem. We provide empirical results comparing the single-class MPM to the single-class SVM and a two-class SVM method.
1 Introduction
Novelty detection is an important unsupervised learning problem in which test data are to be judged as having been generated from the same or a different process as that which generated the training data. In essence, we wish to estimate a quantile of the distribution underlying the training data: for a fixed constant α ∈ (0, 1], we attempt to find a (small) set Q such that Pr{y ∈ Q} = α, where, for novelty detection, α is typically chosen near one (Schölkopf and Smola, 2001, Ben-David and Lindenbaum, 1997). This formulation of novelty detection in terms of quantile estimation is to be compared to the (costly) approach of estimating a density based on the training data and thresholding the estimated density.
Although of reduced complexity when compared to density estimation, multivariate
quantile estimation is still a challenging problem, necessitating computationally
efficient methods for representing and manipulating sets in high dimensions. A
significant step forward in this regard was provided by Schölkopf and Smola (2001),
who treated novelty detection as a "single-class" classification problem in which
data are separated from the origin in feature space. This allowed them to invoke
the computationally-efficient technology of support vector machines.
In the current paper we adopt the "single-class" perspective of Schölkopf and Smola
(2001), but make use of a different kernel-based technique for finding discriminant
boundaries: the minimax probability machine (MPM) of Lanckriet et al. (2002).
To see why the MPM should be particularly appropriate for quantile estimation,
consider the following theorem, which lies at the core of the MPM. Given a random vector y with mean ȳ and covariance matrix Σ_y, and given arbitrary constants a ≠ 0, b such that aᵀȳ ≤ b, we have (for a proof, see Lanckriet et al., 2002):

    inf_{y ~ (ȳ, Σ_y)} Pr{aᵀy ≤ b} ≥ α   ⇔   b − aᵀȳ ≥ κ(α) √(aᵀΣ_y a),        (1)

where κ(α) = √(α/(1 − α)), and α ∈ [0, 1). Note that this is a "distribution-free" result: the infimum is taken over all distributions for y having mean ȳ and covariance matrix Σ_y (assumed to be positive definite for simplicity). While Lanckriet
et al. (2002) were able to exploit this theorem to design a binary classification algorithm, it is clear that the theorem provides even more direct leverage on the
"single-class" problem: it directly bounds the probability of an observation falling
outside of a given set.
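The bound in Eq. (1) is simple to check numerically: setting b at the threshold value for a given α guarantees Pr{aᵀy ≤ b} ≥ α for every distribution with the stated mean and covariance, and in particular for a Gaussian. A small Monte Carlo sketch, with made-up a, ȳ and a diagonal Σ_y:

```python
import math
import random

random.seed(2)

def kappa(alpha):
    return math.sqrt(alpha / (1.0 - alpha))

a = [1.0, 2.0]
y_bar = [0.5, -0.5]
sigma = [1.0, 0.25]  # diagonal entries of Sigma_y
alpha = 0.9

a_sigma_a = sum(a[i] ** 2 * sigma[i] for i in range(2))
b = sum(a[i] * y_bar[i] for i in range(2)) + kappa(alpha) * math.sqrt(a_sigma_a)

# Empirical check with one member of the distribution class (a Gaussian):
hits = 0
n = 20000
for _ in range(n):
    y = [random.gauss(y_bar[i], math.sqrt(sigma[i])) for i in range(2)]
    hits += a[0] * y[0] + a[1] * y[1] <= b
print(hits / n >= alpha)  # True
```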
There is one important aspect of the MPM formulation that needs further consideration, however, if we wish to apply the approach to the novelty detection problem.
In particular, ȳ and Σ_y are usually unknown in practice and must be estimated
from data. In the classification setting, Lanckriet et al. (2002) successfully made
use of plug-in estimates of these quantities; in some sense the bias incurred by the
use of plug-in estimates in the two classes appears to "cancel" and have diminished
overall impact on the discriminant boundary. In the one-class setting, however, the
uncertainty due to estimation of ȳ and Σ_y translates directly into movement of the
discriminant boundary and cannot be neglected.
We begin in Section 2 by revisiting the MPM and showing how to account for
uncertainty in the means and covariance matrices within the framework of robust
estimation. Section 3 then applies this robust estimation approach to the single-class MPM problem. We present empirical results in Section 4 and present our
conclusions in Section 5.
2 Robust Minimax Probability Machine (R-MPM)
Let x, y ∈ ℝⁿ denote random vectors in a binary classification problem, modelling data from each of two classes, with means and covariance matrices given by x̄, ȳ ∈ ℝⁿ and Σ_x, Σ_y ∈ ℝⁿˣⁿ (both symmetric and positive semidefinite), respectively. We wish to determine a hyperplane H(a, b) = {z | aᵀz = b}, where a ∈ ℝⁿ\{0} and b ∈ ℝ, that maximizes the worst-case probability α that future data points are classified correctly with respect to all distributions having these means and covariance matrices:

    max_{α, a ≠ 0, b}  α   s.t.   inf_{x ~ (x̄, Σ_x)} Pr{aᵀx ≥ b} ≥ α,
                                  inf_{y ~ (ȳ, Σ_y)} Pr{aᵀy ≤ b} ≥ α,        (2)

where x ~ (x̄, Σ_x) refers to the class of distributions that have mean x̄ and covariance Σ_x, but are otherwise arbitrary; likewise for y. The worst-case probability of misclassification is explicitly obtained and given by 1 − α.
Solving this optimization problem involves converting the probabilistic constraints
in Eq. (2) into deterministic constraints, a step which is achieved via the theorem
referred to earlier in Eq. (1). This eventually leads to the following convex optimization problem, whose solution determines an optimal hyperplane H(a, b) (Lanckriet et al., 2002):

    κ_*⁻¹ := min_{a}  √(aᵀΣ_x a) + √(aᵀΣ_y a)   s.t.   aᵀ(x̄ − ȳ) = 1,        (3)

where b is set to the value b_* = a_*ᵀx̄ − κ_* √(a_*ᵀΣ_x a_*), with a_* an optimal solution of Eq. (3). The optimal worst-case misclassification probability is obtained via 1 − α_* = 1/(1 + κ_*²). Once an optimal hyperplane is found, classification of a new data point z_new is done by evaluating sign(a_*ᵀz_new − b_*): if this is +1, z_new is classified as belonging to class x, otherwise z_new is classified as belonging to class y.
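Given an optimal pair (a_*, b_*), the resulting decision rule is a one-line computation; the hyperplane below is hypothetical:

```python
def mpm_classify(a_star, b_star, z):
    # Decision rule from the text: sign(a_*^T z - b_*); positive -> class x.
    s = sum(ai * zi for ai, zi in zip(a_star, z)) - b_star
    return 'x' if s > 0 else 'y'

print(mpm_classify([1.0, -1.0], 0.5, [2.0, 0.0]),
      mpm_classify([1.0, -1.0], 0.5, [0.0, 2.0]))  # x y
```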
While in our earlier work, we simply computed sample-based estimates of means
and covariance matrices and plugged them into the MPM optimization problem in
Eq. (3), we now show how to treat this estimation problem within the framework
of robust optimization. Assume the mean and covariance matrix of each class are
unknown but lie within specified convex sets: (x̄, Σ_x) ∈ 𝒳, with 𝒳 ⊂ ℝⁿ × {M ∈ ℝⁿˣⁿ | M = Mᵀ, M ⪰ 0}, and (ȳ, Σ_y) ∈ 𝒴, with 𝒴 ⊂ ℝⁿ × {M ∈ ℝⁿˣⁿ | M = Mᵀ, M ⪰ 0}. We now want the probabilistic guarantees in Eq. (2) to be robust
against variations of the mean and covariance matrix within these sets:
    max_{α, a ≠ 0, b}  α   s.t.   inf_{x ~ (x̄, Σ_x)} Pr{aᵀx ≥ b} ≥ α   ∀(x̄, Σ_x) ∈ 𝒳,
                                  inf_{y ~ (ȳ, Σ_y)} Pr{aᵀy ≤ b} ≥ α   ∀(ȳ, Σ_y) ∈ 𝒴.        (4)
In other words, we would like to guarantee a worst-case misclassification probability for all distributions which have unknown-but-bounded mean and covariance
matrix, but which are otherwise arbitrary. The complexity of this problem depends
obviously on the structure of the uncertainty sets 𝒳, 𝒴. We now consider a specific choice for 𝒳 and 𝒴, motivated both statistically and numerically:

    𝒳 = {(x̄, Σ_x) : (x̄ − x̄⁰)ᵀΣ_x⁻¹(x̄ − x̄⁰) ≤ ν²,  ‖Σ_x − Σ_x⁰‖_F ≤ ρ},
    𝒴 = {(ȳ, Σ_y) : (ȳ − ȳ⁰)ᵀΣ_y⁻¹(ȳ − ȳ⁰) ≤ ν²,  ‖Σ_y − Σ_y⁰‖_F ≤ ρ},        (5)

with x̄⁰, Σ_x⁰ the "nominal" mean and covariance estimates and with ν, ρ ≥ 0 fixed and, for simplicity, assumed equal for 𝒳 and 𝒴. Section 4 discusses how their values can be determined. The matrix norm is the Frobenius norm: ‖A‖_F² = Tr(AᵀA).
Our model for the uncertainty in the mean assumes the mean of class y belongs to an ellipsoid (a convex set) centered around ȳ⁰, with shape determined by the (unknown) Σ_y. This is motivated by the standard statistical approach to estimating a region of confidence based on Laplace approximations to a likelihood function. The covariance matrix belongs to a matrix norm ball (a convex set) centered around Σ_y⁰. This uncertainty model is perhaps less classical from a statistical viewpoint, but it will lead to a regularization term of a classical form.
In order to solve Eq. (4), we apply Eq. (1) and notice that

    b − aᵀȳ ≥ κ(α) √(aᵀΣ_y a)  ∀(ȳ, Σ_y) ∈ 𝒴   ⇔   b − max_{(ȳ,Σ_y)∈𝒴} aᵀȳ  ≥  κ(α) √( max_{(ȳ,Σ_y)∈𝒴} aᵀΣ_y a ),

where the right-hand side guarantees the constraint for the worst-case estimate of the mean and covariance matrix within the bounded set 𝒴. For given a and ȳ⁰:

    max_{ȳ : (ȳ−ȳ⁰)ᵀΣ_y⁻¹(ȳ−ȳ⁰) ≤ ν²}  aᵀȳ  =  aᵀȳ⁰ + ν √(aᵀΣ_y a).        (6)

Indeed, the Lagrangian is ℒ(ȳ, λ) = −aᵀȳ + λ((ȳ − ȳ⁰)ᵀΣ_y⁻¹(ȳ − ȳ⁰) − ν²) and is to be maximized with respect to λ ≥ 0 and minimized with respect to ȳ. At the optimum, we have ∂ℒ/∂ȳ = 0 and ∂ℒ/∂λ = 0, leading to ȳ = ȳ⁰ + Σ_y a/(2λ) and λ = √(aᵀΣ_y a)/(2ν), which eventually leads to Eq. (6). For given a and Σ_y⁰:
    max_{Σ_y : ‖Σ_y − Σ_y⁰‖_F ≤ ρ}  aᵀΣ_y a  =  aᵀ(Σ_y⁰ + ρIₙ)a,        (7)

where Iₙ is the n × n identity matrix. Indeed, without loss of generality, we can let Σ_y be of the form Σ_y = Σ_y⁰ + ρΔΣ_y. We then obtain

    max_{ΔΣ_y : ‖ΔΣ_y‖_F ≤ 1}  aᵀΣ_y a  =  aᵀΣ_y⁰a + ρ max_{ΔΣ_y : ‖ΔΣ_y‖_F ≤ 1} aᵀΔΣ_y a  ≤  aᵀΣ_y⁰a + ρ aᵀa,        (8)

using the Cauchy-Schwarz inequality and compatibility of the Frobenius matrix norm and the Euclidean vector norm:

    aᵀΔΣ_y a  ≤  ‖a‖₂ ‖ΔΣ_y a‖₂  ≤  ‖a‖₂ ‖ΔΣ_y‖_F ‖a‖₂  ≤  ‖a‖₂²,

because ‖ΔΣ_y‖_F ≤ 1. For ΔΣ_y = Iₙ, this upper bound is attained and we get Eq. (7). Combining this with Eq. (6) leads to the robust version of Eq. (1):

    inf_{y ~ (ȳ, Σ_y)} Pr{aᵀy ≤ b} ≥ α  ∀(ȳ, Σ_y) ∈ 𝒴   ⇔   b − aᵀȳ⁰ ≥ (κ(α) + ν) √(aᵀ(Σ_y⁰ + ρIₙ)a).        (9)
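The worst-case covariance bound of Eq. (7), which underlies the robust constraint, can be sanity-checked by randomly sampling Frobenius-bounded perturbations; a small 2 × 2 sketch with arbitrary values:

```python
import math
import random

random.seed(3)

a = [1.0, -2.0]
sigma0 = [[2.0, 0.3], [0.3, 1.0]]
rho = 0.5

def quad(M, v):
    # v^T M v for a 2 x 2 matrix.
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

# Upper bound from Eq. (7): a^T (Sigma0 + rho * I) a.
bound = quad(sigma0, a) + rho * sum(x * x for x in a)

ok = True
for _ in range(2000):
    d = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]
    d[0][1] = d[1][0]  # keep the perturbation symmetric
    fro = math.sqrt(sum(x * x for row in d for x in row))
    d = [[x / max(fro, 1.0) for x in row] for row in d]  # ||d||_F <= 1
    sigma = [[sigma0[i][j] + rho * d[i][j] for j in range(2)] for i in range(2)]
    ok = ok and quad(sigma, a) <= bound + 1e-9
print(ok)  # True: no sampled perturbation exceeds the bound
```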
Applying this result to Eq. (4) thus shows that the optimal robust minimax probability classifier for 𝒳, 𝒴 given by Eq. (5) can be obtained by solving problem Eq. (3), with Σ_x = Σ_x⁰ + ρIₙ, Σ_y = Σ_y⁰ + ρIₙ. If κ_*⁻¹ is the optimal value of that problem, the corresponding worst-case misclassification probability is

    1 − α_* = 1 / (1 + max(0, κ_* − ν)²).

With only uncertainty in the mean (ρ = 0), the robust hyperplane is the same as the non-robust one; the only change is in the increase in the worst-case misclassification probability. Uncertainty in the covariance matrix adds a term ρIₙ to the covariance matrices, which can be interpreted as a regularization term. This affects the hyperplane and increases the worst-case misclassification probability as well. If there is too much uncertainty in the mean (i.e., κ_* < ν), the robust version is not feasible: no hyperplane can be found that separates the two classes in the robust minimax probabilistic sense, and the worst-case misclassification probability is 1 − α_* = 1.
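The dependence of the worst-case error on κ_* and on the mean-uncertainty parameter ν can be read off directly from the formula above (the κ_* values below are made up):

```python
def worst_case_error(kappa_star, nu=0.0):
    # 1 - alpha_* = 1 / (1 + max(0, kappa_* - nu)^2): uncertainty in the
    # means shrinks the effective margin kappa_*.
    return 1.0 / (1.0 + max(0.0, kappa_star - nu) ** 2)

print(worst_case_error(2.0),          # 0.2
      worst_case_error(2.0, nu=0.5),  # larger: 1 / 3.25
      worst_case_error(1.0, nu=2.0))  # 1.0 -> the robust problem is infeasible
```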
This robust approach can be readily generalized to allow nonlinear decision boundaries via the use of Mercer kernels (Lanckriet et al., 2002).
3 Single-class MPM for robust novelty detection
We now turn to the quantile estimation problem. Recall that for α ∈ (0, 1], we wish to find a small region Q such that Pr{x ∈ Q} = α. Let us consider data x ~ (x̄, Σ_x) and let us focus (for now) on the linear case where Q is a half-space not containing the origin.
We seek a half-space Q(a, b) = {z | aᵀz ≥ b}, with a ∈ ℝⁿ\{0} and b ∈ ℝ, and not containing 0, such that with probability at least α, the data lies in Q, for every distribution having mean x̄ and covariance matrix Σ_x. We assume again that the real x̄, Σ_x are unknown but bounded in a set 𝒳 as specified in Eq. (5):

    inf_{x ~ (x̄, Σ_x)} Pr{aᵀx ≥ b} ≥ α   ∀(x̄, Σ_x) ∈ 𝒳.
We want the region Q to be tight, so we maximize its Mahalanobis distance (with respect to Σ_x) to the origin in a robust way, i.e., for the worst-case estimate of Σ_x, the matrix that gives us the smallest Mahalanobis distance:

    max_{a ≠ 0, b}  min_{(x̄, Σ_x) ∈ 𝒳}  b / √(aᵀΣ_x a)   s.t.   inf_{x ~ (x̄, Σ_x)} Pr{aᵀx ≥ b} ≥ α   ∀(x̄, Σ_x) ∈ 𝒳.        (10)
Note that Q(a, b) does not contain 0 if and only if b > 0. Also, the optimization problem in Eq. (10) is positively homogeneous in (a, b). Thus, without loss of generality, we can set b = 1 in problem Eq. (10). Furthermore, we can use Eq. (7) and Eq. (9) and get (where the superscript 0 for the estimates has been omitted):

    min_{a}  √(aᵀ(Σ_x + ρIₙ)a)   s.t.   aᵀx̄ − 1 ≥ (κ(α) + ν) √(aᵀ(Σ_x + ρIₙ)a),        (11)

where the condition a ≠ 0 can be omitted, since the constraint never holds in that case. Again, we obtain a (convex) second order cone programming problem. The worst-case probability of occurrence outside region Q is given by 1 − α. Notice that the particular choice of α ∈ (0, 1] must be feasible, i.e.,

    ∃ a :  aᵀx̄ − 1 ≥ (κ(α) + ν) √(aᵀ(Σ_x + ρIₙ)a).
For ρ ≠ 0, Σ_x + ρIₙ is certainly positive definite and the half-space is unique. Furthermore, it can be determined explicitly. To see this, we write Eq. (11) as:

    min_{a}  ‖(Σ_x + ρIₙ)^{1/2} a‖₂   s.t.   aᵀx̄ ≥ 1 + (κ(α) + ν) ‖(Σ_x + ρIₙ)^{1/2} a‖₂.        (12)

Decomposing a as λ(Σ_x + ρIₙ)⁻¹x̄ + z, where the variable z satisfies zᵀx̄ = 0, we easily obtain that at the optimum, z = 0. In other words, the optimal a is parallel to (Σ_x + ρIₙ)⁻¹x̄, in the form a = λ(Σ_x + ρIₙ)⁻¹x̄, and the problem reduces to the one-dimensional problem:

    min_{λ}  |λ| ‖(Σ_x + ρIₙ)^{−1/2} x̄‖₂   s.t.   λ x̄ᵀ(Σ_x + ρIₙ)⁻¹x̄ ≥ 1 + (κ(α) + ν) ‖(Σ_x + ρIₙ)^{−1/2} x̄‖₂ |λ|.

The constraint implies that λ ≥ 0, hence the problem reduces to

    min_{λ ≥ 0}  λ   s.t.   λ (ζ² − (κ(α) + ν)ζ) ≥ 1,        (13)

with ζ² = x̄ᵀ(Σ_x + ρIₙ)⁻¹x̄ > 0 (because Eq. (12) implies x̄ ≠ 0). Because λ ≥ 0, this can only be satisfied if ζ² − (κ(α) + ν)ζ ≥ 0, which is nothing other than the feasibility condition for α: κ(α) + ν ≤ ζ.
If this is fulfilled, the optimization in Eq. (13) is feasible and boils down to:

    min_{λ ≥ 0}  λ   s.t.   λ ≥ 1 / (ζ² − (κ(α) + ν)ζ).

It's easy to see that the optimal λ is given by λ_* = 1/(ζ² − (κ(α) + ν)ζ), yielding:

    a_* = (Σ_x + ρIₙ)⁻¹x̄ / (ζ² − (κ(α) + ν)ζ),   b_* = 1,   with   ζ = √(x̄ᵀ(Σ_x + ρIₙ)⁻¹x̄).        (14)

Notice that the uncertainty in the covariance matrix Σ_x leads to the typical, well-known regularization for inverting this matrix. If the choice of α is not feasible or if x̄ = 0 (in this case, no α ∈ (0, 1] will be feasible), Eq. (10) has no solution.
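For a diagonal covariance estimate, the closed-form solution of Eq. (14) is a few lines of code. The numbers below are illustrative, and the diagonal restriction only keeps the matrix inverse trivial:

```python
import math

def kappa(alpha):
    return math.sqrt(alpha / (1.0 - alpha))

def single_class_mpm_linear(x_bar, sigma_diag, alpha, rho=0.1, nu=0.0):
    # Eq. (14) for diagonal Sigma_x: a_* is proportional to
    # (Sigma_x + rho I)^{-1} x_bar, with b_* = 1.
    inv = [1.0 / (s + rho) for s in sigma_diag]
    zeta2 = sum(x * x * w for x, w in zip(x_bar, inv))
    zeta = math.sqrt(zeta2)
    if kappa(alpha) + nu > zeta:
        raise ValueError("choice of alpha is infeasible")
    denom = zeta2 - (kappa(alpha) + nu) * zeta
    a_star = [x * w / denom for x, w in zip(x_bar, inv)]
    return a_star, 1.0  # (a_*, b_*)

a_star, b_star = single_class_mpm_linear([3.0, 4.0], [1.0, 1.0], alpha=0.5)
# A future point z is declared an outlier when a_*^T z <= b_*.
print(b_star, all(v > 0.0 for v in a_star))
```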
Future points z for which a_*ᵀz ≤ b_* can then be considered as outliers with respect to the region Q, with worst-case probability of occurrence outside Q given by 1 − α. One can obtain a nonlinear region Q in ℝⁿ for the single-class case, by mapping the data into a feature space ℝᶠ: x ↦ φ(x) ~ (φ̄(x), Σ_{φ(x)}), and expressing and solving Eq. (10) in the feature space, using φ(x), φ̄(x) and Σ_{φ(x)}. This is achieved using a kernel function K(z₁, z₂) = φ(z₁)ᵀφ(z₂) satisfying Mercer's condition, as in the classification setting. Notice that maximizing the Mahalanobis distance of Q to the origin in ℝᶠ makes sense for novelty detection. For example, if we consider a Gaussian kernel K(x, y) = e^(−‖x−y‖²/σ), all mapped data points have unit length and positive dot products, so they all lie in the same orthant, on the unit ball, and are linearly separable from the origin.
Our final result is thus the following: If the choice of α is feasible, i.e.,

    ∃ γ :  γᵀk − 1 ≥ (κ(α) + ν) √(γᵀ(LᵀL + ρK)γ),

then an optimal region Q(γ, b) can be determined by solving the (convex) second order cone programming problem:

    min_{γ}  √(γᵀ(LᵀL + ρK)γ)   s.t.   γᵀk − 1 ≥ (κ(α) + ν) √(γᵀ(LᵀL + ρK)γ),        (15)

where κ(α) = √(α/(1 − α)) and b = 1, with γ, k ∈ ℝᴺ, [k]ᵢ = (1/N) Σⱼ₌₁ᴺ K(xⱼ, xᵢ) and {xᵢ}ᵢ₌₁ᴺ the N given data points. L is defined as L = (K − 1_N kᵀ)/√N, where 1_m is a column vector with ones of dimension m. K is the Gram matrix, defined as K_ij = φ(zᵢ)ᵀφ(zⱼ) = K(zᵢ, zⱼ).
The worst-case probability of a point lying outside the region Q is given by 1 − α. If LᵀL + ρK is positive definite, the optimal half-space is unique and determined by:

    γ_* = (LᵀL + ρK)⁻¹k / (ζ² − (κ(α) + ν)ζ)   with   ζ = √(kᵀ(LᵀL + ρK)⁻¹k),        (16)

if the choice of α is such that κ(α) ≤ ζ − ν, or α ≤ (ζ − ν)²/(1 + (ζ − ν)²). If the choice of α is not feasible or if k = 0 (in this case, no α ∈ (0, 1] will be feasible), the problem does not have a solution.
To solve the single-class problem, we can solve the second-order cone program Eq. (15) or directly use result Eq. (16): when numerically regularizing LᵀL + ρK with an extra term εI_N, this unique solution can always be determined. Instead of explicitly inverting the matrix, we can solve a system iteratively. All of these approaches have a worst-case complexity of O(N³), comparable to the quadratic program for single-class SVM (Schölkopf and Smola, 2001).
Once an optimal decision region is found, future points z for which a_*ᵀφ(z) = Σᵢ₌₁ᴺ [γ_*]ᵢ K(xᵢ, z) ≤ b_* (notice that this can be evaluated only in terms of the kernel function) can then be considered as outliers with respect to the region Q, with the worst-case probability of occurrence outside Q given by 1 − α.
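As noted above, the outlier test needs only kernel evaluations. In the sketch below the training points and weights γ are made up for illustration (a real γ_* would come from Eq. (16)):

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    d2 = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-d2 / sigma)

def is_outlier(z, train, gamma, b_star, sigma=1.0):
    # z is flagged as an outlier when sum_i gamma_i K(x_i, z) <= b_*.
    score = sum(g * gaussian_kernel(x, z, sigma) for g, x in zip(gamma, train))
    return score <= b_star

train = [[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]]
gamma = [0.6, 0.6, 0.6]  # hypothetical expansion weights
print(is_outlier([0.1, 0.0], train, gamma, b_star=1.0),
      is_outlier([5.0, 5.0], train, gamma, b_star=1.0))  # False True
```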
4 Experiments
In this section we report the results of experiments comparing the robust single-class MPM to the single-class SVM of Schölkopf and Smola (2001), and to a two-class SVM approach where an artificial "negative class" is obtained by generating data points uniformly in T = {z ∈ ℝⁿ | min{[x₁]ᵢ, [x₂]ᵢ, …, [x_N]ᵢ} ≤ [z]ᵢ ≤ max{[x₁]ᵢ, [x₂]ᵢ, …, [x_N]ᵢ}}.
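Generating the artificial negative class amounts to sampling uniformly from the axis-aligned bounding box T of the training data, for example:

```python
import random

random.seed(4)

def negative_class(train, n_samples):
    # Uniform samples from the bounding box T, coordinate by coordinate.
    dims = len(train[0])
    lo = [min(x[d] for x in train) for d in range(dims)]
    hi = [max(x[d] for x in train) for d in range(dims)]
    return [[random.uniform(lo[d], hi[d]) for d in range(dims)]
            for _ in range(n_samples)]

train = [[0.0, 1.0], [2.0, 3.0], [1.0, 0.5]]
neg = negative_class(train, 5)
print(all(0.0 <= p[0] <= 2.0 and 0.5 <= p[1] <= 3.0 for p in neg))  # True
```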
For the benchmark binary classification data sets we studied, we converted the data
sets into two single-class problems by treating each class in a separate experiment.
We chose 80% of the data points as training and the remaining 20% of the data
points as test, lumping the latter with the data points of the negative class (the class
of the binary classification data, not used for training). We report false positive and
false negative rates averaged over 30 random partitions in Table 1.1
We used a Gaussian kernel, K(x, y) = e^{−‖x−y‖²/σ}, of width σ. The kernel parameter σ was tuned using cross-validation over 20 random partitions, as was the hyperparameter ρ. For simplicity, we set the hyperparameter ν = 0 for the robust single-class MPM. Note that this choice has no impact on the MPM solution; according to Eq. (16), its only effect is to alter the estimated false-negative rate.
The parameter α was varied throughout a range of values so as to explore the tradeoff between the false positive (FP) rate and the false negative (FN) rate. A small value of α yields a good FP rate but a poor FN rate, and a large α yields a good FN rate but a poor FP rate. For the single-class SVM and the two-class SVM, we varied the analogous parameters, ν (the fraction of support vectors and outliers) and C (the soft-margin weight parameter), to cover a similar range of the FP/FN tradeoff. We envision the end user deciding where he or she wishes to operate along the FP/FN tradeoff, and tuning α, ν or C accordingly. Thus we compare the different algorithms by presenting in Table 1 an overview of the full tradeoff curves (essentially the ROC curves). The specific values of α, ν and C are chosen in each row so as to roughly match corresponding points on the ROC curves. We use italic font to indicate the best-performing algorithm on a given row, choosing the algorithm with the best FP rate if FN rates are similar and with the best FN rate if FP rates are similar.
The performance of the single-class MPM is clearly competitive with that of the other algorithms, providing joint FP/FN values that equal or improve upon the other algorithms in many cases, and spanning a broad range of the FP/FN tradeoff. Note that the two-class SVM can perform well if a low FP rate is desired and a high FN rate is tolerated. However, the two-class SVM sometimes fails to provide an extensive range of the FP/FN tradeoff; in particular, with the twonorm dataset, the algorithm is unable to provide solutions with a small FN rate and a large FP rate. Note that the value 1 − α (the worst-case probability of false negatives for the robust single-class MPM) is indeed an upper bound for the average FN rate in all cases except for the sonar dataset. Thus the simplifying assumption ν = 0 appears to be reasonable in all cases except the sonar case.
Finally, it is also worth noting that while the MPM algorithm is insensitive to the choice of ν, it is sensitive to the choice of ρ. When we fixed ρ = 0 (allowing no uncertainty in the covariance estimate) we obtained poor performance, in particular obtaining a small FP rate but a very poor FN rate.
5 Conclusions
We have presented a new algorithm for novelty detection, an important machine learning problem with numerous real-world applications. Our "single-class MPM" joins the "single-class SVM" of Schölkopf and Smola (2001) as a computationally efficient, kernel-based method for solving this problem and the more general quantile estimation problem. We view the single-class MPM as particularly appropriate for these problems, given its formulation directly in terms of a worst-case probability
¹The Wisconsin breast cancer dataset contained 16 missing examples, which were not used. Data for the twonorm problem were generated as specified by Breiman (1997).
Table 1: Performance for single-class problems; the best performance in each row is indicated in italic; FP = false positives (out-of-class data detected as in-class data); FN = false negatives (in-class data detected as out-of-class data).
[Table 1 covers the datasets Sonar, Breast Cancer, Twonorm and Heart, each split into classes +1 and −1, with columns α/FP/FN for the single-class MPM, ν/FP/FN for the single-class SVM, and a parameter column plus FP/FN for the two-class SVM approach; the individual numeric entries are scrambled in the scan and are not reproduced here.]
of falling outside of a given convex set in feature space.
While our simulation experiments illustrate the application of generic classification techniques to the novelty detection problem, via the generation of data from an artificial "negative class" enclosing the data, we view the single-class methods as the more viable general technology. In particular, in high-dimensional problems it
is difficult to specify a "negative class" in a way that yields comparable size training
sets while still yielding a good characterization of a discriminant boundary.
Acknowledgements
We acknowledge support from ONR MURI N00014-00-1-0637 and NSF grant IIS-9988642. Sincere thanks to Alex Smola for helpful conversations and suggestions.
References
S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55:171-182, 1997.
L. Breiman. Arcing classifiers. Technical Report 460, Statistics Department, University of California, 1997.
G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555-582, 2002.
B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2001.
Selectivity and Metaplasticity in a Unified
Calcium-Dependent Model
Luk Chong Yeung
Physics Department and
Institute for Brain & Neural Systems
Brown University
Providence, RI 02912
[email protected]
Brian S. Blais
Department of Science & Technology
Bryant College
Smithfield, RI 02917
Institute for Brain & Neural Systems
Brown University
[email protected]
Leon N Cooper
Institute for Brain & Neural Systems
Physics Department and
Department of Neuroscience
Brown University
Providence, RI 02912
Leon [email protected]
Harel Z. Shouval
Institute for Brain & Neural Systems
and Physics Department
Brown University
Providence, RI 02912
Harel [email protected]
Abstract
A unified, biophysically motivated Calcium-Dependent Learning
model has been shown to account for various rate-based and spike
time-dependent paradigms for inducing synaptic plasticity. Here,
we investigate the properties of this model for a multi-synapse
neuron that receives inputs with different spike-train statistics.
In addition, we present a physiological form of metaplasticity, an
activity-driven regulation mechanism, which is essential for the robustness of the model. A neuron thus implemented develops stable and selective receptive fields, given various input statistics.
1 Introduction
Calcium influx through NMDA receptors is essential for the induction of diverse
forms of bidirectional synaptic plasticity, such as rate-based [1, 2] and spike time-dependent plasticity (STDP) [3, 4]. Activation of NMDA receptors is also essential
for functional plasticity in vivo [5]. An influential hypothesis holds that modest elevations of Ca above the basal line would induce LTD, while higher elevations would
induce LTP[6, 7]. Based on these observations, a Unified Calcium Learning Model
(UCM) has been proposed by Shouval et al. [8]. In this model, cellular activity is
translated locally into the dendritic calcium concentrations Ca_i, through the voltage and time dependence of the NMDA channels. The level of Ca_i determines the sign and magnitude of synaptic plasticity through a function of local calcium, Ω(Ca_i) (see Methods). A further assumption is that the Back-Propagating Action Potential (BPAP) has a slow after-depolarizing tail.
Implementation of this simple yet biophysical model has shown that it is sufficient
to account for the outcome of different induction protocols of synaptic plasticity in a
one-dimensional input space, as illustrated in Figure 1. In the pairing protocol, LTD occurs when low-frequency stimulation (LFS) is paired with a small depolarization of the postsynaptic voltage, while a larger depolarization yields LTP (Figure 1a), due to the voltage dependence of the NMDA currents. In the rate-based protocol, LFS gives rise to LTD while high-frequency stimulation (HFS) produces LTP (Figure
1b), due to the time-integration dynamics of the calcium transients. Finally, STDP
gives LTD if a post-spike comes before a pre-spike within a time-window, and LTP
if a post-spike comes after a pre-spike (Figure 1c); this is due to the coincidence-detector property of the NMDA receptors and the shape of the BPAP. In addition to these results, the model also predicts a previously uncharacterized pre-before-post depressing regime and a rate dependence of the STDP curve. These findings
have had preliminary experimental support [9, 3, 10] and, as will be shown, have
consequences in the multi-dimensional environment that impact the results of this
work.
[Figure 1 appears here: three panels (a)-(c); y-axis: final weight (% of initial w); x-axes: clamped voltage (mV), frequency (Hz), and Δt (ms); tick values omitted.]
Figure 1: Calcium-Dependent Learning Rule and the various experimental
plasticity-induction paradigms: implementation of (a) Pairing Protocol, (b) Rate-Dependent Plasticity and (c) Spike-Time Dependent Plasticity. The Pairing Protocol was simulated with a fixed input rate of 3 Hz; the STDP curve is shown for 1 Hz.
Notice the new pre-before-post depression regime.
In this study we investigate characteristics of the Calcium Control Hypothesis such
as cooperativity and competition, and examine how they give rise to input selectivity. A neuron is called selective to a specific input pattern if it responds strongly to
it and not to other patterns, which is equivalent to having a potentiated pathway
to this pattern. Input selectivity is a general feature of neurons and underlies the
formation of receptive fields and topographic mappings. We demonstrate that using
the UCM alone, selectivity can arise, but only within a narrow range of parameters.
Metaplasticity, the activity-dependent modulation of synaptic plasticity, is essential
for robustness of the BCM model [11]. Furthermore, it has significant experimental support [12]. Here we propose a more biologically realistic implementation,
compatible with the Calcium Control Hypothesis, which is based on experimental
observations [13]. We find that it makes the UCM model more robust, significantly
expanding the range of parameters that result in selectivity.
2
Selectivity to Spike Train Correlations
The development of neuronal selectivity, given any learning rule, depends on the
statistical structures of the input environment. For spiking neurons, this structure
may include temporal, in addition to spatial statistics. One method of examining
this feature is to generate input spike trains with different statistics across synapses.
We use a simple scenario in which half of the synapses (group B) receive noisy
Poisson spike trains with a mean rate ⟨rin⟩, and the other half (group A) receive correlated spikes with the same rate ⟨rin⟩. Input spikes in group A have an enhanced
probability of arriving together (see Methods). One might expect that, by firing
together, group A will gain control of the post-synaptic firing times and thus be
potentiated, while group B will be depressed, in a manner similar to the STDP
described by Song et al. [14]. In addition to the 100 excitatory inputs, our neuron receives 20 inhibitory inputs.
The results are shown in Figure 2. There exists a range of input frequencies (Figure 2a, left) at which segregation occurs between the correlated and uncorrelated
groups. The cooperativity among the synapses in group A enhances its probability
of generating a post-spike, which, through the BPAP causes strong depolarization.
Since the NMDA channels are still open due to a recent pre-spike, this is likely
to potentiate these synapses in a Hebbian-associative fashion. Group B will fire
with equal probability before and after a post-spike which, given a sufficiently low
NMDA receptor conductance, ensures that, on average, depression takes place. At
the final state, the output spike train is irregular (Figure 2a, right) but its rate is
stable (Figure 2a, center), indicating that the system had reached a fixed point with
a balance between excitation and inhibition.
[Figure 2 appears here: panel (a) shows the average weight, the output rate (Hz) and the CV as functions of time (ms × 10⁵); panel (b) shows the average weight versus time at 8 Hz and 12 Hz; tick values omitted.]
Figure 2: Segregation of the synapses for different input structures. (a) Segregation
at 10 Hz. Left, time evolutions of the average synaptic weight for the groups A
(solid) and B (dashed). Center, the output rate, calculated as the number of
output spikes over non-overlapping time bins of 20 seconds. Right, the coefficient
of variation, CV = std(isi)/ mean(isi), where isi is the interspike interval of the
output train. (b) Results for 8 Hz (left) and 12 Hz (right). All the synapses are
potentiated and depressed, respectively.
These results, however, are sensitive to the simulation parameters. In fact, a slight
change in the value of ⟨rin⟩ disrupts the segregation described previously (Figure 2b). For too high or too low values of ⟨rin⟩, both channels are potentiated and
depressed, respectively. This occurs because, unlike standard STDP models, the
unified model exhibits frequency dependence in addition to spike-time dependence.
This suggests that a stabilizing mechanism must be incorporated into the model.
3 Metaplasticity
In the BCM theory the threshold between LTD and LTP moves as a function of
the history of postsynaptic activity [11]. This type of activity-dependent regulation of the properties of synaptic plasticity, or metaplasticity, was developed to
ensure selectivity and stability. Experimental results have linked some forms of
metaplasticity to the magnitude of the NMDA conductance; it is shown that as the
cellular activity increases, NMDA conductance is down-regulated, and vice-versa
[15, 16, 13, 17]. Under the Calcium Control Hypothesis, this sets the ground for a
more physiological formulation of metaplasticity [18].
NMDA conductance is interpreted here as the total number (g_m) of NMDA channels inserted in the membrane of the postsynaptic terminal. Consider a simple kinetic model in which additional channels can be inserted from an intracellular pool (g_i), or removed and returned to the pool, in an activity-dependent manner. We assume a fixed removal rate k− and a voltage-sensitive insertion rate k+ V^α:

    g_m ⇌ g_i   (removal at rate k−, insertion at rate k+ V^α).

(Our results are not very sensitive to the details of the voltage dependence of the insertion and removal rates.)
This scheme leads us to a dynamic equation for g_m, ġ_m = −(k− + k+ V^α) g_m + k+ V^α g_t, where g_t is a normalizing factor, g_t = g_m + g_i. The fixed point is:

    g_m* = g_t / (k− / (k+ V^α) + 1).    (1)
If, in this model, cellular activity is translated into Ca, then gm can be loosely
interpreted as the inverse of the BCM sliding threshold θ_m [18]. Notice that in the original form of BCM, θ_m is the time average of a non-linear function of the postsynaptic activity level. In order to achieve competition, g_m should not
depend solely on local (synaptic) variables, but should rather detect changes of the
global patterns of cellular activity. Here, the activity-signaling global variable is
taken to be postsynaptic membrane potential.
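The insertion/removal dynamics and the fixed point of Eq. (1) can be sketched numerically. The individual rates k+ and k− below are arbitrary choices (only their ratio k−/k+ is given in the Methods), the forward-Euler step is our own discretization, and the value g_t = 0.0184 is read from a partly garbled scan.

```python
def gm_fixed_point(V, k_ratio=9.1739e7, g_t=0.0184, alpha=4):
    # Eq. (1): g_m* = g_t / (k- / (k+ V^alpha) + 1), with k_ratio = k-/k+
    return g_t / (k_ratio / (V ** alpha) + 1.0)

def gm_step(g_m, V, dt=1.0, k_plus=1e-9, k_ratio=9.1739e7, g_t=0.0184, alpha=4):
    # Forward-Euler step of dg_m/dt = -(k- + k+ V^alpha) g_m + k+ V^alpha g_t
    k_minus = k_ratio * k_plus
    kv = k_plus * V ** alpha
    return g_m + dt * (-(k_minus + kv) * g_m + kv * g_t)

# Relax toward the fixed point at the resting potential (-65 mV):
g = 0.0
for _ in range(1000):
    g = gm_step(g, -65.0)
```

With the even exponent α = 4, the voltage dependence is the same at depolarized and hyperpolarized potentials of equal magnitude, and the dynamics converge to the homeostatic fixed point regardless of the (arbitrary) absolute rate chosen for k+.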
Implementation of metaplasticity widens significantly the range of input frequencies
for which segregation between the weights of correlated and uncorrelated synapses
is observed; this is shown in Figure 3a. At low spiking activity, the subthreshold
depolarization levels prevent significant inward Ca currents. Under these conditions
metaplasticity causes gm to grow. Persistent post-spike generation will lead gm
and therefore Ca to decrease, hence scaling the synaptic weights downwards. Competition arises as the system searches for the balance between the selective positive
feed-back of a standard Hebbian rule and the overall negative feed-back of a sliding
threshold mechanism. However, consistent with the rate-based protocol described
before, at too low and too high ⟨rin⟩, selectivity is disrupted, and the synapses will
eventually all depress or potentiate, regardless of the statistical structures of the
stimulus. Strengthening the correlation increases segregation (Figure 3b), demonstrating the effects of lateral cooperativity in potentiation. On the other hand,
increasing the fraction of correlated inputs weakens the final weight of the correlated group (Figure 3c), suggesting that less potentiation is needed to control the
output spike-timing. Notice that in the presence of metaplasticity, no upper saturation limit is required; the equilibrium of the fixed point is homeostatic, rather
than imposed.
[Figure 3 appears here: three panels (a)-(c); y-axis: average final weight (arbitrary units); x-axes: input rate (Hz), correlation parameter, and % of correlated inputs; tick values omitted.]
Figure 3: The effects of metaplasticity. (a) The weights segregate within the range of
input frequency = [5, 35] Hz in a half correlated (solid), half uncorrelated (dashed)
input environment; shown are the average final weights within each group, correlation parameter c = 0.8 (see Methods). (b) The average final weight as a function of
the correlation parameter, ⟨rin⟩ = 10 Hz. (c) The average final weight as a function of the fraction of correlated inputs, ⟨rin⟩ = 10 Hz, c = 0.8.
4 Selectivity to patterns of rate distribution
An alternative input environment is one in which the rates vary across the synapses
and over time. This is a plausible representation for sensory neurons that are
differentially excited. A straightforward method is to use rate distributions that
are piecewise constant. We use a simple example in which the rate distributions
are non-overlapping square patterns, as illustrated in Figure 4a (see Methods). The
patterns are randomly presented to the neuron, being switched at regular epochs.
Since the mean switching time is constant and much smaller than the time constant
of learning, each synapse receives the same average input over time. However, we
observe that, after training, the neuron spontaneously breaks the symmetry, as a
subset of synapses becomes potentiated, while others are depressed (Figure 4b). It
should be noticed that, because the choice of the training pattern at each epoch is
random, the selected pattern is different at each run. Due to metaplasticity, these
results are robust across different pattern amplitudes and pattern dimensions (not
shown).
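The stimulus schedule described above (exponentially distributed epochs of mean 500 ms, one randomly chosen pattern per epoch; see Methods) can be sketched as follows. The helper names are ours, and the 30 Hz / 10 Hz piecewise-constant layout over four groups of 25 synapses is taken from the Figure 4 example.

```python
import random

def pattern_schedule(n_patterns, total_ms, tau_c=500.0, seed=0):
    # Epoch durations are exponential with mean tau_c = 500 ms; at each epoch
    # one pattern is chosen uniformly at random.
    rng = random.Random(seed)
    t, schedule = 0.0, []
    while t < total_ms:
        dur = rng.expovariate(1.0 / tau_c)
        schedule.append((rng.randrange(n_patterns), min(dur, total_ms - t)))
        t += dur
    return schedule

def rates_for_pattern(k, n_syn=100, n_groups=4, r_high=30.0, r_low=10.0):
    # Piecewise-constant rate vector: the k-th group of synapses fires at
    # r_high, all others at r_low (the 30 Hz / 10 Hz example of Figure 4).
    per = n_syn // n_groups
    return [r_high if i // per == k else r_low for i in range(n_syn)]

schedule = pattern_schedule(4, 10000.0)
rates = rates_for_pattern(1)
```

Because every pattern is equally likely at every epoch, each synapse sees the same average input over time, which is exactly why the symmetry-breaking in Figure 4 is non-trivial.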
[Figure 4 appears here: panel (a) shows four rate patterns over synapses 1-100; panel (b) shows the average weight versus time (sec) for synapses 1-25, 26-50, 51-75 and 76-100; tick values omitted.]
Figure 4: (a) Four non-overlapping patterns of input rate distribution and (b) the
average weight evolution of each channel. In this particular simulation, the higher
and the lower rates correspond to 30 Hz and 10 Hz, respectively. The final state of
the neuron is one that is selective to the last pattern ((a), leftmost).
5 Discussion
Neurons in many cortical areas develop receptive fields that are selective to a small subset of stimulating inputs. This property has been shown to be experience-dependent [19, 20] and also dependent on NMDA receptors [5, 21]. It is likely,
therefore, that receptive field formation relies on the same type of NMDA-dependent
synaptic plasticity observed in vitro [1, 2, 4]. Previous work has shown that these in
vitro rate- and spike-time-induced plasticity can be accounted for by the biologically-inspired Unified Calcium Model. In this work, we have shown that the same model
can lead to the experience-dependent development of neuronal selectivity.
Metaplasticity adds robustness to the system and reinforces temporal competition
between input patterns [11] , by controlled scaling of NMDAR currents. We have
shown here that even in simple input environments there is segregation among the
synaptic strengths, depending on the temporal input statistics of different channels.
This is analogous to the explanation of ocular dominance that depends on temporal
competition [22], and is likely to hold with more realistic assumptions.
Because the UCM is responsive to input rates, in addition to spike-timing, we are
able to achieve selectivity for rate-distribution patterns in spiking neurons that is
comparable to the selectivity obtained in simplified, continuous-valued systems [23].
This result suggests that the coexistence and complementarity of rate- and spike
time-dependent plasticities, previously demonstrated for a one-dimensional neuron
[8], can also be extended to multi-dimensional input environments. We are currently
investigating the formation of receptive fields in more realistic environments, such as
natural stimuli, and examining how their statistical properties can be translated into a physiological mechanism for the emergence of input selectivity.
6 Methods
We simulate a single neuron with 20 non-plastic inhibitory synapses and 100 excitatory synapses undergoing the Calcium-Dependent learning rule:

    ẇ_i = η(Ca_i) (Ω(Ca_i) − λw),    (2)

where w_i is the synaptic weight of synapse i, i = 1, ..., 100, η is a linear calcium-dependent learning rate, η = 10^−3 Ca, and Ω is a difference of sigmoids: Ω = σ(Ca, θ2, β2) − 0.5 σ(Ca, θ1, β1), with σ(x, a, b) := exp(b(x − a)) [1 + exp(b(x − a))]^−1 and (θ1, β1, θ2, β2) = (0.25, 60, 0.4, 20). Here, we use λ = 0. The initial condition for all weights is 0.5; additionally, w_i is constrained within hard boundaries: w_i ∈ [0, 1] for the cases where no metaplasticity is used.
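The Ω function of the learning rule can be sketched directly. One caveat: the pairing of the two thresholds to the two sigmoids is ambiguous in the scan, so the assignment below is the one that reproduces the qualitative shape of Figure 1 (no change at basal calcium, depression at moderate calcium above the lower threshold, potentiation at high calcium above the upper threshold).

```python
import math

def sig(x, a, b):
    # sig(x, a, b) = exp(b(x - a)) / (1 + exp(b(x - a)))
    return 1.0 / (1.0 + math.exp(-b * (x - a)))

def omega(ca):
    # Difference of sigmoids with thresholds (0.25, 0.4) and slopes (60, 20):
    # near-zero change at low Ca, an LTD dip above 0.25, LTP above 0.4.
    return sig(ca, 0.4, 20.0) - 0.5 * sig(ca, 0.25, 60.0)

def dw_dt(ca, w, lam=0.0):
    # Eq. (2): dw/dt = eta(Ca) * (Omega(Ca) - lambda * w), with eta = 1e-3 * Ca
    eta = 1e-3 * ca
    return eta * (omega(ca) - lam * w)
```

With λ = 0, the sign of the weight change is carried entirely by Ω: moderate calcium transients depress the synapse, large ones potentiate it.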
The NMDA-mediated calcium concentration varies as:

    dCa_i/dt = I − Ca_i/τ_Ca,    (3)

where I is the NMDA current and τ_Ca = 20 ms is the passive decay time constant [24]. I depends on the association between pre-spike times and the postsynaptic depolarization level, described by I = g_m f(t, t_pre) H(V) [7]. In the non-metaplastic cases, we use g_m = 2.53 × 10^−4 μM/(mV·ms). Upon a pre-spike, f reaches its peak value of 1; 70% of this value decays with time constant τ_fN = 50 ms, and the remainder decays with time constant τ_sN = 200 ms. H is the magnesium-block function:

    H(V) = (V − V_rev) / (1 + e^{−0.062V}/3.57),    (4)

with the reversal potential for calcium V_rev = 130 mV.
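The three ingredients of the calcium transient (the bi-exponential time course f, the magnesium-block factor H, and the first-order decay of Eq. (3)) can be sketched as follows; the forward-Euler step is our own discretization, and the sign convention for H is taken literally from Eq. (4).

```python
import math

def nmda_f(t, t_pre, tau_f=50.0, tau_s=200.0):
    # Time course f(t, t_pre): peak 1 at the pre-spike, 70% decaying with
    # tau_fN = 50 ms and the remaining 30% with tau_sN = 200 ms.
    if t < t_pre:
        return 0.0
    d = t - t_pre
    return 0.7 * math.exp(-d / tau_f) + 0.3 * math.exp(-d / tau_s)

def mg_block(V, V_rev=130.0):
    # Eq. (4): H(V) = (V - V_rev) / (1 + exp(-0.062 V) / 3.57)
    return (V - V_rev) / (1.0 + math.exp(-0.062 * V) / 3.57)

def ca_step(ca, I, dt=1.0, tau_ca=20.0):
    # Eq. (3), forward Euler: dCa/dt = I - Ca / tau_ca
    return ca + dt * (I - ca / tau_ca)
```

The slow tail of f is what lets the calcium transient integrate over inter-spike intervals, producing the rate dependence discussed in the Introduction.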
The dynamics of the membrane potential is simulated with the standard integrate-and-fire model:

    dV_m(t)/dt = (1/τ_m) [ V_rest − V_m(t) + G_ex(t)(V_ex − V_m(t)) + G_in(t)(V_in − V_m(t)) ],    (5)

where τ_m = 20 ms and (V_rest, V_ex, V_in) = (−65, 0, −65) mV. If a pre-spike arrives at the excitatory [inhibitory] synapse i, G_ex[in](t) = G_ex[in](t − 1) + g_ex[in]^max g_i; otherwise, G_ex and G_in decay exponentially with time constant τ = 5 ms. For excitatory and inhibitory synapses, (g_i, g^max) = (w_i, 0.09) and (1, 0.3), respectively. If V_m(t) reaches the firing threshold of −55 mV, a post-spike is generated and the BPAP is updated to its peak value of 60 mV. 75% of this value decays rapidly (τ_fB = 3 ms) and the remainder decays slowly (τ_sB = 35 ms) [25]. The voltage at the synaptic site is thus given by the sum V = V_m + BPAP.
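A minimal sketch of the conductance-based integrate-and-fire update follows. Two details are assumptions of ours rather than statements from the Methods: the post-spike reset to the resting potential, and the reading of the inhibitory reversal potential as −65 mV from the garbled parameter list.

```python
import math

def lif_step(V, g_ex, g_in, dt=1.0, tau_m=20.0,
             V_rest=-65.0, V_ex=0.0, V_in=-65.0, V_th=-55.0):
    # Eq. (5), forward Euler; returns (new voltage, spiked?).
    dV = (V_rest - V + g_ex * (V_ex - V) + g_in * (V_in - V)) / tau_m
    V = V + dt * dV
    if V >= V_th:
        return V_rest, True   # reset to rest after a spike (our assumption)
    return V, False

def g_decay(g, dt=1.0, tau=5.0):
    # Conductances decay exponentially with tau = 5 ms between input spikes.
    return g * math.exp(-dt / tau)

# A constant excitatory drive pushes the cell to threshold within a few steps:
V, fired = -65.0, False
for _ in range(10):
    V, s = lif_step(V, 2.0, 0.0)
    fired = fired or s
```

With V_in equal to V_rest, inhibition acts as a shunt: it pulls the membrane toward rest and opposes excitatory depolarization, which is the balance the text invokes to explain the stable output rate of Figure 2.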
To implement input correlations, we adopt the method used by [26]. Let the number of correlated inputs be N. For a pre-assigned correlation parameter c, N₀ Poisson events are generated, with N₀ = N + √c (1 − N), and, at each time step, randomly distributed among the N synapses. It is clear that each resulting spike train still has the same Poisson distribution, but with a probability of spiking together with other synapses.
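One way to realize this scheme is sketched below. The interpolation N₀ = N + √c(1 − N) gives N independent sources for c = 0 and a single shared source for c = 1; how events are "randomly distributed among the N synapses" is not fully specified in the text, so the per-bin copying step below is our own reading, not necessarily the exact procedure of [26].

```python
import random

def correlated_spike_bins(n_syn, rate_hz, c, n_bins, dt_ms=1.0, seed=0):
    # N0 = N + sqrt(c)(1 - N) source trains: N0 = N for c = 0, N0 = 1 for c = 1.
    # Each synapse copies a randomly chosen source in every time bin (one
    # possible reading of the distribution step -- an assumption).
    rng = random.Random(seed)
    n0 = max(1, round(n_syn + (c ** 0.5) * (1 - n_syn)))
    p = rate_hz * dt_ms / 1000.0          # spike probability per bin per source
    trains = [[0] * n_bins for _ in range(n_syn)]
    for t in range(n_bins):
        src = [1 if rng.random() < p else 0 for _ in range(n0)]
        for i in range(n_syn):
            trains[i][t] = src[rng.randrange(n0)]
    return trains

# Fully correlated trains are identical copies of a single Poisson source:
identical = correlated_spike_bins(n_syn=4, rate_hz=10.0, c=1.0, n_bins=1000, seed=1)
# Partially correlated group at the rates used in Section 2:
trains = correlated_spike_bins(n_syn=10, rate_hz=10.0, c=0.8, n_bins=20000, seed=2)
```

Each train keeps the prescribed mean rate while sharing events with the others in proportion to c, which is the property the segregation experiments rely on.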
For simulations involving different rates, the 100 synapses were first divided into 4 channels of 25 synapses. Time epochs were generated according to an exponential distribution of mean τ_c = 500 ms. At each epoch, one of the channels was randomly chosen and assigned a mean rate r*, while the others received spike trains with mean rate r < r*.
For metaplasticity in Equation 1, we use the parameters k−/k+ = 9.1739 × 10⁷, g_t = 0.0184 and α = 4. All of the simulations use time steps of dt = 1 ms.
Acknowledgments
This work is partly funded by the Brown Brain Science Program Burroughs-Wellcome Fund fellowship program. The authors thank the members of the Institute for Brain and Neural Systems and the participants of the 2001 EU Summer
School on Computational Neuroscience for helpful conversations.
References
[1] T.V.P. Bliss and G.L. Collingridge. A synaptic model of memory; long-term potentiation the hippocampus. Nature, 361:31?9, 1993.
[2] S.M. Dudek and M.F. Bear. Homosynaptic long-term depression in area CA1 of
hippocampus and the effects on NMDA receptor blockade. Proc. Natl. Acad. Sci.,
89:4363?7, 1992.
[3] H. Markram, J. L?
ubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy
by coincidence of postsynaptic APs and EPSPs. Science, 275:213?5, 1997.
[4] G. Bi and M. Poo. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci., 18
(24):10464?72, 1998.
[5] A. Kleinschmidt, M.F. Bear, and W. Singer. Blockade of NMDA receptors disrupts experience-dependent plasticity of kitten striate cortex. Science, 238:355–358, 1987.
[6] M.F. Bear, L.N. Cooper, and F.F. Ebner. A physiological basis for a theory of synapse modification. Science, 237:42–48, 1987.
[7] J.A. Lisman. A mechanism for the Hebb and the anti-Hebb processes underlying learning and memory. Proc. Natl. Acad. Sci., 86:9574–9578, 1989.
[8] H.Z. Shouval, M.F. Bear, and L.N. Cooper. A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proc. Natl. Acad. Sci., 99:10831–10836, 2002.
[9] M. Nishiyama, K. Hong, K. Mikoshiba, M.M. Poo, and K. Kato. Calcium stores regulate the polarity and input specificity of synaptic modification. Nature, 408:584–588, 2000.
[10] P.J. Sjöström, G.G. Turrigiano, and S.B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32:1149–1164, 2001.
[11] E.L. Bienenstock, L.N. Cooper, and P.W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2:32–48, 1982.
[12] A. Kirkwood, M.G. Rioult, and M.F. Bear. Experience-dependent modification of synaptic plasticity in visual cortex. Nature, 381:526–528, 1996.
[13] B.D. Philpot, A.K. Sekhar, H.Z. Shouval, and M.F. Bear. Visual experience and deprivation bidirectionally modify the composition and function of NMDA receptors in visual cortex. Neuron, 29:157–169, 2001.
[14] S. Song, K.D. Miller, and L.F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neurosci., 3:919–926, 2000.
[15] G. Carmignoto and S. Vicini. Activity-dependent decrease in NMDA receptor responses during development of the visual cortex. Science, 258:1007–1011, 1992.
[16] E.M. Quinlan, B.D. Philpot, R.L. Huganir, and M.F. Bear. Rapid, experience-dependent expression of synaptic NMDA receptors in visual cortex in vivo. Nature Neurosci., 2(4):352–357, 1999.
[17] A.J. Watt, M.C.W. van Rossum, K.M. MacLeod, S.B. Nelson, and G.G. Turrigiano. Activity co-regulates quantal AMPA and NMDA currents at neocortical synapses. Neuron, 26:659–670, 2000.
[18] H.Z. Shouval, G.C. Castellani, L.C. Yeung, B.S. Blais, and L.N. Cooper. Converging evidence for a simplified biophysical model of synaptic plasticity. Biol. Cybern., 87:383–391, 2002.
[19] Y. Frégnac and M. Imbert. Early development of visual cortical cells in normal and dark-reared kittens: relationship between orientation selectivity and ocular dominance. J. Physiol. Lond., 278:27–44, 1978.
[20] B. Chapman, M.P. Stryker, and T. Bonhoeffer. Development of orientation preference maps in ferret primary visual cortex. J. Neurosci., 16:6443–6453, 1996.
[21] A.S. Ramoa, A.F. Mower, D. Liao, and S.I. Jafri. Suppression of cortical NMDA receptor function prevents development of orientation selectivity in the primary visual cortex. J. Neurosci., 21:4299–4309, 2001.
[22] B.S. Blais, H.Z. Shouval, and L.N. Cooper. The role of presynaptic activity in monocular deprivation: comparison of homosynaptic and heterosynaptic mechanisms. Proc. Natl. Acad. Sci., 96:1083–1087, 1999.
[23] E.E. Clothiaux, L.N. Cooper, and M.F. Bear. Synaptic plasticity in visual cortex: comparison of theory with experiment. J. Neurophysiol., 66:1785–1804, 1991.
[24] B.L. Sabatini, T.G. Oertner, and K. Svoboda. The life cycle of Ca²⁺ ions in dendritic spines. Neuron, 33:439–452, 2002.
[25] J.C. Magee and D. Johnston. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275:209–213, 1997.
[26] M. Rudolph and A. Destexhe. Correlation detection and resonance in neural systems with distributed noise sources. Phys. Rev. Lett., 86(16):3662–3665, 2001.
Optoelectronic Implementation of a
FitzHugh-Nagumo Neural Model
Alexandre R.S. Romariz, Kelvin Wagner
Optoelectronic Computing Systems Center
University of Colorado, Boulder, CO, USA 80309-0425
[email protected]
Abstract
An optoelectronic implementation of a spiking neuron model based on
the FitzHugh-Nagumo equations is presented. A tunable semiconductor laser source and a spectral filter provide a nonlinear mapping from
driver voltage to detected signal. Linear electronic feedback completes
the implementation, which allows either electronic or optical input signals. Experimental results for a single system and numeric results of
model interaction confirm that important features of spiking neural models can be implemented through this approach.
1 Introduction
Biologically-inspired computation paradigms take different levels of abstraction when
modeling neural dynamics. The production of action potentials or spikes has been abstracted away in many rate-based neurodynamic models, but recently this feature has gained
renewed interest [1, 2]. A computational paradigm that takes into account the timing of
spikes (instead of spike rates only) might be more efficient for signal representation and
processing, especially at short time windows [3, 4, 5].
Optics technology provides high bandwidth and massive parallelism for information processing. However, the implementation of digital primitives have not as yet proved competitive against the scalability and low power operation of digital electronic gates. It is then
natural to explore the features of optics for different computational paradigms. Artificial
neural networks promise an excellent match to the capabilities of optics, as they emphasize
simple analog operations, parallelism and adaptive interconnection[6, 7, 8, 9].
Optical implementations of Artificial Neural Networks have to deal with the problem of
representing the nonlinear activation functions that define the input-output mappings for
each neuron. Although nonlinear optics has been suggested for implementing neurons,
hybrid optoelectronic systems, where the task of producing nonlinearity is given to the
electronic circuits, may be more practical [10, 11]. In the case of pulsing neurons, the task
seems more difficult still, for instead of a nonlinear static map we are required to implement a nonlinear dynamical system. Several possibilities for the implementation of pulsed
optical neurons can be considered, including smart pixel pulsed electronic circuits with optical inputs [12], pulsing laser cavity feedback dynamics [13] and competitive-cooperative phosphor feedback [14].

(On leave from the Electrical Engineering Department, University of Brasília, Brazil.)
In this paper we demonstrate and evaluate an optoelectronic implementation of an artificial
spiking neuron, based on the FitzHugh-Nagumo equations. The proposed implementation
uses wavelength tunability of a laser source and a birefringent crystal to produce a nonlinear
mapping from driving voltage to detected optical output [15]. Linear electronic feedback to
the laser drive current completes the physical implementation of this model neuron. Inputs
can be presented optically or electronically, and output signals are also readily available as
optical or electronic pulses.
This work is organized as follows. Section 2 reviews the FitzHugh-Nagumo equations and
describes the particular optoelectronic spiking neuron implementation we propose here. In
Section 3 we analyze and illustrate dynamical properties of the model. Experimental results
of the optoelectronic system implementing one model are presented in Section 4. Numeric
results that illustrate features of the interaction between models are shown in Section 5.
2 Modified FN Neural Model and optoelectronic implementation
The FitzHugh-Nagumo neuron model [16, 17] is appealing for physical implementation, as
it is fairly simple and completely described by a pair of coupled differential equations:

τ_v dv/dt = f(v) − w + i
τ_w dw/dt = v − βw        (1)

where v is an excitable state variable that exhibits bi-stability as a result of the nonlinear f(v) term, and w is a linear recovery variable, bringing the neuron back to a resting state. In the original model proposal, f(v) is a third-degree polynomial [16, 17]. This model has been previously implemented in CMOS integrated electronics [18].
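For concreteness, Equation 1 with the classic cubic nonlinearity can be integrated numerically in a few lines. This is an illustrative forward-Euler sketch with arbitrary parameter values (chosen so the equilibrium sits on the unstable middle branch of the cubic nullcline, giving a relaxation limit cycle), not the CMOS circuit of [18].

```python
import numpy as np

def fhn_step(v, w, i_ext, dt=0.01, tau_v=0.1, tau_w=10.0, beta=0.8):
    """One forward-Euler step of the FitzHugh-Nagumo pair with the
    classic cubic nonlinearity f(v) = v - v**3 / 3."""
    f = v - v**3 / 3.0
    v_next = v + dt / tau_v * (f - w + i_ext)
    w_next = w + dt / tau_w * (v - beta * w)
    return v_next, w_next

def simulate(i_ext=0.2, steps=20000):
    """Membrane-like trace; with these values the equilibrium is unstable,
    so the state settles onto a limit cycle and v spikes repeatedly."""
    v, w = -1.0, 0.0
    trace = np.empty(steps)
    for k in range(steps):
        v, w = fhn_step(v, w, i_ext)
        trace[k] = v
    return trace
```

Because τ_v ≪ τ_w, the trajectory alternates between fast jumps in v and slow crawls along the outer branches of the cubic, which is the relaxation-oscillation picture used throughout the paper.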
In optical implementation of neural networks, the required nonlinear functions are usually
performed through electronic devices, with adaptive linear interconnection done in the optical domain. We here explore the possibility of optical implementation of the required
nonlinear function f(v) by using the nonlinear response of linear optical systems to variations of the wavelength.
Consider a birefringent material placed between crossed polarizers. Even though propagation of the field through the material is a linear phenomenon (a linear phase difference
among orthogonal polarization components is generated), the output power as a function
of incident wavelength is sinusoidal, according to
V_det(i) = (1/2) G ρ P(i) [ 1 − cos( 2π Δ / λ(i) ) ]        (2)

where G is the transimpedance gain of the detector amplifier, ρ is the responsivity (in A/W), P(i) is the optical power incident on the detector, which is a function of the laser drive current i, Δ is the optical path difference (OPD) resulting from propagation through the birefringent material, and λ(i) is the laser wavelength.
In semiconductor lasers, and Vertical Cavity Surface Emitting Lasers (VCSELs) in particular, an input current i produces a small modulation in the radiation wavelength λ(i). Linearizing the λ(i) variation in Equation 2, we find a nonlinear mapping from driving voltage to detected signal:

V_det(v) ≡ f(v) = (1/2) g(v) [ 1 − cos( 2π (v − V_T) / V_W ) ]        (3)
Figure 1: (a) Experimental setup for the wavelength-based nonlinear oscillator, with a simplified view of the electronic feedback. (b) Experimental evidence of the nonlinear mapping from driver voltage to detected signal (open loop), as a result of wavelength modulation as well as laser threshold and saturation.
where v is the driving voltage (linearly converted to an input current i through the driver transconductance) and the function g(v) includes all conversion factors in the detection
process, as well as nonlinear phenomena such as laser threshold and saturation.
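The shape of Equation 3 can be sketched numerically using the equivalent sin² form (sin²(x) = [1 − cos 2x]/2). The g(v) below is a toy threshold-plus-linear stand-in for the measured laser power curve, and the threshold, offset, period, and gain values are illustrative, not the experimental ones.

```python
import numpy as np

def optical_transfer(v, v_th=0.05, v_t=0.0, v_w=0.2, gain=0.5):
    """Illustrative detected-signal mapping f(v) = g(v) * sin^2(pi (v - v_t) / v_w).

    g(v) models the laser threshold (no output below v_th) and a linear
    power increase above it; the sin^2 factor is the birefringent-filter
    response to the current-induced wavelength shift."""
    v = np.asarray(v, dtype=float)
    g = gain * np.clip(v - v_th, 0.0, None)      # threshold + linear power growth
    return g * np.sin(np.pi * (v - v_t) / v_w) ** 2
```

The resulting curve is zero in the dead zone, peaks near the first filter maximum, and dips back toward zero, reproducing the non-monotonic shape seen in Figure 1b.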
A simple nonlinear feedback loop can now be established, by feeding the detected signal
back to the driver. This basic arrangement has been used to investigate chaotic behavior
in delayed-feedback tunable lasers [15] . It is used here as the nonlinearity for an optical
self-pulsing mechanism in order to implement neural-like pulses based on the following dynamical system:

τ_v dv/dt = f(v) − w + i
τ_w dw/dt = v − βw        (4)

Again v is a fast state variable, and w a relatively slow recovery variable, so that τ_v ≪ τ_w.
The experimental setup is shown in Figure 1a. Light from the tunable source is collimated
and propagates through a piece of birefringent crystal. The crystal fast and slow axis are
at 45 degrees to the polarizer and analyzer passing axis. The effective propagation length
through the crystal (and corresponding wavelength selectivity) is doubled with the use of a
mirror. A polarizing beam splitter acts as both polarizer and analyzer. A simplified view
of the electronic feedback is also shown. Leaky integrators and linear analog summations
implement the linear part of Equation 4, while the nonlinear response (in intensity) of the
optical filter implements f(v).
.
A VCSEL was used as tunable laser source. These vertical-cavity semiconductor lasers
have, when compared to edge-emitting diode lasers, larger separation between longitudinal
modes, more circularly-symmetric beams and lower fabrication costs [19]. As the input
current is increased, the heating of the cavity red-shifts the resonant wavelength [20], and
this is the main mechanism we are exploring for wavelength modulation.
An experimental verification of the expected sinusoidal variation of detected power with
modulation voltage is given in Figure 1b. A slow (800Hz) modulation ramp was applied to
the driver, and the detected power variation was acquired. From this information, the static
transfer function shown in the right part of the figure was calculated. Unlike the experiment
with a DBR laser diode reported by Goedgebuer et al. [15], it is apparent that current
modulation is affecting not only wavelength (and hence effective optical path difference
among polarization components) but overall output power as well. Modulation depth is
limited (non-zero troughs in the sinusoidal variation), which we attribute to the multiple
transverse modes that the device supports. However, as we are going to be operating near
Figure 2: Continuous line: trajectory of the system under strong input, obtained by numeric integration (4th-order Runge-Kutta) of Equation 4. Arrows represent the strength of the derivatives at a particular point in state space. Dashed line: nullcline dv/dt = 0. Dash-dotted line: nullcline dw/dt = 0. Stability analysis shows that the equilibrium point where the nullclines meet is unstable, so the limit cycle is the sole attractor. Parameter values match the experimental conditions of Section 4.
the first maximum (see Section 3), the power variation over successive maxima should not
affect the dynamical properties of the closed-loop system. The relatively smooth curve
obtained indicates that no mode hops occurred for this driving current range, which was
indeed confirmed with Optical Spectrum Analyzer measurements.
3 Simulations
FitzHugh-Nagumo models are known to have so-called class II neural excitability (see [21]
for a review). This class is characterized by an Andronov-Hopf bifurcation for increasing
excitation, and exhibits some dynamical phenomena that are not present in integrate-and-fire dynamics. For equal-intensity input pulses, integrators will respond maximally to the
pulse train with lowest inter-spike interval. Class II neurons have resonant response to a
range of input frequencies. There are non-trivial forms of excitation in resonator models that are not matched by integrators: the former can produce a spike at the end of an
inhibitory pulse, and conversely, can have a limit cycle condition interrupted (with the system recovering to rest) by an excitatory pulse.
We have verified that these characteristics are maintained in the modified optical model, despite the use of a sinusoidal nonlinearity instead of the original third-degree polynomial function. Stability analysis based on the Jacobian of the dynamical system (Equation 4) shows an Andronov-Hopf bifurcation, as in the original model. Limit-cycle interruption through excitatory pulses is shown in Section 5.
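The Jacobian-based stability analysis can be reproduced numerically. The sketch below uses an illustrative thresholded-sin² nonlinearity and placeholder constants (TAU_V, TAU_W, BETA, V_TH, V_W are not the experimental values): it locates the operating-point equilibrium on the first filter lobe and checks the sign of the leading eigenvalue's real part, which flips from negative to positive as the bias crosses the Andronov-Hopf point.

```python
import numpy as np

# Illustrative (non-experimental) constants for the closed-loop model
TAU_V, TAU_W, BETA = 0.05, 1.0, 1.17   # time constants and recovery gain
V_TH, V_W = 0.02, 0.2                  # laser threshold and filter period (volts)

def f(v):
    """Thresholded sin^2 stand-in for the transfer function of Eq. 3."""
    return max(v - V_TH, 0.0) * np.sin(np.pi * v / V_W) ** 2

def equilibrium(i_bias):
    """Operating-point equilibrium: first sign change of f(v) + i - v/beta
    on a dense grid over the first filter lobe."""
    grid = np.linspace(0.0, 0.1, 20001)
    h = np.array([f(v) for v in grid]) + i_bias - grid / BETA
    k = np.flatnonzero(np.sign(h[:-1]) != np.sign(h[1:]))[0]
    return 0.5 * (grid[k] + grid[k + 1])

def max_real_eig(i_bias, h=1e-6):
    """Largest real part of the Jacobian eigenvalues at the operating point,
    for tau_v dv/dt = f(v) - w + i, tau_w dw/dt = v - beta*w."""
    v = equilibrium(i_bias)
    dfdv = (f(v + h) - f(v - h)) / (2 * h)     # numerical slope of f at v*
    J = np.array([[dfdv / TAU_V, -1.0 / TAU_V],
                  [1.0 / TAU_W, -BETA / TAU_W]])
    return np.linalg.eigvals(J).real.max()
```

At low bias the equilibrium sits in the dead zone (slope zero, stable); at higher bias it moves onto the rising edge of the lobe, the trace of the Jacobian turns positive, and the fixed point becomes an unstable focus surrounded by a limit cycle.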
Figure 2 shows a typical limit-cycle trajectory, for parameter values that match conditions
of the experiment reported in Section 4. Parameters were chosen so that a typical excursion
in modulation voltage goes from the dead zone (below the lasing threshold) to around the
first peak in the nonlinear detector transfer function. This is an interesting choice because
the optical output is only present during spiking, and can be used directly as an input to
other optoelectronic neurons.
Figure 3: Dynamical system response to strong constant input. (a) Simulation results; parameters as in Figure 2. (b) Experimental results. Panels (top to bottom): driver voltage, recovery variable, and detected optical signal versus time.
Figure 4: (a) Simulated response to a train of pulses; parameters as in Figure 2. (b) Experimental results; parameters as in Figure 3. Panels (top to bottom): input, driver voltage, recovery variable, and detected optical signal versus time.
4 Experimental Results
Figure 3 presents a comparison between simulated waveforms for the various dynamic
variables involved (as the system performs the trajectory depicted in Figure 2) and the
experimental results obtained with the system described in Figure 1, revealing a good
agreement between simulated and experimental waveforms. The double-peak in the optical variable can be understood by following the trajectory indicated in Figure 2, bearing
in mind the non-monotonic mapping from driver voltage to detected signal. The decrease in
driver voltage observed as the recovery variable w increases produces initially an increase
in detected power, and thus the second, broader peak at the end of the cycle.
The production of sustained oscillations for constant input is one of the desired characteristics of the model, but in a network, neurons will mostly communicate through their pulsed
output. The response of the system to pulsed inputs can be seen in Figure 4. The output
optical signal response is all-or-none, but sub-threshold integration of weak inputs is being
performed, as the waveform for driver voltage shows in the first pulse. As w slowly returns
to 0, a new excitation just after a pulse is less likely, which can be seen at the response to
the third pulse. The experimentally observed waveforms agree with the simulations, though
details of the pulsing in the optical output are different.
Figure 5: Numeric illustration of the effect of input timing on the advance of the next spike, in the modified FitzHugh-Nagumo system. (a) Schematic view of the simulation; see text for details. (b) Phase advance as a function of input phase. Bias 0.103 V; input pulse height 10 mV. Dynamic system parameters as in Figure 2.
5 Coupling
One of the main motivations for using optical technology in neural network implementation
is the possibility of massive interconnection, and so the definition of coupling techniques,
and the study of adaptation algorithms compatible with the dynamical properties of the
experimentally-demonstrated oscillators are the current focus of this research.
The most elegant optical implementation of adaptive interconnection is through dynamic
volume holography[6, 11], but that requires a set of coherent optical signals, not what we
have with an array of pulse emitters. In contrast, the matrix-vector multiplier architecture
allows parallel interconnection of incoherent optical signals, and has been used to demonstrate implementations of the Hopfield model [7] and Boltzman machines [9].
An interesting aspect of the coupled dynamics in oscillators exhibiting class II excitability
is that the timing of an input pulse can result in advance or retardation of the next spike [22].
This is potentially relevant for hardware implementation, as the excitatory (i.e., inducing an
early spike) or inhibitory character of the connection might be controlled without changing
signs of the coupling strength.
In Figure 5 we show a simulation illustrating the effect of input pulse timing in advancing
the output spike. A constant input to a model neuron (Equation 4) was maintained, producing periodic spiking. A second, positive, pulsed input was activated in between spikes,
and the effect of this coupling on the advance or retardation of the next spike was verified as the timing of the input was varied. A region of output spike retardation (negative phase advance) with excitatory pulsed input can be seen. Even more interesting, for a range of input phases relative to the latest spike, the excitatory pulse can terminate periodic spiking altogether.
This phenomenon is seen in detail in Figure 6, where both the time waveforms and statespace trajectories are shown. For this particular condition, the equilibrium point of the
system is stable. When correctly timed, the short excitatory pulse forces the system out
of its limit cycle, into the basin of attraction of the stable equilibrium, hence stopping the
periodic spiking. As the individual models used in these simulations were shown to match
experimental implementations in Section 4, we expect to observe the same kind of effect
in the coupling of the optoelectronic oscillators.
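The pulse-timing effect can be probed with a small simulation of the cubic FitzHugh-Nagumo oscillator (the Equation 4 form with the classic cubic f; all parameter values below are illustrative placeholders, not the experimental ones): deliver a brief excitatory current pulse late in the recovery phase and compare the time of the next spike against the free-running case.

```python
import numpy as np

def spike_times(pulse_t=None, pulse_amp=1.5, pulse_dur=1.0,
                i_bias=0.2, dt=0.01, t_max=100.0):
    """Upward zero-crossing times of v for the cubic FitzHugh-Nagumo
    oscillator, with an optional square excitatory current pulse.
    Forward-Euler integration with illustrative parameters."""
    tau_v, tau_w, beta = 0.1, 10.0, 0.8
    v, w, t = -1.0, 0.0, 0.0
    times, prev = [], v
    while t < t_max:
        i_ext = i_bias
        if pulse_t is not None and pulse_t <= t < pulse_t + pulse_dur:
            i_ext += pulse_amp                       # excitatory square pulse
        v += dt / tau_v * (v - v**3 / 3.0 - w + i_ext)
        w += dt / tau_w * (v - beta * w)
        t += dt
        if prev <= 0.0 < v:                          # spike = upward jump through 0
            times.append(t)
        prev = v
    return times
```

A strong pulse delivered during the slow recovery crawl kicks the state past the knee of the cubic nullcline and fires the next spike early, which is the phase-advance behavior of Figure 5; a pulse delivered elsewhere on the cycle can instead retard the spike.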
Figure 6: (a) Simulated response illustrating return to stability with an excitatory pulse; parameters as in Figure 2. Panels (top to bottom): input, driving voltage, and output versus time. (b) Same results in state space. Continuous line: unperturbed trajectory. Dotted line: trajectory during the excitatory pulse.
6 Ongoing work and conclusions
Implementation of a modified FN neuron model with a nonlinear transfer function realized
with a wavelength-tuned VCSEL source, a linear optical spectral filter and linear electronic feedback was demonstrated. The system dynamical behavior agrees with simulated
responses, and exhibits some of the basic features of neuron dynamics that are currently
being investigated in the area of spiking neural networks.
Further experiments are being done to demonstrate coupling effects like the ones described
in Section 5. In particular, the use of external optical signals directly onto the detector
to implement optical coupling has been demonstrated. Feedback circuit simplification is
another important aspect, since we are interested in implementing large arrays of spiking
neurons. With enough detection gain, Equation 4 should be implementable with simple
RLC circuits, as in the original work by Nagumo[17].
Results reported here were obtained at low frequency (1–100 kHz), limited by amplifier and detector bandwidths. With faster electronics and detectors, the limiting factor in this arrangement would be the time constant for thermal expansion of the VCSEL cavity, which is around 1 μs. Pulsing operation at 1.2 MHz has been obtained in our latest experiments.
Even faster operation is possible when using the internal dynamics of wavelength modulation itself, instead of external electronic feedback. In addition to the thermally-induced
modulation of wavelength, carrier injection modifies the index of refraction of the active region directly, which results in an opposite wavelength shift. By using this carrier injection
effect to implement the recovery variable, feedback electronics is simplified and a much
faster time constant controls the model dynamics. Optical coupling of VCSELs has the
potential to generate over 40GHz pulsations [23]. Our goal is to investigate those optical
oscillators as a technology for implementing fast networks of spiking artificial neurons.
Acknowledgments
This research is supported in part by a Doctorate Scholarship to the first author from the
Brazilian Council for Scientific and Technological Development, CNPq.
References
[1] F. Rieke, D. Warland, R.R. von Steveninck, and W. Bialek. Spikes: Exploring the Neural Code.
MIT Press, Cambridge, USA, 1997.
[2] T.J. Sejnowski. Neural pulse coding. In W. Maass and C.M. Bishop, editors, Pulsed Neural
Networks, Cambridge, USA, 1999. The MIT Press.
[3] W. Maass. Lower bounds for the computational power of spiking neurons. Neural Computation, 8:1–40, 1996.
[4] J.J. Hopfield. Pattern recognition computation using action potential timing for stimulus representation. Nature, 376:33–36, 1995.
[5] R. van Rullen and S.J. Thorpe. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Computation, 13:1255–1283, 2001.
[6] D. Psaltis, D. Brady, and K. Wagner. Adaptive optical networks using photorefractive crystals. Applied Optics, 27(9):334–341, May 1988.
[7] N.H. Farhat, D. Psaltis, A. Prata, and E. Paek. Optical implementation of the Hopfield model. Applied Optics, 24:1469–1475, 1985.
[8] S. Gao, J. Yang, Z. Feng, and Y. Zhang. Implementation of a large-scale optical neural network by use of a coaxial lenslet array for interconnection. Applied Optics, 36(20):4779–4783, 1997.
[9] A.J. Ticknor and H.H. Barrett. Optical implementation of Boltzmann machines. Optical Engineering, 26(1):16–21, January 1987.
[10] K.S. Hung, K.M. Curtis, and J.W. Orton. Optoelectronic implementation of a multifunction cellular neural network. IEEE Transactions on Circuits and Systems II, 43(8):601–608, August 1996.
[11] K. Wagner and T.M. Slagle. Optical competitive learning with VLSI liquid-crystal winner-take-all modulators. Applied Optics, 32(8):1408–1435, March 1993.
[12] K. Hynna and K. Boahen. Space-rate coding in an adaptive silicon neuron. Neural Networks, 14(6):645–656, July 2001.
[13] F. Di Theodoro, E. Cerboneschi, D. Hennequin, and E. Arimondo. Self-pulsing and chaos in an extended-cavity diode laser with intracavity atomic absorber. International Journal of Bifurcation and Chaos, 8(9), September 1998.
[14] J.L. Johnson. All-optical pulse generators for optical computing. In Proceedings of the 2002 International Topical Meeting on Optics in Computing, pages 195–197, Taipei, Taiwan, 2002.
[15] J. Goedgebuer, L. Larger, and H. Porte. Chaos in wavelength with a feedback tunable laser diode. Physical Review E, 57(3):2795–2798, March 1998.
[16] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1:445–466, 1961.
[17] J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50:2061–2070, 1962.
[18] B. Linares-Barranco, E. Sánchez-Sinencio, A. Rodríguez-Vázquez, and J.L. Huertas. A CMOS implementation of FitzHugh-Nagumo neuron model. IEEE Journal of Solid-State Circuits, 26(7):956–965, July 1991.
[19] A. Yariv. Optical Electronics in Modern Communications. Oxford University Press, New York,
USA, fifth edition, 1997.
[20] W. Nakwaski. Thermal aspects of efficient operation of vertical-cavity surface-emitting lasers.
Optical and Quantum Electronics, 28:335?352, 1996.
[21] E.M. Izhikevich. Neural excitability, spiking and bursting. International Journal of Bifurcation
and Chaos, 2000.
[22] E.M. Izhikevich. Weakly pulse-coupled oscillators, FM interactions, synchronization, and oscillatory associative memory. IEEE Transactions on Neural Networks, 10(3):508?526, May
1999.
[23] C.Z. Ning. Self-sustained ultrafast pulsation in coupled vertical-cavity surface-emitting lasers.
Optics Letters, 27(11):912?914, June 2002.
A Systematic Study of the Input/Output Properties
of a 2 Compartment Model Neuron
With Active Membranes
Paul Rhodes
University of California, San Diego
ABSTRACT
The input/output properties of a 2 compartment model neuron are systematically
explored. Taken from the work of MacGregor (MacGregor, 1987), the model neuron
compartments contain several active conductances, including a potassium conductance in
the dendritic compartment driven by the accumulation of intradendritic calcium.
Dynamics of the conductances and potentials are governed by a set of coupled first order
differential equations which are integrated numerically. There are a set of 17 internal
parameters to this model, specifying conductance rate constants, time constants,
thresholds, etc.
To study parameter sensitivity, a set of trials was run in which the input driving the
neuron was kept fixed while each internal parameter was varied with all others left fixed.
To study the input/output relation, the input to the dendrite (a square wave) was varied
(in frequency and magnitude) while all internal parameters of the system were left fixed,
and the resulting output firing rate and bursting rate were counted.
The input/output relation of the model neuron studied turns out to be much more
sensitive to modulation of certain dendritic potassium current parameters than to
plasticity of synapse efficacy per se (the amount of current influx due to synapse
activation). This would in turn suggest, as has been recently observed experimentally,
that the potassium current may be as or more important a focus of neural plasticity than
synaptic efficacy.
INTRODUCTION
In order to model biologically realistic neural systems, we will ultimately be seeking to
construct networks with thousands of neurons and millions of interconnections. It is
therefore desirable to employ basic units with sufficient computational simplicity to
make meaningful simulations tractable, yet with sufficient fidelity to biological neurons
that we may retain a hope of gleaning by these simulations something about the activity
going on during biological information processing.
The types of neuron models employed in the computational neuroscience literature range
from binary threshold units to sigmoid transfer functions to 1500 compartment neurons
with Hodgkin-Huxley kinetics for a whole set of active conductances and spines with
rich internal structure. In principle, a model neuron's functional participation in the
operation of a network may be fully characterized by a complete description of its
transfer function, or input-output relation. This relation would necessarily be
parameterized by a host of internal variables (which would include conductance rate
constants and parameters defining the neuron's morphology) as well as a very rich space
characterizing possible variations in input (including location of input in dentritic tree).
In learning to judge which structural elements of highly realistic models must be
preserved and which may be simplified, one approach will be to test the degree to which
the input-output relation of the simplified neuron (given a physiologically relevant
parameter range and input space) is sufficiently close to the input-output properties of
the highly realistic model.
To define 'sufficiently close', we will ultimately refer to the operation of the network as
a whole as follows: the transfer function of a simplified neuron model will be considered
'sufficiently close' to a more realistic neuron model if a chosen information processing
task carried out by the overall network is performed by a network built up of the
simplified neurons in a manner close to that observed in a network of the more realistic
neurons.
We propose to begin by exploring the input/output properties of a greatly simplified 2
compartment model neuron with active conductances. Even in this very simple structure
there are many (17) internal parameters for things like time constants and activation rates
of currents. We wish to understand the parameter sensitivity of this model system and
characterize its input-output relation.
1.0 DESCRIPTION OF THE MODEL NEURON
THE MODEL NEURON CONSISTS OF A SOMA WITH A VOLTAGE-GATED
POTASSIUM CONDUCTANCE AND A SINGLE COMPARTMENT DENDRITE
WITH A VOLTAGE-GATED CALCIUM CONDUCTANCE AND A [CA]-GATED
POTASSIUM CONDUCTANCE
We will choose for this study a simple model neuron described by MacGregor (1987). It
possesses a single compartment dendrite. This is viewed as a crude approximation to the
lumped reduction of a dendritic tree. In this approximation, we are neglecting spatial and
temporal summing of individual synaptic EPSP's distributed over a dendritic tree, as well
as the spatial and temporal dispersion (smearing) due to transmission to the soma. The
individual inputs we will be using are large enough to drive the soma to firing, and so
would represent the summation of many relatively simultaneous individual EPSPs,
perhaps as from the set of contacts upon a neuron's dendritic tree made by the
arborization of one different axon. The dendritic membrane possesses a potassium
conductance gated by intradendritic calcium concentration and a voltage gated calcium
conductance. The soma contains its own voltage-gated potassium channels and
membrane time constants. Electrical connection between soma and dendrite is expressed
by an input impedance in each direction. The soma fires an action potential, simply
expressed by raising its voltage to 50 mv for one msec after its internal voltage has been
driven to firing threshold. Calcium accumulation in the dendrite is modelled assuming
accumulation proportional to calcium conductance. Calcium conductance itself increases
in proportion to the difference between the dendrite's voltage and a threshold, and calcium
is removed from the dendrite by means of an exponential decay. This system is modelled
by a set of coupled first order differential equations as follows:
1.1 THE SET OF EQUATIONS GOVERNING THE DYNAMIC VARIABLES OF THIS MODEL
The soma's voltage ES is governed by:
dES/dt = { -ES + SOMAINPUT + GDS*(ED - ES) + GKS*(EK - ES) } / TS

where SOMAINPUT is obtained by dividing the input current by the total resting
conductance of the soma (it therefore has units of voltage). GDS is proportional to
input resistance from dendrite to soma, and multiplies the difference between the
dendrite's voltage ED and the soma's voltage ES; GKS is the soma's aggregate
potassium conductance (modelled below); EK is the voltage of the potassium
battery (assumed constant at -10 mv); and TS is the soma's time constant. All potentials are
relative to resting potential, and all conductances are dimensionless.
The dendrite's voltage ED is govened by:
dED/dt = { -ED + DENDINPUT + GSD*(ES - ED) + GCA*(ECA - ED) + GKD*(EK - ED) } / TD
where DENDINPUT is obtained by dividing the input current by the total resting
conductance of the dendrite and so has units of voltage. GSD is proportional to the
input resistance from soma to dendrite, and hence multiplies the difference between ES
and ED; GCA is the dendrite's calcium conductance (modelled below), ECA is the
calcium battery (assumed constant at 50mv), and GKD is proportional to the dendrite's
potassium conductance (modelled below). All potentials are relative to resting potential.
The soma's voltage is raised artificially to 50mv for I msec after the soma's voltage
exceeds a (fixed) threshold, thus simplifying the action potential.
The potassium conductance in the soma, GKS, is governed by:
dGKS/dt = { -GKS + S*B } / TGK
where S is 1 if an action potential has just fired and 0 otherwise, B is an activation rate
constant governing the rate of increase of potassium conductance, and TGK is the time
constant of the potassium conductance decay. This rather simplified picture of
potassium conductance will be replaced by a more realistic version with a Markov state
model of the potassium channel in a subsequent publication in preparation. For the
present investigation then we are modelling the voltage dependence of the potassium
conductance by the following: potassium conductance builds up by a fixed amount
(proportional to B/TGK) during each action potential, and thereafter decays exponentially
with time constant TGK.
The dendrite's calcium conductance is governed by:
dGCA/dt = { -GCA + D*(ED - CSPIKETHRESH) } / TGCA,   for ED > CSPIKETHRESH
dGCA/dt = -GCA / TGCA,   for ED < CSPIKETHRESH
where CSPIKETHRESH is the minimum dendritic voltage above which calcium
conducting channels begin to be opened, D is an activation rate governing the rate of
increase in calcium conductance, and TGCA is the time constant assumed to govern
conductance decay when voltage is below threshold
The dendrite's internal calcium concentration [CA] is governed by:
d[CA]/dt = { -[CA] + A*GCA } / TCA
where TCA is the time constant for the removal of internal CA, and A is a parameter
governing the accumulation rate of increase of internal CA for a given conductance and
time constant. A is inversely proportional to the effective relevant volume in which
calcium is accumulating. An increase in internal calcium buffer would decrease the
parameter A.
Finally, the dendrite's potassium conductance is governed by:
dGKD/dt = { -GKD + BD } / TGKD,   for [CA] > CALCTHRESH
dGKD/dt = -GKD / TGKD,   for [CA] < CALCTHRESH
where CALCTHRESH is the internal calcium concentration threshold above which the
calcium gated potassium channel begins to open, BD is the parameter governing the rate
of increase of dendritic potassium conductance, and TGKD is the time constant
governing the exponential decay of potassium conductance.
This entire system of equations is taken from the work of MacGregor
(MacGregor, 1987).
The system of coupled first order differential equations is integrated using the exponential
method, also discussed in MacGregor. Generally a 1 msec timestep is used, with a
smaller timestep of 0.1 msec used for the relaxation between the dendritic voltage ED and
the somatic voltage ES.
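For concreteness, the system above can be integrated directly. The following sketch uses the benchmark parameter values of Table 1 but substitutes a plain forward-Euler update for the exponential method, and simply returns the soma to rest after each 1-msec action potential; the function name, the 0.01-msec step, and these simplifications are our own assumptions, not taken from the model description.

```python
# Sketch of the full system with the benchmark values of Table 1.
# The forward-Euler update (in place of MacGregor's exponential method),
# the 0.01-msec step, and the reset of ES to rest after the 1-msec action
# potential are simplifying assumptions, not taken from the paper.
P = dict(TS=5.0, TD=5.0, TGK=3.5, TGKD=10.0, TGCA=5.0, TCA=5.0,
         B=33.0, BD=75.0, D=2.2, A=2.0, GDS=5.0, GSD=5.0,
         EK=-10.0, ECA=50.0, FIRETHRESH=12.0, CSPIKETHRESH=12.0,
         CALCTHRESH=20.0)

def simulate(dend_input, t_max=250.0, dt=0.01, p=P):
    """Drive the dendrite with a steady input (mv); return spike times in msec."""
    ES = ED = GKS = GKD = GCA = CA = 0.0
    spike_times, spike_timer, t = [], 0.0, 0.0
    while t < t_max:
        firing = spike_timer > 0.0
        S = 1.0 if firing else 0.0
        dES = (-ES + p['GDS'] * (ED - ES) + GKS * (p['EK'] - ES)) / p['TS']
        dED = (-ED + dend_input + p['GSD'] * (ES - ED)
               + GCA * (p['ECA'] - ED) + GKD * (p['EK'] - ED)) / p['TD']
        dGKS = (-GKS + S * p['B']) / p['TGK']
        dGCA = ((-GCA + p['D'] * (ED - p['CSPIKETHRESH'])) / p['TGCA']
                if ED > p['CSPIKETHRESH'] else -GCA / p['TGCA'])
        dCA = (-CA + p['A'] * GCA) / p['TCA']
        dGKD = ((-GKD + p['BD']) / p['TGKD']
                if CA > p['CALCTHRESH'] else -GKD / p['TGKD'])
        ES += dt * dES
        ED += dt * dED
        GKS += dt * dGKS
        GKD += dt * dGKD
        GCA += dt * dGCA
        CA += dt * dCA
        if firing:
            spike_timer -= dt
            ES = 50.0 if spike_timer > 0.0 else 0.0  # 1-msec action potential
        elif ES > p['FIRETHRESH']:
            spike_times.append(t)
            spike_timer = 1.0
        t += dt
    return spike_times
```

With the benchmark input of 35 mv this produces bursty firing qualitatively like Table 1, though the exact rates depend on the integration details.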
2.0 THE EFFECT OF CHANGES IN PARAMETERS (TIME
CONSTANTS, CONDUCTANCE RATES, ETC.) ON THE
MODEL NEURON'S INPUT-OUTPUT PROPERTIES WILL
BE EXPLORED
As is clear from a review of the above set of interrelated equations governing the
dynamics of the state variables of the model neuron, there are quite a few externally
specified parameters (I7) even in such a simple model. Presumably the thresholds are
fairly well measureable, and the rate constants and time constants may be specified by
measurement of time courses in patch clamp experiments. We are nevertheless dealing
with parameters of which some are thought to be variable and which are probably
modulated explicitly by normal mechanisms in neurons. Therefor we wish to explore
the effect that variation of any of these parameters has on the input-output properties of
the model neuron. In fact, we will find indication that the modulation of
these parameters, in particular the rate constants governing the
dendritic potassium current and internal calcium accumulation, may be
very effective targets of neural plasticity. We find that the neuron's
input-output properties are more sensitive to these parameters than to
modulation of the efficacy of the synapse strength per se.
2.1 PROTOCOL FOR SYSTEMATIC EXPLORATION OF THE
EFFECT OF VARIATION IN THE MODEL'S PARAMETERS ON THE
INPUT-OUTPUT PROPERTIES OF THE MODEL NEURON
We started with the parameters all set to a set of benchmarks and drove the neuron with a
constant input to the dendrite. (We could have driven the soma instead, or both soma
and dendrite, and we could have chosen more complex input streams. See below for
trials where we systematically vary the input but the parameter values are held steady.)
The input was a steady command input of 35mv. The values of all the benchmark
parameters are given in Table 1.
We then systematically halved and doubled each of the 17 parameters in turn, while
leaving all other parameters fixed. Note that in all cases and in fact with any driving
input this model neuron fires in bursts. This is due to the long time course of the
potassium current in the dendrite, which enforces a long refractory period (about 40-80 msec) even during continuous stimulation.
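The halving/doubling protocol itself is easy to sketch against any simulator. Below, a toy leaky neuron with a spike-triggered potassium conductance stands in for the full 2-compartment model; all parameter names and values are illustrative assumptions, not the paper's benchmark set.

```python
# Toy stand-in for the full model: a leaky neuron with a spike-triggered
# potassium conductance. All names and values are illustrative assumptions.
def firing_rate(params, drive=35.0, t_max=1000.0, dt=0.1):
    tau, thresh = params['tau'], params['thresh']
    gk_step, tau_gk = params['gk_step'], params['tau_gk']
    v = gk = 0.0
    spikes, t = 0, 0.0
    while t < t_max:
        v += dt * (-v + drive + gk * (-10.0 - v)) / tau
        gk += dt * (-gk) / tau_gk
        if v > thresh:
            spikes += 1
            v = 0.0
            gk += gk_step        # potassium conductance builds up at each spike
        t += dt
    return spikes * 1000.0 / t_max   # spikes per second

benchmark = dict(tau=5.0, thresh=12.0, gk_step=8.0, tau_gk=10.0)
base = firing_rate(benchmark)

# Halve and double each parameter in turn, all others fixed, and express
# the resulting rate as a percentage of the benchmark rate.
sensitivity = {}
for name in benchmark:
    for factor in (0.5, 2.0):
        trial = dict(benchmark, **{name: benchmark[name] * factor})
        sensitivity[(name, factor)] = 100.0 * firing_rate(trial) / base
```

Even in this toy setting, doubling the spike-triggered potassium increment depresses the rate more than most other changes, which is the kind of effect the sweep is designed to expose.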
2.2 RESULTS OF SYSTEMATIC VARIATION OF PARAMETERS
OF MODEL NEURON
The results are summarized in the notes to Table 1. Following are several
observations about the different parameters' varying degree of efficacy in modulation of
the input-output function.
1) The most striking finding is that variation of the activation rate of the potassium
current, particularly the potassium current in the dendrite, is the most effective means of
modulating the input-output properties of the model neuron. The transfer function is
250% more sensitive to an increase in the [CA]-gated dendritic potassium current
activation rate than it is to an increase in synaptic efficacy per se.
2) Changing the time constant of the [CA]-gated potassium current in the dendrite is
the only parameter change which effectively modulates the number of bursts per
second (see Figure I). Changing the time constant of the voltage-gated potassium
current in the soma, does not have any effect on the number of bursts per second.
3.0 MEASUREMENT OF THE INPUT/OUTPUT RELATION
OF THE MODEL NEURON
The input/output relation was determined by the following protocol: The input was
supplied in the form of a square wave of current injected into the dendritic compartment,
and the frequency of the pulses and their magnitude were systematically varied.
The output of the soma, in the form of action potentials fired per second, was plotted
against the input rate, defined as the product of the square wave frequency and the
magnitude of the injected current. The duration of pulses was kept fixed at 20 msec (but
see below), and all internal parameters were fixed at their benchmark levels.
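A minimal sketch of this measurement protocol, again with a toy leaky neuron standing in for the full model (the neuron and all constants are illustrative assumptions):

```python
def count_spikes(freq_hz, pulse_amp, t_max=1000.0, dt=0.1):
    """Drive a toy leaky neuron (tau = 5 msec, threshold = 12 mv) with 20-msec
    square current pulses delivered at freq_hz; return the spike count."""
    period = 1000.0 / freq_hz          # msec between pulse onsets
    v, spikes, t = 0.0, 0, 0.0
    while t < t_max:
        drive = pulse_amp if (t % period) < 20.0 else 0.0
        v += dt * (-v + drive) / 5.0
        if v > 12.0:
            spikes += 1
            v = 0.0                    # simple reset in place of the full spike mechanism
        t += dt
    return spikes

# Input rate is defined as pulse frequency times magnitude; sweeping both
# traces out the input/output curve.
curve = sorted((f * amp, count_spikes(f, amp))
               for f in (5.0, 10.0, 20.0) for amp in (20.0, 35.0, 50.0))
```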
3.1
THE SHAPE OF THE INPUT/OUTPUT RELATION
Figure 2 depicts the above described plot in the case where all the internal parameters
were fixed at purported "benchmark" values except for the parameters governing
intradendritic calcium accumulation. It is clearly not strictly monotonic (there are
resonance points) though a smoothed version is monotonic, and it does not faithfully
render a sigmoid.
3.2 THE INPUT/OUTPUT RELATION IS UNCHANGED IF THE
SQUARE SHAPE OF THE EPSP DRIVING THE DENDRITE IS
REPLACED BY AN ALPHA FUNCTION
The trials in this study were largely conducted using a square wave as the input driving
the dendritic compartment. In order to check whether the unphysical square shape of the
envelope of this current injection was coloring the results, the input/output relation was
measured in a set of trials wherein the alpha function commonly used to model the time
course of EPSP's replaced the square pulse. The total current injected per pulse was kept
uniform. The results, shown in
Figure 3, are surprising:
The
input/output relation was almost completely unaltered by the
substitution.
This suggests that the detailed shape and Fourier
spectrum of the time course of synaptic input have nearly no effect on
the neuron's output. Thus it is suggested that very adequate models can be built
without the need for a strict modelling of the synaptic EPSP. I expect this effect is due
to the temporal integration ongoing in the summation of input to this system, which
blurs the exact shape of any input envelope.
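The square-pulse versus alpha-function comparison can be sketched as follows; a toy single-compartment neuron stands in for the full model, and the charge-matching scale factor and all constants are our own assumptions.

```python
import math

# Toy leaky neuron driven once per 100-msec period either by a 20-msec
# square pulse or by an alpha function carrying the same total charge.
TAU, THRESH = 5.0, 12.0
AMP, WIDTH, PERIOD = 35.0, 20.0, 100.0
ALPHA_TAU = 5.0
# The integral of (s/tau)*exp(1 - s/tau) over s >= 0 is tau*e, so this
# peak value gives the alpha pulse the same area as the square pulse:
ALPHA_PEAK = AMP * WIDTH / (ALPHA_TAU * math.e)

def square(t):
    return AMP if (t % PERIOD) < WIDTH else 0.0

def alpha(t):
    s = (t % PERIOD) / ALPHA_TAU
    return ALPHA_PEAK * s * math.exp(1.0 - s)

def run(drive_fn, t_max=1000.0, dt=0.05):
    v, spikes, t = 0.0, 0, 0.0
    while t < t_max:
        v += dt * (-v + drive_fn(t)) / TAU
        if v > THRESH:
            spikes += 1
            v = 0.0
        t += dt
    return spikes

n_square, n_alpha = run(square), run(alpha)
```

Matching the total injected charge per pulse is what makes the two drives comparable; the two spike counts come out close, echoing the insensitivity to EPSP shape reported above.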
3.3 MODULATION OF THE INPUT/OUTPUT RELATION BY VARIATION OF INTERNAL MODEL PARAMETERS
Figure 2 portrays the input/output relation measured in three cases in which all internal
parameters are identical except the rate of accumulation of intradendritic calcium. The
lower curve is the case where the calcium accumulation rate is highest. Since [Ca]
accumulation drives the dendritic potassium current, the activation of which in turn
hyperpolarizes the dendrite and thus indirectly suppresses firing in the soma, we expect
output in this case to be lower for a given input as is indeed the result observed. Note
that the parameter being varied would be expected to be inversely proportional to the
amount of available intradendritic calcium buffer. Hence the amount of
intradendritic buffer has a profound ability to modulate the transfer
function of the system.
4.0 CONCLUSIONS
As regards the shape of the transfer function itself, we have found it to be nonmonotonic (there are resonance points) unless it is smoothed. The shape of the transfer
function appears little affected by the envelope of the EPSP (i.e. square pulse input
produces nearly the same transfer function as the case where alpha functions are
substituted for the square pulses in modelling the EPSP).
A parameter sensitivity analysis of a 2 compartment model neuron with active
membranes reveals some unexpected results. For example, the input/output (transfer)
function of the neuron is 250% more sensitive to the activation rate of the [CA]-gated
dendritic potassium current than it is to synaptic efficacy per se. This in turn suggests
that, as has indeed been observed (Alkon et al., 1988; Hawkins, 1989; Olds et al., 1989),
nature might employ mechanisms other than simply increasing synaptic conductance
during the EPSP to enhance the efficacy of the transfer function.
Alkon, D.L. et al., J. Neurochemistry, Volume 51, 903, (1988).
Hawkins, R.D., in Computational Models of Learning in Simple Neural Systems,
Hawkins and Bower, Eds., Academic Press, (1989).
MacGregor, R., Neural and Brain Modelling, Academic Press, (1988).
Olds, J.L. et al., Science, Volume 245, 866, (1989).
TABLE 1
RESULTS OF PARAMETER SENSITIVITY ANALYSIS
PROTOCOL: EACH OF THE 17 INTERNAL PARAMETERS OF THE MODEL
NEURON WAS VARIED IN TURN, WHILE ALL THE OTHERS WERE KEPT
FIXED AT BENCHMARK VALUES. THE DENDRITE WAS DRIVEN IN
EACH CASE WITH A STEADY FIXED INPUT AND THE RESULTING
BURSTING RATE AND FIRING RATE WAS COUNTED. IN THE FINAL
TRIAL, ALL THE PARAMETERS WERE LEFT FIXED AND THE INPUT
MAGNITUDE WAS VARIED, TO SIMULATE FOR COMPARISON THE
EFFECT OF MODULATION OF SYNAPTIC EFFICACY.
PARAMETER (SYMBOL)                                  VALUE   BURSTS/SEC   SPIKES/BURST   FIRING FREQ.   FIRING FREQ. AS % OF BENCHMARK

SOMATIC MEMBRANE TIME CONSTANT (TS)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    13.70   2   27.40   101.4%
  HIGH        10.0    12.82   2   25.64    94.9%

DENDRITIC MEMBRANE TIME CONSTANT (TD)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    13.51   2   27.03   100.0%
  HIGH        10.0    12.66   2   25.32    93.7%

THRESHOLD FOR [CA]-GATED POTASSIUM CURRENT IN DENDRITE (CALCTHRESH) (1)
  BENCHMARK   20.0    13.51   2   27.03   100.0%
  LOW         10.0    12.82   1   12.82    47.4%
  HIGH        40.0    13.51   3   40.54   150.0%

ACTIVATION RATE OF SOMATIC POTASSIUM CURRENT (B) (2)
  BENCHMARK   33.0    13.51   2   27.03   100.0%
  LOW         16.5    12.99   3   38.96   144.2%
  HIGH        66.0    13.51   1   13.51    50.0%

ACTIVATION RATE OF DENDRITIC [CA]-GATED POTASSIUM CURRENT (BD)
  BENCHMARK   75.0    13.51   2   27.03   100.0%
  LOW         37.5    12.35   4   49.38   182.7%
  HIGH       150.0    13.16   2   26.32    97.4%

TIME CONSTANT OF SOMATIC POTASSIUM CURRENT (TGK) (2)
  BENCHMARK    3.5    13.51   2   27.03   100.0%
  LOW          1.8    13.51   2   27.03   100.0%
  HIGH         7.0    13.33   2   26.67    98.7%

TIME CONSTANT OF DENDRITIC POTASSIUM CURRENT (TGKD) (3)
  BENCHMARK   10.0    13.51   2   27.03   100.0%
  LOW          5.0    21.74   2   43.48   160.9%
  HIGH        20.0     8.00   3   24.00    88.8%

ACTIVATION RATE OF CALCIUM CONDUCTANCE (D)
  BENCHMARK    2.2    13.51   2   27.03   100.0%
  LOW          1.1    14.71   2   29.41   108.8%
  HIGH         4.4    11.11   4   44.44   164.4%

TIME CONSTANT OF DENDRITIC CALCIUM CONDUCTANCE (TGC)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    14.29   2   28.57   105.7%
  HIGH        10.0    12.82   2   25.64    94.9%

ACCUMULATION RATE OF CALCIUM FOR A GIVEN CALCIUM CONDUCTANCE (A) (4)
  BENCHMARK    2.0    13.51   2   27.03   100.0%
  LOW          1.0    13.51   3   40.54   150.0%
  HIGH         4.0    12.99   1   12.99    48.1%

TIME CONSTANT FOR CALCIUM ACCUMULATION (TCA)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    14.71   1   14.71    54.4%
  HIGH        10.0    11.76   3   35.29   130.6%

INPUT CONDUCTANCE FROM DENDRITE TO SOMA (GDS) (5)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    11.90   1   11.90    44.0%
  HIGH        10.0    14.29   4   57.14   211.4%

INPUT CONDUCTANCE FROM SOMA TO DENDRITE (GSD)
  BENCHMARK    5.0    13.51   2   27.03   100.0%
  LOW          2.5    13.89   2   27.78   102.8%
  HIGH        10.0    10.75   2   21.51    79.6%

SOMATIC FIRING THRESHOLD
  BENCHMARK   12.0    13.51   2   27.03   100.0%
  LOW          6.0    15.38   4   61.54   227.7%
  HIGH        24.0    13.16   1   13.16    48.7%

CA SPIKE THRESHOLD IN DENDRITE (CSPKTHRESH) (6)
  BENCHMARK   12.0    13.51   2   27.03   100.0%
  LOW          6.0    14.08   2   28.17   104.2%
  HIGH        24.0    13.70   2   27.40   101.4%

SYNAPTIC INPUT TO DENDRITE (8)
  BENCHMARK   35.0    13.51   2   27.03   100.0%
  LOW (7)     27.0    11.63   2   23.26    86.0%
  HIGH        70.0    16.95   2   33.90   125.4%
NOTES TO PARAMETER SENSITIVITY ANALYSIS
(1) The number of spikes per burst is altered by modulating the internal calcium
concentration required to trigger the dendritic potassium current. In an observation
repeated several times herein, it seems clear that modulating the hyperpolarizing
potassium current has a marked effectiveness in modulating the neuron's output.
(2) Modulating the activation rate (B) of the somatic potassium current strongly affects
firing, but changing the time constant of this current has almost no effect either on
bursts/second or spikes/burst.
(3) However, note that, among all 17 parameters of this model neuron, it is only the
time constant of the [CA]-gated dendritic potassium current which is effective in
modulating the rate of bursting (whereas the somatic potassium current time constant
does not seem to affect the model neuron's output at all).
(4) This quantity, the accumulation rate of calcium in the dendrite per unit calcium
conductance, would increase as the effectiveness of calcium buffers within the dendrite
decreased.
(5) Despite its efficacy in modulating the neuron's output, this parameter is presumably
not a likely candidate for plasticity, because it depends on the axial resistance of the
cytoplasm, the cross section of the base of the dendrite, and the volume of the soma, all
of which seem unlikely to be the subject to modulation.
(6) Surprisingly, the overall input-output relation for the neuron is not much effected by
changing the threshold for the voltage gated calcium spike activity in the dendrite.
(7) The minimum dendritic input required to produce any spike activity (that is, to
increase the voltage in the soma above firing threshold) may be calculated to be 26.4
with all the other parameters at benchmark values. Hence 27 is an input level that is
only 2% above the minimum level to get any firing at all. Note that it appears a 2
spike burst is always produced (with the internal parameters set at the benchmark levels)
if any firing at all is elicited. The number of spikes per burst, then, is modulated by
conductance activation rates and calcium accumulation rates but not by input. Tables 2
and 3 demonstrate this over a wide range of inputs.
(8) Note that doubling the synaptic input to the dendrite only increases the model
neuron's firing rate by 25.4%, but that, for example, doubling the activation rate of the
dendritic calcium current increases the firing rate by 64.4%. Hence we suggest that
modulation of synaptic efficacy is not the only choice or even the most effective choice
for the mechanism underlying plasticity. Alkon (1988,1989) and others have in fact
recently reported that an increase in protein kinase C, leading to a reduction in calcium-activated potassium current, is observed to be associated with conditioning in
Hermissenda and rabbit. Thus, plasticity in the nervous system may indeed operate via a
whole set of internal dynamic parameters, of which synapse efficacy is only one.
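Note 7's figure of 26.4 can be recovered from the passive steady state of the two compartment equations with all active conductances set to zero; the short check below (the helper names are ours) confirms it.

```python
# Passive steady state of the two coupled compartments with all active
# conductances at zero:
#   0 = -ES + GDS*(ED - ES)
#   0 = -ED + DENDINPUT + GSD*(ES - ED)
# Eliminating ED gives ES = GDS * DENDINPUT / (1 + GDS + GSD).
GDS = GSD = 5.0
FIRETHRESH = 12.0

def steady_soma_voltage(dend_input):
    return GDS * dend_input / (1.0 + GDS + GSD)

# Smallest dendritic input that lifts the soma to the firing threshold:
min_input = FIRETHRESH * (1.0 + GDS + GSD) / GDS   # 12 * 11 / 5 = 26.4
```

This reproduces the stated minimum of 26.4, and an input of 27 is indeed only about 2% above it.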
[Figure 1: Soma voltage, dendrite voltage, and dendritic potassium conductance traces over 240 msec, for two values of the dendritic K-current time constant. Top panel: time constant 5 msec (firing rate = 43.48, burst rate = 21.74). Bottom panel: time constant 20 msec (firing rate = 24.00, burst rate = 8.00).]

Figure 1
[Figure 2: The input/output relation, with the calcium accumulation rate set at three levels; output firing rate plotted against input rate (0 to 700).]

Figure 2
[Figure 3: Comparison of the input/output relation for an EPSP square pulse vs. an alpha function; firing rate plotted against input rate (0-700).]
Spike Timing-Dependent Plasticity
in the Address Domain
R. Jacob Vogelstein1 , Francesco Tenore2 , Ralf Philipp2 , Miriam S. Adlerstein2 ,
David H. Goldberg2 and Gert Cauwenberghs2
1
Department of Biomedical Engineering
2
Department of Electrical and Computer Engineering
Johns Hopkins University, Baltimore, MD 21218
{jvogelst,fra,rphilipp,mir,goldberg,gert}@jhu.edu
Abstract
Address-event representation (AER), originally proposed as a means
to communicate sparse neural events between neuromorphic chips, has
proven efficient in implementing large-scale networks with arbitrary,
configurable synaptic connectivity. In this work, we further extend the
functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain. As proof of concept, we implement a biologically inspired form of spike timing-dependent plasticity (STDP)
based on relative timing of events in an AER framework. Experimental results from an analog VLSI integrate-and-fire network demonstrate
address domain learning in a task that requires neurons to group correlated inputs.
1 Introduction
It has been suggested that the brain's impressive functionality results from massively parallel processing using simple and efficient computational elements [1]. Developments in
neuromorphic engineering and address-event representation (AER) have provided an infrastructure suitable for emulating large-scale neural systems in silicon, e.g., [2, 3]. Although an integral part of neuromorphic engineering since its inception [1], only recently
have implemented systems begun to incorporate adaptation and learning with biological
models of synaptic plasticity.
A variety of learning rules have been realized in neuromorphic hardware [4, 5]. These systems usually employ circuitry incorporated into the individual cells, imposing constraints
on the nature of inputs and outputs of the implemented algorithm. While well-suited to
small assemblies of neurons, these architectures are not easily scalable to networks of hundreds or thousands of neurons. Algorithms based both on continuous-valued "intracellular"
signals and discrete spiking events have been realized in this way, and while analog computations may be performed better at the cellular level, we argue that it is advantageous
to implement spike-based learning rules in the address domain. AER-based systems are
inherently scalable, and because the encoding and decoding of events is performed at the
periphery, learning algorithms can be arbitrarily complex without increasing the size of
repeating neural units. Furthermore, AER makes no assumptions about the signals represented as spikes, so learning can address any measure of cellular activity.

[Figure 1: Address-event representation. Sender events are encoded into an address, sent over the bus, and decoded. Handshaking signals REQ and ACK are required to ensure that only one cell pair is communicating at a time. Note that the time axis goes from right to left.]

This flexibility
can be exploited to achieve learning mechanisms with high degrees of biological realism.
Much previous work has focused on rate-based Hebbian learning (e.g., [6]), but recently,
the possibility of modifying synapses based on the timing of action potentials has been
explored in both the neuroscience [7, 8] and neuromorphic engineering disciplines [9]-[11].
This latter hypothesis gives rise to the possibility of learning based on causality, as opposed
to mere correlation. We propose that AER-based neuromorphic systems are ideally suited
to implement learning rules founded on this notion of spike-timing dependent plasticity
(STDP). In the following sections, we describe an implementation of one biologicallyplausible STDP learning rule and demonstrate that table-based synaptic connectivity can be
extended to table-based synaptic plasticity in a scalable and reconfigurable neuromorphic
AER architecture.
2 Address-domain architecture
Address-event representation is a communication protocol that uses time-multiplexing to
emulate extensive connectivity [12] (Fig. 1). In an AER system, one array of neurons encodes its activity in the form of spikes that are transmitted to another array of neurons. The
"brute force" approach to communicating these signals would be to use one wire for each
pair of neurons, requiring N wires for N cell pairs. However, an AER system identifies
the location of a spiking cell and encodes this as an address, which is then sent across a
shared data bus. The receiving array decodes the address and routes it to the appropriate
cell, reconstructing the sender's activity. Handshaking signals REQ and ACK are required
to ensure that only one cell pair is using the data bus at a time. This scheme reduces the required number of wires from N to ? log2 N . Two pieces of information uniquely identify
a spike: its location, which is explicitly encoded as an address, and the time that it occurs,
which need not be explicitly encoded because the events are communicated in real-time.
The encoded spike is called an address-event.
In its original formulation, AER implements a one-to-one connection topology, which is
appropriate for emulating the optic and auditory nerves [12, 13]. To create more complex
neural circuits, convergent and divergent connectivity is required. Several authors have
discussed and implemented methods of enhancing the connectivity of AER systems to
this end [14]-[16]. These methods call for a memory-based projective field mapping that
enables routing an address-event to multiple receiver locations.
[Figure 2: Enhanced AER for implementing complex neural networks. (a) Example neural network. The connections are labeled with their weight values. (b) The network in (a) is mapped to the AER framework by means of a look-up table.]

The enhanced AER system employed in this paper is based on that of [17], which enables continuous-valued synaptic weights by means of graded (probabilistic or deterministic) transmission of address-events. This architecture employs a look-up table (LUT), an
integrate-and-fire address-event transceiver (IFAT), and some additional support circuitry.
Fig. 2 shows how an example two-layer network can be mapped to the AER framework.
Each row in the table corresponds to a single synaptic connection: it contains information
about the sender location, the receiver location, the connection polarity (excitatory or inhibitory), and the connection magnitude. When a spike is sent to the system, the sender
address is used as an index into the LUT and a signal activates the event generator (EG)
circuit. The EG scrolls through all the table entries corresponding to synaptic connections
from the sending neuron. For each synapse, the receiver address and the spike polarity
are sent to the IFAT, and the EG initiates as many spikes as are specified in the weight
magnitude field.
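The table-driven expansion performed by the event generator can be sketched as follows. The LUT contents are hypothetical example values (in the spirit of Fig. 2), not the paper's actual table:

```python
# One LUT entry per synaptic connection:
# sender address -> list of (receiver address, polarity, weight magnitude).
LUT = {
    0: [(0, +1, 1), (1, +1, 2)],
    1: [(1, -1, 3)],
    2: [(0, +1, 8), (2, +1, 4)],
}

def event_generator(sender):
    """Scroll through all LUT entries for the sending neuron and emit
    'magnitude' copies of each (receiver, polarity) address-event."""
    out = []
    for receiver, polarity, magnitude in LUT.get(sender, []):
        out.extend([(receiver, polarity)] * magnitude)
    return out

assert event_generator(2).count((0, +1)) == 8
assert event_generator(2).count((2, +1)) == 4
assert event_generator(9) == []   # unknown sender: no table entries
```

Realizing a weight as an event count is the mechanism the paper later identifies with the quantity n in Eq. (1).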
Events received by the IFAT are temporally and spatially integrated by analog circuitry.
Each integrate-and-fire cell receives excitatory and inhibitory inputs that increment or
decrement the potential stored on an internal capacitance. When this potential exceeds
a given threshold, the cell generates an output event and broadcasts its address to the AE
arbiter. The physical location of neurons in the array is inconsequential as connections are
routed through the LUT, which is implemented in random-access memory (RAM) outside
of the chip.
An interesting feature of the IFAT is that it is insensitive to the timescale over which events
occur. Because internal potentials are not subject to decay, the cells' activities are only
sensitive to the order of the events. Effects of leakage current in real neurons are emulated
by regularly sending inhibitory events to all of the cells in the array. Modulating the timing
of the "global decay events" allows us to dynamically warp the time axis.
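The integrate-and-fire behavior just described can be sketched as follows; `IFATCell` is our own simplified stand-in for the analog circuit, not the chip's actual design:

```python
class IFATCell:
    """Minimal sketch of one IFAT integrate-and-fire cell (our simplification).

    The stored potential has no intrinsic leak; decay is emulated by periodic
    global inhibitory 'decay events' sent to every cell in the array."""

    def __init__(self, threshold=10, step=1):
        self.v = 0                  # potential stored on the internal capacitance
        self.threshold = threshold
        self.step = step            # size of each increment/decrement

    def receive(self, polarity):
        """Integrate one excitatory (+1) or inhibitory (-1) event.
        Returns True when the cell emits an output address-event."""
        self.v = max(0, self.v + polarity * self.step)
        if self.v >= self.threshold:
            self.v = 0              # reset and broadcast address to the arbiter
            return True
        return False

cell = IFATCell(threshold=3)
out = [cell.receive(+1) for _ in range(3)]   # three excitatory events
assert out == [False, False, True]           # third event crosses threshold
cell.receive(+1)
cell.receive(-1)                             # a global decay event cancels it
assert cell.v == 0
```

Because nothing in `receive` depends on wall-clock time, only on event order, the sketch shares the timescale insensitivity described above.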
We have designed and implemented a prototype system that uses the IFAT infrastructure
to implement massively connected, reconfigurable neural networks. An example setup is
described in detail in [17], and is illustrated in Fig. 3. It consists of a custom VLSI IFAT
chip with a 1024-neuron array, a RAM that stores the look-up table, and a microcontroller
unit (MCU) that realizes the event generator.
[Figure 3: Hardware implementation of enhanced AER. The elements are an integrate-and-fire array transceiver (IFAT) chip, a random-access memory (RAM) look-up table, and a microcontroller unit (MCU). (a) Feedforward mode. Input events are routed by the RAM look-up table, and integrated by the IFAT chip. (b) Recurrent mode. Events emitted by the IFAT are sent to the look-up table, where they are routed back to the IFAT. This makes virtual connections between IFAT cells.]

As discussed in [18, p. 91], a synaptic weight w can be expressed as the combined effect of three physical mechanisms:
    w = npq        (1)
where n is the number of quantal neurotransmitter sites, p is the probability of synaptic
release per site, and q is the measure of the postsynaptic effect of the synapse. Many early
neural network models held n and p constant and attributed all of the variability in the
weight to q. Our architecture is capable of varying all three components: n by sending
multiple events to the same receiver location, p by probabilistically routing the events (as
in [17]), and q by varying the size of the potential increments and decrements in the IFAT
cells. In the experiments described in this paper, the transmission of address-events is
deterministic, and the weight is controlled by varying the number of events per synapse,
corresponding to a variation in n.
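The decomposition in Eq. (1) can be illustrated numerically. The `deliver` helper below is hypothetical; its parameters follow the text's reading of n as event count, p as transmission probability, and q as the postsynaptic increment size:

```python
import random

def deliver(n, p, q, rng):
    """Postsynaptic effect of one presynaptic spike under w = n*p*q:
    n events are routed, each transmitted with probability p, and each
    delivered event increments the potential by q."""
    return sum(q for _ in range(n) if rng.random() < p)

rng = random.Random(0)
trials = [deliver(n=8, p=0.5, q=1.0, rng=rng) for _ in range(10000)]
mean_effect = sum(trials) / len(trials)
assert abs(mean_effect - 8 * 0.5 * 1.0) < 0.2   # empirical mean is close to n*p*q
```

In the experiments reported here transmission is deterministic (p = 1) and the weight is controlled through n alone, so the stochastic branch above is exercised only when probabilistic routing as in [17] is enabled.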
3 Address-domain learning
The AER architecture lends itself to implementations of synaptic plasticity, since information about presynaptic and postsynaptic activity is readily available and the contents of the
synaptic weight fields in RAM are easily modifiable "on the fly." As in biological systems,
synapses can be dynamically created and pruned by inserting or deleting entries in the LUT.
Like address domain connectivity, the advantage of address domain plasticity is that the
constituents of the implemented learning rule are not constrained to be local in space or
time. Various forms of learning algorithms can be mapped onto the same architecture by
reconfiguring the MCU interfacing the IFAT and the LUT.
Basic forms of Hebbian learning can be implemented with no overhead in the address domain. When a presynaptic event, routed by the LUT through the IFAT, elicits a postsynaptic
event, the synaptic strength between the two neurons is simply updated by incrementing the
data field of the LUT entry at the active address location. A similar strategy can be adopted
for other learning rules of the incremental outer-product type, such as delta-rule or backpropagation supervised learning.
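This increment-on-coincidence rule is nearly a one-liner in the address domain. The sketch below is ours; the +31 ceiling is taken from the saturation level visible in Fig. 6, and the LUT is reduced to a weight dictionary:

```python
W_MAX = 31   # maximum synaptic strength (cf. Fig. 6)

def hebbian_update(lut_weights, sender, receiver, lr=1):
    """Basic address-domain Hebbian rule (a sketch): when a presynaptic event
    routed through the LUT elicits a postsynaptic event, increment the
    weight-magnitude field of that (sender, receiver) entry in place."""
    key = (sender, receiver)
    lut_weights[key] = min(W_MAX, lut_weights[key] + lr)

lut_weights = {(0, 1): 8}
hebbian_update(lut_weights, 0, 1)
assert lut_weights[(0, 1)] == 9
lut_weights[(0, 1)] = 31
hebbian_update(lut_weights, 0, 1)
assert lut_weights[(0, 1)] == 31   # clipped at the maximum strength
```

Delta-rule or backpropagation updates would differ only in how the increment is computed, not in where it is applied: all of them write back into the LUT's weight field.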
Non-local learning rules require control of the LUT address space to implement spatial
and/or temporal dependencies. Most interesting from a biological perspective are forms of
spike timing-dependent plasticity (STDP).

[Figure 4: Spike timing-dependent plasticity (STDP) in the address domain. (a) Synaptic updates Δw as a function of the relative timing of presynaptic and postsynaptic events, with asymmetric windows of anti-causal and causal regimes τ− > τ+. (b) Address-domain implementation using presynaptic (top) and postsynaptic (bottom) event queues of window lengths τ+ and τ−.]
4 Spike timing-dependent plasticity
Learning rules based on STDP specify changes in synaptic strength depending on the time
interval between each pair of presynaptic and postsynaptic events. "Causal" postsynaptic
events that succeed presynaptic action potentials (APs) by a short duration of time potentiate the synaptic strength, while "anti-causal" presynaptic events succeeding postsynaptic
APs by a short duration depress the synaptic strength. The amount of strengthening or
weakening is dependent on the exact time of the event within the causal or anti-causal
regime, as illustrated in Fig. 4 (a). The weight update has the form
    Δw = −α [τ− − (tpre − tpost)]    for 0 ≤ tpre − tpost ≤ τ−
    Δw =  α [τ+ + (tpre − tpost)]    for −τ+ ≤ tpre − tpost ≤ 0        (2)
    Δw = 0                           otherwise
where tpre and tpost denote time stamps of presynaptic and postsynaptic events.
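The update rule (2) can be transcribed directly. Here α is an assumed scale constant, and the window values τ+ = 3, τ− = 6 are those used in the experiments of Section 5:

```python
def delta_w(t_pre, t_post, alpha=1.0, tau_plus=3, tau_minus=6):
    """STDP update of Eq. (2): depression for anti-causal pairs (pre after
    post), potentiation for causal pairs (pre before post); alpha is the
    scale constant, and tau_plus < tau_minus for stable learning."""
    dt = t_pre - t_post
    if 0 <= dt <= tau_minus:          # anti-causal regime: weaken
        return -alpha * (tau_minus - dt)
    if -tau_plus <= dt < 0:           # causal regime: strengthen
        return alpha * (tau_plus + dt)
    return 0.0

assert delta_w(t_pre=10, t_post=12) == 1.0    # causal, dt = -2: alpha*(3 - 2)
assert delta_w(t_pre=12, t_post=10) == -4.0   # anti-causal, dt = 2: -alpha*(6 - 2)
assert delta_w(t_pre=0, t_post=100) == 0.0    # outside both windows
```

The asymmetry is visible in the magnitudes: the anti-causal window is twice as wide, so random pairings depress a synapse on average, as required for convergence [7].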
For stable learning, the time windows of causal and anti-causal regimes τ+ and τ− are
subject to the constraint τ+ < τ−. For more general functional forms of STDP Δw(tpre −
tpost), the area under the synaptic modification curve in the anti-causal regime must be
greater than that in the causal regime to ensure convergence of the synaptic strengths [7].
The STDP synaptic modification rule (2) is implemented in the address domain by augmenting the AER architecture with two event queues, one each for presynaptic and postsynaptic events, shown in Figure 4 (b). Each time a presynaptic event is generated, the
sender's address is entered into a queue with an associated value of τ+. All values in the
queue are decremented every time a global decay event is observed, marking one unit of
time T . A postsynaptic event triggers a sequence of synaptic updates by iterating backwards through the queue to find the causal spikes, in turn locating the synaptic strength entries in the LUT corresponding to the sender addresses and synaptic index, and increasing
the synaptic strengths in the LUT according to the values stored in the queue.

[Figure 5: Pictorial representation of our experimental neural network, with actual spike train data sent from the workstation to the first layer. All cells are identical, but x18...x20 (shaded) receive correlated inputs. Activity becomes more sparse in the hidden and output layers as the IFAT integrates spatiotemporally. Note that connections are virtual, specified in the RAM look-up table.]

Anti-causal
events require an equivalent set of operations, matching each incoming presynaptic spike
with a second queue of postsynaptic events. In this case, entries in the queue are initialized
with a value of τ− and decremented after every interval of time T between decay events,
corresponding to the decrease in strength to be applied at the presynaptic/postsynaptic pair.
We have chosen a particularly simple form of the synaptic modification function (2) as
proof of principle in the experiments. More general functions can be implemented by a
table that maps time bins in the history of the queue to specified values of Δw(nT), with
positive values of n indexing the postsynaptic queue, and negative values indexing the
presynaptic queue.
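The presynaptic-queue bookkeeping described above can be sketched as follows (potentiation side only; depression uses a symmetric postsynaptic queue initialized with τ−). The container and names are ours:

```python
from collections import deque

TAU_PLUS, TAU_MINUS = 3, 6            # window lengths used in the experiments
weights = {("x1", "y"): 8}            # LUT synaptic-strength field (hypothetical entry)

pre_queue = deque()                   # (sender address, remaining causal window)

def on_pre_spike(sender):
    """A presynaptic event enters the queue with an associated value of tau+."""
    pre_queue.append((sender, TAU_PLUS))

def on_decay_event():
    """Global decay event: one unit of time T passes; age all queued spikes
    and drop those whose causal window has expired."""
    for i, (addr, left) in enumerate(pre_queue):
        pre_queue[i] = (addr, left - 1)
    while pre_queue and pre_queue[0][1] <= 0:
        pre_queue.popleft()

def on_post_spike(receiver):
    """Walk the presynaptic queue: each causal spike still in its window
    potentiates its synapse by the value remaining in the queue."""
    for sender, left in pre_queue:
        weights[(sender, receiver)] += left

on_pre_spike("x1")
on_decay_event()                      # one time step later...
on_post_spike("y")                    # causal pair with tpre - tpost = -1
assert weights[("x1", "y")] == 8 + (TAU_PLUS - 1)   # Δw = τ+ + (tpre - tpost) = 2
```

The remaining value in the queue is exactly the causal branch of Eq. (2) with α = 1, which is why no explicit time stamps need to be stored.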
5 Experimental results
We have implemented a Hebbian spike timing-based learning rule on a network of 21 neurons using the IFAT system (Fig. 5). Each of the 20 neurons in the input layer is driven by
an externally supplied, randomly generated list of events. Sufficiently high levels of input
cause these neurons to produce spikes that subsequently drive the output layer. All events
are communicated over the address-event bus and are monitored by a workstation communicating with the MCU and RAM. As shown in [7], temporally asymmetric Hebbian
learning using STDP is useful for detecting correlations between inputs. We have proved
that this can be accomplished in hardware in the address domain by presenting the network
with stimulus patterns containing a set of correlated inputs and a set of uncorrelated inputs:
neurons x1...x17 are all stimulated independently with a probability of 0.05 per unit of
time, while neurons x18...x20 have the same likelihood of stimulation but are always activated together. Thus, over a sufficiently long period of time each neuron in the input layer
will receive the same amount of activation, but the correlated group will fire synchronous
spikes more frequently than any other combination of neurons.
In the implemented learning rule (2), causal activity results in synaptic strengthening and
anti-causal activity results in synaptic weakening. As described in Section 4, for an anticausal regime ?? larger than the causal regime ?+ , random activity results in overall weak-
35
35
Maximum Strength = 31
30
30
25
25
Synaptic Strength
Synaptic Strength
Maximum Strength = 31
20
15
10
5
0
20
15
10
5
1
20
Synapse Address
(a)
0
1
20
Synapse Address
(b)
Figure 6: Experimental synaptic strengths in the second layer, recorded from the IFAT
system after the presentation of 200,000 input events. (a) Typical experimental run. (b)
Average (+SE) over 20 experimental runs.
ening of a synapse. All synapses connecting the input and output layers are equally likely
to be active during an anti-causal regime. However, the increase in average contribution
to the postsynaptic membrane potential for the correlated group of neurons renders this
population slightly more likely to be active during the causal regime than any single member of the uncorrelated group. Therefore, the synaptic strengths for this group of neurons
will increase with respect to the uncorrelated group, further augmenting their likelihood
of causing a postsynaptic spike. Over time, this positive feedback results in a random but
stable distribution of synaptic strengths in which the correlated neurons? synapses form
the strongest connections and the remaining neurons are distributed around an equilibrium
value for weak connections.
In the experiments, we have chosen τ+ = 3 and τ− = 6. An example of a typical distribution of synaptic strengths recorded after 200,000 events have been processed by the
input layer is shown in Fig. 6 (a). For the data shown, synapses driving the input layer were
fixed at the maximum strength (+31), the rate of decay was -4 per unit of time, and the
plastic synapses between the input and output layers were all initialized to +8. Because
the events sent from the workstation to the input layer are randomly generated, fluctuations
in the strengths of individual synapses occur consistently throughout the operation of the
system. Thus, the final distribution of synaptic weights is different each time, but a pattern
can be clearly discerned from the average value of synaptic weights after 20 separate trials
of 200,000 events each, as shown in Fig. 6 (b).
The system is robust to changes in various parameters of the spike timing-based learning
algorithm as well as to modifications in the number of correlated, uncorrelated, and total
neurons (data not shown). It also converges to a similar distribution regardless of the initial
values of the synaptic strengths (with the constraint that the net activity must be larger than
the rate of decay of the voltage stored on the membrane capacitance of the output neuron).
6 Conclusion
We have demonstrated that the address domain provides an efficient representation to implement synaptic plasticity that depends on the relative timing of events. Unlike dedicated
hardware implementations of learning functions embedded into the connectivity, the address domain implementation allows for learning rules with interactions that are not constrained in space and time. Experimental results verified this for temporally-antisymmetric
Hebbian learning, but the framework can be extended to general learning rules, including
reward-based schemes [10].
The IFAT architecture can be augmented to include sensory input, physical nearestneighbor connectivity between neurons, and more realistic biological models of neural
computation. Additionally, integrating the RAM and IFAT into a single chip will allow for
increased computational bandwidth. Unlike a purely digital implementation or software
emulation, the AER framework preserves the continuous nature of the timing of events.
References
[1] C. Mead, Analog VLSI and Neural Systems. Reading, Massachusetts: Addison-Wesley, 1989.
[2] S. R. Deiss, R. J. Douglas, and A. M. Whatley, "A pulse-coded communications infrastructure for neuromorphic systems," in Pulsed Neural Networks (W. Maass and C. M. Bishop, eds.), pp. 157-178, Cambridge, MA: MIT Press, 1999.
[3] K. Boahen, "A retinomorphic chip with parallel pathways: Encoding INCREASING, ON, DECREASING, and OFF visual signals," Analog Integrated Circuits and Signal Processing, vol. 30, pp. 121-135, February 2002.
[4] G. Cauwenberghs and M. A. Bayoumi, eds., Learning on Silicon: Adaptive VLSI Neural Systems. Norwell, MA: Kluwer Academic, 1999.
[5] M. A. Jabri, R. J. Coggins, and B. G. Flower, Adaptive Analog VLSI Neural Systems. London: Chapman & Hall, 1996.
[6] T. J. Sejnowski, "Storing covariance with nonlinearly interacting neurons," Journal of Mathematical Biology, vol. 4, pp. 303-321, 1977.
[7] S. Song, K. D. Miller, and L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neuroscience, vol. 3, no. 9, pp. 919-926, 2000.
[8] M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, "Stable Hebbian learning from spike timing-dependent plasticity," Journal of Neuroscience, vol. 20, no. 23, pp. 8812-8821, 2000.
[9] P. Hafliger and M. Mahowald, "Spike based normalizing Hebbian learning in an analog VLSI artificial neuron," in Learning on Silicon (G. Cauwenberghs and M. A. Bayoumi, eds.), pp. 131-142, Norwell, MA: Kluwer Academic, 1999.
[10] T. Lehmann and R. Woodburn, "Biologically-inspired on-chip learning in pulsed neural networks," Analog Integrated Circuits and Signal Processing, vol. 18, no. 2-3, pp. 117-131, 1999.
[11] A. Bofill, A. F. Murray, and D. P. Thompson, "Circuits for VLSI implementation of temporally-asymmetric Hebbian learning," in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.
[12] M. Mahowald, An Analog VLSI System for Stereoscopic Vision. Boston: Kluwer Academic Publishers, 1994.
[13] J. Lazzaro, J. Wawrzynek, M. Mahowald, M. Sivilotti, and D. Gillespie, "Silicon auditory processors as computer peripherals," IEEE Trans. Neural Networks, vol. 4, no. 3, pp. 523-528, 1993.
[14] K. A. Boahen, "Point-to-point connectivity between neuromorphic chips using address events," IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 47, no. 5, pp. 416-434, 2000.
[15] C. M. Higgins and C. Koch, "Multi-chip neuromorphic motion processing," in Proceedings 20th Anniversary Conference on Advanced Research in VLSI (D. Wills and S. DeWeerth, eds.), (Los Alamitos, CA), pp. 309-323, IEEE Computer Society, 1999.
[16] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, "Orientation-selective aVLSI spiking neurons," in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.
[17] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, "Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons," Neural Networks, vol. 14, no. 6/7, pp. 781-793, 2001.
[18] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York: Oxford University Press, 1999.
including:1 memory:3 deleting:1 gillespie:1 event:55 suitable:1 force:1 advanced:1 scheme:2 temporally:3 identifies:1 axis:2 created:1 bayoumi:2 relative:3 embedded:1 interesting:2 proven:1 generator:2 digital:2 integrate:6 degree:1 principle:1 x18:2 uncorrelated:4 storing:1 row:1 anniversary:1 excitatory:2 maas:1 allow:1 warp:1 sparse:2 distributed:1 van:1 curve:1 feedback:1 sensory:1 author:1 tpost:8 adaptive:2 founded:1 lut:10 global:2 active:3 incoming:1 receiver:9 arbiter:1 continuous:3 table:13 stimulated:1 additionally:1 nature:3 robust:1 ca:1 inherently:1 complex:3 domain:15 protocol:1 antisymmetric:1 jabri:1 intracellular:1 decrement:2 incrementing:1 x1:6 augmented:1 causality:1 fig:7 site:2 en:1 board:2 x16:1 decoded:1 stamp:1 weighting:1 externally:1 fra:1 reconfigurable:3 bishop:1 explored:1 divergent:1 decay:6 list:1 normalizing:1 magnitude:5 boston:1 suited:2 simply:1 likely:2 sender:11 visual:1 expressed:1 corresponds:1 ma:5 succeed:1 presentation:1 kramer:1 shared:1 content:1 change:2 typical:2 called:1 total:1 uck:1 experimental:7 internal:2 support:1 latter:1 incorporate:1 correlated:7 |
Regularized Greedy Importance Sampling
Finnegan Southey Dale Schuurmans Ali Ghodsi
School of Computer Science
University of Waterloo
{fdjsouth,dale,aghodsi}@cs.uwaterloo.ca
Abstract
Greedy importance sampling is an unbiased estimation technique that reduces the variance of standard importance sampling by explicitly searching for modes in the estimation objective. Previous work has demonstrated the feasibility of implementing this method and proved that the
technique is unbiased in both discrete and continuous domains. In this
paper we present a reformulation of greedy importance sampling that
eliminates the free parameters from the original estimator, and introduces
a new regularization strategy that further reduces variance without compromising unbiasedness. The resulting estimator is shown to be effective
for difficult estimation problems arising in Markov random field inference. In particular, improvements are achieved over standard MCMC
estimators when the distribution has multiple peaked modes.
1 Introduction
Many inference problems in graphical models can be cast as determining the expected
value of a random variable of interest, f(x), given observations x drawn according to a target distribution P. That is, we are interested in computing E_P[f(x)] = Σ_x P(x) f(x). Unfortunately, in
natural situations P is usually not in a form that we can sample from efficiently. For example, in standard Bayesian network inference P corresponds to the posterior over the unobserved variables for a given
assignment to the evidence variables in a given network. It is usually not possible to sample from this distribution directly, nor efficiently evaluate or even approximate P at
given points [2]. It is therefore necessary to consider restricted architectures or heuristic
and approximate algorithms to perform these tasks [6, 3]. Among the most convenient and
successful techniques for performing inference are stochastic methods which are guaranteed to converge to a correct solution in the limit of large random samples [7, 14, 4]. These
methods can be easily applied to complex inference problems that overwhelm deterministic
approaches. The family of stochastic inference methods can be grouped into the independent Monte Carlo methods (importance sampling and rejection sampling [7, 4]) and the
dependent Markov Chain Monte Carlo (MCMC) methods (Gibbs sampling, Metropolis
sampling, and Hybrid Monte Carlo) [7, 5, 8, 14]. The goal of all these methods is to simulate drawing a random sample from a target distribution P defined by a graphical model
that is hard to sample from directly.
In this paper we improve the greedy importance sampling (GIS) technique introduced in
[12, 11]. GIS attempts to improve the variance of importance sampling by explicitly searching for important regions in the target distribution P. Previous work has shown that search
can be incorporated in an importance sampler while maintaining unbiasedness, leading to
improved estimation in simple problems. However, the drawbacks of the previous GIS
method are that it has free parameters whose settings affect estimation performance, and
its importance weights are directed at achieving unbiasedness without necessarily being
directed at reducing variance. In this paper, we introduce a new, parameterless form of
greedy importance sampling that performs comparably to the previous method given its
best parameter settings. We then introduce a new weight calculation scheme that preserves
unbiasedness, but provides further variance reduction by ?regularizing? the contributions
each search path gives to the estimator. We find that the new procedure significantly improves the original technique and achieves competitive results on difficult estimation problems arising in large discrete domains, such as those posed by Boltzmann machines. Below
we first review the generalized importance sampling procedure that forms the core of our
estimators before describing the innovations that lead to improved estimators.
2 Generalized importance sampling
Importance sampling is a useful technique for estimating E_P[f(x)] when P cannot
be sampled from directly. The basic idea is to draw independent points x_1, ..., x_N according to a simpler
proposal distribution Q, but then weight the points according to
w(x) = P(x)/Q(x). Assuming that we can evaluate w(x), the weighted sample can
be used to estimate desired expectations (Figure 1).1 The unbiasedness of this procedure is easy to establish, since for a random variable f(x) the expected weighted value of f
under Q is E_Q[w(x) f(x)] = Σ_x Q(x) (P(x)/Q(x)) f(x) = E_P[f(x)]. (For simplicity we will focus on the discrete case in this
paper.) The main difficulty with importance sampling is that even though it is an effective
estimation technique when Q approximates P over most of the domain, it performs poorly
when Q does not have reasonable mass in high probability regions of P. A mismatch of this
type results in a high variance estimator since the sample will almost always contain unrepresentative points but will intermittently be dominated by a few high weight points. The
idea behind greedy importance sampling (GIS) [11, 12] is to avoid generating under-weight
samples by explicitly searching for significant regions in the target distribution P.
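The basic importance sampling recipe can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the toy target and uniform proposal below are made up for demonstration:

```python
import random

def importance_sample(f, p, q, q_sample, n, rng):
    """Direct importance sampling: estimate E_P[f] by the weighted average
    (1/n) * sum_i f(x_i) p(x_i) / q(x_i), with each x_i drawn from Q."""
    total = 0.0
    for _ in range(n):
        x = q_sample(rng)
        total += f(x) * p(x) / q(x)
    return total / n

# Toy discrete target P on {0, 1, 2} and a uniform proposal Q.
p = {0: 0.6, 1: 0.3, 2: 0.1}.__getitem__
q = lambda x: 1.0 / 3.0
q_sample = lambda rng: rng.randrange(3)

est = importance_sample(lambda x: float(x), p, q, q_sample, 20000, random.Random(0))
# True value: E_P[x] = 0*0.6 + 1*0.3 + 2*0.1 = 0.5
assert abs(est - 0.5) < 0.05
```

When Q puts little mass where P is large, the same code produces an estimator whose variance is dominated by rare high-weight draws, which is the failure mode GIS is designed to avoid.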
To develop a provably unbiased GIS procedure it is useful to first consider a generalization of standard importance sampling that can be proved to yield unbiased estimates: The
generalized importance sampling procedure introduced in [12] operates by sampling deterministic blocks of points instead of individual points (Figure 1). Here, to each domain
point x we associate a fixed block B(x) = {x_1, ..., x_b}, where b is the length of block
B(x). When x is drawn from the proposal distribution Q we recover block B(x) and add the
block points to the sample.2 Ensuring unbiasedness then reduces to weighting the sampled
points appropriately. To this end, [12] introduces an auxiliary weighting scheme that can
be used to obtain unbiased estimates: To each pair of points x_i, x_j (such that x_j ∈ B(x_i)) one
associates a weight λ(x_i, x_j), where intuitively λ(x_i, x_j) is the weight that initiating point
x_i assigns to sample point x_j in its block B(x_i). The λ(x_i, x_j) values can be arbitrary as long
1
Unfortunately, for standard inference problems in graphical models it is usually not possible to
evaluate P(x) directly but rather just P̃(x) = cP(x) for some unknown constant c. However, it is
still possible to apply the "indirect" importance sampling procedure shown in Figure 1 by assigning
indirect weights u(x) = P̃(x)/Q(x) and renormalizing. The drawback of the indirect procedure is
that it is no longer unbiased at small sample sizes, but instead only becomes unbiased in the large
sample limit [4]. To keep the presentation simple we will focus on the "direct" form of importance
sampling described in Figure 1 and establish unbiasedness for that case, keeping in mind that every
extended form of importance sampling we discuss below can be converted to an "indirect" form.
2
There is no restriction on the blocks other than that they be finite (blocks can overlap and need
not even contain their initiating point x); however, their union has to cover the sample space, and
Q cannot put zero probability on initiating points, which would leave sample points uncovered.
"Direct" importance sampling
  Draw x_1, ..., x_N independently according to Q.
  Weight each point by w(x_i) = P(x_i)/Q(x_i).
  Estimate E_P[f(x)] by (1/N) Σ_i w(x_i) f(x_i).

"Indirect" importance sampling
  Draw x_1, ..., x_N independently according to Q.
  Weight each point by u(x_i) = P̃(x_i)/Q(x_i), where P̃ = cP for some unknown constant c.
  Estimate E_P[f(x)] by Σ_i u(x_i) f(x_i) / Σ_i u(x_i).

"Generalized" importance sampling
  Draw x_1, ..., x_N independently according to Q.
  For each x_i, recover its block B(x_i) = {x_i1, x_i2, ..., x_i,b_i}.
  Create a large sample out of the blocks x_11, ..., x_1,b_1, ..., x_N1, ..., x_N,b_N.
  Weight each x_ij by w(x_i, x_ij) = λ(x_i, x_ij) P(x_ij)/Q(x_i).
  Estimate E_P[f(x)] by (1/N) Σ_i Σ_j w(x_i, x_ij) f(x_ij).   (direct form)

Figure 1: Basic importance sampling procedures
as they satisfy

    Σ_{x_i} δ(x_i, x_j) λ(x_i, x_j) = 1                                   (1)

for every x_j. (Here δ(x_i, x_j) = 1 if x_j ∈ B(x_i) and δ(x_i, x_j) = 0 if
x_j ∉ B(x_i).) That is, for each destination point x_j, the total of the incoming λ-weight has to
sum to 1. In fact, it is quite easy to prove that this yields unbiased
estimates [12], since the expected weighted value of f when sampling initiating points x_i
under Q is

    E_Q [ Σ_{x_j ∈ B(x_i)} λ(x_i, x_j) (P(x_j)/Q(x_i)) f(x_j) ]
      = Σ_{x_i} Q(x_i) Σ_{x_j ∈ B(x_i)} λ(x_i, x_j) (P(x_j)/Q(x_i)) f(x_j)
      = Σ_{x_j} P(x_j) f(x_j) Σ_{x_i} δ(x_i, x_j) λ(x_i, x_j)
      = Σ_{x_j} P(x_j) f(x_j)
      = E_P [f(x)]

Crucially, this argument does not depend on how the block decomposition is chosen or
how the λ-weights are set, so long as they satisfy (1). That is, one could fix any block decomposition and weighting scheme, even one that depends on the target distribution P and
random variable f, without affecting the unbiasedness of the procedure. Intuitively, this
works because the block structure and weighting scheme are fixed a priori, and unbiasedness is achieved by sampling blocks and assigning fair weights to the points. The generality
of this outcome allows one to consider using a wide range of alternative importance sampling schemes, while employing appropriate λ-weights to cancel any bias. In particular,
we will determine blocks on-line by following deterministic greedy search paths.
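The unbiasedness argument can be checked exactly on a tiny example: fix an arbitrary block decomposition and any λ-weights satisfying (1), then compute the exact expectation of the block estimator under Q. The particular target, blocks, and λ values below are illustrative choices, not taken from the paper:

```python
# Domain {0, 1, 2} with an arbitrary target P, uniform proposal Q, and a
# fixed block decomposition B; lam holds lambda-weights satisfying (1).
P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
f = {0: 1.0, 1: 2.0, 2: 3.0}
B = {0: [0, 1], 1: [1, 2], 2: [2, 0]}
lam = {(0, 0): 0.4, (2, 0): 0.6,   # incoming lambda-weight to point 0 sums to 1
       (0, 1): 0.7, (1, 1): 0.3,   # ... to point 1
       (1, 2): 0.2, (2, 2): 0.8}   # ... to point 2

for j in P:                        # verify constraint (1) column by column
    assert abs(sum(lam[i, j] for i in P if j in B[i]) - 1.0) < 1e-12

# Exact expectation of the block estimator when the initiating point ~ Q:
est = sum(Q[i] * sum(lam[i, j] * P[j] / Q[i] * f[j] for j in B[i]) for i in P)
true_value = sum(P[j] * f[j] for j in P)
assert abs(est - true_value) < 1e-9   # unbiased regardless of the lambda choice
```

Changing the λ entries (while keeping each column total at 1) leaves the expectation unchanged, which is exactly the freedom that Section 4 later exploits to reduce variance.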
3 Parameter-free greedy importance sampling
Our first contribution in this paper is to derive an efficient greedy importance sampling (GIS) procedure that involves no free parameters, unlike the proposal in [12].
One key motivating principle behind GIS is to realize that the optimal proposal distribution for estimating E_P[f(x)] with standard importance sampling is Q*(x) ∝ |f(x)| P(x),
which minimizes the resulting variance [10]. GIS attempts to overcome a poor proposal distribution by explicitly searching for points that
maximally increase the objective |f(x)| P(x) (Figure 2). The primary difficulty in implementing GIS is finding ways to assign the auxiliary weights λ(x_i, x_j) so that they satisfy
the constraint (1). If this can be achieved, the resulting GIS procedure will be unbiased via
the arguments of the previous section. However, the λ-weights must not only satisfy the
constraint (1), they must also be efficiently calculable from a given sample.
"Greedy" importance sampling
  Draw x_1, ..., x_N independently from Q.
  For each x_i, let x_i1 = x_i and:
    Compute the block B(x_i) = {x_i1, x_i2, ..., x_i,b_i} by taking local steps in the
      direction of maximum |f(x)| P(x) until a local maximum is reached.
    Weight each x_ij by w(x_i, x_ij) = λ(x_i, x_ij) P(x_ij)/Q(x_i), where λ(x_i, x_ij)
      is defined in (2).
  Create the final sample from the blocks x_11, ..., x_1,b_1, ..., x_N1, ..., x_N,b_N.
  Estimate E_P[f(x)] by (1/N) Σ_i Σ_j w(x_i, x_ij) f(x_ij).

[Right panel: the upper triangular matrix Λ of block weights for the toy chain domain
of Section 4, one row per search block.]

Figure 2: "Greedy" importance sampling procedure (left); Section 4 matrix (right)
A computationally efficient λ-weighting scheme can be determined by distributing weight
in a search tree in a top down manner: Note that to verify (1) for a domain point x_j we have
to consider every search path that starts at some other point x_i and passes through x_j. If the
search is deterministic (which we assume) then the set of search paths entering x_j will form
a tree. Let T(x_j) denote the tree of points that lead into x_j, and let λ_d(x_j) denote the total
weight allocated to the points at depth d of T(x_j).
In principle, the tree will have unbounded depth since the greedy search procedure does not
stop until it has reached a local maximum. Therefore, to ensure Σ_{x_i} δ(x_i, x_j) λ(x_i, x_j) = 1
we distribute weight down the tree from level 0 (the root, x_j) to levels 1, 2, ... by a convergent series;
where for simplicity we set the total weight allocated at level d, λ_d(x_j), to be
λ_d(x_j) = 1/((d+1)(d+2)).3 This trivially ensures Σ_{d=0}^∞ λ_d(x_j) = 1. (Finite depth bounds will be handled
automatically below.)
Having established the total weight at level d, λ_d(x_j), we must then determine how much
of that weight is allocated to a particular point at that level. Given the entire search tree this
would be trivial, but the greedy search paths will typically provide only a single branch of
the tree. We accomplish the allocation by recursively dividing the weight equally amongst
branches, starting at the root of the tree. Thus, if β(x_j) is the inward branching factor at the
root, we divide λ_1(x_j) by β(x_j) at the first level. Then, following the path to a desired point
x_i, we successively divide the remaining weight at each point by the observed branching
factors until we reach x_i. In the case β(x_i) = 0, x_i has no descendants
and we compensate by adding the mass of the missing subtree to x_i's weight. This scheme
is efficient to compute because we require only the branching factors along a given search
path to correctly allocate the weight. This yields the following weighting scheme that runs
in linear time and exactly satisfies the constraint (1): Given a start point x_1 and a search
path x_1, x_2, ..., x_k from x_1 to x_k, we assign a weight λ(x_1, x_k) by

    λ(x_1, x_k) = 1 / ( k(k+1) β(x_2) β(x_3) ··· β(x_k) )     if β(x_1) ≠ 0
                                                                            (2)
    λ(x_1, x_k) = 1 / ( k β(x_2) β(x_3) ··· β(x_k) )          if β(x_1) = 0

where β(x) denotes the inward branching factor of point x. A simple induction proof can
be used to show that Σ_{x_i} δ(x_i, x_j) λ(x_i, x_j) = 1 for every x_j. Therefore, the new λ-weighting scheme provides
an efficient unbiased method for implementing GIS that does not use any free parameters.
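As a sanity check, the path-based weighting rule (2) can be sketched and its incoming-weight constraint verified on a chain domain where the greedy search always moves right. The helper below is illustrative (the exact indexing of branching factors is a reading of the scheme described above, not the authors' code); exact rational arithmetic makes the check sharp:

```python
from fractions import Fraction

def lambda_weight(path_betas, beta_start):
    """lambda-weight that a start point x_1 assigns to the endpoint x_k of its
    search path x_1, ..., x_k (cf. eq. (2)). path_betas holds the inward
    branching factors beta(x_2), ..., beta(x_k); beta_start is beta(x_1)."""
    k = len(path_betas) + 1
    denom = Fraction(1)
    for b in path_betas:
        denom *= b
    if beta_start != 0:
        return Fraction(1, k * (k + 1)) / denom   # interior point of the tree
    return Fraction(1, k) / denom                 # leaf: absorbs its missing subtree

# Chain domain 0, 1, 2, 3 where greedy search always moves right to point 3.
# In the tree of paths entering point 3, every point has inward branching
# factor 1, except point 0, which has no predecessor (beta = 0).
incoming = [
    lambda_weight([], 1),          # start at 3 itself
    lambda_weight([1], 1),         # 2 -> 3
    lambda_weight([1, 1], 1),      # 1 -> 2 -> 3
    lambda_weight([1, 1, 1], 0),   # 0 -> 1 -> 2 -> 3
]
assert sum(incoming) == 1          # constraint (1) holds exactly
```

The leaf compensation is what makes the finite tree sum to exactly 1 despite the infinite series used to allocate weight by depth.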
4 Variance reduction
While GIS reduces variance by searching, the λ-weight correction scheme outlined above
is designed only to correct bias and does not specifically address variance issues. However,
3
We merely chose the simplest heavy tailed convergent series available.
there is a lot of leeway in setting the λ-weights since the normalization constraint (1) is
quite weak. In fact, one can exploit this additional flexibility to determine minimum variance unbiased estimators in simple cases. To illustrate, consider a toy domain consisting
of points x_1, ..., x_n. Assume the search is
constrained to move between adjacent points so that from every initial point the greedy
search will move to the right until it hits point x_n. Any λ-weighting scheme for this domain
can be expressed as a matrix, Λ, shown in Figure 2, where row i corresponds to the search
block retrieved by starting at point x_i. Note that the constraint (1) amounts to requiring that
the columns of Λ sum to 1. However, it is the rows of Λ that correspond to search blocks
sampled during estimation. If we assume a uniform proposal distribution Q
then Λf gives the column vector of block estimates that correspond to each start point.
The variance of the overall estimator then becomes equal to the variance of the column
vector Λf. In particular, if each row produces the same estimate, the estimator will have
zero variance. We conclude that zero variance is achieved iff Λf equals a constant vector. Thus,
the unbiasedness constraints behave orthogonally to the zero variance constraints: unbiasedness imposes a constraint on the columns of Λ whereas zero variance imposes a constraint
on the rows of Λ. An optimal estimator will satisfy both sets of constraints. Since there are
more free variables than constraints in total, one can apparently solve for a zero variance
unbiased estimator (for n ≥ 3). However, it turns out that the constraint matrix does not
have full rank, and it is not always possible to achieve zero bias and variance for a given f.
Nevertheless, one can obtain an optimal GIS estimator by solving a quadratic program for
the Λ which minimizes variance subject to satisfying the linear unbiasedness constraints.
The point of this simple example is not to propose a technique that explicitly enumerates
the domain in order to construct a minimum variance GIS estimator. (Although the above
discussion applies to any finite domain: all one needs to do is encode the search topology
in the weight matrix Λ.) Rather, the point is to show that a significant amount of flexibility remains in setting the λ-weights, even after the unbiasedness constraints have been
satisfied, and that this additional flexibility can be exploited to reduce variance.
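For the chain toy domain this flexibility can be made concrete: with n = 3, a uniform proposal, and rightward greedy search, one can pick λ-weights whose columns sum to 1 (unbiasedness) while every row produces the same block estimate (zero variance). The specific P, f, and λ entries below are hand-picked for illustration:

```python
from fractions import Fraction as F

P = [F(1, 5), F(3, 10), F(1, 2)]   # target probabilities for x_1, x_2, x_3
f = [1, 2, 3]
Q = [F(1, 3)] * 3                  # uniform proposal
# Rightward search gives blocks B(x_1)={x_1,x_2,x_3}, B(x_2)={x_2,x_3},
# B(x_3)={x_3}, so the lambda matrix is upper triangular; lam[i][j] is the
# lambda-weight that start point x_{i+1} assigns to x_{j+1}.
lam = [[F(1), F(1, 2), F(8, 45)],
       [F(0), F(1, 2), F(14, 45)],
       [F(0), F(0),    F(23, 45)]]

for j in range(3):                 # unbiasedness: each column sums to 1
    assert sum(lam[i][j] for i in range(3)) == 1

rows = [sum(lam[i][j] * P[j] * f[j] / Q[i] for j in range(3)) for i in range(3)]
true_value = sum(P[j] * f[j] for j in range(3))   # = 23/10
assert all(r == true_value for r in rows)         # identical rows: zero variance
```

Every start point's block returns exactly E_P[f] = 23/10 here, so sampling noise vanishes entirely; for other choices of f such a solution need not exist, which is the rank issue noted above.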
We can now extend these ideas to a more realistic, general situation: To reduce the variance
of the GIS estimator developed in Section 3, our idea is to equalize the block totals among
different search paths. The main challenge is to adjust the λ-weights in a way that equalizes
block totals without introducing bias, and without requiring excessive computational overhead. Here we follow the style of local correction employed in Section 3. First note that
when traversing a path from x_i to x_j, the blocks sampled by GIS produce estimates of the
form Σ_{x ∈ B(x_i)} λ(x_i, x) (P(x)/Q(x_i)) f(x). Now consider an intermediate point v on the
search path. This point will have been arrived at via some predecessor, but we could
have arrived at v via any one of its possible predecessors u_1, ..., u_m. We would like to equalize
the block totals that would have been obtained by arriving via any one of these predecessor points. The key to maintaining unbiasedness is to ensure that any weight calculation
performed at a point in a search tree is consistent, regardless of the path taken to reach
that point. Since we cannot anticipate the initial points, it is only convenient to equalize
the subtotals accumulated from the predecessors u_1, ..., u_m, through v, and up to the root x_j. Let S(u_p) denote the total sum obtained by the points after u_p, i.e. from u_p to x_j. We equalize the
different predecessor totals by determining factors α_p which satisfy the constraints
α_p S(u_p) = α_q S(u_q)
over the predecessors u_1, ..., u_m. This scales the parent quantity S(u_p) on each
path to compensate for differences between predecessors. The equalization and unbiasedness constraints form a linear system whose solution we rescale to obtain positive α_p. The α_p
are computed starting at the end of the block and working backwards. The results can
be easily incorporated into the GIS procedure by multiplying the original λ-weights in (2)
by the product of the α factors encountered along the search path. Importantly, at a given search point, any of its predecessors will calculate the same α-correction scheme locally, regardless of which predecessor
is actually sampled. This means that the correction scheme is not sample-dependent but
fixed ahead of time. It is easy to prove that any fixed α-weighting scheme that satisfies
Σ_p α_p = β(v) at each point v, and is applied to an unbiased λ-weighting, will satisfy (1). The benefit
of this scheme is that it reduces variance while preserving unbiasedness.4
5 Empirical results: Markov random field estimation
To investigate the utility of the GIS estimators we conducted experiments on inference
problems in Markov random fields. Markov random fields are an important class of undirected graphical models, which include Boltzmann machines as a special case [1]. These
models are known to pose intractable inference problems for exact methods. Typically,
standard MCMC methods such as Gibbs sampling and Metropolis sampling are applied
to such problems, but their success is limited owing to the fact that these estimators tend
to get trapped in local modes [7]. Moreover, improved MCMC methods such as Hybrid
Monte Carlo [8] cannot be directly applied to these models because they require continuous sample spaces, whereas Boltzmann machines and other random field models define
distributions on a discrete domain. Standard importance sampling is also a poor estimation
strategy for these models because a simple proposal distribution (like uniform) has almost
no chance of sampling in relevant regions of the target distribution [7]. Explicitly searching
for modes would seem to provide an effective estimation strategy for these problems.
We consider a generalization of Boltzmann machines that defines a joint distribution over
a set of discrete variables x = (x_1, ..., x_n) according to

    P(x) = (1/Z) exp( -E(x)/T )    where    E(x) = Σ_{(i,j)} f_ij(x_i, x_j) + Σ_i g_i(x_i)

Here T is the "temperature" of the model and E(x) defines the "energy" of configuration
x; the functions f_ij and g_i define the local energy between pairs of variables and individual variables respectively; and Z is a normalization constant. Exact inference in such a
model is difficult because the normalization constant Z is typically unknown. Moreover, Z
is usually not possible to obtain exactly because it is defined as an exponentially large
sum that is not prone to simplification.5 We experimented with two classes of generalized
Boltzmann machines: generalized Ising models, where the underlying graph is a 2 dimensional grid, and random models, where the graph is generated by randomly choosing links
between variables. For each model, the function values were chosen randomly from a
standard normal distribution. We considered the objective functions f(x) = E(x) (expected energy); f(x) = Σ_i x_i (expected number of 1's in a configuration); and
f(x) = Σ_{(i,j)} x_i x_j (expected number of pairwise "and"s in a configuration). The latter two objectives are summaries of the quantities needed to estimate gradients
in standard Boltzmann machine learning algorithms [1]. This would seem to be an ideal
model on which to test our methods.
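To make the model class concrete, here is a brute-force sketch for a tiny binary model with pairwise weights W and biases b. This is a hypothetical instance with an assumed sign convention, not the authors' experimental setup, and exhaustive enumeration is only feasible for very small n:

```python
import itertools
import math

def energy(s, W, b):
    """Energy of binary configuration s under a Boltzmann-machine-style model:
    E(s) = -(sum_{i<j} W[i][j] s_i s_j + sum_i b[i] s_i)."""
    n = len(s)
    e = -sum(b[i] * s[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= W[i][j] * s[i] * s[j]
    return e

def expected_energy(W, b, T):
    """Exact E_P[E(s)] for P(s) = exp(-E(s)/T)/Z, by enumerating all 2^n states."""
    n = len(b)
    Z = acc = 0.0
    for s in itertools.product([0, 1], repeat=n):
        e = energy(s, W, b)
        w = math.exp(-e / T)
        Z += w
        acc += w * e
    return acc / Z

# Two units with a single excitatory link and no biases:
W = [[0.0, 1.0], [0.0, 0.0]]
b = [0.0, 0.0]
# Only state (1,1) has energy -1; the other three states have energy 0, so
# E_P[E] = -e^{1/T} / (3 + e^{1/T}) at temperature T = 1.
assert abs(expected_energy(W, b, 1.0) - (-math.e / (3 + math.e))) < 1e-9
```

Lowering T in this sketch concentrates P on the low-energy state, which mimics (in miniature) the peaked-mode regime where the paper's MCMC baselines degrade.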
We conducted experiments by fixing a model and temperature and ran the estimators for
a fixed amount of CPU time. Each estimator was re-run 1000 times to estimate their root
mean squared error (RMSE) on small models where exact answers could be calculated,
or standard deviation (STD) on large models where no such exact answer is feasible. We
compared estimators by controlling their run time (given a reasonable C implementation)
not just their sample size, because the different estimators use different computational overheads, and run time is the only convenient way to draw a fair comparison. For example, GIS
methods require a substantial amount of additional computation to find the greedy search
4
This variance reduction scheme applies naturally to unbiased direct estimators. With indirect
estimators, bias is typically more problematic than variance. Therefore, for indirect GIS we employ
an alternative α-weighting scheme that attempts to maximize total block weight.
5
Interesting recent progress has been made on developing exact and approximate sampling methods for the special case of Ising models [9, 15, 13].
E(energy)    Avg SS    T=1.0    T=0.5    T=0.25    T=0.1     T=0.05    T=0.025
IS            5094     27.75    68.96    145.97    374.04    749.42    1503.73
GISold        1139     13.89    12.93     12.96     13.35     10.46      12.59
GISnew        1015     14.31    13.73     13.94     15.25     11.78      11.03
GISreg        1015      3.01     4.10      5.57      6.61      6.20       7.72
Gibbs        36524      0.21     0.37      4.44     21.86     53.44     108.13
Metro        35885      0.28     0.53      5.75     24.56     56.16     122.46

[Plots: RMSE versus temperature (1 down to 0.01) for GISreg (left) and Gibbs (right)
on 4x4 through 8x8 models.]

Figure 3: Estimating average energy in a random field model (table shows RMSE for one of the models).
E(and's)     Avg SS    T=1.0    T=0.5    T=0.25    T=0.1    T=0.05    T=0.025
IS            4764      6.10     8.42     9.60     10.45    10.15      10.15
GISold        1125      6.33     5.16     4.03      2.57     0.64       0.43
GISnew        1015      6.09     5.16     4.30      2.85     0.61       0.15
GISreg        1015      3.56     3.06     2.43      0.90     0.17       0.05
Gibbs        22730      0.33     0.36     0.59      0.70     1.41       1.54
Metro        25789      0.37     0.43     0.63      0.76     1.30       1.41

[Plots: RMSE versus temperature (1 down to 0.01) for GISreg (left) and Gibbs (right)
on 4x4 through 8x8 models.]

Figure 4: Estimating average "sum of and's" in a random field model (table shows RMSE for one of the models).
paths and calculate inward branching factors, and consequently they must use substantially
smaller sample sizes than their counterparts to ensure a fair comparison. However, the GIS
estimators still seem to obtain reasonable results despite their sample size disadvantage.
For the GIS procedures we implemented a simple search that only ascends in P, not
|f| P, and we only used a uniform proposal distribution in all our experiments. We
also only report results for the indirect versions of all importance samplers (cf. Figure 1).
Figures 3 and 4 show typical outcomes of our experiments. The table in Figure 3 shows results for estimating expected energy in a generalized Ising model as the temperature is dropped
from 1.0 to 0.025. Figure 4 shows comparable results for estimating the "sum of and's".
Standard importance sampling (IS) is a poor estimator in this domain, even when it is
able to use 4.5 times as many data points as the GIS estimators. IS becomes particularly
poor when the temperature drops. Among GIS estimators, the new, parameter-free version
introduced in Section 3 (GIS new) compares favorably to the previous technique of [12]
(GIS old). The regularized GIS from Section 4 (GIS reg) is clearly superior to either.
Next, to compare the importance sampling approaches to the MCMC methods, we see the
dramatic effect of temperature reduction. Owing to their simplicity (and an efficient implementation), the MCMC samplers were able to gather about 20 to 30 times as many data
points as the GIS estimators in the same amount of time. The effect of this substantial sample size advantage is that the MCMC methods demonstrate far better performance at high
temperatures; apparently owing to an evidential advantage. However, as the temperature is
lowered, a well known effect takes hold as the low energy configurations begin to dominate the distribution. At low temperatures the modes around the low energy configurations
become increasingly peaked and standard MCMC estimators become trapped in modes
from which they are unable to escape [8, 7]. This results in a very poor estimate that is
dominated by arbitrary modes. Figures 3 and 4 show the RMSE curves of Gibbs sampling
and GIS reg, side by side, as temperature is decreased in different models. By contrast to
MCMC procedures, the GIS procedures exhibit almost no accuracy loss as the temperature
is lowered, and in fact sometimes improve their performance. There seems to be a clear
advantage for GIS procedures in sharply peaked distributions. Also they appear to have
much more robustness against varying steepness in the underlying distribution. However,
at warmer temperatures the MCMC methods are clearly superior.
It is important to note that greedy importance sampling is not equivalent to adaptive importance sampling. Sample blocks are completely independent in GIS, but sample points
are not independent in AIS. Nevertheless, GIS can benefit from adapting the proposal distribution in the same way as standard IS. Clearly we cannot propose GIS methods as a
replacement for MCMC approaches, and in fact believe that useful hybrid combinations
are possible. Our goal in this research is to better understand a novel approach to estimation that appears to be worth investigating. Much work remains to be done in reducing
computational overhead and investigating additional variance reduction techniques.
References
[1] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[2] P. Dagum and M. Luby. Approximating probabilistic inference in Bayesian belief networks is
NP-hard. Artificial Intelligence, 60:141-153, 1993.
[3] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. Artificial
Intelligence, 93:1-27, 1997.
[4] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57:1317-1339, 1989.
[5] W. Gilks, S. Richardson, and D. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman and Hall, 1996.
[6] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for
graphical models. In Learning in Graphical Models. Kluwer, 1998.
[7] D. MacKay. Introduction to Monte Carlo methods. In Learning in Graphical Models. Kluwer, 1998.
[8] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical report, 1993.
[9] J. Propp and D. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9:223-253, 1996.
[10] R. Rubinstein. Simulation and the Monte Carlo Method. Wiley, New York, 1981.
[11] D. Schuurmans. Greedy importance sampling. In Proceedings NIPS-12, 1999.
[12] D. Schuurmans and F. Southey. Monte Carlo inference via greedy importance sampling. In
Proceedings UAI, 2000.
[13] R. Swendsen, J. Wang, and A. Ferrenberg. New Monte Carlo methods for improved efficiency
of computer simulations in statistical mechanics. In The Monte Carlo Method in Condensed
Matter Physics. Springer, 1992.
[14] M. Tanner. Tools for Statistical Inference: Methods for Exploration of Posterior Distributions
and Likelihood Functions. Springer, New York, 1993.
[15] D. Wilson. Sampling configurations of an Ising system. In Proceedings SODA, 1999.
posterior:1 recent:1 retrieved:1 success:1 exploited:1 preserving:1 minimum:2 additional:4 employed:1 converge:1 determine:3 maximize:1 branch:2 multiple:1 full:1 reduces:5 calculation:2 long:2 compensate:2 equally:1 feasibility:1 ensuring:1 basic:2 expectation:1 normalization:3 sometimes:1 achieved:4 proposal:9 affecting:1 whereas:2 decreased:1 allocated:2 appropriately:1 eliminates:1 unlike:1 pass:1 subject:1 tend:1 undirected:1 seem:3 jordan:1 backwards:1 intermediate:1 ideal:1 easy:3 affect:1 architecture:1 topology:1 reduce:2 idea:4 handled:1 allocate:1 utility:1 distributing:1 york:2 useful:3 clear:1 amount:5 locally:1 simplest:1 problematic:1 trapped:2 arising:2 correctly:1 discrete:5 steepness:1 key:2 reformulation:1 nevertheless:2 achieving:1 drawn:2 econometric:1 graph:2 merely:1 sum:6 run:4 soda:1 family:1 reasonable:3 almost:3 draw:6 comparable:1 bound:1 guaranteed:1 simplification:1 convergent:2 quadratic:1 ahead:1 constraint:16 sharply:1 ghodsi:1 dominated:2 x7:4 simulate:1 argument:2 performing:1 developing:1 according:6 combination:1 poor:5 smaller:1 increasingly:1 metropolis:2 intuitively:2 restricted:1 taken:1 computationally:1 remains:2 overwhelm:1 describing:1 discus:1 turn:1 needed:1 mind:1 end:2 available:1 apply:1 appropriate:1 luby:2 alternative:2 robustness:1 original:3 top:1 remaining:1 ensure:3 denotes:1 include:1 graphical:7 cf:1 maintaining:2 exploit:1 ghahramani:1 establish:2 approximating:1 objective:4 move:2 quantity:2 intro:1 strategy:3 primary:1 exhibit:1 amongst:1 gradient:1 link:1 unable:1 trivial:1 induction:1 assuming:1 length:1 innovation:1 difficult:3 unfortunately:2 favorably:1 ba:1 implementation:2 boltzmann:7 unknown:3 perform:1 observation:1 markov:8 finite:3 behave:1 situation:2 extended:1 incorporated:2 hinton:1 dc:1 arbitrary:2 introduced:3 cast:1 pair:2 baysian:1 established:1 nip:1 address:1 able:2 usually:4 below:3 mismatch:1 challenge:1 program:1 max:1 belief:1 overlap:1 natural:1 hybrid:3 regularized:2 
difficulty:2 scheme:17 improve:3 spiegelhalter:1 orthogonally:1 x8:4 coupled:1 review:1 determining:2 loss:1 interesting:1 allocation:1 southey:2 gather:1 consistent:1 imposes:2 principle:2 heavy:1 row:4 cording:1 prone:1 summary:1 free:6 keeping:1 arriving:1 dis:1 bias:5 side:2 understand:1 wide:1 saul:1 taking:1 benefit:2 overcome:1 depth:2 calculated:1 curve:1 dale:2 made:1 avg:2 adaptive:1 employing:1 far:1 finnegan:1 approximate:3 keep:1 incoming:1 investigating:2 uai:1 conclude:1 continuous:2 search:23 tailed:1 table:3 ca:1 schuurmans:3 complex:1 necessarily:1 domain:11 main:2 uwaterloo:1 fair:3 wiley:1 weighting:11 down:2 experimented:1 evidence:1 intractable:1 adding:1 importance:38 subtree:1 rejection:1 expressed:1 applies:2 springer:2 corresponds:2 satisfies:2 chance:1 goal:2 presentation:1 consequently:1 unrepresentative:1 feasible:1 hard:2 determined:1 specifically:1 reducing:2 operates:1 sampler:3 typical:1 total:10 latter:1 evaluate:3 mcmc:11 reg:2 regularizing:1 |
Value-Directed Compression of POMDPs
Pascal Poupart
Department of Computer Science
University of Toronto
Toronto, ON, M5S 3H5
[email protected]

Craig Boutilier
Department of Computer Science
University of Toronto
Toronto, ON, M5S 3H5
[email protected]
Abstract
We examine the problem of generating state-space compressions of POMDPs in a
way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy
evaluation (prediction of expected future reward) will not affect decision quality. We derive a set of sufficient conditions that ensure accurate prediction in this
respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good
linear lossy compressions. We also elaborate on how structured representations
of a POMDP can be used to find such compressions.
1 Introduction
Partially observable Markov decision processes (POMDPs) provide a rich framework for
modeling a wide range of sequential decision problems in the presence of uncertainty.
Unfortunately, the application of POMDPs to real world problems remains limited due to
the intractability of current solution algorithms, in large part because of the exponential
growth of state spaces with the number of relevant variables.
Ideally, we would like to mitigate this source of intractability by compressing the state
space as much as possible without compromising decision quality. Our aim in solving
a POMDP is to maximize future reward based on our current beliefs about the world.
By compressing its belief state, an agent may lose relevant information, which results in
suboptimal policy choice. Thus an important aspect of belief state compression lies in
distinguishing relevant information from that which can be safely discarded. A number of
schemes have been proposed for either directly or indirectly compressing POMDPs. For
example, approaches using bounded memory [8, 10] and state aggregation?either dynamic
[2] or static [5, 9]?can be viewed in this light.
In this paper, we study the effect of static state-space compression on decision quality. We
first characterize lossless compressions?those that do not lead to any error in expected
value?by deriving a set of conditions that guarantee decision quality will not be impaired.
We also characterize the specific case of linear compressions. This analysis leads to algorithms that find good compression schemes, including methods that exploit structure in the
POMDP dynamics (as exhibited, e.g., in graphical models). We then extend these concepts
to lossy compressions. We derive a (somewhat loose) upper bound on the loss in decision
quality when the conditions for lossless compression (of some required dimensionality) are
not met. Finally, we propose a simple optimization program to find linear lossy compressions that minimize this bound, and describe how structured POMDP models can be used
to implement this scheme efficiently.
2 Background and Notation
2.1 POMDPs
A POMDP is defined by: a set of states $S$; a set of actions $A$; a set of observations $Z$; a transition function $T$, where $T^a(s,s')$ denotes the transition probability $\Pr(s' \mid s, a)$; an observation function $Z$, where $Z(s',z)$ denotes the probability of making observation $z$ in state $s'$; and a reward function $R$, where $R(s)$ denotes the immediate reward associated with state $s$.¹ We assume discrete state, action and observation sets and we focus on discounted, infinite horizon POMDPs with discount factor $\gamma \in [0, 1)$.
Policies and value functions for POMDPs are typically defined over belief space, where a belief state $b$ is a distribution over $S$ capturing an agent's knowledge about the current state of the world. Belief state $b$ can be updated in response to a specific action-observation pair $a, z$ using Bayes rule: $b^{az} = k\, b\, T^{az}$, where, in matrix form, $T^{az}(s,s') = T^a(s,s')\, Z(s',z)$ ($k$ is a normalization constant). We denote by $T^{az}$ the (unnormalized) mapping $b \mapsto b\, T^{az}$. Note that a belief state $b$ and reward function $r$ can be viewed respectively as $|S|$-dimensional row and column vectors. We define $R(b) = b\, r$.
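As a concrete illustration of the belief update above, the sketch below implements $b^{az} = k\,b\,T^{az}$ with NumPy. The transition and observation numbers are hypothetical, chosen only so the example runs; `belief_update` is our own name, not from the paper.

```python
import numpy as np

def belief_update(b, T, O, a, z):
    """One step of Bayesian belief updating: b^{az} is proportional to
    b T^{az}, where T^{az}(s, s') = Pr(s' | s, a) * Pr(z | s')."""
    Taz = T[a] * O[a][:, z]          # scale column s' by Pr(z | s')
    unnorm = b @ Taz                 # row vector times matrix
    return unnorm / unnorm.sum()     # k is the normalization constant

# Tiny 2-state example (hypothetical numbers, for illustration only).
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}   # Pr(s' | s, a=0)
O = {0: np.array([[0.7, 0.3], [0.4, 0.6]])}   # Pr(z | s'), rows indexed by s'
b = np.array([0.5, 0.5])
b_next = belief_update(b, T, O, a=0, z=1)
```

The unnormalized vector `b @ Taz` is exactly the mapping $b\,T^{az}$ used throughout the paper; its sum is the probability of observing $z$.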
Solving a POMDP consists of finding an optimal policy $\pi$ mapping belief states to actions. The value $V^\pi$ of a policy $\pi$ is the expected sum of discounted rewards and is defined as:

$$V^\pi(b) = R(b) + \gamma \sum_z V^\pi\!\left(b\, T^{\pi(b), z}\right) \qquad (1)$$

A number of techniques [11] based on value iteration or policy iteration can be used to compute optimal or approximately optimal policies for POMDPs.
2.2 Conditional Independence and Additive Separability
When our state space is defined by a set of variables, POMDPs can often be represented
concisely in a factored way by specifying the transition, observation and reward functions
using a dynamic Bayesian network (DBN). Such representations exploit the fact that transitions associated with each variable depend only on a small subset of variables. These
representations can often be exploited to solve POMDPs without state space enumeration
[2].
Recently, Pfeffer [13] showed that conditional independence combined with some form of additive separability can enable efficient inference in many DBNs. Roughly, a function can be additively separated when it decomposes into a sum of smaller terms. For instance, $\Pr(X \mid Y, Z)$ is separable if there exist conditional distributions $\Pr_1(X \mid Y)$ and $\Pr_2(X \mid Z)$, and $\lambda \in [0, 1]$, such that $\Pr(X \mid Y, Z) = \lambda \Pr_1(X \mid Y) + (1 - \lambda) \Pr_2(X \mid Z)$. This ensures that one need only know the marginals of $Y$ and $Z$ (instead of their joint distribution) to infer $X$. Pfeffer shows how additive separability in the CPTs of a DBN can be exploited to identify families of self-sufficient variables. A self-sufficient family consists of a set of subsets of variables such that the marginals of each subset are sufficient to predict the marginals of the same subsets at the next time step. Hence, if we require the probabilities of a few variables, and can identify a self-sufficient family containing those variables, then we need only compute marginals over this family when monitoring belief state.
¹The ideas presented in this paper generalize to cases when $Z$ and $R$ also depend on actions.
Figure 1: a) Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid
arrows) where the next belief state is accurately predicted. b) Functional flow of a POMDP
(dotted arrows) and a compressed POMDP (solid arrows) where the next compressed belief
state is accurately predicted.
2.3 Invariant and Krylov Subspaces
We briefly review several linear algebraic concepts used later (see [15] for more details).
Let $V$ be a vector subspace. We say $V$ is invariant with respect to a matrix $A$ if it is closed under multiplication by $A$ (i.e., $Av \in V$ for all $v \in V$). A Krylov subspace $K(A, v)$ is the smallest subspace that contains $v$ and is invariant with respect to $A$. A basis for a Krylov subspace can easily be generated by repeatedly multiplying $v$ by $A$ (i.e., $v, Av, A^2 v, \ldots$). If $K(A, v)$ is $k$-dimensional, one can show that $A^{k-1} v$ is the last linearly independent vector in this sequence and that all subsequent vectors are linear combinations of $v, Av, \ldots, A^{k-1} v$.
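The basis-generation procedure just described can be sketched as follows. This is a minimal Gram-Schmidt variant; `krylov_basis` and its tolerance parameter are our own names, not from the paper.

```python
import numpy as np

def krylov_basis(A, v, tol=1e-10):
    """Orthonormal basis for the Krylov subspace K(A, v): the smallest
    subspace that contains v and is invariant under A."""
    basis = [v / np.linalg.norm(v)]
    while True:
        w = A @ basis[-1]
        for u in basis:                   # Gram-Schmidt step
            w = w - (u @ w) * u
        n = np.linalg.norm(w)
        if n < tol:                       # new vector is dependent: done
            return np.column_stack(basis)
        basis.append(w / n)

# v touches only the first two eigendirections of A,
# so K(A, v) is 2-dimensional even though A is 3 x 3.
A = np.diag([1.0, 2.0, 3.0])
K = krylov_basis(A, np.array([1.0, 1.0, 0.0]))
```

The returned columns are orthonormal, so membership of $AK$ in the subspace can be checked by projecting with $K K^{\top}$.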
In a DBN, families of self-sufficient variables naturally correspond to invariant subspaces.
For instance, suppose $f$ is a linear function that depends only on the variables of a self-sufficient family. If we regress $f$ through the dynamics of the DBN (i.e., if we multiply $f$ by the transition matrix $T^{az}$), the resulting function will also be defined over the truth values of that same family. Hence, when a family of variables is self-sufficient, the subspace of linear functions defined over the truth values of that family is invariant w.r.t. each $T^{az}$.
3 Lossless Compressions
If a compression of the state space of a POMDP allows us to accurately evaluate all policies,
we say the compression is lossless, since we have sufficient information to select the optimal policy. We provide one characterization of lossless compressions. We then specialize
this to the linear case, and discuss the use of compact POMDP representations.
Let $f$ be a compression function that maps each belief state $b$ into some lower dimensional compressed belief state $\tilde{b} = f(b)$ (see Figure 1(a)). Here $f$ can be viewed as a bottleneck (e.g., in the sense of the information bottleneck [17]) that filters the information contained in $b$ before it's used to estimate future rewards. We desire a compression such that $\tilde{b}$ corresponds to the smallest statistic sufficient for accurately predicting the current reward as well as the next belief state (since we can accurately predict all following rewards from $\tilde{b}$). Such a compression exists if we can also find mappings $\tilde{R}$ and $g^{az}$ such that:

$$R(b) = \tilde{R}(f(b)) \quad \text{and} \quad b\, T^{az} = g^{az}(f(b)), \;\; \forall a, z \qquad (2)$$
Since we are only interested in predicting future rewards, we don't really need to accurately estimate the next belief state $b\,T^{az}$; we could just predict the next compressed belief state $f(b\,T^{az})$ since it captures all information in $b\,T^{az}$ relevant for estimating future rewards. Figure 1(b) illustrates the resulting functional flow, where $\tilde{T}^{az}$ represents the transition function that directly maps one compressed belief state to the next compressed belief state. Eq. 2 can then be replaced by the following weaker but still sufficient conditions:

$$R(b) = \tilde{R}(f(b)) \quad \text{and} \quad f(b\, T^{az}) = \tilde{T}^{az}(f(b)), \;\; \forall a, z \qquad (3)$$
Given an $f$, $\tilde{R}$ and $\tilde{T}^{az}$ satisfying Eq. 3, we can evaluate a policy $\pi$ using the compressed POMDP dynamics as follows:

$$\tilde{V}^\pi(\tilde{b}) = \tilde{R}(\tilde{b}) + \gamma \sum_z \tilde{V}^\pi\!\left(\tilde{T}^{\pi(\tilde{b}), z}(\tilde{b})\right) \qquad (4)$$

Once $\tilde{V}^\pi$ is found, we can recover the original value function as $V^\pi = \tilde{V}^\pi \circ f$. Indeed, Eq. 1 and Eq. 4 are equivalent:

Theorem 1 Let $f$, $\tilde{R}$ and $\tilde{T}^{az}$ satisfy Eq. 3 and let $V^\pi = \tilde{V}^\pi \circ f$. Then Eq. 1 holds iff Eq. 4 does.

Proof
$$V^\pi(b) = R(b) + \gamma \sum_z V^\pi(b\, T^{\pi(b), z})$$
$$\Leftrightarrow \quad \tilde{V}^\pi(f(b)) = \tilde{R}(f(b)) + \gamma \sum_z \tilde{V}^\pi(f(b\, T^{\pi(b), z}))$$
$$\Leftrightarrow \quad \tilde{V}^\pi(f(b)) = \tilde{R}(f(b)) + \gamma \sum_z \tilde{V}^\pi(\tilde{T}^{\pi(b), z}(f(b)))$$
$$\Leftrightarrow \quad \tilde{V}^\pi(\tilde{b}) = \tilde{R}(\tilde{b}) + \gamma \sum_z \tilde{V}^\pi(\tilde{T}^{\pi(\tilde{b}), z}(\tilde{b}))$$
3.1 Linear compressions
We say $f$ is a linear compression when $f$ is a linear function, representable by some matrix $F$ (i.e., $f(b) = b\,F$). In this case, the approximate transition and reward functions $\tilde{T}^{az}$ and $\tilde{R}$ must also be linear (assuming Eq. 3 is satisfied). Eq. 3 can be rewritten in matrix notation:

$$r = F\, \tilde{r} \quad \text{and} \quad T^{az} F = F\, \tilde{T}^{az}, \;\; \forall a, z \qquad (5)$$

In a linear compression, $F$ can be viewed as effecting a change of basis for the value function, with the columns of $F$ defining a subspace in which the compressed value function lies. Furthermore, the rank of $F$ indicates the dimensionality of the compressed state space. When Eq. 5 is satisfied, the columns of $F$ span a subspace that contains $r$ and that is invariant with respect to each $T^{az}$. Intuitively, Eq. 5 says that a sufficient statistic must be able to "predict itself" at the next time step (hence the subspace is invariant), and that it must predict the current reward (hence the subspace contains $r$). Formally:

Theorem 2 Let $r = F\,\tilde{r}$ and $T^{az} F = F\,\tilde{T}^{az}$ satisfy Eq. 5. Then the range of $F$ contains $r$ and is invariant with respect to each $T^{az}$.

Proof Eq. 5 ensures $r$ is a linear combination of the columns of $F$, so it lies in the range of $F$. It also requires that the columns of each $T^{az} F$ are linear combinations of the columns of $F$, so the range of $F$ is invariant with respect to each $T^{az}$.

Thus, the best linear lossless compression corresponds to the smallest invariant subspace that contains $r$. This is by definition the Krylov subspace $K(\{T^{az}\}, r)$. Using this fact we can easily compute the best lossless linear compression by iteratively multiplying $r$ by each $T^{az}$ until the Krylov basis is obtained. We then let the Krylov basis form the columns of $F$, and compute $\tilde{r}$ and each $\tilde{T}^{az}$ by solving each part of Eq. 5. Finally, we can solve the POMDP in the compressed state space by using $\tilde{r}$ and the $\tilde{T}^{az}$.

Note that this technique can be viewed as a generalization of Givan et al's MDP model minimization technique [3]. It is interesting to note that Littman et al. [9] proposed a similar iterative algorithm to compress POMDPs based on predicting future observations.²

²Assuming that rewards are functions of the observations.
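The whole Section 3.1 procedure (grow the Krylov basis from $r$ under every $T^{az}$, then solve both parts of Eq. 5) can be sketched as below. Function and variable names are ours; the toy POMDP dynamics are hypothetical and chosen so that the subspace is genuinely smaller than the state space.

```python
import numpy as np

def krylov_compression(r, Ts, tol=1e-9):
    """Lossless linear compression (Sec. 3.1): grow the smallest subspace
    containing r that is invariant w.r.t. every matrix in Ts, then solve
    both parts of Eq. 5 for the compressed reward and dynamics."""
    basis = [r / np.linalg.norm(r)]
    frontier = [basis[0]]
    while frontier:
        v = frontier.pop()
        for T in Ts:
            w = T @ v
            for u in basis:               # Gram-Schmidt against the basis
                w = w - (u @ w) * u
            n = np.linalg.norm(w)
            if n > tol:
                w = w / n
                basis.append(w)
                frontier.append(w)
    F = np.column_stack(basis)            # columns span the Krylov subspace
    # Eq. 5: r = F r~ and T F = F T~; both hold exactly by construction of F.
    r_t = np.linalg.lstsq(F, r, rcond=None)[0]
    Ts_t = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
    return F, r_t, Ts_t

# 4-state example with uniform dynamics: the Krylov subspace of r has
# dimension 2, so the belief space compresses from 4 to 2 dimensions.
r = np.array([1.0, 0.0, 0.0, 0.0])
Ts = [np.full((4, 4), 0.25)]
F, r_t, Ts_t = krylov_compression(r, Ts)
```

Since the basis is closed under every $T^{az}$ and contains $r$, the least-squares solves return exact (zero-residual) solutions of Eq. 5.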
3.2 Structured Linear Compressions
When a POMDP is specified in compactly, say, using a DBN, the size of the state space may
be exponentially larger than the specification. The practical need to avoid state enumeration
is a key motivation for POMDP compression. However, the complexity of the search for
a good compression must also be independent of the state space size. Unfortunately, the
iterative Krylov algorithm involves repeatedly multiplying explicit transition matrices and
basis vectors. We consider several ways in which a compact POMDP specification can be
exploited to construct a linear compression without state enumeration.
One solution lies in exploiting DBN structure and context-specific independence. If transition, observation and reward functions are represented using DBNs and structured CPTs
(e.g., decision trees or algebraic decision diagrams), then the matrix operations required by
the Krylov algorithm can be implemented effectively [1, 7]. Although this approach can
offer substantial savings, the DTs or ADDs that represent the basis vectors of the Krylov
subspace may still be much larger than the dimensionality of the compressed state space
and the original DBN specifications.
Alternatively, families of self-sufficient variables corresponding to invariant subspaces can be identified by exploiting additive separability. Starting with the variables upon which $r$ depends, we can recursively grow a family of variables until it is self-sufficient with respect to each $T^{az}$. The corresponding subspace is invariant and necessarily contains $r$. Assuming a tractable self-sufficient family is found, a compact basis can then be constructed by using all indicator functions for each subset of variables in this family (e.g., if one such subset consists of three binary variables, then eight basis vectors will correspond to this set). This approach allows us to quickly identify a good compression by a simple inspection of the additive separability structure of the DBN. The resulting compression is not necessarily optimal; however, it is the best among those corresponding to some such family. It is important to note that the dynamics $\tilde{T}^{az}$ and reward $\tilde{r}$ of the compressed POMDP can be constructed easily (i.e., without state enumeration) from $F$ and the original DBN model. Pfeffer [13] notes that observations tend to reduce the amount of additive separability present in a DBN, thereby increasing the size of self-sufficient families. Therefore, we should point out that lossless compressions of POMDPs that exploit self-sufficiency and offer an acceptable degree of compression may not exist. Hence lossy compressions are likely to be required in many cases.
Finally, we ask whether the existence of lossless compressions requires some form of structure in the POMDP. We argue that this is almost always the case. Suppose a transition matrix $T^{az}$ and a reward vector $r$ are chosen uniformly at random. The odds that $r$ falls into a proper invariant subspace of $T^{az}$ are essentially zero, since there are infinitely more vectors in the full space than in all the proper invariant subspaces put together. This means that if a POMDP can be compressed, it must almost certainly be because its dynamics exhibit some structure. We have described how context-specific independence and additive separability can be exploited to identify some linear lossless compressions. However, they do not guarantee that the optimal compression will be found, so it remains an open question whether other types of structure could be used in similar ways.
4 Lossy compressions
Since we cannot generally find effective lossless compressions, we also consider lossy
compressions. We propose a simple approach to find linear lossy compressions that "almost satisfy" Eq. 5. Table 1 outlines a simple optimization program to find lossy compressions that minimize a weighted sum of the max-norm residual errors, $\epsilon_R$ and $\epsilon_T$, in Eq. 5. Here $\alpha$ and $\beta$ are weights that allow us to vary the degree to which the two components of Eq. 5

$$\min \;\; \alpha\, \epsilon_R + \beta\, \epsilon_T$$
$$\text{s.t.} \quad \| r - F\,\tilde{r} \|_\infty \le \epsilon_R \qquad (6)$$
$$\qquad\;\; \| T^{az} F - F\,\tilde{T}^{az} \|_\infty \le \epsilon_T, \;\; \forall a, z \qquad (7)$$

Table 1: Optimization program for linear lossy compressions
should be satisfied. The unknowns of the program are all the entries of $F$, $\tilde{r}$ and each $\tilde{T}^{az}$, as well as $\epsilon_R$ and $\epsilon_T$. A constraint fixing the scale of $F$ is necessary, since otherwise $\epsilon_R$ and $\epsilon_T$ could be driven down to 0 simply by setting all the entries of $F$, $\tilde{r}$ and each $\tilde{T}^{az}$ to 0. Since Eq. 7 multiplies $F$ and $\tilde{T}^{az}$, some constraints are nonlinear. However, it is possible to solve this optimization program by solving a series of LPs (linear programs). We alternate solving the LP that adjusts $\tilde{r}$ and $\tilde{T}^{az}$ while keeping $F$ fixed, and solving the LP that adjusts $F$ while keeping $\tilde{r}$ and $\tilde{T}^{az}$ fixed. This guarantees that the objective function decreases at each iteration and will converge, but not necessarily to a local optimum.
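The alternating scheme can be sketched as follows. The paper alternates two linear programs over the max-norm residuals of Eq. 5; for brevity this hypothetical variant minimizes squared residuals instead, so that each half-step reduces to a least-squares solve, and column normalization of $F$ stands in for the scale constraint. All names are ours.

```python
import numpy as np

def lossy_compression(r, Ts, k, iters=10, seed=0):
    """Alternating-minimization sketch of the program in Table 1
    (least-squares stand-in for the paper's max-norm LPs)."""
    rng = np.random.default_rng(seed)
    n = r.shape[0]
    F = rng.standard_normal((n, k))
    F /= np.linalg.norm(F, axis=0)        # scale constraint on F's columns
    for _ in range(iters):
        # Half-step 1: F fixed -> fit compressed reward r~ and dynamics T~.
        r_t = np.linalg.lstsq(F, r, rcond=None)[0]
        Ts_t = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
        # Half-step 2: r~, T~ fixed -> refit F. Both residuals of Eq. 5 are
        # linear in vec(F) (row-major): F r~ <-> kron(I, r~) vec(F), and
        # T F - F T~ <-> (kron(T, I) - kron(I, T~^T)) vec(F).
        A = [np.kron(np.eye(n), r_t)]
        b = [r]
        for T, T_t in zip(Ts, Ts_t):
            A.append(np.kron(T, np.eye(k)) - np.kron(np.eye(n), T_t.T))
            b.append(np.zeros(n * k))
        vecF = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
        F = vecF.reshape(n, k)
        F /= np.linalg.norm(F, axis=0)
    r_t = np.linalg.lstsq(F, r, rcond=None)[0]
    Ts_t = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
    res = max(np.max(np.abs(F @ r_t - r)),
              max(np.max(np.abs(T @ F - F @ T_t))
                  for T, T_t in zip(Ts, Ts_t)))
    return F, r_t, Ts_t, res

# Sanity check: with k equal to the true dimension, the residual vanishes.
r = np.array([1.0, 0.0])
Ts = [np.array([[0.9, 0.1], [0.2, 0.8]])]
F, r_t, Ts_t, res = lossy_compression(r, Ts, k=2)
```

As in the paper's scheme, each half-step only improves its own objective; the normalization step means the squared-error variant is not strictly monotone, which is part of why the LP formulation with an explicit scale constraint is preferable.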
4.1 Max-norm Error Bound
The quality of the compression resulting from this program depends on the weights $\alpha$ and $\beta$. Ideally, we would like to set $\alpha$ and $\beta$ in a way that represents the loss in decision quality associated with compressing the state space. If we can bound the error $\delta_\pi$ of evaluating any policy $\pi$ using the compressed POMDP, then the difference in expected total return between the policy that is best w.r.t. the compressed POMDP and the true optimal policy is at most $2 \max_\pi \delta_\pi$. Let $\delta_\pi$ be $\| V^\pi - \tilde{V}^\pi \circ f \|_\infty$. Theorem 3 gives an upper bound on $\delta_\pi$ as a linear combination of the max-norm residual errors in Eq. 5.
Theorem 3 Let $\epsilon_R = \| r - F\,\tilde{r} \|_\infty$, $\epsilon_T = \max_{a,z} \| T^{az} F - F\,\tilde{T}^{az} \|_\infty$ and $\tilde{V} = \max_\pi \| \tilde{V}^\pi \|_\infty$. Then

$$\delta_\pi \;\le\; \frac{\epsilon_R + \gamma\, |Z|\, \tilde{V}\, \epsilon_T}{1 - \gamma}.$$
We omit the proof due to lack of space. It essentially consists of a sequence of substitutions of the same type as in the proof of Theorem 1. We suspect that the above error bound will grossly overestimate the loss in decision quality; however, we intend to use it mostly as a guide for setting $\alpha$ and $\beta$. The coefficient of $\epsilon_T$ is typically much greater than that of $\epsilon_R$, which means that $\epsilon_T$ has a much higher impact on the loss in decision quality than $\epsilon_R$. Intuitively, this makes sense because the error in predicting the next compressed belief state may compound over time, so we should set $\beta$ significantly higher than $\alpha$.
4.2 Structured Compressions
As with lossless compressions, solving the program in Table 1 may be intractable due to the size of $F$: the numbers of constraints and of unknown entries in matrix $F$ grow with the size of the state space.³ We describe several techniques that allow one to exploit problem structure to find an acceptable lossy compression without state space enumeration.

One approach is related to the basis function model proposed in [4], in which we restrict the columns of $F$ to be functions over some small set of factors (subsets of state variables). This ensures that the number of unknown parameters in any column of $F$ (which we optimize in Table 1) is linear in the number of instantiations of each factor. By keeping factors small, we maintain a manageable set of unknowns. To deal with the exponentially many constraints, we can exploit the structure imposed on $F$ and the DBN structure to reduce the number of constraints to something (in many cases) polynomial in the number of state variables. This can be achieved using the techniques described in [4, 16] to rewrite an LP with many fewer constraints or to generate small subsets of constraints incrementally. These techniques are rather involved, so we refer to the cited papers for details.

³Assuming the compressed dimensionality is small, the unknowns in each $\tilde{T}^{az}$ and in $\tilde{r}$ are unproblematic.
By searching within a restricted set of structured compressions and by exploiting DBN structure it is possible to efficiently solve the optimization program in Table 1. The question of factor selection remains: on what factors should the columns of $F$ be defined? A version of this question has been tackled in [12, 14] in the context of selecting a basis to approximately solve MDPs. The techniques proposed in those papers could be adapted to our optimization program.
An alternative method for structuring the computation of $F$ involves additive separability. Let $S_j$ ($1 \le j \le m$) be subsets of variables, and let $f_j$ be a function over $S_j$ and the compressed state space. We restrict each column of $F$ to be a separable function of the $S_j$; that is, the entry of a column corresponding to state $s$ is a weighted sum $\sum_j w_j f_j$ of the component functions evaluated at $s$, for some parameters $w_j$. Here the $w_j$ can be viewed as weights indicating the importance of the contribution of each $f_j$ in the separable function. Given a family of subsets, the parameters over which we optimize to determine $F$ are now the $w_j$ and the entries of each function $f_j$. While nonlinear, the same alternating minimization scheme described earlier can be used to optimize these two classes of parameters of $F$ in turn. Note that the number of variables is dependent only on the size of the subsets $S_j$ and the compressed state space. Furthermore, this form of additive separability lends itself to the same compact constraint generation techniques mentioned above. Finally, the (discrete) search for decent subsets $S_j$ can be interleaved with optimization of the compression mapping for fixed sets $S_j$.
5 Preliminary Experiments
We report on preliminary experiments with the coffee problem described in [2]. Given its
relatively small size (32 states, 3 observations and 2 actions), these results should be viewed
as simply illustrating the feasibility and potential of the algorithms proposed in Secs. 3.1
and 4.1. Further experiments for the structured versions (Secs. 3.2 and 4.2) are necessary
to assess the degree of compression achievable with large, realistic problems.
The 32-dimensional belief space can be compressed without any loss to a 7-dimensional subspace using the Krylov subspace algorithm described in Section 3.1. For further compression, we applied the optimization program described in Table 1 (with the weight $\beta$ set higher than $\alpha$, as suggested in Section 4.1). The alternating variable technique was iterated a fixed number of times, with the best solution chosen from several random restarts (to mitigate the effects of local optima). Figure 2 shows the loss in expected return (w.r.t. the optimal policy) when a policy computed using varying degrees of compression is executed for a fixed number of stages. The loss is sampled from 100,000 random initial belief states, averaged over 10 runs. These policies manage to achieve expected returns with a loss of only a few percent. In contrast, the average loss of a random policy is far greater.
[Plot: average absolute loss (left axis, 0 to 0.3) and average relative loss (right axis, 0% to 3%) against the dimensionality of the compressed space (3 to 7).]

Figure 2: Average loss for various lossy compressions

6 Concluding Remarks

We have presented an in-depth theoretical analysis of the impact of static compressions on decision quality. We derived a set of conditions that guarantee compression does not impair decision quality, leading to interesting mathematical properties for linear compressions that allow us to exploit structure in the POMDP dynamics. We also proposed a simple optimization program to search for good lossy compressions. Preliminary results suggest
that significant compression can be achieved with little impact on decision quality.
This research can be extended in various directions. It would be interesting to carry out a
similar analysis in terms of information theory (instead of linear algebra) since the problem
of identifying information in a belief state relevant to predicting future rewards can be modeled naturally using information theoretic concepts [6]. Dynamic compressions could also
be analyzed since, as we solve a POMDP, the set of reasonable policies shrinks, allowing
greater compression.
References
[1] C. Boutilier, R. Dearden, and M. Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121:49-107, 2000.
[2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp.1168-1175, Portland, OR, 1996.
[3] R. Givan, T. Dean, and M. Greig. Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, to appear, 2002.
[4] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. Proc. IJCAI-01, pp.673-680, Seattle, WA, 2001.
[5] C. Guestrin, D. Koller, and R. Parr. Solving factored POMDPs with linear value functions. IJCAI-01 Worksh. on Planning under Uncertainty and Inc. Info., Seattle, WA, 2001.
[6] C. Guestrin and D. Ormoneit. Information-theoretic features for reinforcement learning. Unpublished manuscript.
[7] J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier. SPUDD: Stochastic planning using decision diagrams. Proc. UAI-99, pp.279-288, Stockholm, 1999.
[8] M. L. Littman. Memoryless policies: theoretical limitations and practical results. In D. Cliff, P. Husbands, J. Meyer, S. W. Wilson, eds., Proc. 3rd Intl. Conf. Sim. of Adaptive Behavior, Cambridge, 1994. MIT Press.
[9] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. Proc. NIPS-02, Vancouver, 2001.
[10] R. A. McCallum. Hidden state and reinforcement learning with instance-based state identification. IEEE Transactions on Systems, Man, and Cybernetics, 26(3):464-473, 1996.
[11] K. Murphy. A survey of POMDP solution techniques. Technical Report, U.C. Berkeley, 2000.
[12] R. Patrascu, P. Poupart, D. Schuurmans, C. Boutilier, C. Guestrin. Greedy linear value-approximation for factored Markov decision processes. AAAI-02, pp.285-291, Edmonton, 2002.
[13] A. Pfeffer. Sufficiency, separability and temporal probabilistic models. Proc. UAI-01, pp.421-428, Seattle, WA, 2001.
[14] P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans. Piecewise linear value function approximation for factored MDPs. AAAI-02, pp.292-299, Edmonton, 2002.
[15] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS, Boston, 1996.
[16] D. Schuurmans and R. Patrascu. Direct value-approximation for factored MDPs. Proc. NIPS-01, Vancouver, 2001.
[17] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. 37th Annual Allerton Conf. on Comm., Contr. and Computing, pp.368-377, 1999.
theorem:4 down:1 pws:1 specific:4 exists:1 intractable:1 sequential:1 effectively:1 importance:1 illustrates:1 horizon:1 boston:1 simply:2 likely:1 infinitely:1 desire:1 contained:1 partially:2 patrascu:3 corresponds:2 truth:2 conditional:3 viewed:7 man:1 transations:1 change:1 infinite:1 uniformly:1 total:1 indicating:1 select:1 formally:1 goldszmidt:1 evaluate:2 |
Hyperkernels
Cheng Soon Ong, Alexander J. Smola, Robert C. Williamson
Research School of Information Sciences and Engineering
The Australian National University
Canberra, 0200 ACT, Australia
{Cheng.Ong, Alex.Smola, Bob.Williamson}@anu.edu.au
Abstract
We consider the problem of choosing a kernel suitable for estimation
using a Gaussian Process estimator or a Support Vector Machine. A
novel solution is presented which involves defining a Reproducing Kernel Hilbert Space on the space of kernels itself. By utilizing an analog
of the classical representer theorem, the problem of choosing a kernel
from a parameterized family of kernels (e.g. of varying width) is reduced
to a statistical estimation problem akin to the problem of minimizing a
regularized risk functional. Various classical settings for model or kernel
selection are special cases of our framework.
1 Introduction
Choosing suitable kernel functions for estimation using Gaussian Processes and Support
Vector Machines is an important step in the inference process. To date, there are few if
any systematic techniques to assist in this choice. Even the restricted problem of choosing
the "width" of a parameterized family of kernels (e.g. Gaussian) has not had a simple and
elegant solution.
A recent development [1] which solves the above problem in a restricted sense involves
the use of semidefinite programming to learn an arbitrary positive semidefinite matrix K,
subject to minimization of criteria such as the kernel target alignment [1], the maximum of
the posterior probability [2], the minimization of a learning-theoretical bound [3], or subject
to cross-validation settings [4]. The restriction mentioned is that the methods work with the
kernel matrix, rather than the kernel itself. Furthermore, whilst demonstrably improving the
performance of estimators to some degree, they require clever parameterization and design
to make the method work in the particular situations. There are still no general principles to
guide the choice of a) which family of kernels to choose, b) efficient parameterizations over
this space, and c) suitable penalty terms to combat overfitting. (The last point is particularly
an issue when we have a very large set of semidefinite matrices at our disposal).
Whilst not yet providing a complete solution to these problems, this paper presents a framework that allows the optimization within a parameterized family relatively simply, and crucially, intrinsically captures the tradeoff between the size of the family of kernels and the
sample size available. Furthermore, the solution presented is for optimizing kernels themselves, rather than the kernel matrix as in [1]. Other approaches on learning the kernel
include using boosting [5] and by bounding the Rademacher complexity [6].
Outline of the Paper We show (Section 2) that for most kernel-based learning methods
there exists a functional, the quality functional¹, which plays a similar role to the empirical risk functional, and that subsequently (Section 3) the introduction of a kernel on kernels, a so-called hyperkernel, in conjunction with regularization on the Reproducing Kernel Hilbert Space formed on kernels leads to a systematic way of parameterizing function
classes whilst managing overfitting. We give several examples of hyperkernels (Section 4)
and show (Section 5) how they can be used practically. Due to space constraints we only
consider Support Vector classification.
2 Quality Functionals
Let X = {x_1, ..., x_m} denote the set of training data and Y = {y_1, ..., y_m} the set of corresponding labels, jointly drawn iid from some probability distribution Pr(x, y) on X × Y. Furthermore, let X' and Y' denote the corresponding test sets (drawn from the same Pr(x, y)).

We introduce a new class of functionals Q on data which we call quality functionals. Their purpose is to indicate, given a kernel k and the training data (X, Y), how suitable the kernel is for explaining the training data.

Definition 1 (Empirical Quality Functional) Given a kernel k and data X, Y, define Q_emp(k, X, Y) to be an empirical quality functional if it depends on k only via k(x_i, x_j) where x_i, x_j ∈ X; i.e. if there exists a function q such that Q_emp(k, X, Y) = q(K, X, Y), where K = [k(x_i, x_j)]_ij is the kernel matrix.

The basic idea is that Q_emp could be used to adapt k in a manner such that Q_emp is minimized, based on this single dataset X, Y. Given a sufficiently rich class K of kernels k it is in general possible to find a kernel k* ∈ K that attains arbitrarily small values of Q_emp(k*, X, Y) for any training set. However, it is very unlikely that Q_emp(k*, X', Y') would be similarly small in general. Analogously to the standard methods of statistical learning theory, we aim to minimize the expected quality functional:

Definition 2 (Expected Quality Functional) Suppose Q_emp is an empirical quality functional. Then

    Q(k) = E_{X,Y}[Q_emp(k, X, Y)]                                        (1)

is the expected quality functional, where the expectation is taken with respect to Pr(x, y).

Note the similarity between Q(k) and the empirical risk of an estimator, R_emp(f, X, Y) = (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) (where l is a suitable loss function): in both cases we compute the value of a functional which depends on some sample drawn from Pr(x, y) and a function, and in both cases we have

    Q(k) = E_{X,Y}[Q_emp(k, X, Y)]  and  R(f) = E_{X,Y}[R_emp(f, X, Y)].  (2)

Here R(f) is known as the expected risk. We now present some examples of quality functionals, and derive their exact minimizers whenever possible.
Example 1 (Kernel Target Alignment) This quality functional was introduced in [7] to assess the "alignment" of a kernel with training labels. It is defined by

    Q_emp^alignment(k, X, Y) = 1 - (y^T K y) / (‖y‖_2^2 ‖K‖_F)            (3)

where y denotes the vector of elements of Y, ‖y‖_2 denotes the ℓ_2 norm of y, and ‖K‖_F is the Frobenius norm, ‖K‖_F^2 = Σ_{i,j} k(x_i, x_j)^2. Note that the definition in [7] looks somewhat different, yet it is algebraically identical to (3).
¹ We actually mean badness, since we are minimizing this functional.
By decomposing K into its eigensystem, one can see that (3) is minimized if K = y y^T, in which case

    Q_emp^alignment(k*, X, Y) = 1 - (y^T y y^T y) / (‖y‖_2^2 ‖y y^T‖_F) = 1 - ‖y‖_2^4 / (‖y‖_2^2 ‖y‖_2^2) = 0.   (4)

It is clear that one cannot expect that Q_emp^alignment(k*, X', Y') = 0 for data other than the set chosen to determine k*.
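As a concrete illustration, the alignment quality in (3) is cheap to evaluate once the kernel matrix is in hand. The sketch below (NumPy; the function name is my own, not from the paper) also checks the K = y y^T minimizer discussed above:

```python
import numpy as np

def alignment_quality(K, y):
    """Empirical alignment quality (3): 1 - y'Ky / (||y||_2^2 ||K||_F)."""
    y = np.asarray(y, dtype=float)
    return 1.0 - (y @ K @ y) / (y @ y * np.linalg.norm(K, "fro"))

y = np.array([1.0, -1.0, 1.0, -1.0])
K_star = np.outer(y, y)              # the minimizer K* = y y^T from (4)
print(alignment_quality(K_star, y))  # -> 0.0
```

For a positive semidefinite K the value never exceeds 1, and it reaches 0 exactly when K is proportional to y y^T.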
Example 2 (Regularized Risk Functional) If H is the Reproducing Kernel Hilbert Space (RKHS) associated with the kernel k, the regularized risk functionals have the form

    R_reg(f, X, Y) = (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) + (λ/2) ‖f‖_H^2   (5)

where ‖f‖_H is the RKHS norm of f ∈ H. By virtue of the representer theorem (see e.g., [4, 8]) we know that the minimizer over f ∈ H of (5) can be written as a kernel expansion. For a given loss l this leads to the quality functional

    Q_emp^regrisk(k, X, Y) = min_{α ∈ R^m} [ (1/m) Σ_{i=1}^m l(x_i, y_i, [Kα]_i) + (λ/2) α^T K α ].   (6)

The minimizer of (6) is more difficult to find, since we have to carry out a double minimization over K and α. First, note that for K = β y y^T and α = y / (β ‖y‖_2^2) we have Kα = y and α^T K α = 1/β. Thus Q_emp^regrisk(k, X, Y) ≤ λ/(2β), and for sufficiently large β we can make Q_emp^regrisk(k, X, Y) arbitrarily close to 0.

Even if we disallow setting K = β y y^T, we can determine the minimum of (6) as follows. Set K = v v^T / ‖v‖_2^2, where v ∈ R^m, and α = v. Then Kα = v and α^T K α = ‖v‖_2^2, and so

    (1/m) Σ_{i=1}^m l(x_i, y_i, v_i) + (λ/2) α^T K α = (1/m) Σ_{i=1}^m l(x_i, y_i, v_i) + (λ/2) ‖v‖_2^2.

Choosing each v_i = argmin_ξ [ (1/m) l(x_i, y_i, ξ) + (λ/2) ξ^2 ] yields the minimum with respect to v. The proof that this is the global minimizer of this quality functional is omitted for brevity.
Example 3 (Negative Log-Posterior) In Gaussian processes, this functional is similar to R_reg(f, X, Y), since it includes a regularization term (the negative log prior) and a loss term (the negative log-likelihood). In addition, it also includes the log-determinant of K, which measures the size of the space spanned by K. The quality functional is

    Q_emp^GP(k, X, Y) = min_{f ∈ R^m} [ (1/2) log det K + (1/2) f^T K^{-1} f + Σ_{i=1}^m l(x_i, y_i, f_i) ].   (7)

Note that any K which does not have full rank will send (7) to -∞, and thus such cases need to be excluded. When we fix det K, to exclude the above case, we can still concentrate K on the label direction, e.g. by setting

    K = β y y^T / ‖y‖_2^2 + K_⊥,                                           (8)

where K_⊥ acts on the orthogonal complement of span{y} and is scaled so that det K stays fixed; this leads to y^T K^{-1} y = ‖y‖_2^2 / β. Under the assumption that the minimum of Σ_i l(x_i, y_i, f_i) with respect to f is attained at f = y, we can see that β → ∞ still leads to the overall minimum of Q_emp^GP(k, X, Y).
Other examples, such as cross-validation, leave-one-out estimators, the Luckiness framework, and the Radius-Margin bound, also have empirical quality functionals which can be arbitrarily minimized.
The above examples illustrate how many existing methods for assessing the quality of a kernel fit within the quality functional framework. We also saw that given a rich enough class of kernels K, optimization of Q_emp over K would result in a kernel that would be useless for prediction purposes. This is yet another example of the danger of optimizing too much: there is (still) no free lunch.
3 A Hyper Reproducing Kernel Hilbert Space
We now introduce a method for optimizing quality functionals in an effective way. The method we propose involves the introduction of a Reproducing Kernel Hilbert Space on the kernel itself, a "Hyper"-RKHS. We begin with the basic properties of an RKHS (see Def 2.9 and Thm 4.2 in [8] and citations for more details).
Definition 3 (Reproducing Kernel Hilbert Space) Let X be a nonempty set (often called the index set) and denote by H a Hilbert space of functions f: X → R. Then H is called a reproducing kernel Hilbert space endowed with the dot product ⟨·, ·⟩ (and the norm ‖f‖ := √⟨f, f⟩) if there exists a function k: X × X → R satisfying:

1. k has the reproducing property ⟨f, k(x, ·)⟩ = f(x) for all f ∈ H; in particular, ⟨k(x, ·), k(x', ·)⟩ = k(x, x').
2. k spans H, i.e. H = span{k(x, ·) | x ∈ X}, where the bar denotes the completion of the span.

The advantage of optimization in an RKHS is that under certain conditions the optimal solutions can be found as the linear combination of a finite number of basis functions, regardless of the dimensionality of the space H, as can be seen in the theorem below.

Theorem 4 (Representer Theorem) Denote by X a set, by Ω: [0, ∞) → R a strictly monotonic increasing function, and by l an arbitrary loss function. Then each minimizer f ∈ H of the regularized risk

    l((x_1, y_1, f(x_1)), ..., (x_m, y_m, f(x_m))) + Ω(‖f‖_H)             (9)

admits a representation of the form f(x) = Σ_{i=1}^m α_i k(x_i, x).

The above definition allows us to define an RKHS on kernels k: X × X → R, simply by introducing X̄ := X × X and by treating k as functions k: X̄ → R.

Definition 5 (Hyper Reproducing Kernel Hilbert Space) Let X be a nonempty set and let X̄ := X × X (the compounded index set). Then the Hilbert space H̄ of functions k: X̄ → R, endowed with a dot product ⟨·, ·⟩ (and the norm ‖k‖ := √⟨k, k⟩), is called a Hyper Reproducing Kernel Hilbert Space if there exists a hyperkernel k̄: X̄ × X̄ → R with the following properties:

1. k̄ has the reproducing property ⟨k, k̄(x̄, ·)⟩ = k(x̄) for all k ∈ H̄; in particular, ⟨k̄(x̄, ·), k̄(x̄', ·)⟩ = k̄(x̄, x̄').
2. k̄ spans H̄, i.e. H̄ = span{k̄(x̄, ·) | x̄ ∈ X̄}.
3. The hyperkernel k̄ is a kernel in its second argument, i.e. for any fixed x̄ ∈ X̄, the function k(x, x') := k̄(x̄, (x, x')) with x, x' ∈ X is a kernel.

What distinguishes H̄ from a normal RKHS is the particular form of its index set (X̄ = X²) and the additional condition on k̄ to be a kernel in its second argument for any fixed first argument. This condition somewhat limits the choice of possible kernels. On the other hand, it allows for simple optimization algorithms which consider kernels k ∈ H̄, which are in the convex cone of k̄. Analogously to the definition of the regularized risk functional (5), we define the regularized quality functional:

    Q_reg(k, X, Y) := Q_emp(k, X, Y) + (λ_Q/2) ‖k‖_H̄^2                   (10)

where λ_Q > 0 is a regularization constant and ‖k‖_H̄ denotes the RKHS norm in H̄. Minimization of Q_reg is less prone to overfitting than minimizing Q_emp, since the regularization term (λ_Q/2)‖k‖_H̄^2 effectively controls the complexity of the class of kernels under consideration. Regularizers other than (λ_Q/2)‖k‖_H̄^2 are also possible. The question arising immediately from (10) is how to minimize the regularized quality functional efficiently. In the following we show that the minimum can be found as a linear combination of hyperkernels.

Corollary 6 (Representer Theorem for Hyper-RKHS) Let H̄ be a hyper-RKHS and denote by Ω a strictly monotonic increasing function, by X a set, and by Q an arbitrary quality functional. Then each minimizer k ∈ H̄ of the regularized quality functional

    Q(k, X, Y) + Ω(‖k‖_H̄)                                                (11)

admits a representation of the form k(x, x') = Σ_{i,j=1}^m β_ij k̄((x_i, x_j), (x, x')).

Proof All we need to do is rewrite (11) so that it satisfies the conditions of Theorem 4. Let x̄_ij := (x_i, x_j). Then Q(k, X, Y) has the properties of a loss function, as it only depends on k via its values at the pairs x̄_ij. Furthermore, Ω(‖k‖_H̄) is an RKHS regularizer, so the representer theorem applies and the expansion of k follows.

This result shows that even though we are optimizing over an entire (potentially infinite dimensional) Hilbert space of kernels, we are able to find the optimal solution by choosing among a finite dimensional subspace. The dimension required (m²) is, not surprisingly, significantly larger than the number of kernels (m) required in a kernel function expansion, which makes a direct approach possible only for small problems. However, sparse expansion techniques, such as [9, 8], can be used to make the problem tractable in practice.
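To make the corollary concrete, here is a small sketch (function names mine, not the paper's) of evaluating a kernel given as a finite hyperkernel expansion k(x, x') = Σ_{i,j} β_ij k̄((x_i, x_j), (x, x')), using a simple illustrative product-form hyperkernel:

```python
import numpy as np

def rbf(x, xp, sigma=1.0):
    """Gaussian RBF base kernel."""
    return float(np.exp(-sigma * np.sum((np.asarray(x) - np.asarray(xp)) ** 2)))

def hyperk(xbar, xbar2, sigma=1.0):
    """An illustrative rank-one hyperkernel k_bar(xbar, xbar') = g(xbar) g(xbar')
    with g((u, v)) = k(u, v); any valid hyperkernel could be plugged in here."""
    (u, v), (u2, v2) = xbar, xbar2
    return rbf(u, v, sigma) * rbf(u2, v2, sigma)

def kernel_from_expansion(beta, X, x, xp):
    """k(x, x') = sum_ij beta[i, j] * k_bar((X[i], X[j]), (x, x'))."""
    m = len(X)
    return sum(beta[i, j] * hyperk((X[i], X[j]), (x, xp))
               for i in range(m) for j in range(m))

X = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
beta = np.ones((3, 3)) / 9.0  # nonnegative coefficients keep k in the convex cone
print(kernel_from_expansion(beta, X, X[0], X[1]))
```

With nonnegative β and a hyperkernel satisfying property 3 of Definition 5, the resulting k is itself a valid kernel.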
4 Examples of Hyperkernels
Having introduced the theoretical basis of the Hyper-RKHS, we need to answer the question whether practically useful hyperkernels k̄ exist which satisfy the conditions of Definition 5. We address this question by giving a set of general recipes for building such kernels.
Example 4 (Power Series Construction) Denote by k a positive semidefinite kernel, and by g: R → R a function with positive Taylor expansion coefficients, g(ξ) = Σ_{i=0}^∞ c_i ξ^i with c_i ≥ 0, and convergence radius R. Then for x̄ = (x, x') and x̄' = (x'', x''') with |k(x, x') k(x'', x''')| < R we have that

    k̄(x̄, x̄') := g(k(x, x') k(x'', x''')) = Σ_{i=0}^∞ c_i (k(x, x') k(x'', x'''))^i   (12)

is a hyperkernel: for any fixed x̄, k̄(x̄, (·, ·)) is a sum of kernel functions, hence it is a kernel itself (since k^i(x, x') is a kernel if k is). To show that k̄ is a kernel in its compound arguments, note that k̄(x̄, x̄') = ⟨Φ(x̄), Φ(x̄')⟩, where Φ(x̄) = (√c_0, √c_1 k(x, x'), √c_2 k²(x, x'), ...).

Example 5 (Harmonic Hyperkernel) A special case of (12) is the harmonic hyperkernel: denote by k a kernel with range [0, 1] (e.g., RBF kernels satisfy this property), and set c_i := (1 - λ_h) λ_h^i for some 0 < λ_h < 1. Then we have

    k̄((x, x'), (x'', x''')) = (1 - λ_h) Σ_{i=0}^∞ (λ_h k(x, x') k(x'', x'''))^i = (1 - λ_h) / (1 - λ_h k(x, x') k(x'', x''')).   (13)

Example 6 (Gaussian Harmonic Hyperkernel) For k(x, x') = exp(-σ‖x - x'‖²),

    k̄((x, x'), (x'', x''')) = (1 - λ_h) / (1 - λ_h exp(-σ(‖x - x'‖² + ‖x'' - x'''‖²))).   (14)

For λ_h → 1, the expression ‖k‖² converges to the Frobenius norm of k on X × X.
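Assuming a Gaussian RBF base kernel, the closed form in (14) is one line of code. The sketch below (function names mine) also verifies it against a truncation of the power series (12) with the geometric coefficients c_i = (1 - λ_h) λ_h^i:

```python
import numpy as np

def rbf(x, xp, sigma=1.0):
    """Gaussian RBF base kernel with values in (0, 1]."""
    return float(np.exp(-sigma * np.sum((np.asarray(x) - np.asarray(xp)) ** 2)))

def gaussian_harmonic(xbar, xbar2, lam=0.6, sigma=1.0):
    """Closed form (14): (1 - lam) / (1 - lam * k(x,x') * k(x'',x''')),
    with lam = 0.6 used here only as an illustrative default."""
    (x, xp), (x2, x2p) = xbar, xbar2
    return (1.0 - lam) / (1.0 - lam * rbf(x, xp, sigma) * rbf(x2, x2p, sigma))

def series_version(xbar, xbar2, lam=0.6, sigma=1.0, terms=200):
    """Truncation of (12) with geometric coefficients c_i = (1 - lam) * lam**i."""
    (x, xp), (x2, x2p) = xbar, xbar2
    t = rbf(x, xp, sigma) * rbf(x2, x2p, sigma)
    return (1.0 - lam) * sum(lam**i * t**i for i in range(terms))
```

At identical input pairs both base-kernel values are 1, so the hyperkernel evaluates to (1 - λ_h)/(1 - λ_h) = 1 regardless of λ_h.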
We can find further hyperkernels simply by consulting tables of power series of functions. Table 1 contains a list of suitable expansions. Recall that expansions such as (12) were mainly chosen for computational convenience, in particular whenever it is not clear which particular class of kernels would be useful for the expansion.

Table 1: Examples of Hyperkernels (admissible power series expansions g(ξ) for the construction in (12), together with their convergence radii)
Example 7 (Explicit Construction) If we know or have a reasonable guess as to which kernels could be potentially relevant (e.g., a range of scales of kernel width, polynomial degrees, etc.), we may begin with a set of candidate kernels k_1, ..., k_n and define

    k̄(x̄, x̄') := Σ_{i=1}^n c_i k_i(x̄) k_i(x̄'),  with c_i ≥ 0.           (15)

Clearly k̄ is a hyperkernel, since k̄(x̄, x̄') = ⟨Φ(x̄), Φ(x̄')⟩ where Φ(x̄) := (√c_1 k_1(x̄), ..., √c_n k_n(x̄)), and for fixed x̄ the function k̄(x̄, ·) is a nonnegative combination of the kernels k_i.
5 An Application: Minimization of the Regularized Risk
Recall that in the case of the Regularized Risk functional, the regularized quality optimization problem takes on the form

    min_{k ∈ H̄} min_{f ∈ H_k} (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) + (λ/2) ‖f‖_{H_k}^2 + (λ_Q/2) ‖k‖_H̄^2.   (16)

For f = Σ_i α_i k(x_i, ·), the second term ‖f‖_{H_k}^2 = α^T K α is a linear function of k. Given a convex loss function l, the regularized quality functional (16) is convex in k. The corresponding regularized quality functional is:

    Q_reg^regrisk(k, X, Y) = Q_emp^regrisk(k, X, Y) + (λ_Q/2) ‖k‖_H̄^2.   (17)

For fixed k, the problem can be formulated as a constrained minimization problem in f, and subsequently expressed in terms of the Lagrange multipliers α. However, this minimum depends on k, and for efficient minimization we would like to compute the derivatives with respect to k. The following lemma tells us how (it is an extension of a result in [3] and we omit the proof for brevity):
Lemma 7 Let f: X × Θ → R and c_i: X → R be convex functions, where f is parameterized by θ ∈ Θ. Let F̄(θ) be the minimum of the following optimization problem (and denote by x̄(θ) its minimizer):

    minimize_x f(x, θ)  subject to  c_i(x) ≤ 0  for all 1 ≤ i ≤ n.        (18)

Then D_θ F̄(θ) = D_2 f(x̄(θ), θ), where D_2 denotes the derivative with respect to the second argument of f.
Since the minimizer of (17) can be written as a kernel expansion (by the representer theorem for Hyper-RKHS), the optimal regularized quality functional can be written, using the soft margin loss and the expansion k(x, x') = Σ_{i,j} β_ij k̄((x_i, x_j), (x, x')), as a function of the coefficients α and β alone:

    Q_reg^regrisk(α, β) = (1/m) Σ_{r=1}^m max(0, 1 - y_r [K_β α]_r) + (λ/2) α^T K_β α
                          + (λ_Q/2) Σ_{i,j,p,q} β_ij β_pq k̄((x_i, x_j), (x_p, x_q)),   (19)

where [K_β]_rs = Σ_{i,j} β_ij k̄((x_i, x_j), (x_r, x_s)).

Minimization of (19) is achieved by alternating between minimization over α for fixed β (this is a quadratic optimization problem), and subsequently minimization over β (with β ≥ 0 to ensure positivity of the kernel matrix) for fixed α.
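A minimal sketch of this alternation follows. It is my own simplification, not the paper's algorithm: it uses squared loss instead of the soft margin loss so that the α-step is a plain linear solve, a fixed projected-gradient step for β, and hypothetical names throughout:

```python
import numpy as np

def objective(alpha, beta, blocks, y, lam, lam_q):
    """(1/m)||K a - y||^2 + (lam/2) a'K a + (lam_q/2)||beta||^2, K = sum_q beta_q B_q."""
    m = len(y)
    K = np.einsum("q,qij->ij", beta, blocks)
    r = K @ alpha - y
    return r @ r / m + 0.5 * lam * alpha @ K @ alpha + 0.5 * lam_q * beta @ beta

def alternate(blocks, y, lam=0.1, lam_q=0.1, iters=5, step=0.01):
    p, m, _ = blocks.shape
    beta, alpha = np.ones(p) / p, np.zeros(m)
    for _ in range(iters):
        K = np.einsum("q,qij->ij", beta, blocks)
        # alpha-step: exact minimizer of the convex quadratic in alpha
        alpha = np.linalg.solve(K + 0.5 * lam * m * np.eye(m), y)
        # beta-step: one projected gradient step, clipped at 0 to keep K psd
        r = K @ alpha - y
        grad = np.array([2.0 / m * r @ (B @ alpha) + 0.5 * lam * alpha @ B @ alpha
                         for B in blocks]) + lam_q * beta
        beta = np.maximum(beta - step * grad, 0.0)
    return alpha, beta
```

The α-step solves (K + (λm/2)I)α = y, which is a stationary point of the convex quadratic in α and hence a global minimizer for fixed β.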
Low Rank Approximation While being finite in the number of parameters (despite the optimization over two possibly infinite dimensional Hilbert spaces H and H̄), (19) still presents a formidable optimization problem in practice (we have m² coefficients for β). For an explicit expansion of type (15) we can optimize the expansion coefficients of k directly, which means that we simply have a quality functional with an ℓ_2 penalty on the expansion coefficients. Such an approach is recommended if there are few terms in (15). In the general case (or if m² is large), we resort to a low-rank approximation, as described in [9, 8]. This means that we pick from the m² terms k̄((x_i, x_j), ·) a small fraction which approximate k on X̄ sufficiently well.
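One standard way to realize such a low-rank approximation, in the spirit of [9], is a greedy pivoted (incomplete) Cholesky factorization that selects a small number of columns of a positive semidefinite matrix. A sketch (hypothetical helper, not code from the paper):

```python
import numpy as np

def pivoted_chol(K, rank, tol=1e-10):
    """Greedy pivoted Cholesky: K ~ L @ L.T with at most `rank` columns,
    each built from one pivot column of K (largest residual diagonal first)."""
    m = K.shape[0]
    d = np.array(np.diag(K), dtype=float)  # residual diagonal
    L = np.zeros((m, rank))
    pivots = []
    for j in range(rank):
        i = int(np.argmax(d))
        if d[i] <= tol:                    # remaining residual is numerically zero
            return L[:, :j], pivots
        pivots.append(i)
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(d[i])
        d -= L[:, j] ** 2
    return L, pivots
```

When the matrix has (numerical) rank r, the factorization is exact after r pivots, so the quadratic forms in (19) can be evaluated with the thin factor L instead of the full m² array.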
6 Experimental Results and Summary
Experimental Setup To test our claims of kernel adaptation via regularized quality functionals we performed preliminary tests on datasets from the UCI repository (Pima, Ionosphere, Wisconsin diagnostic breast cancer) and the USPS database of handwritten digits ("6" vs. "9"). The datasets were split into 60% training data and 40% test data, except for the USPS data, where the provided split was used. The experiments were repeated over 200 random 60/40 splits. We deliberately did not attempt to tune parameters and instead made the following choices uniformly for all four sets:
- The kernel width was set to 5d, where d is the dimensionality of the data. We deliberately chose a too large value in comparison with the usual rules of thumb [8] to avoid good default kernels.
- The regularization constant was adjusted so that the corresponding value in the Vapnik-style parameterization of SVMs was one that has commonly been reported to yield good results.
- λ_h for the Gaussian Harmonic Hyperkernel was chosen to be 0.6 throughout, giving adequate coverage over various kernel widths in (13) (small λ_h focuses almost exclusively on wide kernels; λ_h close to 1 will treat all widths equally).
- The hyperkernel regularization constant λ_Q was fixed to the same value for all data sets.

We compared the results with the performance of a generic Support Vector Machine with the same values chosen for width and regularization, and one for which these parameters had been hand-tuned using cross validation.
Results Despite the fact that we did not try to tune the parameters, we were able to achieve highly competitive results, as shown in Table 2. It is also worth noticing that the low-rank decomposition of the hyperkernel matrix typically contained fewer than 10 hyperkernels, thus rendering the optimization problem not much more costly than a standard Support Vector Machine (even with a very high quality approximation of k̄), and that after the optimization of (19), typically only a fraction of these were being used. This dramatically reduced the computational burden. Using the same non-optimized parameters for different data sets we achieved results comparable to other recent work on classification such as boosting, optimized SVMs, and kernel target alignment [10, 11, 7] (note that we use a much smaller part of the data for training:
Data (size)      R_reg Train   R_reg Test   Q_reg Train   Q_reg Test   Best in [10,11]   Tuned SVM
pima (768)       25.2±2.0      26.2±3.3     22.2±1.4      23.2±2.0     23.5              22.9±2.0
ionosph (351)    13.4±2.0      16.5±3.4     10.9±1.5      13.4±2.4     6.2               6.1±1.9
wdbc (569)       5.7±0.8       5.7±1.3      2.1±0.6       2.7±1.0      3.2               2.5±0.9
usps (1424)      2.1           3.4          1.5           2.8          NA                2.5

Table 2: Training and test error in percent
only 60%, rather than the substantially larger training fractions used there). Results based on Q_reg are comparable to hand-tuned SVMs (right most column), except for the ionosphere data. We suspect that this is due to the small training sample.
Summary and Outlook The regularized quality functional allows the systematic solution of problems associated with the choice of a kernel. Quality criteria that can be used include target alignment, regularized risk and the log posterior. The regularization implicit in our approach allows the control of overfitting that occurs if one optimizes over too large a class of kernels.

A very promising aspect of the current work is that it opens the way to theoretical analyses of the price one pays by optimizing over a larger set K of kernels. Current and future research is devoted to working through this analysis and subsequently developing methods for the design of good hyperkernels.
Acknowledgements This work was supported by a grant of the Australian Research
Council. The authors thank Grace Wahba for helpful comments and suggestions.
References
[1] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the
kernel matrix with semidefinite programming. In ICML. Morgan Kaufmann, 2002.
[2] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to
linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in
Graphical Models. Kluwer Academic, 1998.
[3] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing kernel parameters
for support vector machines. Machine Learning, 2002. Forthcoming.
[4] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional
Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[5] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In Advances in
Neural Information Processing Systems 15, 2002. In press.
[6] O. Bousquet and D. Herrmann. On the complexity of learning the kernel matrix. In
Advances in Neural Information Processing Systems 15, 2002. In press.
[7] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment.
Technical Report NC2-TR-2001-087, NeuroCOLT, http://www.neurocolt.com, 2001.
[8] B. Sch?olkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[9] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representation. Technical report, IBM Watson Research Center, New York, 2000.
[10] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In ICML,
pages 148-156. Morgan Kaufmann Publishers, 1996.
[11] G. R?atsch, T. Onoda, and K. R. M?uller. Soft margins for adaboost. Machine Learning,
42(3):287-320, 2001.
Informed Projections
David Cohn
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Low rank approximation techniques are widespread in pattern recognition research: they include Latent Semantic Analysis (LSA), Probabilistic LSA, Principal Components Analysis (PCA), the Generative Aspect Model, and many forms of bibliometric analysis. All make use of a
low-dimensional manifold onto which data are projected.
Such techniques are generally "unsupervised," which allows them to
model data in the absence of labels or categories. With many practical problems, however, some prior knowledge is available in the form
of context. In this paper, I describe a principled approach to incorporating such information, and demonstrate its application to PCA-based
approximations of several data sets.
1 Introduction
Many practical problems involve modeling large, high-dimensional data sets to uncover
similarities or latent structure. Linear low rank approximation techniques such as PCA [12],
LSA [5], PLSA [6] and generative aspect models [1] are powerful tools for approaching
these tasks. They identify (relatively) low-dimensional hyperplanes that best approximate
the data according to a given noise model. In doing so, they exploit and expose regularities
in the data: the hyperplanes represent a latent space whose dimensions are often observed
to correspond to distinct latent categories in the data set. For example, an LSA-derived
low-rank approximation to a corpus of news stories may have dimensions corresponding
to "politics," "finance," "sports," etc. Documents with the same inferred sources (therefore
"about" the same topic) generally lie close to each other in the latent space.
The broad applicability of these techniques comes from the fact that they are essentially
"unsupervised": a model is learned in the absence of labels indicating class or category
memberships. There are, however, many situations in which some prior information is
available; in these cases, we would like to have some way of using that information to
improve our model.
Nigam et al. [10] studied the problem of learning to classify data into pre-existing categories in the presence of labeled and unlabeled examples. Their approach augmented a
traditional supervised learning algorithm with distribution information made available from
the unlabeled data. In contrast, this paper considers a method for augmenting a traditional
unsupervised learning problem with the addition of equivalence classes.
Equivalence classes are a natural concept for many real-world problems. We frequently
have some reason for believing that a set of observations are similar in some sense without
wanting to or being able to say why they are similar. Note that the sets are not required to
be comprehensive: we may only have known associations between a handful of observations. Further, the sets are not required to
are similar, but there is no implication that members of two different sets are dissimilar.
In any case, the hope is that by indicating which observations are similar, we can bias our
model to focus on relevant features and to ignore differences that, while statistically significant, are not correlated with our idea of similarity in the problem at hand. This paper
describes an algorithm validating the use of this approach.
1.1 Related work
There is too large a literature examining the combination of supervised and unsupervised
learning to cover here; below I mention in passing some of the most relevant research.
In terms of conceptual similarity, multiple discriminant analysis (MDA) and oriented principal components analysis (OPCA) are techniques that attempt to maximize the fidelity of
a linear low rank approximation while minimizing the variance of data belonging to designated equivalence classes [2]. The difference with the approach discussed here is that
MDA and OPCA maximize a ratio of variances rather than a mixture; this is equivalent to
making the assumption that the covariance matrices for each set are tied. Another related
technique is multidimensional scaling (MDS) which, aside from sharing the ratio-based criterion, makes the added assumption that the precise degree of similarity (or dissimilarity)
of two data points is to be enforced. In general, which set of assumptions is best depends
on the problem at hand.
In terms of implementation, the present algorithm owes a great deal to the "shadow targets"
algorithm for Neuroscale [8, 15], whose eponymous data points enforce equivalence classes
on sets of (otherwise) unsupervised data. That algorithm trades fidelity of representation
against fidelity of equivalence classes much in the same way as Equation 4, although it
does so in the context of a Kohonen neural network instead of a linear mapping.
Another closely-related technique is CI-LSI [7], which uses latent semantic analysis for
cross-language retrieval. The technique involves training on text documents from a parallel
corpus for two or more languages (e.g. French and English), such that each document exists
as both an English and French version. In CI-LSI, each document is merged with its twin,
and the hyperplane is fit to the set of paired documents.
The goal of CI-LSI matches the goal of this paper, and the technique can in fact be seen as
a special case of the informed projections discussed here. By using the "mean" of a pair of
documents as a proxy for the documents themselves, we assert that the two come from a
common source; fitting a model to a collection of such means finds a maximum likelihood
solution subject to the constraint that both members of a pair comes from a common source.
2 Informed and uninformed projections
To introduce informed projections, I will first briefly review principal components analysis
(PCA) and an algorithm for efficiently computing the principal components of a data set.
2.1 PCA and EMPCA
Given a finite data set X ⊂ R^n, where each column corresponds to one observation, PCA
can be used to find a rank-m approximation X̂ (where m < n) which minimizes the sum
Figure 1: PCA maximizes the variance of the observations (on left), while an informed
projection minimizes variance of projections from observations belonging to the same set.
squared distortion with respect to X. It does this by identifying the m orthogonal directions
in which X exhibits the greatest variance, corresponding to the m largest eigenvectors
C = [C_1, ..., C_m]. X can then be projected onto the hyperplane defined by C as
X̂ = C(C^T C)^{-1} C^T X.    (1)
Although not strictly a generative model, PCA offers a probabilistic interpretation: C represents a maximum likelihood model of the data under the assumption that X consists of
(Gaussian) noise-corrupted observations taken from linear combinations of m sources in an
n-dimensional space. The values for X̂ then represent maximum likelihood estimates of the
mixtures responsible for the corresponding values in X.
Roweis [13] described an efficient iterative technique for identifying C using an EM procedure. Beginning with an arbitrary guess for C, the latent representation of X is computed
Y = (C^T C)^{-1} C^T X    (2)
after which C is updated to maximize the estimated likelihoods
C = X Y^T (Y Y^T)^{-1}.    (3)
Equations 2 and 3 are iterated until convergence (typically less than 10 iterations), at which
time the sum squared error of X̂'s approximation to X will have been minimized.
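To make the recursion concrete, here is a minimal numpy sketch of the EM iteration in Equations 2 and 3, together with the projection of Equation 1. The function names, random initialization, and fixed iteration count are my own choices, not the paper's:

```python
import numpy as np

def empca(X, m, n_iter=50, seed=0):
    """EM for PCA (Roweis [13]): alternate Eq. 2 (E-step) and Eq. 3 (M-step).

    X is n x N with one observation per column (assumed zero-mean);
    returns an n x m basis C spanning the fitted hyperplane.
    """
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((X.shape[0], m))   # arbitrary initial guess for C
    for _ in range(n_iter):
        Y = np.linalg.solve(C.T @ C, C.T @ X)  # Eq. 2: latent representation
        C = X @ Y.T @ np.linalg.inv(Y @ Y.T)   # Eq. 3: update the basis
    return C

def project(X, C):
    """Eq. 1: rank-m approximation of X on the hyperplane spanned by C."""
    return C @ np.linalg.solve(C.T @ C, C.T @ X)
```

On exactly rank-m data the reconstruction `project(X, C)` recovers X; on noisy data it minimizes the summed squared distortion.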
2.2 Informed projections
PCA only penalizes according to the squared distance of an observation x_i from its projection x̂_i. Given a Gaussian noise model, x̂_i is the maximum likelihood estimate of x_i's "source," which is the only constraint with which PCA is concerned.
If we believe that a set of observations S_i = {x_1, x_2, ..., x_n} have a common cause, then they
should share a common source. For a hyperplane defined by eigenvectors C, the maximum
likelihood source is the mean of S_i's projections onto C, denoted S̄_i. As such, the likelihood
should be penalized not only on the basis of the variance of observations around their
projections, Σ_j ||x_j - x̂_j||², but also the variance of the projections around their set means,
Σ_i Σ_{x_j ∈ S_i} ||x̂_j - S̄_i||².
These two penalty terms may be at odds with each other, so we must introduce a hyperparameter α representing how much weight to place on accurately reproducing the original
observations and how much to place on preserving the integrity of the known sets:
E_α = (1 - α) Σ_j ||x_j - x̂_j||² + α Σ_i Σ_{x_j ∈ S_i} ||x̂_j - S̄_i||².    (4)
When α = 0.5, Equation 4 is equivalent to minimizing Σ_i Σ_{x_j ∈ S_i} ||x_j - S̄_i||² under the assumption that all otherwise unaffiliated x_i are members of their own singleton sets. This is
just the squared distance from each observation to its projected cluster mean, which appears
to be the criterion CI-LSI minimizes by averaging documents.
2.3 Finding an informed projection
The error criterion in Equation 4 may be efficiently optimized with an expectation-maximization (EM) procedure based on Roweis' EMPCA [13], alternately computing estimated sources x̂ and maximizing the likelihoods of the observed data given those sources.
The likelihood of a set is maximized by minimizing the variance of projections from members of a set around their mean. This is at odds with the efforts of PCA to maximize
likelihood by maximizing the variance of projections from the data set at large. We can
make these forces work together by adding a "complement set" S̃_i for each set S_i such that
the variance of S_i's projections is minimized by maximizing the variance of S̃_i's projections.
The complement set may be determined analytically, but can also be computed efficiently
as an extra step between the "E" and "M" steps of the EM iteration. Given an observation
x_j ∈ S_i, the complement for x_j may be computed in terms of its projection x̂_j onto the
hyperplane and S̄_i, the mean of the set.
Figure 2: Location of a point's complement x̃_j with respect to its mean set projection S̄_i
and the current hyperplane.
In order to "pull" the current hyperplane in the direction that will minimize x_j's distance
from the set mean, the complement x̃_j must be positioned at a distance of ||x_j - x̂_j|| from the hyperplane
such that its projection lies along the line from S̄_i to x̂_j at a distance from S̄_i equal to ||x_j - x̂_j||.
With some geometric manipulation (Figure 2), it can be shown that
x̃_j = S̄_i + (x̂_j - S̄_i) · (||x̂_j - x_j|| / ||x̂_j - S̄_i||) + (x̂_j - x_j) · (||x̂_j - S̄_i|| / ||x̂_j - x_j||).
For efficiency, it is worth noting that by subtracting each set's mean from its constituent
observations, all sets may be combined into a single zero-mean "superset" S̃ from which
complements are computed.
Once the complement set has been computed, it can be appended to the original observations to create a joint data set, denoted X+ = [X | S̃],¹ and the "M" step of the EM procedure
is continued as before:
Y = (C^T C)^{-1} C^T X+,    C = X+ Y^T (Y Y^T)^{-1}.    (5)
Applying α to the optimization is straightforward: if we preprocess the data by subtracting
the mean of the observations (as is standard for PCA), the effect of each observation is to
¹ Since S̃ depends on the projections, and therefore the position of the hyperplane, it must be
recomputed with each iteration.
apply a "torque" to the current hyperplane around the origin. By multiplying all coordinates
of an observation by the same scalar, we scale the torque applied by the same amount. As
such, we can trade off the weight attached to enforcing the sets against the weight attached
to reconstructing the original data by multiplying S̃ and X by α and 1 - α respectively:
X_α+ = [(1 - α)X | α S̃]
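The complement construction and the α-weighted joint data set can be sketched as follows. The complement placement follows my reading of the equation in Section 2.3 (which is garbled in this copy): in-plane distance ||x̂_j - x_j|| from the set mean, off-plane distance ||x̂_j - S̄_i||, so that PCA's drive to spread the complements is exactly the drive to shrink the set variance. Treat the exact geometry as an assumption, and the function names as mine:

```python
import numpy as np

def complements(X, C, sets):
    """Complement points (Section 2.3 sketch): for x_j in set S_i, swap the
    in-plane and off-plane distances of x_j relative to the set mean.  This
    is a reconstruction of the paper's equation, not a verbatim copy."""
    P = C @ np.linalg.solve(C.T @ C, C.T)        # projector onto span(C)
    Xh = P @ X
    cols = []
    for idx in sets:
        s_bar = Xh[:, idx].mean(axis=1, keepdims=True)   # set mean projection
        for j in idx:
            x, xh = X[:, [j]], Xh[:, [j]]
            d_res = np.linalg.norm(xh - x)       # ||x_hat_j - x_j||
            d_set = np.linalg.norm(xh - s_bar)   # ||x_hat_j - s_bar_i||
            cols.append(s_bar + (xh - s_bar) * (d_res / d_set)
                              + (xh - x) * (d_set / d_res))
    return np.hstack(cols)

def joint_data(X, C, sets, alpha):
    """X_alpha+ = [(1 - alpha) X | alpha S~], fed to the M-step of Eq. 5."""
    return np.hstack([(1.0 - alpha) * X, alpha * complements(X, C, sets)])
```

Because the complements depend on the current hyperplane, `complements` must be re-run inside every EM iteration, as the footnote notes.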
3 Experiments
I examined the effect of "informing" projections on three data sets from two domains.
The first two were text data sets taken from the WebKB project and the "20 newsgroups"
data set. The third data set consisted of acoustic features from recorded music. Finally, I
examine the effect of adding set information to the joint probabilistic model described by
Cohn and Hofmann [3].
3.1 WebKB data
The first set of experiments began with a subset of the WebKB data set [4]. Using Rainbow [9], I tokenized 1000 randomly-selected documents, stripping out HTML and digits,
and kept the 1000 terms with highest class-dependent information gain (the reduced vocabulary greatly decreased processing times). The result was 1000 documents with 1000
features, where feature f_{i,j} represented the frequency with which term j occurred in document x_i. Sets were constructed from the categories provided with each document.
The experiments varied both the fraction of the training data for which set associations were
provided (0-1) and the weight given to preserving those sets (also 0-1). For each combination, I ran 40 trials, each using a randomized split of 200 training documents and 100 test
documents. Accuracy was evaluated based on leave-one-out nearest neighbor classification
over the test set.²
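The evaluation metric used throughout, leave-one-out nearest-neighbor accuracy in the projected space, is simple to state in code. This is a generic sketch of the metric, not the author's script:

```python
import numpy as np

def loo_nn_accuracy(Y, labels):
    """Fraction of points whose nearest neighbor (excluding itself) shares
    their label; columns of Y are points in the latent space."""
    D = np.linalg.norm(Y[:, :, None] - Y[:, None, :], axis=0)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                # exclude self-matches
    nn = D.argmin(axis=0)                                      # each point's neighbor
    return float(np.mean([labels[nn[j]] == labels[j] for j in range(Y.shape[1])]))
```

To reproduce the experiments' measurements, Y would be the m-dimensional projection of the test documents and `labels` their category assignments.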
[Figure 3 plots: left, accuracy vs. fraction of data with set labels, with curves for weight = 0.4, 0.5, 0.6, 0.7; right, accuracy vs. weight given to sets, with curves for frac = 0.2, 0.4, 0.6, 0.8.]
Figure 3: Nearest neighbor classification of WebKB data, where a 5D PCA of document
terms has been informed by web page category-determined sets (40 independent train/test
splits). The fraction of observations that have been given set assignments is varied from 0
to 1 (left plot), as is ?, the weight attached to preserving set associations (right plot).
Figure 3 summarizes the results of these experiments. As expected, the more documents
that had set associations, the greater the improvement in classification accuracy, but this
² Obviously, simple nearest neighbor is far from the most effective classification technique for
this domain. But the point of the experiment is to evaluate to what degree informing a projection
preserves or improves topic locality, which nearest neighbor classifiers are well-suited to measure.
improvement was only evident for 0.3 ≤ α ≤ 0.7; below 0.3, the sets were not given enough
weight to make a difference, while above 0.7 there is a rapid deterioration in accuracy.
3.2 20 Newsgroups
The second set of experiments also used a standard text classification corpus, but with an unrestricted vocabulary. Beginning with the documents
of the 20 newsgroups data set, I again preprocessed the documents as above with Rainbow, but
this time kept the entire vocabulary (27214 unique
terms), instead of preselecting maximally informative terms.
Because of the additional running time required to
handle the complete vocabularies, the experiments
used all set labels and only varied the weighting.
Thirty independent training and test sets of 100
documents each were run for 0 ≤ α ≤ 1, and as
before, accuracy was evaluated in terms of leave-one-out classification error on the test set.
[Figure 4 plot: accuracy (roughly 0.24 to 0.36) vs. alpha (set weighting), for alpha from 0 to 0.9.]
Figure 4: Five categories from 20
newsgroups data set, where a 5D
PCA of document terms has been
informed by source category (30 train/test splits, for 0 < α < 1).
Figure 4 summarizes the results of these experiments. The characteristic learning curve is very
similar to that for the WebKB data: an intermediate set weighting yields significantly better performance than the purely supervised or
unsupervised cases. There is, however, one notable distinction: in these experiments, there
is much less variation in accuracy for large values of α; it almost appears that there are
three stable regions of performance.
3.3 Album recognition from acoustic features
The third test used a proprietary data set of acoustic properties of recorded music. The data
set contained 11252 recorded music tracks from 939 albums. Each observation consisted of
85 highly-processed acoustic features extracted automatically via digital signal processing.
The goal of this experiment was to determine whether informing a projected model could
improve the accuracy with which it could identify tracks from the same album. Recalling
Platt?s playlist selection problem [11], this can serve as a proxy for estimating how well the
model can predict whether two tracks ?belong together? by the subjective measure of the
artist who created the album.
For these experiments, I selected the first 8439 tracks (3/4 of the data) for training, assigning each track to be a member of the set defined by the album it came from. Many tracks
appeared on multiple albums ("Best of..." and soundtrack collections). The remaining 2813
tracks were used as test data.
The 85-dimensional features were projected down into a 10-dimensional space, informing the projection with sets defined by tracks from the same album. The relatively low
dimension of the problem also permitted running OPCA on the data set for comparison.
As above, I measured the frequency with which each test track had another track from the
same album as its nearest neighbor when projected down into this same space.
While the improvements in performance are not as striking as those from the previous experiments, they are nonetheless significant (Table 1). One reason for the meager improvement may be that the features from which the projections were computed had already been
weight      α = 0.0    α = 0.5    α = 1.0    OPCA
accuracy    0.1070     0.1241     0.0551     0.1340
ratio       0.3859     0.3223     0.3414     0.3144

Table 1: Album recognition results using 2813 test tracks from 316 albums. For each
weighting α, "accuracy" is the fraction of times which the closest track to a test track came
from the same album; "ratio" indicates the average ratio of intra-album distances to inter-album distances in the test set. In all cases, informing the projection with a weight of
α = 0.5 increases the accuracy and decreases the ratio of the model.
manually optimized for classification accuracy. Interestingly, OPCA slightly outperforms
the informed projection for both criteria on this problem.
3.4 Content, context and connections
Prior work [3] discussed building joint probabilistic models of a document base, using both
the content of the documents and the connections (citations or hyperlinks) between them.
A document base frequently contains context as well, in the form of documents from the
same source or by the same author. Informed projection provides a way for us to inject this
third form of information and further improve our models.
Figure 5 summarizes the results of using set information to "inform" the joint content+link
models discussed in the previous paper. That work used a multinomial model for its approximation, so we cannot use the equations defined in Section 2.3. Instead, we can make
use of the observation of Section 1.1 to approximate the informing process by merging
documents from the same set. Figure 5 illustrates that this process complements the earlier
content+connections approach, providing a joint model of document content, context and
connections.
accuracy (std err)    content        links          both
uninformed            0.19 (0.017)   0.11 (0.013)   0.21 (0.023)
informed              0.33 (0.039)   0.23 (0.098)   0.33 (0.057)

[Figure 5, right panel: surface plot of accuracy (roughly 0.25 to 0.6) as a function of connection weight (1 to 0) and set membership.]
Figure 5: (left) Classification accuracy of informed vs. uninformed models of separate and
joint models of document content and connections, using the WebKB dataset. (right) Effect
of adding more document context in the form of set membership information on the Cora
data set. See Cohn and Hofmann [3] for details.
4 Discussion and future work
The experiments so far indicate that adding set information to a low rank approximation
does improve the quality of a model, but only to the extent that the information is used
in conjunction with the unsupervised information already present in the data set. The improvement in performance is evident for content models (such as LSA), connection models,
and joint models of content and connections.
4.1 Future work
Beyond experiments to clarify the effect of α on model fitness, there are many obvious
directions for future work. The first is further exploration of the relationship between
informed PCA and the variants of MDA discussed in Section 1.1. While the differences
are mathematically straightforward, the effect of sum-vs.-ratio criteria bears further study.
A second broad area for future work is the application of the techniques described here to
richer low rank approximation models. While this paper considered the effect of informing
PCA, it would be fruitful to examine both the process and effect of informing multinomial-based models [3, 6], fully-generative models [1] and local linear embeddings [14].
A third area for exploration is the study of potential applications for this approach, which
include improved relevance modeling, directed web crawling, and personalized search and
recommendation across a wide variety of media.
References
[1] D. Blei, A. Ng, and M. I. Jordan. Latent dirichlet allocation. In Advances in Neural Information
Processing Systems 14, 2002.
[2] C.J.C. Burges, J.C. Platt, and S. Jana. Extracting noise-robust features from audio data. In
Proceedings of ICASSP, 2002.
[3] D. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and
hypertext connectivity. In T. Leen et al., editor, Advances in Neural Information Processing
Systems 13, 2001.
[4] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery.
Learning to extract symbolic knowledge from the world wide web. In Proceedings of the 15th
National Conference on Artificial Intelligence (AAAI-98), 1998.
[5] S. Dumais, G. Furnas, T. Landauer, S. Deerwester, and R. Harshman. Using latent semantic
analysis to improve access to textual information. In Proceedings of the Conference on Human
Factors in Computing Systems CHI?88, 1988.
[6] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI'99, Stockholm, 1999.
[7] M. Littman, S. Dumais, and T. Landauer. Automatic cross-language information retrieval using
latent semantic indexing. In G. Grefenstette, editor, Cross Language Information Retrieval.
Kluwer, 1998.
[8] D. Lowe and M. E. Tipping. Feed-forward neural networks and topographic mappings for
exploratory data analysis. Neural Computing and Applications, 4:83-95, 1996.
[9] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification
and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[10] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text from
labeled and unlabeled documents. In Proceedings of AAAI-98, pages 792-799, Madison, US,
1998. AAAI Press, Menlo Park, US.
[11] J. Platt, C. Burges, S. Swenson, C. Weare, and A. Zheng. Learning a gaussian process prior for
automatically generating music playlists. In T. G. Dietterich, S. Becker, and Z. Ghahramani,
editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[12] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge: University Press, 1996.
[13] S. Roweis. EM algorithms for PCA and SPCA. In M. I. Jordan, M. J. Kearns, and S. A. Solla,
editors, Advances in Neural Information Processing Systems, volume 10. MIT Press, 1998.
[14] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, Dec 2000.
[15] M. E. Tipping and D. Lowe. Shadow targets: A novel algorithm for topographic projections by
radial basis functions. Neurocomputing, 19(1):211-222, 1998.
A Minimal Intervention Principle for
Coordinated Movement
Emanuel Todorov
Department of Cognitive Science
University of California, San Diego
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California, Berkeley
[email protected]
Abstract
Behavioral goals are achieved reliably and repeatedly with movements
rarely reproducible in their detail. Here we offer an explanation: we show
that not only are variability and goal achievement compatible, but indeed
that allowing variability in redundant dimensions is the optimal control
strategy in the face of uncertainty. The optimal feedback control laws for
typical motor tasks obey a "minimal intervention" principle: deviations
from the average trajectory are only corrected when they interfere with
the task goals. The resulting behavior exhibits task-constrained variability, as well as synergetic coupling among actuators, which is another
unexplained empirical phenomenon.
1 Introduction
Both the difficulty and the fascination of the motor coordination problem lie in the apparent conflict between two fundamental properties of the motor system: the ability to
accomplish its goal reliably and repeatedly, and the fact that it does so with variable movements [1]. More precisely, trial-to-trial fluctuations in individual degrees of freedom are on
average larger than fluctuations in task-relevant movement parameters: motor variability
is constrained to a redundant or 'uncontrolled' manifold [16] rather than being suppressed
altogether. This pattern has now been observed in a long list of behaviors [1, 6, 16, 14].
In concordance with such naturally occurring variability, experimentally induced perturbations [1, 3, 12] are compensated in a way that maintains task performance rather than a
specific stereotypical movement pattern.
This body of evidence is fundamentally incompatible with standard models of motor coordination that enforce a strict separation between trajectory planning and trajectory execution [2, 8, 17, 10]. In such serial planning/execution models, the role of the planning
stage is to resolve the redundancy inherent in the musculo-skeletal system, by replacing
the behavioral goal (achievable via infinitely many movement trajectories) with a specific
'desired trajectory.' Accurate execution of the desired trajectory guarantees achievement
of the goal, and can be implemented with relatively simple trajectory-tracking algorithms.
While this approach is computationally viable (and often used in engineering), the numerous observations of task-constrained variability and goal-directed corrections indicate that
the online execution mechanisms are able to distinguish, and selectively enforce, the details
that are crucial for the achievement of the goal. This would be impossible if the behavioral
goal were replaced with a specific trajectory.
Instead, these observations imply a very different control scheme, one which pursues the
behavioral goal more directly. Efforts to delineate such a control scheme have led to the
idea of motor synergies, or high-level 'control knobs,' that have invariant and predictable
effects on the task-relevant movement parameters despite variability in individual degrees
of freedom [9, 11]. But the computational underpinnings of such an approach (how the
synergies appropriate for a given task and plant can be constructed, what control scheme is
capable of utilizing them, and why the motor system should prefer such a control scheme
in the first place) remain unclear. This general form of hierarchical control implies correlations among the control signals sent to multiple actuators (i.e., synergetic coupling) and
a corresponding reduction in control space dimensionality. Such phenomena have indeed
been observed [4, 18], but the relationship to the hypothetical functional synergies remains
to be established.
In this paper we aim to resolve the apparent conflict at the heart of the motor coordination problem, and clarify the relationship between variability, task goals, and motor synergies. We treat motor coordination within the framework of stochastic optimal control,
and postulate that the motor system approximates the best possible control scheme for a
given task. Such a control scheme will generally take the form of a feedback control law.
Whenever the task allows redundant solutions, the initial state of the plant is uncertain, the
consequences of the control signals are uncertain, and the movement duration exceeds the
shortest sensory-motor delay, optimal performance is achieved by a feedback control law
that resolves redundancy moment-by-moment, using all available information to choose
the most advantageous course of action under the present circumstances. By postponing
all decisions regarding movement details until the last possible moment, this control law
takes advantage of the opportunities for more successful task completion that are constantly being created by unpredictable fluctuations away from the average trajectory. Such
exploitation of redundancy not only results in higher performance, but also gives rise to
task-constrained variability and motor synergies: the phenomena we seek to explain.
The present paper is related to a recent publication targeted at a neuroscience audience
[14]. Here we provide a number of technical results missing from [14], and emphasize the
aspects of our work that are most likely to be of interest to the computational modeling
community.
2 The Minimal Intervention principle
Our general explanation of the above phenomena follows from an intuitive property of optimal feedback controllers which we call the ?minimal intervention? principle: deviations
from the average trajectory are corrected only when they interfere with task performance.
If this principle holds, and the noise perturbs the system in all directions, the interplay of
the noise and control processes will result in variability which is larger in task-irrelevant
directions. At the same time, the fact that certain deviations are not being corrected implies that the corresponding control subspace is not being used, which is the phenomenon
typically interpreted as evidence for motor synergies [4, 18].
Why should the minimum intervention principle hold? An optimal feedback controller has
nothing to gain from correcting task-irrelevant deviations, because its only concern is task
performance and by definition such deviations do not interfere with performance. On the
other hand, generating a corrective control signal can be detrimental, because: 1) the noise
in the motor system is known to be multiplicative [13] and therefore could increase; 2) the
cost being minimized most likely includes a control-dependent effort penalty which could
also increase.
We now formalize the notions of 'redundancy' and 'correction,' and show that for a surprisingly general class of systems they are indeed related, as our intuition suggests.
2.1 Local analysis of a general class of optimal control problems
Redundancy is not easy to define. Consider the task of reaching, which requires the fingertip to be at a specified target at some point in time T. At time T, all arm configurations for
which the fingertip is at the target are redundant. But at times different from T this geometric approach is insufficient to define redundancy. Therefore we follow a more general
approach.
"!#$% '&)(*+$, .-
Consider a system with state
, and dynamics
- /
0
3
, control
, instantaneous scalar cost
where
is multidimensional standard Brownian motion. Control signals are
generated by a feedback control law, which can be any mapping of the form
. The analysis below heavily relies on properties of the optimal cost-to-go function, defined as
12
465 78:>@9?AC;=BC< A DFEHG ?A DJILK NOPNF 3 +NOPNF N
M
where the minimum is achieved by the optimal control law 3 5 .
Suppose that in a given task the system of interest (driven by the optimal control law)
generates an average trajectory \bar x(t). On a given trial, let \Delta x(t) = x(t) - \bar x(t) be the deviation from the
average trajectory at time t. Let \Delta v(\Delta x) be the change in the optimal cost-to-go
due to the deviation \Delta x; i.e., \Delta v(\Delta x) \triangleq v(t, \bar x(t) + \Delta x) - v(t, \bar x(t)). Now we are ready to define
redundancy: the deviation \Delta x is redundant iff \Delta v(\Delta x) = 0. Note that our definition
reduces to the intuitive geometric definition at the end of the movement, where the cost
function and optimal cost-to-go are identical.
To define the notion of 'correction,' we need to separate the passive and active dynamics:

    dx = \big( a(x) + B(x) u \big) dt + F(x, u) d\omega.

The (infinitesimal) expected change in \Delta x due to the control u = \pi^*(t, \bar x + \Delta x) can now
be identified: \Delta \dot x = B(\bar x + \Delta x)\, \pi^*(t, \bar x + \Delta x). The corrective action of the control
signal is naturally defined as corr(\Delta x) \triangleq -\langle \Delta \dot x, \Delta x \rangle.
In order to relate the quantities \Delta v(\Delta x) and corr(\Delta x), we obviously need to know
something about the optimal control law \pi^*. For problems in the above general form, the
optimal control law \pi^* is given [7] by the minimum

    \pi^*(t, x) = \arg\min_u \Big[ \ell(t, x, u) + f(x, u)^T \nabla v(t, x) + \tfrac{1}{2} \mathrm{trace}\big( F(x, u)^T \nabla^2 v(t, x)\, F(x, u) \big) \Big],

where \nabla v(t, x) and \nabla^2 v(t, x) are the gradient and Hessian of the optimal cost-to-go function v(t, x). To be able to minimize this expression explicitly, we will restrict the class of
problems to

    F(x, u) = \big[\, B_1(x) u \;\cdots\; B_c(x) u \,\big]; \qquad \ell(t, x, u) = q(t, x) + \tfrac{1}{2} u^T R\, u.

The matrix notation means that the i-th column of F is B_i(x) u. Note that the latter
formulation is still very general, and can represent realistic musculo-skeletal dynamics and
motor tasks.

Using the fact1 that F F^T = \sum_i B_i u\, u^T B_i^T and \mathrm{trace}(XY) = \mathrm{trace}(YX), and eliminating terms that do not depend on u, the expression that has to be minimized w.r.t. u becomes

    u^T B(x)^T \nabla v(t, x) + \tfrac{1}{2} u^T R\, u + \tfrac{1}{2} \sum_i u^T B_i(x)^T \nabla^2 v(t, x)\, B_i(x)\, u.

Therefore the optimal control law is

    \pi^*(t, x) = -\Big( R + \sum_i B_i(x)^T \nabla^2 v(t, x)\, B_i(x) \Big)^{-1} B(x)^T \nabla v(t, x).
We now return to the relationship between 'redundancy' and 'correction.' The time index t will be suppressed for clarity. We expand the optimal cost-to-go to second order:

    v(\bar x + \Delta x) \approx v(\bar x) + \Delta x^T \nabla v(\bar x) + \tfrac{1}{2} \Delta x^T \nabla^2 v(\bar x)\, \Delta x,

also expand its gradient to first order: \nabla v(\bar x + \Delta x) \approx \nabla v(\bar x) + \nabla^2 v(\bar x)\, \Delta x, and approximate all other quantities
as being constant in a small neighborhood of \bar x. The effect of the control signal becomes
\Delta \dot x \approx -B(\bar x) \big( R + \sum_i B_i^T \nabla^2 v\, B_i \big)^{-1} B(\bar x)^T \big( \nabla v(\bar x) + \nabla^2 v(\bar x)\, \Delta x \big). Substituting in the above definitions yields

    \Delta v(\Delta x) = \big\langle \Delta x,\; \nabla v(\bar x) + \tfrac{1}{2} \nabla^2 v(\bar x)\, \Delta x \big\rangle
    corr(\Delta x) = \big\langle \Delta x,\; \nabla v(\bar x) + \nabla^2 v(\bar x)\, \Delta x \big\rangle_{M(\bar x)}, \qquad M(\bar x) = B \big( R + \sum_i B_i^T \nabla^2 v\, B_i \big)^{-1} B^T,

where the weighted dot-product notation \langle a, b \rangle_M stands for a^T M b.

Thus both \Delta v(\Delta x) and corr(\Delta x) are dot-products of the same two vectors. When
\nabla v(\bar x) + \nabla^2 v(\bar x)\, \Delta x = 0 (which can happen for infinitely many \Delta x when the Hessian \nabla^2 v(\bar x) is singular) the deviation is redundant and the optimal controller takes no
corrective action. Furthermore, \Delta v(\Delta x) and corr(\Delta x) are positively correlated because
B \big( R + \sum_i B_i^T \nabla^2 v\, B_i \big)^{-1} B^T is a positive semi-definite matrix2. Thus the optimal controller resists single-trial deviations that take the system to more costly states, and magnifies deviations to less costly states.
This analysis confirms the minimal intervention principle to be a very general property
of optimal feedback controllers, explaining why variability patterns elongated in task-irrelevant dimensions (as well as synergetic actuator coupling) have been observed in such
a wide range of experiments involving different actuators and behavioral goals.
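The sign relationship derived above can be checked numerically. Below is a minimal sketch of our own (not code from the paper): we draw a random quadratic approximation of the cost-to-go (gradient g, PSD Hessian Hv), drop the multiplicative-noise term so the weighting matrix is simply M = B R^-1 B^T, and confirm that the change in cost-to-go and the corrective action are positively correlated over many small random deviations. All matrices and dimensions are invented for illustration.

```python
import numpy as np

# dv(dx)   = dx . (g + 0.5 * Hv dx)            change in the cost-to-go
# corr(dx) = dx . M (g + Hv dx)                corrective action, M = B R^-1 B^T
rng = np.random.default_rng(0)
n = 4
g = rng.normal(size=n)                 # gradient of v at the average trajectory
A = rng.normal(size=(n, n))
Hv = A @ A.T                           # random PSD Hessian of v
B = rng.normal(size=(n, n))            # control-input matrix (full rank a.s.)
R = np.eye(n)                          # control-effort cost
M = B @ np.linalg.inv(R) @ B.T         # PSD weighting of the dot product

dx = 0.001 * rng.normal(size=(5000, n))  # small random deviations
dv = dx @ g + 0.5 * np.einsum("ij,jk,ik->i", dx, Hv, dx)
corr_action = np.einsum("ij,jk,ik->i", dx, M, g[None, :] + dx @ Hv)

r = np.corrcoef(dv, corr_action)[0, 1]
assert r > 0    # deviations that raise the cost-to-go are the ones corrected
```

For small deviations the correlation reduces to the cosine between g and M g, which is positive whenever M is positive definite, matching the argument in the text.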
2.2 Linear-Quadratic-Gaussian (LQG) simulations
The local analysis above is very general, but it leaves a few questions open: i) what happens
when the deviation \Delta x is not small; ii) how does the optimal cost-to-go (which defines
redundancy) relate to the cost function (which defines the task); iii) what is the distribution
of states resulting from the sequence of optimal control signals? To address such questions
(and also build models of specific motor control experiments) we need to focus on a class of
control problems for which the optimal control law can actually be found. To that end, we
have modified [15] the extensively studied LQG framework to include the multiplicative
control noise characteristic of the motor system. The control problems studied here and in
the next section are in the form

    Dynamics:  x_{t+1} = A x_t + B u_t + \xi_t + \sum_{i=1}^{c} \varepsilon_t^{(i)} C_i u_t
    Feedback:  y_t = H x_t + \omega_t
    Cost:      x_t^T Q_t x_t + u_t^T R u_t

Note that the system state x_t is now partially observable, through noisy sensor readings y_t.

1 Defining the unit vector e_i as having a 1 in position i and 0 in all other positions, we can write F = \sum_i B_i u\, e_i^T. Then F F^T = \sum_i B_i u\, u^T B_i^T, since e_i^T e_j = \delta_{ij}.
2 The matrix R + \sum_i B_i^T \nabla^2 v\, B_i has to be positive semi-definite, or else we could find a control signal that makes the instantaneous cost negative, and that is impossible by definition. Therefore B \big( R + \sum_i B_i^T \nabla^2 v\, B_i \big)^{-1} B^T is also positive semi-definite.

[Figure 1 plots the correlations 'dv : dq', 'dv : corr', and 'dcov : dq' over 50 time steps; the first two stay positive and the third stays negative.]

Figure 1: The random problems were generated randomly, with the restriction that A
has singular values less than 1 (i.e. the passive dynamics is stable); the last component of the
state is 1 (for similarity with motor control tasks), and Q_t and R are positive semi-definite.
For each problem and each point in time t, we generated 100 random unit vectors \Delta x and scaled them by mean(sqrt(svd(cov(x_t)))). Then
the quantities dv, dq, corr, and dcov were computed for each deviation. The notation 'dv :
dq' stands for the correlation between the dv and the dq values, etc.
When the noise is additive instead of being multiplicative, the optimal control problem has
the well-known solution [5]

    u_t = -L_t \hat x_t; \qquad \hat x_{t+1} = A \hat x_t + B u_t + K_t ( y_t - H \hat x_t ),

where \hat x is an internal estimate of the system state, updated recursively by a Kalman filter. The sequences of matrices L_t and K_t are computed from the associated discrete-time Riccati
equations [5]. Multiplicative noise complicates matters, but we have found [15] that for
systems with stable passive dynamics a similar control strategy is very close to optimal.
The modified equations for L_t and K_t are given in [15]. The optimal cost-to-go function is

    v_t(x_t) = x_t^T S_t x_t + \mathrm{const}.

The Hessian S_t of the optimal cost-to-go is closely related to the task cost Q_t, but also
includes future task costs weighted by the passive and closed-loop dynamics.
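The additive-noise LQG solution quoted above can be sketched in a few lines. This is a minimal scalar illustration of ours, not the paper's implementation: the plant parameters are invented, and the multiplicative-noise modification of [15] is omitted. The gains L_t come from a backward Riccati recursion; the state estimate comes from a Kalman filter.

```python
import numpy as np

a, b, h = 1.1, 1.0, 1.0   # plant: x+ = a x + b u + noise, observation y = h x + noise
q, r = 1.0, 0.1           # per-step state and control costs
sx, sy = 0.05, 0.05       # process / observation noise std
T = 30

# Backward Riccati recursion for the feedback gains L_t.
S = q
L = np.zeros(T)
for t in reversed(range(T)):
    L[t] = (b * S * a) / (r + b * S * b)
    S = q + a * S * (a - b * L[t])

def run(controlled, rng, trials=500):
    x = np.ones(trials)                  # true state, x0 = 1
    xhat, P = np.ones(trials), 0.0       # estimate and its variance
    for t in range(T):
        u = -L[t] * xhat if controlled else 0.0
        x = a * x + b * u + sx * rng.normal(size=trials)
        y = h * x + sy * rng.normal(size=trials)
        # Kalman filter: predict, then correct with the innovation.
        P = a * P * a + sx ** 2
        K = P * h / (h * P * h + sy ** 2)
        xhat = a * xhat + b * u
        xhat = xhat + K * (y - h * xhat)
        P = (1 - K * h) * P
    return np.mean(x ** 2)

rng = np.random.default_rng(1)
closed = run(True, rng)
open_loop = run(False, rng)
assert closed < open_loop   # feedback keeps the unstable plant near the goal
```

With a = 1.1 the open-loop plant diverges, so the closed-loop advantage is large; the same recursion structure carries over to the vector case used in the paper's simulations.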
Specific motor control tasks are considered below. Here we generate 100 random problems
in the above form, compute the optimal control law in each case, and correlate the quantities
\Delta v and corr. As the 'dv : corr' curve in Figure 1 shows, they are positively correlated at
all times. We also show in Figure 1 that the Hessian of the optimal cost-to-go has a similar
shape to the task cost (the 'dv : dq' curve), and that the state covariance is smaller along
dimensions where the task cost is larger; i.e., the correlation 'dcov : dq' is negative. See
the figure legend for details.
Figure 2: Simulations of motor control tasks; see text.
3 Applications to motor coordination
We have used the modified LQG framework to model a wide range of specific motor control
tasks [14, 15], and always found that optimal feedback controllers generate variability that
is elongated in redundant dimensions. Here we illustrate two such models. The first model
(Figure 2, Bimanual Tasks) includes two 1D point masses with positions X1 and X2, each
driven with a force actuator whose output is a noisy second-order low-pass filtered version
of the corresponding control signal. The feedback contains noisy position, velocity, and
force information?delayed by 50 msec (by augmenting the system state with a sequence
of recent sensor readings). The 'Difference' task requires the two points to start moving
20cm apart, and stop at identical but unspecified locations. The covariance of the final
state is elongated in the task-irrelevant dimension: the two points always stop close to each
other, but the final location can vary substantially from trial to trial. A related phenomenon
has been observed in the more complex bimanual task of inserting a pointer in a cup [6].
We now modify the task: in 'Sum,' the two points start at the same location and have
to stop so that the midpoint between them is at zero. Note that the state covariance is
reoriented accordingly. We also illustrate a Via Point task, where a 2D point mass has to
pass through a sequence of two intermediate targets and stop at a final target (tracing an
S-shaped curve). Variability is minimal at the via points. Furthermore, when one via point
is made smaller (i.e., the weight of the corresponding positional constraint is increased),
the variability decreases at that point. Due to space limitations, we refer the reader to [14]
for details of the models. In [14] we also report a via point experiment that closely matches
the predicted effect.
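The 'Difference' task result can be reproduced in miniature. The sketch below is our own simplification of the model described above (first-order point dynamics instead of second-order noisy actuators, terminal cost only, no sensory delay); all numeric parameters are assumptions. Because the terminal cost penalizes only x1 - x2, the optimal LQR feedback corrects the task-relevant 'difference' coordinate and leaves the redundant 'sum' coordinate uncorrected, so trial-to-trial variability accumulates there.

```python
import numpy as np

T, r, noise = 20, 0.1, 0.1
A, B = np.eye(2), np.eye(2)
QT = np.array([[1.0, -1.0], [-1.0, 1.0]])   # terminal cost (x1 - x2)^2

# Backward Riccati recursion for time-varying feedback gains.
S = QT.copy()
gains = []
for t in range(T):
    Lt = np.linalg.solve(r * np.eye(2) + B.T @ S @ B, B.T @ S @ A)
    gains.append(Lt)
    S = A.T @ S @ (A - B @ Lt)
gains = gains[::-1]          # gains[t] is the gain applied at step t

rng = np.random.default_rng(2)
x = np.tile([0.1, -0.1], (2000, 1))          # 2000 trials, points start 0.2 apart
for t in range(T):
    u = -x @ gains[t].T
    x = x @ A.T + u @ B.T + noise * rng.normal(size=x.shape)

var_diff = np.var(x[:, 0] - x[:, 1])         # task-relevant direction
var_sum = np.var(x[:, 0] + x[:, 1])          # redundant direction
assert var_diff < var_sum                    # variability is task-constrained
```

The sum coordinate behaves as an uncontrolled random walk while the difference coordinate is driven to zero, which is exactly the elongated final-state covariance shown for the bimanual tasks.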
4 Multi-attribute costs and desired trajectory tracking
As we stated earlier, replacing the task goal with a desired trajectory (which achieves the
goal if executed precisely) is generally suboptimal. A number of examples of such suboptimality are provided in [14]. Here we present a more general view of desired trajectory
tracking which clarifies its relationship to optimal control.
Desired trajectory tracking can be incorporated in the present framework by using a modified cost, one that specifies a desired state at each point in time, and penalizes the deviations
from that state. Such a modified cost would normally include the original task cost (e.g.,
the terms that specify the desired terminal state), but also a large number of additional
terms that do not need to be minimized in order to accomplish the actual task. This raises
the question: what happens to the expected values of the terms in the original cost, when
we attempt to minimize other costs simultaneously? Intuitively, one would expect the original costs to increase (relative to the costs obtained by the task-optimal controller). The
geometric argument below formalizes these ideas, and confirms our intuition.
Consider a family of optimal control problems parameterized by the weight vector w, with
cost functions \ell_w(t, x, u) = \sum_i w_i \ell_i(t, x, u). Here the \ell_i are different component
costs, and the w_i are the corresponding non-negative weights. Without loss of generality
we can assume that \|w\| = 1; i.e., the weight vector w lies in the
positive quadrant W of the unit sphere. Let \pi_w be an optimal control law for the problem defined by w, and
c(w) be the vector of expected component costs achieved by \pi_w; i.e.,

    c_i(w) \triangleq E \int \ell_i\big(t, x(t), \pi_w(t, x(t))\big)\, dt.

Consider a weight vector \bar w and its corresponding \bar c = c(\bar w), such that the mapping w \mapsto c(w) is locally smooth and invertible. Then
we can define the inverse mapping w(c) from the expected component cost manifold C to
the weight manifold W, as illustrated in Figure 3.

From the definitions of \pi_w and c(w), the total expected cost achieved by \pi_{w(\bar c)} is \langle w(\bar c), \bar c \rangle. Since
\pi_{w(\bar c)} is an optimal control law for the problem defined by the weight vector w(\bar c), no other
control law can achieve a smaller total expected cost, and so3 \langle w(\bar c), \bar c \rangle \le \langle w(\bar c), c \rangle
for all c \in C. Therefore, if we construct the (n - 1)-dimensional hyperplane h(\bar c) that
contains \bar c and is orthogonal to w(\bar c), the entire manifold C has to lie in the half-space
not containing the origin. Thus h(\bar c) is tangent to the manifold C at the point \bar c, C has
non-negative curvature, and the unit vector n(\bar c) which is normal to C at \bar c satisfies4
n(\bar c) = w(\bar c).

Let c(s) \in C be a parametric curve that passes through the point of interest \bar c: c(0) = \bar c. Define
n(s) \triangleq n(c(s)) and w(s) \triangleq w(c(s)). By differentiating c(s) at
s = 0 we obtain the tangent c'(0) to the curve c(s) at \bar c. Since n is normal
to C, we have \langle n, c' \rangle = 0. Differentiating the latter equality
once again yields \langle n, c'' \rangle + \langle n', c' \rangle = 0.

The non-negative curvature of C implies \langle n, c'' \rangle \ge 0; i.e., the tangent c' cannot turn away
from the normal n without c crossing the hyperplane h. Therefore \langle n', c' \rangle \le 0, and since
n = w, we have \langle w', c' \rangle \le 0.

3 If we assume that the optimal control law is unique, all inequalities below become strict.
4 For a general manifold embedded in R^n, the mapping c \mapsto n(c) on the unit sphere
that satisfies n(c) = w(c) is known as the Gauss map, and plays an important role in surface
differential geometry.

The above result means that whenever we change the weight vector w, the corresponding
vector c(w) of expected component costs achieved by the (new) optimal control law will
change in an 'opposite' direction. More precisely, suppose we vary w along a great circle
that passes through one of the corners of W, say w_1 = 1, so that w_1
decreases and all other w_i increase. Then the component cost c_1(w) will increase.
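The claim that the expected component costs move 'opposite' to the weights can be checked on a toy problem. The sketch below is ours, not from the paper: a one-step deterministic problem x1 = x0 + u with accuracy cost c1 = x1^2 and effort cost c2 = u^2, minimizing w1*c1 + w2*c2. For simplicity the weights live on the segment w1 + w2 = 1 rather than on the unit sphere used in the text; setting the derivative of the total cost to zero gives u* = -w1*x0/(w1 + w2).

```python
import numpy as np

x0 = 1.0
w1s = np.linspace(0.05, 0.95, 19)
costs = []
for w1 in w1s:
    w2 = 1.0 - w1
    u = -w1 * x0 / (w1 + w2)                 # optimal one-step control
    costs.append([(x0 + u) ** 2, u ** 2])    # [c1, c2] at the optimum
costs = np.array(costs)

dw = np.diff(np.column_stack([w1s, 1.0 - w1s]), axis=0)   # weight increments
dc = np.diff(costs, axis=0)                               # cost increments
assert np.all(np.sum(dw * dc, axis=1) <= 1e-12)   # <w', c'> <= 0, as derived
assert np.all(dc[:, 0] < 0)   # raising w1 lowers c1 ...
assert np.all(dc[:, 1] > 0)   # ... and raises c2
```

Lowering a component's weight raises that component's expected cost, which is the 'opposite direction' statement made above.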
References
[1] Bernstein, N.I. The Coordination and Regulation of Movements. Pergamon Press,
(1967).
[2] Bizzi, E., Accornero, N., Chapple, W. & Hogan, N. Posture control and trajectory
formation during arm movement. J Neurosci 4, 2738-44 (1984).
[3] Cole, K.J. & Abbs, J.H. Kinematic and electromyographic responses to perturbation
of a rapid grasp. J Neurophysiol 57, 1498-510 (1987).
[4] D?Avella, A. & Bizzi, E. Low dimensionality of supraspinally induced force fields.
PNAS 95, 7711-7714 (1998).
[5] Davis, M.H.A. & Vinter, R. Stochastic Modelling and Control. Chapman and Hall,
(1985).
[6] Domkin D., Laczko, J., Jaric, S., Johansson, H., & Latash, M. Structure of joint
variability in bimanual pointing tasks. Exp Brain Res 143, 11-23 (2002).
[7] Fleming, W. and Soner, H. (1993). Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics, Springer-Verlag, Berlin.
[8] Flash, T. & Hogan, N. The coordination of arm movements: an experimentally confirmed mathematical model. J Neuroscience 5, 1688-1703 (1985).
[9] Gelfand, I., Gurfinkel, V., Tsetlin, M. & Shik, M. In Models of the structuralfunctional organization of certain biological systems. Gelfand, I., Gurfinkel, V.,
Fomin, S. & Tsetlin, M. (eds.) MIT Press, 1971.
[10] Harris, C.M. & Wolpert, D.M. Signal-dependent noise determines motor planning.
Nature 394, 780-784 (1998).
[11] Hinton, G.E. Parallel computations for controlling an arm. Journal of Motor Behavior
16, 171-194 (1984).
[12] Robertson, E.M. & Miall, R.C. Multi-joint limbs permit a flexible response to unpredictable events. Exp Brain Res 117, 148-52 (1997).
[13] Sutton, G.G. & Sykes, K. The variation of hand tremor with force in healthy subjects.
Journal of Physiology 191(3), 699-711 (1967).
[14] Todorov, E. & Jordan, M. Optimal feedback control as a theory of motor coordination.
Nature Neuroscience, 5(11), 1226-1235 (2002).
[15] Todorov, E. Optimal feedback control under signal-dependent noise: Methodology
for modeling biological movement. Neural Computation, under review. Available at
http://cogsci.ucsd.edu/~todorov. (2002).
[16] Scholz, J.P. & Schoner, G. The uncontrolled manifold concept: Identifying control
variables for a functional task. Exp Brain Res 126, 289-306 (1999).
[17] Uno, Y., Kawato, M. & Suzuki, R. Formation and control of optimal trajectory in
human multijoint arm movement: Minimum torque-change model. Biological Cybernetics 61, 89-101 (1989).
[18] Santello, M. & Soechting, J.F. Force synergies for multifingered grasping. Exp Brain
Res 133, 457-67 (2000).
Effective Dimension and Generalization of
Kernel Learning
Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Abstract
We investigate the generalization performance of some learning problems in Hilbert function spaces. We introduce a concept of scale-sensitive effective data dimension, and show that it characterizes the convergence rate of the underlying learning problem. Using this concept, we
can naturally extend results for parametric estimation problems in finite
dimensional spaces to non-parametric kernel learning methods. We derive upper bounds on the generalization performance and show that the
resulting convergent rates are optimal under various circumstances.
1 Introduction
The goal of supervised learning is to predict an unobserved output value y based on an
observed input vector x. This requires us to estimate a functional relationship y \approx p(x)
from a set of training examples. Usually the quality of the predictor p(\cdot)
can be measured by a loss function L(p(x), y). In machine learning, we assume that the data (x, y)
are drawn from an unknown underlying distribution. Our goal is to find p(\cdot)
so that the expected true loss of p given below is as small as possible:

    R(p) = E_{x, y} L(p(x), y),     (1)

where we use E_{x, y} to denote the expectation with respect to the true (but unknown) underlying distribution.
In this paper we focus on smooth convex loss functions that are second order differentiable with respect to the first component. In addition we assume that the second derivative
is bounded both above and below (away from zero).1 For example, our analysis applies to
important methods such as least squares regression (aka, Gaussian processes) and logistic
regression in Hilbert spaces.
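As a quick numerical illustration of the loss assumptions (ours, not part of the paper): the logistic loss L(f, y) = log(1 + exp(-y f)) is convex and twice differentiable in its first argument, with second derivative p(1 - p) where p = 1/(1 + exp(-y f)). This is bounded above by 1/4 everywhere, and bounded away from zero on any bounded range of predictor values, which is one reason the analysis restricts models to a bounded convex set.

```python
import numpy as np

f = np.linspace(-5.0, 5.0, 2001)      # predictor values on a bounded range
y = 1.0
p = 1.0 / (1.0 + np.exp(-y * f))
d2 = p * (1.0 - p)                    # second derivative of L w.r.t. f

assert d2.max() <= 0.25 + 1e-12       # bounded above by 1/4 (attained near f = 0)
assert d2.min() > 0                   # positive on the bounded range
```

The lower bound degrades as |f| grows, so the bound-away-from-zero assumption really is a statement about bounded models, as the footnote above notes.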
In order to obtain a good predictor \hat p(x) from training data, it is necessary to start with a
model of the functional relationship. In this paper, we consider models that are subsets in
some Hilbert function space H. Denote by \|\cdot\|_H the norm in H. In particular, we consider
models in a bounded1 convex subset K of H. We would like to find the best model in K; i.e., the one minimizing the expected true loss over K.
1
This boundedness assumption is not essential. However in this paper, in order to emphasize the
main idea, we shall avoid using a more complex derivation that handles more general situations.
In supervised learning, we construct an estimator ! of from a set of training examples
!
! " . Throughout the paper, we use symbol ! to denote
empirical quantities based on the observed training data . Specifically, we use ! to
defined as:
denote the empirical expectation with respect to the training samples, and
! !
# $% !
&
%
%
'
# # $
#$
#
"
"
"
"
"
"
"
Assume that the input $x$ belongs to a set $\Omega$. We make the reasonable assumption that $H$ is point-wise continuous under the $\|\cdot\|_H$ topology: $\forall x \in \Omega$, $f_n \to f$ in $\|\cdot\|_H$ implies $f_n(x) \to f(x)$. This assumption is equivalent to the condition $\sup_{\|f\|_H \le 1} |f(x)| < \infty$ for all $x \in \Omega$, implying that each data point $x$ can be regarded as a bounded linear functional on $H$. Since a Hilbert space is self-dual, we can represent this functional by an element of $H$: we define $\phi_x \in H$ by $f(x) = \langle f, \phi_x \rangle$ for all $f \in H$, where $\langle \cdot, \cdot \rangle$ denotes the inner product of $H$.

It is clear that $\phi_x$ can be regarded as a representing feature vector of $x$ in $H$. In the literature, the inner product $K(x, x') = \langle \phi_x, \phi_{x'} \rangle$ is often referred to as the kernel of $H$, and $H$ as the reproducing kernel Hilbert space determined by the kernel function $K(\cdot, \cdot)$.

The purpose of this paper is to develop bounds on the true risk $R(\hat f)$ of any empirical estimator $\hat f$ compared to the optimal risk $R(f_*)$, based on its observed risk $\hat R(\hat f)$. Specifically, we seek a bound of the following form:
$$R(\hat f) \le R(f_*) + c \big[ \hat R(\hat f) - \hat R(f_*) \big] + \gamma_n(\lambda),$$
where $c$ is a positive constant that only depends on the loss function $L$, and $\lambda$ is a parameter that characterizes the effective data dimensionality for the learning problem.

If $\hat f$ is the empirical estimator that minimizes $\hat R$ in $C$, then the second term on the right hand side is non-positive. We are thus mainly interested in the third term. It will be shown that if $H$ is a finite dimensional space, then the third term is $O(d/n)$, where $d = \dim H$ is the dimension of $H$. If $H$ is an infinite dimensional space (or when $d$ is large compared to $n$), one can adjust $\lambda$ appropriately based on the sample size to get a bound $O(d_n/n)$, where the effective dimension $d_n$ at the optimal scale $\lambda$ becomes sample-size dependent. However, the dimension will never grow faster than $d_n = O(\sqrt{n})$; hence even in the worst case, $\gamma_n(\lambda)$ converges to zero at a rate no worse than $O(1/\sqrt{n})$.

A consequence of our analysis is to obtain convergence rates better than $O(1/\sqrt{n})$. For empirical estimators with least squares loss, this issue has been considered in [1, 2, 4] among others. The approach in [1] won't lead to the optimal rate of convergence for nonparametric classes. The covering-number based analyses in [2, 4] use the chaining argument [4] and ratio large-deviation inequalities. However, it is known that chaining does not always lead to the optimal convergence rate, and for many problems covering numbers can be rather difficult to estimate. The effective dimension based analysis presented here, while restricted to learning problems in Hilbert spaces (kernel methods), addresses these issues.
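The reproducing-kernel identity $K(x, x') = \langle \phi_x, \phi_{x'} \rangle$ can be seen concretely in a finite-dimensional case. As an illustration (our own generic sketch, not code from the paper), the homogeneous quadratic kernel on $\mathbb{R}^2$ has an explicit three-dimensional feature map:

```python
import math

def quad_kernel(x, z):
    # K(x, z) = (x . z)^2 for x, z in R^2
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    # explicit feature map for this kernel:
    # phi(x) = (x1^2, x2^2, sqrt(2) x1 x2), so that K(x, z) = <phi(x), phi(z)>
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = (0.3, -1.2), (2.0, 0.7)
print(quad_kernel(x, z), dot(phi(x), phi(z)))
```

The two printed numbers agree, which is exactly the statement that the kernel is an inner product of representing feature vectors.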
2 Decomposition of loss function

Consider a convex subset $C \subset H$, which is closed under the uniform norm topology. Let $f_*$ be the optimal predictor in $C$ defined in (1). By differentiating (1) at the optimal solution, and using the convexity of $C$, we obtain the following first order condition:
$$\mathbf{E}\, L_1'(f_*(X), Y)\, (f(X) - f_*(X)) \ge 0 \qquad \forall f \in C, \qquad (2)$$
where $L_1'$ is the derivative of $L$ with respect to its first variable. This inequality will be very important in our analysis.

Definition 2.1 The Bregman distance of $L$ (with respect to its first variable) is defined as:
$$d_L(a, b; y) = L(a, y) - L(b, y) - L_1'(b, y)(a - b).$$

It is well known (and easy to check) that for a convex function, its Bregman divergence is always non-negative. As mentioned in the introduction, we assume for simplicity that there exist positive constants $c_0$ and $c_1$ such that $0 < c_0 \le L_1'' \le c_1$, where $L_1''$ is the second order derivative of $L$ with respect to the first variable. Using a Taylor expansion of $L$, it is easy to see that we have the following inequality for $d_L$:
$$\frac{c_0}{2}(a - b)^2 \le d_L(a, b; y) \le \frac{c_1}{2}(a - b)^2. \qquad (3)$$

Now, $\forall f \in C$, we consider the following decomposition:
$$L(f(X), Y) - L(f_*(X), Y) = d_L(f(X), f_*(X); Y) + L_1'(f_*(X), Y)(f(X) - f_*(X)).$$
Clearly, by the non-negativity of the Bregman divergence and (2), the expectations of the two terms on the right hand side of the above equality are both non-negative. This fact is very important in our approach. The above decomposition gives the following decomposition of the loss function:
$$R(f) - R(f_*) = \mathbf{E}\, d_L(f(X), f_*(X); Y) + \mathbf{E}\, L_1'(f_*(X), Y)(f(X) - f_*(X)).$$
We thus obtain from (3):
$$\frac{c_0}{2}\, \mathbf{E}(f - f_*)^2(X) + \mathbf{E}\, L_1'(f_*(X), Y)(f(X) - f_*(X)) \le R(f) - R(f_*) \le \frac{c_1}{2}\, \mathbf{E}(f - f_*)^2(X) + \mathbf{E}\, L_1'(f_*(X), Y)(f(X) - f_*(X)). \qquad (4)$$
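The Bregman distance and the two-sided bound (3) are easy to verify numerically. The following sketch (ours; the restriction of the logistic loss to the interval $[-2, 2]$ is an illustrative choice, made so that the second derivative is bounded away from zero) checks non-negativity and both halves of (3) on a grid:

```python
import math

def L(a, y):
    # logistic loss as a function of the margin variable a, label y in {-1, +1}
    return math.log(1.0 + math.exp(-y * a))

def L1(a, y):
    # derivative with respect to the first argument
    return -y / (1.0 + math.exp(y * a))

def L2(a, y):
    # second derivative: sigma(ya) (1 - sigma(ya))
    s = 1.0 / (1.0 + math.exp(-y * a))
    return s * (1.0 - s)

def bregman(a, b, y):
    # d_L(a, b; y) = L(a, y) - L(b, y) - L_1'(b, y) (a - b)
    return L(a, y) - L(b, y) - L1(b, y) * (a - b)

grid = [-2 + 0.1 * i for i in range(41)]
c0 = min(L2(a, 1) for a in grid)   # lower bound on L'' over [-2, 2]
c1 = max(L2(a, 1) for a in grid)   # upper bound on L'' over [-2, 2]
for a in grid:
    for b in grid:
        d = bregman(a, b, 1)
        assert d >= -1e-12                          # convexity: d_L >= 0
        assert d <= c1 / 2 * (a - b) ** 2 + 1e-9    # upper half of (3)
        assert d >= c0 / 2 * (a - b) ** 2 - 1e-9    # lower half of (3)
print(c0, c1)
```

For squared loss the same check is trivial, since there $L_1'' = 2$ identically and $d_L(a, b; y) = (a - b)^2$ exactly.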
3 Empirical ratio inequality and generalization bounds

Given a positive definite self-adjoint operator $S: H \to H$, we define an inner product structure on $H$ as
$$\langle f, g \rangle_S = \langle f, S g \rangle.$$
The corresponding norm is $\|f\|_S = \langle f, S f \rangle^{1/2}$.

Given a positive number $\lambda$, and letting $I$ be the identity operator, we define the following self-adjoint operator on $H$:
$$S_\lambda = \mathbf{E}\, \phi_X \phi_X^T + \lambda I,$$
where we have used the matrix notation $\phi_x \phi_x^T$ to denote the self-adjoint operator $H \to H$ defined as $(\phi_x \phi_x^T) f = \phi_x \langle \phi_x, f \rangle$.

In addition, we consider the inner product space on the set of self-adjoint operators on $H$, with the inner product defined as
$$\langle A, B \rangle_\lambda = \mathrm{tr}\big( S_\lambda^{-1} A\, S_\lambda^{-1} B \big),$$
where $\mathrm{tr}$ is the trace of a linear operator (the sum of its eigenvalues). The corresponding norm is denoted as $\|\cdot\|_\lambda$.

We start our analysis with the following simple lemma:

Lemma 3.1 The following bounds are valid:
$$\sup_{f \in H} \frac{\big| (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y)(f(X) - f_*(X)) \big|}{\big( \mathbf{E}(f - f_*)^2(X) + \lambda \|f - f_*\|_H^2 \big)^{1/2}} \le \big\| (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y)\, S_\lambda^{-1/2} \phi_X \big\|,$$
$$\sup_{f \in H} \frac{\big| (\hat{\mathbf{E}} - \mathbf{E})\, (f - f_*)^2(X) \big|}{\mathbf{E}(f - f_*)^2(X) + \lambda \|f - f_*\|_H^2} \le \big\| \hat{\mathbf{E}}\, \phi_X \phi_X^T - \mathbf{E}\, \phi_X \phi_X^T \big\|_\lambda.$$

Proof. Note that $\mathbf{E}(f - f_*)^2(X) + \lambda \|f - f_*\|_H^2 = \|f - f_*\|_{S_\lambda}^2$. Therefore, letting $u = S_\lambda^{1/2}(f - f_*)$ and writing $f(X) - f_*(X) = \langle f - f_*, \phi_X \rangle$, we obtain from the Cauchy-Schwartz inequality
$$\big| (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y) \langle f - f_*, \phi_X \rangle \big| = \big| \big\langle u,\ (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y)\, S_\lambda^{-1/2} \phi_X \big\rangle \big| \le \|u\| \cdot \big\| (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y)\, S_\lambda^{-1/2} \phi_X \big\|.$$
This proves the first inequality.

To show the second inequality, we simply observe that the left hand side is the largest absolute eigenvalue of the operator $S_\lambda^{-1/2} \big( \hat{\mathbf{E}}\, \phi_X \phi_X^T - \mathbf{E}\, \phi_X \phi_X^T \big) S_\lambda^{-1/2}$, which is upper bounded by the square root of the trace of its square. Therefore the second inequality follows immediately from the definition of the $\|\cdot\|_\lambda$-norm.

The importance of Lemma 3.1 is that it bounds the behavior of any estimator $\hat f \in H$ (which can be sample dependent) in terms of the norm of the empirical mean of zero-mean Hilbert-space valued random vectors. The convergence rate of the latter can be easily estimated from the variance of the random vectors, and therefore we have significantly simplified the problem.
In order to estimate the variance of the random vectors on the right hand sides of Lemma 3.1, and hence characterize the behavior of the learning problem, we shall introduce the following notion of effective data dimensionality at a scale $\lambda$:
$$D(\lambda) = \mathbf{E}\, \big\| S_\lambda^{-1/2} \phi_X \big\|^2 = \mathrm{tr}\big( S_\lambda^{-1}\, \mathbf{E}\, \phi_X \phi_X^T \big).$$
Some properties of $D(\lambda)$ are listed in Appendix A, which can be used to estimate the quantity. In particular, for a finite dimensional space $H$, $D(\lambda)$ is upper bounded by the dimensionality of the space. Moreover, the equality can be achieved by letting $\lambda \to 0$ as long as $\mathbf{E}\, \phi_X \phi_X^T$ is full rank. Thus this quantity behaves like a (scale-sensitive) data dimension.

We also define the following quantities to measure the boundedness of the input data:
$$M = \sup_{x} \|\phi_x\|_H, \qquad b(\lambda) = \sup_{x} \big\| S_\lambda^{-1/2} \phi_x \big\|. \qquad (5)$$
It is easy to see that $b(\lambda) \le M / \sqrt{\lambda}$.
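The effective dimension is easy to compute when $H$ is finite dimensional. The following sketch (ours, with illustrative per-coordinate feature scales) builds features whose covariance is essentially diagonal, so its eigenvalues are the coordinate variances, and checks that $D(\lambda)$ is at most the dimension, at most $M^2/\lambda$, and decreasing in $\lambda$:

```python
import random

random.seed(1)
scales = [3.0, 1.0, 0.5, 0.1, 0.01]        # illustrative feature scales
d, n = len(scales), 4000
X = [[random.gauss(0, s) for s in scales] for _ in range(n)]

# independent coordinates: the eigenvalues of E phi phi^T are (approximately)
# the per-coordinate variances mu_k, so D(lambda) = sum_k mu_k / (mu_k + lambda)
mu = [sum(x[k] ** 2 for x in X) / n for k in range(d)]

def eff_dim(lam):
    return sum(m / (m + lam) for m in mu)

M2 = max(sum(v * v for v in x) for x in X)  # M^2 = sup_x ||phi_x||^2
for lam in [1e-3, 1e-2, 1e-1, 1.0]:
    assert eff_dim(lam) <= d                # bounded by the dimension
    assert eff_dim(lam) <= M2 / lam         # worst-case bound M^2 / lambda
assert eff_dim(1e-3) > eff_dim(1.0)         # monotone decreasing in lambda
print(round(eff_dim(1e-3), 3))
```

With these scales, the coordinates with variance well above $\lambda$ each contribute nearly 1 to $D(\lambda)$ and the tiny ones contribute almost nothing, which is the "scale-sensitive dimension" intuition.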
Lemma 3.2 Let $c_2 = \sup_{x, y} |L_1'(f_*(x), y)|$ and $\xi = L_1'(f_*(X), Y)\, S_\lambda^{-1/2} \phi_X$; then we have
$$\mathbf{E} \| \xi - \mathbf{E} \xi \|^2 \le \mathbf{E} \| \xi \|^2 \le c_2^2\, D(\lambda).$$
Let $\eta = S_\lambda^{-1/2} \phi_X \phi_X^T S_\lambda^{-1/2}$; then, in the Hilbert-Schmidt norm on operators,
$$\| \eta \| \le b(\lambda)^2, \qquad \mathbf{E} \| \eta - \mathbf{E} \eta \|^2 \le \mathbf{E} \| \eta \|^2 \le b(\lambda)^2\, D(\lambda).$$

Proof. Since $\| S_\lambda^{-1/2} \phi_x \|^2 \le b(\lambda)^2$ for all $x$ and $\mathbf{E} \| S_\lambda^{-1/2} \phi_X \|^2 = D(\lambda)$, we have
$$\mathbf{E} \| \xi \|^2 \le c_2^2\, \mathbf{E} \| S_\lambda^{-1/2} \phi_X \|^2 = c_2^2\, D(\lambda),$$
which gives the first inequality. For the rank-one operator $\eta$ we have the equality $\| \eta \| = \| S_\lambda^{-1/2} \phi_X \|^2 \le b(\lambda)^2$, leading to the second inequality. Since
$$\mathbf{E} \| \eta \|^2 = \mathbf{E} \| S_\lambda^{-1/2} \phi_X \|^4 \le b(\lambda)^2\, \mathbf{E} \| S_\lambda^{-1/2} \phi_X \|^2 = b(\lambda)^2\, D(\lambda),$$
similar to the proof of the first inequality, it is easy to check that this implies the third inequality.
Next we need the following version of the Bernstein inequality in Hilbert spaces.

Proposition 3.1 ([5]) Let $\xi_1, \ldots, \xi_n$ be zero-mean independent random vectors in a Hilbert space. If there exist $b, \sigma > 0$ such that for all natural numbers $l \ge 2$:
$$\mathbf{E} \| \xi_i \|^l \le \frac{l!}{2}\, \sigma^2\, b^{\,l-2},$$
then for all $\epsilon > 0$:
$$P\Big( \Big\| \frac{1}{n} \sum_{i=1}^{n} \xi_i \Big\| \ge \epsilon \Big) \le 2 \exp\Big( - \frac{n \epsilon^2}{2 (\sigma^2 + b \epsilon)} \Big).$$

In this paper, we shall use the following variant of the above bound for convenience: with probability of at least $1 - 2 e^{-t}$,
$$\Big\| \frac{1}{n} \sum_{i=1}^{n} \xi_i \Big\| \le \frac{2 b t}{n} + \sigma \sqrt{\frac{2 t}{n}}. \qquad (6)$$

Lemma 3.3 Under the assumptions of Lemma 3.2, let
$$\epsilon_n(\lambda, t) = \frac{2\, b(\lambda)\, t}{n} + \sqrt{\frac{2\, t\, D(\lambda)}{n}}.$$
Then with probability of at least $1 - 2 e^{-t}$:
$$\sup_{f \in H} \frac{\big| (\hat{\mathbf{E}} - \mathbf{E})\, L_1'(f_*(X), Y)(f(X) - f_*(X)) \big|}{\big( \mathbf{E}(f - f_*)^2(X) + \lambda \|f - f_*\|_H^2 \big)^{1/2}} \le c_2\, \epsilon_n(\lambda, t).$$
Similarly, with probability of at least $1 - 2 e^{-t}$, we have:
$$\sup_{f \in H} \frac{\big| (\hat{\mathbf{E}} - \mathbf{E})\, (f - f_*)^2(X) \big|}{\mathbf{E}(f - f_*)^2(X) + \lambda \|f - f_*\|_H^2} \le b(\lambda)\, \epsilon_n(\lambda, t).$$

Proof. The bounds are straightforward applications of (6) and the previous two lemmas. Due to the limitation of space, we skip the details.
We are now ready to derive the following main result of the paper:

Theorem 3.1 Assume $\sup_{x, y} |L_1'(f_*(x), y)| \le c_2$, and let $\beta = c_1 / c_0$, where $c_0$ and $c_1$ satisfy (3). Consider any sample-dependent estimator $\hat f$ such that $\hat f \in C$; that is, $\hat f \in C$ is a function of the training sample. Let $\epsilon_n(\lambda, t)$ be as in Lemma 3.3. If we choose $\lambda$ such that $\beta\, b(\lambda)\, \epsilon_n(\lambda, t) \le 1/2$, then with probability of at least $1 - 4 e^{-t}$, the generalization error is bounded as:
$$R(\hat f) \le R(f_*) + 2 \beta \big[ \hat R(\hat f) - \hat R(f_*) \big] + 4 \lambda \beta c_1 \| \hat f - f_* \|_H^2 + 2 \beta^2 c_2^2\, \epsilon_n(\lambda, t)^2.$$

Proof. We introduce the following notations for convenience:
$$A(f) = \mathbf{E}\, L_1'(f_*(X), Y)(f(X) - f_*(X)), \qquad Q(f) = \mathbf{E}(f - f_*)^2(X), \qquad Q_\lambda(f) = Q(f) + \lambda \| f - f_* \|_H^2,$$
and denote by $\hat A$, $\hat Q$ the corresponding empirical quantities. We obtain from Lemma 3.3 that, with probability of at least $1 - 4 e^{-t}$, both of the following hold:
$$\big| \hat A(\hat f) - A(\hat f) \big| \le c_2\, \epsilon_n(\lambda, t)\, Q_\lambda(\hat f)^{1/2}, \qquad \big| \hat Q(\hat f) - Q(\hat f) \big| \le b(\lambda)\, \epsilon_n(\lambda, t)\, Q_\lambda(\hat f).$$
Combining these two inequalities with (4) (applied to both the true and the empirical risks), and recalling that $A(\hat f) \ge 0$ by (2), the assumption $\beta\, b(\lambda)\, \epsilon_n(\lambda, t) \le 1/2$ yields a quadratic inequality in $N = Q_\lambda(\hat f)^{1/2}$ of the form
$$\frac{c_0}{4}\, N^2 \le \big[ \hat R(\hat f) - \hat R(f_*) \big] + c_2\, \epsilon_n(\lambda, t)\, N + \frac{\lambda c_1}{2} \| \hat f - f_* \|_H^2. \qquad (7)$$
Solving (7) for $N$ using elementary algebra, and substituting the result back into the upper bound of (4), immediately implies the theorem.

Note that both $D(\lambda)$ and $b(\lambda)\, \epsilon_n(\lambda, t)$ go to zero as $\lambda \to \infty$; therefore the assumption $\beta\, b(\lambda)\, \epsilon_n(\lambda, t) \le 1/2$ can be satisfied as long as we pick $\lambda$ larger than a critical value $\lambda_n$. Using the bound $b(\lambda) \le M / \sqrt{\lambda}$, we easily obtain the following result.

Corollary 3.1 Under the assumptions of Theorem 3.1, assume also that the diameter of $C$ is bounded: $\sup_{f, g \in C} \| f - g \|_H \le \Delta$, and let $\bar D(\lambda)$ be an upper bound of $D(\lambda)$. Then for all $\lambda$ and $n$ such that $n \ge 8 \beta^2 M^2 t / \lambda$, we have, with probability of at least $1 - 4 e^{-t}$:
$$R(\hat f) \le R(f_*) + 2 \beta \big[ \hat R(\hat f) - \hat R(f_*) \big] + 4 \lambda \beta c_1 \Delta^2 + 2 \beta^2 c_2^2 \Big( \frac{2 M t}{\sqrt{\lambda}\, n} + \sqrt{\frac{2\, t\, \bar D(\lambda)}{n}} \Big)^2.$$
4 Examples

We will only consider the empirical estimator that minimizes $\hat R$ in $C$. In this case, $\hat R(\hat f) - \hat R(f_*) \le 0$ in Corollary 3.1. We shall thus only focus on the remaining terms.

Worst case effective dimensionality and generalization

In the worst case, we have $D(\lambda) \le M^2 / \lambda$. Therefore, if $n \ge t$, we can always let $\lambda = M^2 \sqrt{t / n}$ in Corollary 3.1 and obtain, with probability at least $1 - 4 e^{-t}$:
$$R(\hat f) \le R(f_*) + O\Big( \big( \beta c_1 \Delta^2 M^2 + \beta^2 c_2^2 \big) \sqrt{t / n} \Big).$$

Finite dimensional problems

We can use the bound $D(\lambda) \le \dim H$. Therefore we can let $\lambda \sim \dim H / n$ in Corollary 3.1 and obtain
$$R(\hat f) \le R(f_*) + O\Big( \frac{t\, \dim H}{n} \Big),$$
with a constant factor depending on $\beta$, $c_1$, $c_2$, $M$, and $\Delta$. It is well known that a rate of the order $O(\dim H / n)$ is optimal in this case.
Smoothing splines

For simplicity, we only consider 1-dimensional problems. For smoothing splines, the corresponding Hilbert space consists of functions satisfying the smoothness condition that $\int (f^{(s)}(x))^2\, dx$ is bounded ($f^{(s)}$ is the $s$-th derivative of $f$). We may consider periodic functions (or their restrictions to an interval), and the condition then corresponds to a decaying Fourier-coefficient condition. Specifically, the space can be regarded as the reproducing kernel Hilbert space with kernel
$$K(x, x') = 1 + \sum_{k \ge 1} \frac{2}{k^{2s}} \big( \cos(2 \pi k x) \cos(2 \pi k x') + \sin(2 \pi k x) \sin(2 \pi k x') \big),$$
whose eigenvalues decay as $\lambda_k = O(k^{-2s})$. Now, using Proposition A.3, we have
$$D(\lambda) \le p + \frac{1}{\lambda} \sum_{k > p} \lambda_k = O\big( \lambda^{-1/(2s)} \big),$$
where $p$ is the largest integer such that $\lambda_p \ge \lambda$. Choosing $\lambda \sim n^{-2s/(2s+1)}$ in Corollary 3.1, we obtain a bound of the following form (with probability at least $1 - 4 e^{-t}$):
$$R(\hat f) \le R(f_*) + O\big( t\, n^{-2s/(2s+1)} \big).$$
This rate matches the best possible convergence rate for any data-dependent estimator.$^2$
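The claimed $D(\lambda) = O(\lambda^{-1/(2s)})$ scaling for eigenvalues $\lambda_k \asymp k^{-2s}$ can be checked numerically. In this sketch (ours; the truncation point is an implementation choice), halving $\log_{10} \lambda$ by two decades should multiply $D(\lambda)$ by roughly $10^{1/s}$, i.e. a factor of about 10 when $s = 1$:

```python
def eff_dim_power(s, lam, kmax=300_000):
    # D(lambda) = sum_k mu_k / (mu_k + lam) with mu_k = k^(-2s)
    total = 0.0
    for k in range(1, kmax + 1):
        mu = k ** (-2.0 * s)
        total += mu / (mu + lam)
    return total

s = 1.0
d1 = eff_dim_power(s, 1e-2)
d2 = eff_dim_power(s, 1e-4)
ratio = d2 / d1     # should be close to 10^(2 / (2s)) = 10 for s = 1
print(d1, d2, ratio)
```

For $s = 1$ the sum is approximately $\pi / (2 \sqrt{\lambda})$, so the printed values are near 15 and 157 and their ratio is near 10, consistent with the $\lambda^{-1/(2s)}$ growth used above.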
Exponential kernel

The exponential kernel has recently been popularized by Vapnik. Again, for simplicity, we consider 1-dimensional problems where $x \in [-1, 1]$. The kernel function is given by
$$K(x, x') = e^{x x'} = \sum_{k \ge 0} \frac{1}{k!}\, x^k (x')^k,$$
so that $\sup_x \psi_k(x)^2 \le 1/k!$ in the feature representation of Proposition A.3. We obtain an upper bound
$$D(\lambda) \le p + \frac{1}{\lambda} \sum_{k > p} \frac{1}{k!},$$
where $p$ is the smallest integer such that $1/p! \le \lambda$, implying that the effective dimension is at most $O\big( \ln(1/\lambda) / \ln \ln(1/\lambda) \big)$ for exponential kernels.
5 Conclusion

In this paper, we introduced a concept of scale-sensitive effective data dimension, and used it to derive generalization bounds for some kernel learning problems. The resulting convergence rates are optimal for various learning problems. We have also shown that the effective dimension at the appropriately chosen optimal scale can be sample-size dependent, and behaves like $O(\sqrt{n})$ in the worst case.

This shows that, despite the claim that a kernel method learns a predictor from an infinite dimensional Hilbert space, for a fixed sample size the effective dimension is rather small. This in fact indicates that kernel methods are not any more powerful than learning in an appropriately chosen finite dimensional space. This observation also raises the following computational question: given $n$ samples, kernel methods use $n$ parameters in the computation, but as we have shown, the effective number of parameters (the effective dimension) is not more than $O(\sqrt{n})$. Therefore it could be possible to significantly reduce the computational cost of kernel methods by explicitly parameterizing the effective dimensions.

$^2$The lower bound is well-known in the non-parametric statistical literature (for example, see [3]).
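One standard way to exploit this observation in practice (our illustration; the Nystrom method is not developed in this paper) is a low-rank kernel approximation: represent the full kernel through a small set of landmark points, so that the number of explicit parameters is closer to the effective dimension than to $n$. A minimal self-contained sketch, with the landmark choice and ridge term being our assumptions:

```python
import math, random

random.seed(2)

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * (x - z) ** 2)

xs = [random.uniform(-1, 1) for _ in range(200)]
m = 20
land = xs[:m]   # landmark points; this naive choice is illustrative only

def inv(A):
    # tiny Gauss-Jordan inverse with partial pivoting, for the m x m block
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

# Nystrom: K ~= K_nm K_mm^{-1} K_mn; a small ridge stabilizes the inverse
Kmm = [[rbf(a, b) + (1e-8 if i == j else 0.0) for j, b in enumerate(land)]
       for i, a in enumerate(land)]
Kmm_inv = inv(Kmm)

def k_approx(x, z):
    kx = [rbf(x, a) for a in land]
    kz = [rbf(z, a) for a in land]
    return sum(kx[i] * Kmm_inv[i][j] * kz[j]
               for i in range(m) for j in range(m))

err = max(abs(rbf(x, z) - k_approx(x, z)) for x in xs[:50] for z in xs[:50])
print(err)
```

Because the RBF kernel's eigenvalues decay quickly, even 20 landmarks out of 200 points reproduce the kernel entries to small error, which is the kind of parameter reduction the conclusion suggests.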
A Properties of the scale-sensitive effective data dimension

We list some properties of the scale-sensitive data dimension $D(\lambda)$. Due to the limitation of space, we shall skip the proofs. The following proposition implies that the quantity $D(\lambda)$ behaves like a dimension if the underlying space is finite dimensional.

Proposition A.1 If $H$ is a finite dimensional space, then $D(\lambda) \le \dim H$. Moreover, for all Hilbert spaces $H$, we have the bound $D(\lambda) \le M^2 / \lambda$, where $M$ is defined in (5).

Proposition A.2 Consider the complete set of ortho-normal eigen-pairs $(\lambda_k, \psi_k)$ of the operator $\mathbf{E}\, \phi_X \phi_X^T$, where $\langle \psi_j, \psi_k \rangle = 1$ if $j = k$ and $0$ otherwise. This gives the decomposition $\mathbf{E}\, \phi_X \phi_X^T = \sum_k \lambda_k\, \psi_k \psi_k^T$, where $\lambda_k \ge 0$. We have the identity:
$$D(\lambda) = \sum_k \frac{\lambda_k}{\lambda_k + \lambda}.$$

Proposition A.3 Consider the following feature space decomposition of the kernel: $K(x, x') = \langle \phi_x, \phi_{x'} \rangle = \sum_k \psi_k(x)\, \psi_k(x')$, where each $\psi_k$ is a real-valued function. If $\lambda_1 \ge \lambda_2 \ge \cdots$, then we have the bound $\sum_{j > p} \lambda_j \le \sum_{j > p} \sup_x \psi_j(x)^2$. This implies, for every integer $p \ge 0$,
$$D(\lambda) \le p + \frac{1}{\lambda} \sum_{j > p} \sup_x \psi_j(x)^2.$$
In many cases, we can find such a so-called feature representation of the kernel function $K(x, x') = \langle \phi_x, \phi_{x'} \rangle$. In such cases the eigenvalues $\lambda_k$ can be easily bounded.
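The identity of Proposition A.2 and the tail bound of Proposition A.3 are simple to check for a concrete eigenvalue sequence. In this sketch (ours; the geometric eigenvalues $2^{-k}$ are an illustrative choice), the bound $p + \lambda^{-1} \sum_{k > p} \lambda_k$ is verified for every cutoff $p$ and minimized over $p$:

```python
mu = [2.0 ** (-k) for k in range(1, 40)]   # eigenvalues of E phi phi^T (illustrative)

def eff_dim(lam):
    # Proposition A.2: D(lambda) = sum_k lambda_k / (lambda_k + lambda)
    return sum(m / (m + lam) for m in mu)

def tail_bound(lam, p):
    # Proposition A.3 style bound: D(lambda) <= p + (1/lambda) sum_{k > p} lambda_k
    return p + sum(mu[p:]) / lam

lam = 1e-3
assert all(eff_dim(lam) <= tail_bound(lam, p) + 1e-12 for p in range(len(mu)))
best = min(tail_bound(lam, p) for p in range(len(mu)))
print(eff_dim(lam), best)
```

For geometrically decaying eigenvalues the best cutoff sits where $\lambda_p \approx \lambda$, and the bound is then within a small additive constant of the exact effective dimension.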
References

[1] W.S. Lee, P.L. Bartlett, and R.C. Williamson. The importance of convexity in learning with squared loss. IEEE Trans. Inform. Theory, 44(5):1974-1980, 1998.

[2] Shahar Mendelson. Learning relatively small classes. In COLT 01, pages 273-288, 2001.

[3] Charles J. Stone. Optimal global rates of convergence for nonparametric regression. Annals of Statistics, 10:1040-1053, 1982.

[4] S.A. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.

[5] Vadim Yurinsky. Sums and Gaussian Vectors. Springer-Verlag, Berlin, 1995.
Learning to Classify Galaxy Shapes Using the
EM Algorithm
Sergey Kirshner
Information and Computer Science
University of California
Irvine, CA 92697-3425
[email protected]
Igor V. Cadez
Sparta Inc.,
23382 Mill Creek Drive #100,
Laguna Hills, CA 92653
igor [email protected]
Padhraic Smyth
Information and Computer Science
University of California
Irvine, CA 92697-3425
[email protected]
Chandrika Kamath
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory
Livermore, CA 94551
[email protected]
Abstract
We describe the application of probabilistic model-based learning to the
problem of automatically identifying classes of galaxies, based on both
morphological and pixel intensity characteristics. The EM algorithm can
be used to learn how to spatially orient a set of galaxies so that they
are geometrically aligned. We augment this "ordering-model" with a
mixture model on objects, and demonstrate how classes of galaxies can
be learned in an unsupervised manner using a two-level EM algorithm.
The resulting models provide highly accurate classification of galaxies in
cross-validation experiments.
1
Introduction and Background
The field of astronomy is increasingly data-driven as new observing instruments permit the
rapid collection of massive archives of sky image data. In this paper we investigate the
problem of identifying bent-double radio galaxies in the FIRST (Faint Images of the Radio
Sky at Twenty-cm) Survey data set [1]. FIRST produces large numbers of radio images of
the deep sky using the Very Large Array at the National Radio Astronomy Observatory. It
is scheduled to cover more than 10,000 square degrees of the northern and southern caps (skies). Of particular scientific interest to astronomers is the identification and cataloging of sky objects with a "bent-double" morphology, indicating clusters of galaxies ([8], see Figure 1). Due to the very large number of observed deep-sky radio sources (on the order of 10^6 so far), it is infeasible for the astronomers to label all of them manually.
Figure 1: 4 examples of radio-source galaxy images. The two on the left are labelled as "bent-doubles" and the two on the right are not. The configurations on the left have more "bend" and symmetry than the two non-bent-doubles on the right.

The data from the FIRST Survey (http://sundog.stsci.edu/) is available in both raw image format and in the form of a catalog of features that have been automatically derived from the raw images by an image analysis program [8]. Each entry corresponds to a single detectable "blob" of bright intensity relative to the sky background: these entries are called components. The "blob" of intensities for each component is fitted with an ellipse. The
ellipses and intensities for each component are described by a set of estimated features such
as sky position of the centers (RA (right ascension) and Dec (declination)), peak density
flux and integrated flux, root mean square noise in pixel intensities, lengths of the major and minor axes, and the position angle of the major axis of the ellipse counterclockwise from the north. The goal is to find sets of components that are spatially close and that resemble
a bent-double. In the results in this paper we focus on candidate sets of components that
have been detected by an existing spatial clustering algorithm [3] where each set consists
of three components from the catalog (three ellipses). As of the year 2000, the catalog
contained over 15,000 three-component configurations and over 600,000 configurations total. The set which we use to build and evaluate our models consists of a total of 128 examples of bent-double galaxies and 22 examples of non-bent-double configurations. A configuration is labelled as a bent-double if two out of three astronomers agree to label it as such. Note that the visual identification process is the bottleneck in the process since it requires significant time and effort from the scientists, and is subjective and error-prone,
motivating the creation of automated methods for identifying bent-doubles.
Three-component bent-double configurations typically consist of a center or "core" component and two other side components called "lobes". Previous work on automated classification of three-component candidate sets has focused on the use of decision-tree classifiers using a variety of geometric and image intensity features [3]. One of the limitations of the decision-tree approach is its relative inflexibility in handling uncertainty about the object being classified, e.g., the identification of which of the three components should be treated as the core of a candidate object. A bigger limitation is the fixed size of the feature vector. A primary motivation for the development of a probabilistic approach is to provide a framework that can handle uncertainties in a flexible, coherent manner.
2
Learning to Match Orderings using the EM Algorithm
We denote a three-component configuration by $C = (c_1, c_2, c_3)$, where the $c_i$'s are the components (or "blobs") described in the previous section. Each component $c_x$ is represented as a feature vector, where the specific features will be defined later. Our approach focuses on building a probabilistic model for bent-doubles: $p(C) = p(c_1, c_2, c_3)$, the likelihood of the observed $c_i$ under a bent-double model where we implicitly condition (for now) on the class "bent-double."
By looking at examples of bent-double galaxies and by talking to the scientists studying them, we have been able to establish a number of potentially useful characteristics
of the components, the primary one being geometric symmetry. In bent-doubles, two of
the components will look close to being mirror images of one another with respect to a
line through the third component. We will call mirror-image components lobe components, and the other one the core component. It also appears that non-bent-doubles either don't exhibit such symmetry, or the angle formed at the core component is too straight: the configuration is not "bent" enough. Once the core component is identified, we can calculate symmetry-based features. However, identifying the most plausible core component requires either an additional algorithm or human expertise. In our approach we use a probabilistic framework that averages over different possible orderings weighted by their probability given the data.

Figure 2: Possible orderings for a hypothetical bent-double. A good choice of ordering would be either 1 or 2. (Each of the six panels assigns the labels "core", "lobe 1", and "lobe 2" to the three components in a different way.)
In order to define the features, we first need to determine the mapping of the components to the labels "core", "lobe 1", and "lobe 2" ($c$, $l_1$, and $l_2$ for short). We will call such a mapping an ordering. Figure 2 shows an example of possible orderings for a configuration. We can number the orderings $1, \ldots, 6$. We can then write
$$p(C) = \sum_{k=1}^{6} p(c_c, c_{l_1}, c_{l_2} \mid \omega = k)\, p(\omega = k), \qquad (1)$$
i.e., a mixture over all possible orientations. Each ordering is assumed a priori to be equally likely, i.e., $p(\omega = k) = 1/6$. Intuitively, for a configuration that clearly looks like a bent-double the terms in the mixture corresponding to the correct ordering would dominate, while the other orderings would have much lower probability.
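Equation (1) is an ordinary finite mixture, so both the configuration likelihood and the posterior over orderings are a few lines of code. A minimal sketch (ours: one scalar feature per ordering and a Gaussian class-conditional with made-up parameters, purely to show the normalization):

```python
import math

def gauss(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

K = 6
prior = [1.0 / K] * K            # p(omega = k) = 1/6

def likelihood(feats_per_ordering, mean, var):
    # p(C) = sum_k p(omega = k) p(f_k(C)); here f_k(C) is a single scalar
    joint = [prior[k] * gauss(feats_per_ordering[k], mean, var) for k in range(K)]
    total = sum(joint)                       # mixture likelihood p(C)
    posterior = [j / total for j in joint]   # p(omega = k | C)
    return total, posterior

# a configuration whose ordering-3 feature matches the model best
feats = [2.1, 1.7, 0.1, 2.5, 1.9, 2.2]
p_c, post = likelihood(feats, mean=0.0, var=0.25)
print(p_c, post)
```

The posterior concentrates on the ordering whose feature value fits the model, which is the "correct ordering dominates" behavior described above.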
We represent each component $c_x$ by $M$ features (we used $M = 3$). Note that the features can only be calculated conditioned on a particular mapping, since they rely on properties of the (assumed) core and lobe components. We denote by $f_{mk}(C)$ the values corresponding to the $m$th feature for configuration $C$ under the ordering $\omega = k$, and by $f_{mkj}(C)$ we denote the feature value of component $j$: $f_{mk}(C) = (f_{mk1}(C), \ldots, f_{mkB_m}(C))$ (in our case, $B_m = 3$ is the number of components). Conditioned on a particular mapping $\omega = k$, where $x \in \{c, l_1, l_2\}$ and $c, l_1, l_2$ are defined in a cyclical order, our features are defined as:

- $f_{1k}(C)$: Log-transformed angle, the angle formed at the center of the component (a vertex of the configuration) mapped to label $x$;
- $f_{2k}(C)$: Logarithms of side ratios, $|$center of $x$ to center of next($x$)$| / |$center of $x$ to center of prev($x$)$|$;
- $f_{3k}(C)$: Logarithms of intensity ratios, peak flux of next($x$) / peak flux of prev($x$),

and so $f(C \mid \omega = k) = (f_{1k}(C), f_{2k}(C), f_{3k}(C))$ for a 9-dimensional feature vector in total. Other features are of course also possible. For our purposes in this paper this particular set appears to capture the more obvious visual properties of bent-double galaxies.
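As a concrete illustration of one vertex's worth of these features (our own minimal implementation; the attribute names and the toy coordinates are assumptions, not the catalog's actual schema):

```python
import math

def features_for_vertex(core, lobe1, lobe2):
    """core, lobe1, lobe2: dicts with 'pos' = (x, y) and 'peak_flux'.
    Returns (log angle, log side ratio, log intensity ratio) for the vertex
    at `core`; the lobe vertices follow the same pattern with roles rotated
    cyclically."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1])
    def norm(v):
        return math.hypot(v[0], v[1])
    u = sub(lobe1['pos'], core['pos'])
    v = sub(lobe2['pos'], core['pos'])
    cosang = (u[0] * v[0] + u[1] * v[1]) / (norm(u) * norm(v))
    angle = math.acos(max(-1.0, min(1.0, cosang)))      # angle at the vertex
    f1 = math.log(angle)                                # log-transformed angle
    f2 = math.log(norm(u) / norm(v))                    # log of side ratio
    f3 = math.log(lobe1['peak_flux'] / lobe2['peak_flux'])  # log intensity ratio
    return f1, f2, f3

core = {'pos': (0.0, 0.0), 'peak_flux': 5.0}
l1 = {'pos': (-1.0, 1.0), 'peak_flux': 2.0}
l2 = {'pos': (1.0, 1.0), 'peak_flux': 2.0}
f1, f2, f3 = features_for_vertex(core, l1, l2)
print(f1, f2, f3)
```

For this perfectly symmetric toy configuration the side-ratio and intensity-ratio features are exactly zero, showing why log ratios are a natural encoding of the mirror symmetry the astronomers look for.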
For a set $D = \{d_1, \ldots, d_N\}$ of configurations, under an i.i.d. assumption for configurations, we can write the likelihood as
$$P(D) = \prod_{i=1}^{N} \sum_{k=1}^{K} P(\omega_i = k)\, P\big( f_{1k}(d_i), \ldots, f_{Mk}(d_i) \big),$$
where $\omega_i$ is the ordering for configuration $d_i$. While in the general case one can model
P (f1k (di ) , . . . , fM k (di )) as a full joint distribution, for the results reported in this paper
we make a number of simplifying assumptions, motivated by the fact that we have relatively little labelled training data available for model building. First, we assume that the
fmk (di ) are conditionally independent. Second, we are also able to reduce the number
of components for each fmk (di ) by noting functional dependencies. For example, given
two angles of a triangle, we can uniquely determine the third one. We also assume that
the remaining components for each feature are conditionally independent. Under these
assumptions the multivariate joint distribution P (f1k (di ) , . . . , fM k (di )) is factored into
a product of simple distributions, which (for the purposes of this paper) we model using
Gaussians. If we know for every training example which component should be mapped to
label c, we can then unambiguously estimate the parameters for each of these distributions.
In practice, however, the identity of the core component is unknown for each object. Thus,
we use the EM algorithm to automatically estimate the parameters of the above model.
We begin by randomly assigning an ordering to each object. For each subsequent iteration
the E-step consists of estimating a probability distribution over possible orderings for each
object, and the M-step estimates the parameters of the feature-distributions using the probabilistic ordering information from the E-step. In practice we have found that the algorithm
converges relatively quickly (in 20 to 30 iterations) on both simulated and real data. It is
somewhat surprising that this algorithm can reliably "learn" how to align a set of objects,
without using any explicit objective function for alignment, but instead based on the fact
that feature values for certain orderings exhibit a certain self-consistency relative to the
model. Intuitively it is this self-consistency that leads to higher-likelihood solutions and
that allows EM to effectively align the objects by maximizing the likelihood.
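The E-step/M-step alternation just described can be sketched compactly. This is our own simplified version, not the paper's code: two possible orderings instead of six, a single Gaussian feature instead of nine, and synthetic data in which each object stores its "correct" and "scrambled" feature in a random order:

```python
import math, random

random.seed(3)

def gauss(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# synthetic objects: the correct orientation yields a feature near +2.0,
# the scrambled one a feature near -2.0; storage order is random
data = []
for _ in range(300):
    good, bad = random.gauss(2.0, 0.3), random.gauss(-2.0, 0.3)
    data.append((good, bad) if random.random() < 0.5 else (bad, good))

def em(data, iters=50):
    mean, var = 0.5, 4.0          # deliberately poor initialization
    for _ in range(iters):
        # E-step: responsibility of each of the 2 orderings per object
        w_sum, x_sum, xx_sum = 0.0, 0.0, 0.0
        for f0, f1 in data:
            p0, p1 = gauss(f0, mean, var), gauss(f1, mean, var)
            r0 = p0 / (p0 + p1)
            for r, f in ((r0, f0), (1.0 - r0, f1)):
                w_sum += r
                x_sum += r * f
                xx_sum += r * f * f
        # M-step: refit the Gaussian to the responsibility-weighted features
        mean = x_sum / w_sum
        var = max(xx_sum / w_sum - mean * mean, 1e-6)
    return mean, var

mean, var = em(data)
print(mean, var)
```

Starting from a nearly uninformative model, the fitted mean drifts toward one mode and then locks on: the model recovers both the alignment of each object and the feature distribution, with no explicit alignment objective, exactly the self-consistency effect discussed above.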
After the model has been estimated, the likelihood of new objects can also be calculated
under the model, where the likelihood now averages over all possible orderings weighted
by their probability given the observed features.
The problem described above is a specific instance of a more general feature unscrambling problem. In our case, we assume that configurations of three 3-dimensional components (i.e., 3 features each) are generated by some distribution. Once the objects are generated, the orders of their components are permuted or scrambled. The task is then to simultaneously learn the parameters of the original distributions and the scrambling for each object. In the more general form, each configuration consists of $L$ $M$-dimensional components. Since there are $L!$ possible orderings of $L$ components, the problem becomes computationally intractable if $L$ is large. One solution is to restrict the types of possible scrambles (to cyclic shifts, for example).
3 Automatic Galaxy Classification

We used the algorithm described in the previous section to estimate the parameters of features and orderings of the bent-double class from labelled training data and then to rank candidate objects according to their likelihood under the model. We used leave-one-out cross-validation to test the classification ability of this supervised model: for each of the 150 examples we build a model using the positive examples from the set of 149 "other" examples, and then score the "left-out" example with this model. The examples are then sorted in decreasing order by their likelihood score (averaging over different possible orderings) and the results are analyzed using a receiver operating characteristic (ROC) methodology. We use A_ROC, the area under the curve, as a measure of goodness of the model, where a perfect model would have A_ROC = 1 and random performance corresponds to A_ROC = 0.5. The supervised model, using EM for learning ordering models, has a cross-validated A_ROC score of 0.9336 (Figure 3) and appears to be quite useful at detecting bent-double galaxies.

Figure 3: ROC plot for a model using angle, ratio of sides, and ratio of intensities, as features, and learned using ordering-EM with labelled data. (Axes: false positive rate versus true positive rate, both from 0 to 1.)
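The A_ROC figure of merit can be computed directly from the ranked likelihood scores. A small self-contained sketch (our own implementation of the pairwise rank form of the AUC, with made-up scores; ties count as one half):

```python
def auc(pos_scores, neg_scores):
    # probability that a random positive outranks a random negative;
    # equal to the area under the ROC curve
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [3.2, 2.9, 2.7, 1.1]   # likelihood scores of labelled bent-doubles
neg = [2.8, 0.9, 0.4]        # scores of non-bent-doubles
print(auc(pos, neg))
```

Here one positive is outranked by one negative, so 10 of the 12 positive-negative pairs are ordered correctly and the AUC is 10/12; a perfect separation would give 1.0 and a random ranking about 0.5.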
4
Model-Based Galaxy Clustering
A useful technique in understanding astronomical image data is to cluster image objects
based on their morphological and intensity properties. For example, consider how one
might cluster the image objects in Figure 1 into clusters, where we have features on angles,
intensities, and so forth. Just as with classi?cation, clustering of the objects is impeded by
not knowing which of the ?blobs? corresponds to the true ?core? component.
From a probabilistic viewpoint, clustering can be treated as introducing another level of
hidden variables, namely the unknown class (or cluster) identity of each object. We can
generalize the EM algorithm for orderings (Section 2) to handle this additional hidden
level. The model is now a mixture of clusters where each cluster is modelled as a mixture
of orderings. This leads to a more complex two-level EM algorithm than that presented
in Section 2, where at the inner-level the algorithm is learning how to orient the objects,
and at the outer level the algorithm is learning how to group the objects into C classes.
Space does not permit a detailed presentation of this algorithm?however, the derivation is
straightforward and produces intuitive update rules such as:
$$\hat{\mu}_{cmj} \;=\; \frac{1}{N\,\hat{P}(c_l = c \mid \Theta)} \sum_{i=1}^{N} \sum_{k=1}^{K} P(c_{l_i} = c \mid \omega_i = k, D, \Theta)\, P(\omega_i = k \mid D, \Theta)\, f_{mkj}(d_i)$$

where $\hat{\mu}_{cmj}$ is the mean for the cth cluster ($1 \le c \le C$), the mth feature ($1 \le m \le M$), and the jth component of $f_{mk}(d_i)$, and $\omega_i = k$ corresponds to ordering $k$ for the ith object.
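In code, the update above is a responsibility-weighted average of the features; a minimal NumPy sketch (the array names and shapes are our own assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def update_cluster_means(feat, resp_cluster, resp_order):
    """M-step mean update for a mixture of clusters, each a mixture of orderings.

    feat[i, k, m, j]       : f_mkj(d_i), the (m, j) feature of object i under ordering k
    resp_cluster[i, k, c]  : P(cluster c | ordering k, object i)  -- E-step output
    resp_order[i, k]       : P(ordering k | object i)             -- E-step output
    Returns mu[c, m, j], the responsibility-weighted feature means.
    """
    # joint responsibility P(cluster c, ordering k | d_i), shape (N, K, C)
    joint = resp_cluster * resp_order[:, :, None]
    # normalizer N * P_hat(c), shape (C,)
    weight = joint.sum(axis=(0, 1))
    # weighted sum over objects i and orderings k
    weighted_sum = np.einsum('ikc,ikmj->cmj', joint, feat)
    return weighted_sum / weight[:, None, None]
```

With a single cluster and a single ordering this reduces to the plain sample mean, which is a quick sanity check on the weighting.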
We applied this algorithm to the data set of 150 sky objects, where unlike the results in
Section 3, the algorithm now had no access to the class labels. We used the Gaussian
conditional-independence model as before, and grouped the data into K = 2 clusters.
Figures 4 and 5 show the highest likelihood objects, out of 150 total objects, under the
Figure 4: The 8 objects with the highest likelihood conditioned on the model for the larger of the two clusters learned by the unsupervised algorithm. All eight are labelled bent-double.
Figure 5: The 8 objects with the highest likelihood conditioned on the model for the smaller of the two clusters learned by the unsupervised algorithm. Six are labelled non-bent-double and two bent-double.
Figure 6: A scatter plot of the ranking (1 to 150) from the unsupervised model (vertical axis) versus that of the supervised model (horizontal axis), with bent-doubles and non-bent-doubles marked separately.
models for the larger cluster and smaller cluster respectively. The larger cluster is clearly a bent-double cluster: 89 of the 150 objects are more likely to belong to this cluster under the model, and 88 of the 89 objects in this cluster have the bent-double label. In other words, the unsupervised algorithm has discovered a cluster that corresponds to "strong examples" of bent-doubles relative to the particular feature space and model. In fact the one non-bent-double that is assigned to this group may well have been mislabelled (image not shown here). The objects in Figure 5 are clearly inconsistent with the general visual pattern of bent-doubles, and this cluster consists of a mixture of non-bent-doubles and "weaker" bent-double galaxies. The objects in Figure 5 that are labelled as bent-doubles seem quite atypical compared to the bent-doubles in Figure 4.

A natural hypothesis is that cluster 1 (88 bent-doubles) in the unsupervised model is in fact very similar to the supervised model learned using the labelled set of 128 bent-doubles in Section 3. Indeed the parameters of the two Gaussian models agree quite closely, and the similarity of the two models is illustrated clearly in Figure 6, where we plot the likelihood-based ranks of the unsupervised model versus those of the supervised model. Both models are in close agreement and both are clearly performing well in terms of separating the objects in terms of their class labels.
5 Related Work and Future Directions
A related earlier paper is Kirshner et al. [6], where we presented a heuristic algorithm for solving the orientation problem for galaxies. The generalization to an EM framework in this paper is new, as is the two-level EM algorithm for clustering objects in an unsupervised manner.

There is a substantial body of work in computer vision on solving a variety of different object matching problems using probabilistic techniques; see Mjolsness [7] for early ideas and Chui et al. [2] for a recent application in medical imaging. Our work here differs in
a number of respects. One important difference is that we use EM to learn a model for
the simultaneous correspondence of N objects, using both geometric and intensity-based
features, whereas prior work in vision has primarily focused on matching one object to
another (essentially the N = 2 case). An exception is the recent work of Frey and Jojic
[4, 5] who used a similar EM-based approach to simultaneously cluster images and estimate
a variety of local spatial deformations. The work described in this paper can be viewed as
an extension and application of this general methodology to a real-world problem in galaxy
classification.

Earlier work on bent-double galaxy classification used decision tree classifiers based on a
variety of geometric and intensity-based features [3]. In future work we plan to compare
the performance of this decision tree approach with the probabilistic model-based approach
proposed in this paper. The model-based approach has some inherent advantages over
a decision-tree model for these types of problems. For example, it can directly handle
objects in the catalog with only 2 blobs or with 4 or more blobs by integrating over missing
intensities and over missing correspondence information using mixture models that allow
for missing or extra "blobs". Being able to classify such configurations automatically is of significant interest to the astronomers.
Acknowledgments
This work was performed under a sub-contract from the ASCI Scientific Data Management Project of the Lawrence Livermore National Laboratory. The work of S. Kirshner
and P. Smyth was also supported by research grants from NSF (award IRI-9703120), the
Jet Propulsion Laboratory, IBM Research, and Microsoft Research. I. Cadez was supported
by a Microsoft Graduate Fellowship. The work of C. Kamath was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore
National Laboratory under contract No. W-7405-Eng-48. We gratefully acknowledge our
FIRST collaborators, in particular, Robert H. Becker for sharing his expertise on the subject.
References
[1] R. H. Becker, R. L. White, and D. J. Helfand. The FIRST Survey: Faint Images of the
Radio Sky at Twenty-cm. Astrophysical Journal, 450:559, 1995.
[2] H. Chui, L. Win, R. Schultz, J. S. Duncan, and A. Rangarajan. A unified feature registration method for brain mapping. In Proceedings of Information Processing in Medical Imaging, pages 300-314. Springer-Verlag, 2001.
[3] I. K. Fodor, E. Cantú-Paz, C. Kamath, and N. A. Tang. Finding bent-double radio
galaxies: A case study in data mining. In Proceedings of the Interface: Computer
Science and Statistics Symposium, volume 33, 2000.
[4] B. J. Frey and N. Jojic. Estimating mixture models of images and inferring spatial
transformations using the EM algorithm. In Proceedings of IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, 1999.
[5] N. Jojic and B. J. Frey. Topographic transformation as a discrete latent variable. In
Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[6] S. Kirshner, I. V. Cadez, P. Smyth, C. Kamath, and E. Cantú-Paz. Probabilistic model-based detection of bent-double radio galaxies. In Proceedings 16th International Conference on Pattern Recognition, volume 2, pages 499-502, 2002.
[7] E. Mjolsness. Bayesian inference on visual grammars by neural networks that optimize. Technical Report YALEU/DCS/TR-854, Department of Computer Science, Yale
University, May 1991.
[8] R. L. White, R. H. Becker, D. J. Helfand, and M. D. Gregg. A catalog of 1.4 GHz radio
sources from the FIRST Survey. Astrophysical Journal, 475:479, 1997.
Dyadic Classification Trees
via
Structural Risk Minimization
Clayton Scott and Robert Nowak
Department of Electrical and Computer Engineering
Rice University
Houston, TX 77005
{cscott, nowak}@rice.edu
Abstract
Classification trees are one of the most popular types of classifiers, with
ease of implementation and interpretation being among their attractive
features. Despite the widespread use of classification trees, theoretical
analysis of their performance is scarce. In this paper, we show that a new
family of classification trees, called dyadic classification trees (DCTs),
are near optimal (in a minimax sense) for a very broad range of classification problems. This demonstrates that other schemes (e.g., neural
networks, support vector machines) cannot perform significantly better
than DCTs in many cases. We also show that this near optimal performance is attained with linear (in the number of training data) complexity
growing and pruning algorithms. Moreover, the performance of DCTs
on benchmark datasets compares favorably to that of standard CART,
which is generally more computationally intensive and which does not
possess similar near optimality properties. Our analysis stems from theoretical results on structural risk minimization, on which the pruning rule
for DCTs is based.
1 Introduction
Let $(X, Y)$ be a jointly distributed pair of random variables. In pattern recognition, $X \in \mathbb{R}^d$ is called an input vector, and contains the measurements from an experiment. The values in $X$ are referred to as features, attributes, or predictors. $Y \in \{0, 1\}$ is called a response variable, and is thought of as a class label associated with $X$. A classifier is a function $f : \mathbb{R}^d \to \{0, 1\}$ that attempts to match an input vector with the appropriate class. The performance of $f$ for a given distribution of the data is measured by the probability of error:
$$L(f) = \Pr\{f(X) \neq Y\}.$$
The classifier with the smallest probability of error, denoted $f^*$, is called the Bayes classifier. The Bayes classifier is given by
$$f^*(x) = \begin{cases} 1 & \text{if } \eta(x) > 1/2 \\ 0 & \text{otherwise,} \end{cases}$$
where $\eta(x) = \Pr\{Y = 1 \mid X = x\}$ is the regression of $Y$ on $X$. The probability of error for the Bayes classifier is denoted $L^* = L(f^*)$.

The true distribution of the data is generally unknown. In such cases, we may construct a classifier based on a training dataset $D_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ of $n$ independent, identically distributed samples. A procedure that constructs a classifier for all $n$ is called a discrimination rule. The performance of $f_n = f_n(\cdot\,; D_n)$ is measured by the conditional probability of error
$$L_n = L(f_n) = \Pr\{f_n(X) \neq Y \mid D_n\}.$$
Note that $L_n$ is random, since $D_n$ is random.
In this paper, we examine a family of classifiers called dyadic classification trees (DCTs), built by recursive, dyadic partitioning of the input space. The appropriate tree from this family is obtained by building an initial tree (in a data-independent fashion), followed by a data-dependent pruning operation based on structural risk minimization (SRM). Thus, one important distinction between our approach and usual decision trees is that the initial tree is not adaptively grown to fit the data. The pruning strategy resembles that used by CART, except that the penalty assigned to a subtree is proportional to the square root of its size. SRM penalized DCTs lead to a strongly consistent discrimination rule for input data with support in the unit cube $[0,1]^d$. We also derive bounds on the rate of convergence of DCTs to the Bayes error. Under a modest regularity assumption (in terms of the box-counting dimension) on the underlying optimal Bayes decision boundary, we show that complexity-regularized DCTs converge to the Bayes decision at a rate of $O((\log n / n)^{1/(d+1)})$. Moreover, the minimax error rate for this class is at least $c\,n^{-1/d}$. This shows that dyadic classification trees are near minimax-rate optimal, i.e., that no discrimination rule can perform significantly better in this minimax sense. We also present an efficient algorithm for implementing the pruning strategy, which leads to an $O(n \log n)$ algorithm for DCT construction. The pruning algorithm requires $O(m \log m)$ operations to prune an initial tree with $m$ terminal nodes, and is based on the familiar pruning algorithm used by CART [1]. Finally, we compare DCTs with a CART-like tree classifier on four common datasets.
2 Dyadic Classification Trees
Throughout
this work we assume that the input data is restricted to the unit hypercube,
. This is a realistic assumption for real-world data, provided appropriate translation
!
***
be a tree-structured partition of the input
and scaling is applied. Let
6
is a hyperrectangle with sides parallel to the coordinate axes. Given
space, where each
an integer , let
denote the element of *** 8 that is congruent to modulo . If
is a cell at depth in the tree, let 6 and
be the rectangles formed by splitting
at its midpoint along coordinate
. As a convention, assume 6 contains those
points of
that are less than or equal to the midpoint along the dimension being split.
(0, 1' 0( ,
(0,
-.
)(
'
(+,
(+*
( , /
( ,
2+3
2
( ,
Definition 1 A sequential dyadic partition (SDP) is any partition of $[0,1]^d$ that can be obtained by applying the following rules recursively:

1. The trivial partition $\pi = \{[0,1]^d\}$ is an SDP.
2. If $\{R_1, \ldots, R_k\}$ is an SDP, then so is
$$\{R_1, \ldots, R_{i-1}, R_i^{(1)}, R_i^{(2)}, R_{i+1}, \ldots, R_k\},$$
where $i$ may be any integer, $1 \le i \le k$.
We define a dyadic classification tree (DCT) to be a sequential dyadic partition with a class
label (0 or 1) assigned to each node in the tree.
The partitions are sequential because children must be split along the next coordinate after
the coordinate where their parent was split. Such splits are referred to as forced splits, as
opposed to free splits, in which any coordinate may be split. The partitions are dyadic
because we only allow midpoint splits.
By a complete DCT of depth $j$, we mean a DCT such that every possible split up to depth $j$ has been made. In a complete DCT, every terminal node has volume $2^{-j}$. If $j$ is a multiple of $d$, then the terminal nodes of a complete DCT are hypercubes of sidelength $2^{-j/d}$.
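The sequential splitting rule can be made concrete; a minimal sketch (our own illustration, not code from the paper) that generates the cells of a complete DCT by cycling through the coordinates:

```python
def complete_dct_cells(depth, d):
    """Cells of a complete sequential dyadic partition of [0,1]^d.

    Each cell is a list of (lo, hi) intervals, one per coordinate. Cells at
    depth j are split at their midpoint along coordinate j mod d (forced
    splits), so every cell of the result has volume 2**(-depth).
    """
    cells = [[(0.0, 1.0) for _ in range(d)]]
    for j in range(depth):
        axis = j % d                      # next coordinate in the cycle
        new_cells = []
        for cell in cells:
            lo, hi = cell[axis]
            mid = (lo + hi) / 2.0
            left, right = list(cell), list(cell)
            left[axis] = (lo, mid)        # points <= midpoint
            right[axis] = (mid, hi)
            new_cells += [left, right]
        cells = new_cells
    return cells

def volume(cell):
    v = 1.0
    for lo, hi in cell:
        v *= hi - lo
    return v
```

At depth 3 in two dimensions this produces 8 rectangles of volume 1/8, and at depth 2 (a multiple of d = 2) the cells are squares of side 1/2, as stated above.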
3 SRM for DCTs
Structural risk minimization (SRM) is an inductive principle for selecting a classifier from
a sequence of sets of classifiers based on complexity regularization. It was introduced by
Vapnik and Chervonenkis (see [2]), and later analyzed by Lugosi and Zeger [3], [4, Ch.
18]. We formulate structural risk minimization for dyadic classification trees by applying
results from [4, Ch. 18].
SRM is formulated in terms of the VC dimension, which we briefly review. Let $\mathcal{F}$ be a collection of classifiers $f : [0,1]^d \to \{0,1\}$, and let $z_1, \ldots, z_m \in [0,1]^d$. If each of the $2^m$ possible labellings of $z_1, \ldots, z_m$ can be correctly classified by some $f \in \mathcal{F}$, we say $\mathcal{F}$ shatters $z_1, \ldots, z_m$. The Vapnik-Chervonenkis dimension (or VC dimension) of $\mathcal{F}$, denoted by $V_{\mathcal{F}}$, is the largest integer $m$ for which there exist $z_1, \ldots, z_m$ such that $\mathcal{F}$ shatters $z_1, \ldots, z_m$. If $\mathcal{F}$ shatters some $m$ points for every $m$, then $V_{\mathcal{F}} = \infty$ by definition. The VC dimension is a measure of the capacity of $\mathcal{F}$. As $V_{\mathcal{F}}$ increases, $\mathcal{F}$ is able to separate more complex patterns.
If $n = 2^p$ for some integer $p$, we say $n$ is dyadic. For dyadic $n$, and for $k = 1, \ldots, n$, let $\mathcal{F}_k$ denote the collection of all DCTs with $k$ terminal nodes and depth not exceeding $dp$, so that no terminal node has a side of length less than $2^{-p} = 1/n$. It is easily shown that the VC dimension of $\mathcal{F}_k$ is $k$ [5].

For $k = 1, \ldots, n$, define
$$\tilde{f}_k = \arg\min_{f \in \mathcal{F}_k} \hat{L}_n(f),$$
where
$$\hat{L}_n(f) = \frac{1}{n} \sum_{i=1}^n I\{f(X_i) \neq Y_i\}$$
is the empirical risk of $f$. Thus, $\tilde{f}_k$ is selected by empirical risk minimization over $\mathcal{F}_k$.
Define the penalty term
$$\Phi(k, n) = \sqrt{\frac{32}{n}\left(k \log(en) + \log 2\right)}, \qquad (1)$$
which is proportional, up to a logarithmic factor, to the square root of the tree size $k$, and for $f \in \mathcal{F}_k$, define the penalized risk
$$\tilde{L}_n(f) = \hat{L}_n(f) + \Phi(k, n).$$
The SRM principle selects the classifier $f_n$ from among $\tilde{f}_1, \ldots, \tilde{f}_n$ that minimizes the penalized risk $\tilde{L}_n(\tilde{f}_k)$. We refer to $f_n$ as a penalized or complexity-regularized dyadic classification tree. We have the following risk bound.
Given a dyadic integer $n$ and training data $D_n$, let $f_n$ denote the penalized dyadic classification tree defined above.

Theorem 1 For all $n$ and all $\epsilon > 0$, $f_n$ satisfies a deviation inequality of the form
$$\Pr\left\{ L(f_n) - \min_{1 \le k \le n} \left( \Phi(k,n) + \inf_{f \in \mathcal{F}_k} L(f) \right) > \epsilon \right\} \le c_1 e^{-c_2 n \epsilon^2}$$
for constants $c_1, c_2 > 0$, and in particular, for all $n$,
$$E\, L_n - L^* \le \min_{1 \le k \le n} \left[ \Phi(k,n) + \left( \inf_{f \in \mathcal{F}_k} L(f) - L^* \right) \right] + O\!\left(\sqrt{\frac{\log n}{n}}\right).$$

Sketch of proof: Apply Theorem 18.3 in [4] with the classes $\mathcal{F}_k$ and penalties $\Phi(k,n)$ defined above.
The first term on the right-hand side of the second bound is an upper bound on the expected
estimation error. The second term is the approximation error. Even though the penalized
DCT does not know the value of $k$ that optimally balances the two terms, it performs as though it does, because of the "min" in the expression. Nobel [6] gives similar results for
classifiers based on initial trees that depend on the data.
The next result demonstrates strong consistency for the penalized DCT, where strong consistency means $L_n \to L^*$ with probability one.

Theorem 2 Suppose $n \to \infty$, with $n$ assuming only dyadic integer values. Then the penalized dyadic classification tree is strongly consistent for all distributions supported on the unit hypercube.
all distributions supported on the unit hypercube.
Sketch of proof: The proof follows from the first part of Theorem 1 and strong universal
consistency of the regular histogram classifier. See [5] for details.
4 Rates of Convergence
In this section, we investigate bounds on the rate of convergence of complexity-regularized
DCTs. First we obtain upper bounds on the rate of convergence for a particular class of
distributions on the unit hypercube. We then state a minimax lower bound on the rate of convergence
of any data based classifier for this class.
Most rate of convergence studies in pattern recognition place a constraint on the regression function $\eta(x) = \Pr\{Y = 1 \mid X = x\}$ by requiring it to belong to a certain smoothness class (e.g., Lipschitz, Besov, bounded variation). In contrast, the class we study is defined in terms of the regularity of the Bayes decision boundary, denoted $B$. We allow $\eta(x)$ to be arbitrarily irregular away from $B$, so long as it is well behaved near $B$. The Bayes decision boundary is informally defined as $B = \partial\{x : \eta(x) \ge 1/2\}$. A more rigorous definition should take into account the fact that $\eta$ might not take on the value $1/2$ [5].
We now define a class of distributions. Let $(X, Y)$ denote a random pair, as before, where $X$ takes on values in $[0,1]^d$.
Definition 2 Let $c_1, c_2 > 0$. Define $\mathcal{G}(c_1, c_2)$ to be the collection of all distributions on $(X, Y)$ such that for all dyadic integers $m$, if we subdivide the unit cube into $m^d$ cubes of side length $1/m$, then

A1 (Bounded marginal): For any such cube $R$ intersecting the Bayes decision boundary, $\Pr\{X \in R\} \le c_1 \lambda(R) = c_1 m^{-d}$, where $\lambda$ denotes the Lebesgue measure.

A2 (Regularity): The Bayes decision boundary passes through at most $c_2 m^{d-1}$ of the $m^d$ resulting cubes.

Define $\mathcal{G}$ to be the class of all distributions belonging to $\mathcal{G}(c_1, c_2)$ for some $c_1, c_2$.
The first condition holds, for example, if the density of $X$ is essentially bounded with respect to the Lebesgue measure, with essential supremum $c_1$. The second condition can be shown to hold when one coordinate of the Bayes decision boundary is a Lipschitz function of the others. See, for example, the boundary fragment class of [7] with $\gamma = 1$ therein.
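Condition A2 is easy to check numerically for a concrete boundary. The sketch below (our own illustration of the box-counting idea, not code from the paper) counts the cells of an m-by-m grid met by a Lipschitz curve in two dimensions; the count grows only linearly in m, rather than like m^2:

```python
import math

def cells_hit(f, m, samples_per_cell=50):
    """Count cells of the m-by-m grid on [0,1]^2 that the curve y = f(x) meets.

    Dense sampling along the curve is a crude but adequate way to find every
    grid cell a Lipschitz boundary passes through.
    """
    hit = set()
    n = m * samples_per_cell
    for i in range(n + 1):
        x = i / n
        y = f(x)
        hit.add((min(int(x * m), m - 1), min(int(y * m), m - 1)))
    return len(hit)

# an arbitrary Lipschitz example boundary staying inside the unit square
boundary = lambda x: 0.5 + 0.25 * math.sin(2 * math.pi * x)
```

For this curve the count stays between m and a small constant times m as m doubles, which is exactly the behavior A2 demands of a boundary with box-counting dimension d - 1.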
The regularity condition A2 is closely related to the notion of box-counting dimension of the Bayes decision boundary [8]. Roughly speaking, A2 holds for some $c_2$ if and only if the Bayes decision boundary has box-counting dimension at most $d - 1$. The box-counting dimension is an upper bound on the Hausdorff dimension, and the two dimensions are equal for most "reasonable" sets. For example, if $B$ is a smooth $(d-1)$-dimensional submanifold of $[0,1]^d$, then $B$ has box-counting dimension $d - 1$.
4.1 Upper Bounds on DCT Rate of Convergence
Theorem 3 Assume the distribution of $(X, Y)$ belongs to $\mathcal{G}(c_1, c_2)$, and let $f_n$ be the penalized dyadic classification tree, as described in Section 3. If the initial tree is a complete DCT of depth $d \log_2 n$, then there exists a constant $C = C(c_1, c_2, d) > 0$ such that for all dyadic $n$,
$$E\, L_n - L^* \le C \left( \frac{\log n}{n} \right)^{1/(d+1)}.$$
7
Sketch of proof: It can be shown that for each dyadic $m$, there exists a pruned DCT with $O(m^{d-1})$ leaf nodes whose probability of error comes within $O(1/m)$ of the Bayes risk. Plugging this into the risk bound in Theorem 1 and minimizing over $m$ produces the desired result [5].

The minimal value of the constant $C$ in the above theorem depends on $c_1$, $c_2$, and $d$. Note that similar rate of convergence results for data-grown trees would be more difficult to establish, since the approximation error is random in those cases.
It is possible to eliminate the log factor in the upper bound by means of Alexander's inequality, as discussed in [4, Ch. 12]. This leads to a much larger value of $C$, but an improved asymptotic rate.

To illustrate the significance of Theorem 3, consider a penalized histogram classifier, with bin width determined adaptively by structural risk minimization, as described in [4, Problem 18.6]. For that rule, the best exponent on the rate of convergence for our class is $1/(d+2)$, compared with $1/(d+1)$ for our rule. Intuitively, this is because the adaptive resolution of dyadic classification trees enables them to focus on the $(d-1)$-dimensional decision boundary, rather than the $d$-dimensional regression function.

In the event that the data occupy a $d'$-dimensional subset of $[0,1]^d$, the proof of Theorem 3 follows through as before, but with an exponent of $1/(d'+1)$ instead of $1/(d+1)$. Thus, the penalized DCT is able to automatically adapt to the dimensionality of the input data.
4.2 Minimax Lower Bound
The next result demonstrates that complexity-regularized DCTs nearly achieve the minimax rate for our class of distributions.

Theorem 4 Let $g_n$ denote any discrimination rule based on training data. There exists a constant $c > 0$ such that for $n$ sufficiently large,
$$\sup_{\mathcal{G}} E\, L(g_n) - L^* \ge c\, n^{-1/d}.$$

Sketch of proof: This result follows from Theorem 2 in [7] (with $\gamma = 1$ therein). The proof of that result is in turn based on Assouad's lemma.
Theorems 3 and 4, together with the above remark on Alexander's inequality, show that complexity-regularized DCTs are close to minimax-rate optimal for the class $\mathcal{G}$. We suspect that the class studied by Tsybakov [7], used in our minimax proof, is more restrictive than our class. Therefore, it may be that the exponent $1/d$ in the above theorem can be decreased to $1/(d+1)$, in which case we achieve the minimax rate.
Although bounds on the minimax rate of convergence in pattern recognition have been investigated in previous work [9, 10], the focus has been on placing regularity assumptions on the regression function $\eta(x) = \Pr\{Y = 1 \mid X = x\}$. Yang demonstrates that in such cases, for many common function spaces (e.g., Lipschitz, Besov, bounded variation), classification is not easier than regression function estimation [10]. This contrasts with the conventional wisdom that, in general, classification is easier than regression function estimation [4, Ch. 6]. Our approach is to study minimax rates for distributions defined in terms of the regularity of the Bayes decision boundary. With this framework, we see that minimax rates for classification can be orders of magnitude faster than for estimation of $\eta(x)$, since $\eta(x)$ may be arbitrarily irregular away from the decision boundary for distributions in our class. This view of minimax classification has also been adopted by Mammen and Tsybakov [7, 11]. Our contribution with respect to their work is an implementable discrimination rule, with guaranteed computational complexity, that nearly achieves the minimax lower bounds. We also remark that "fast rates" (e.g., rates approaching $n^{-1}$) obtained by those authors require much stronger assumptions on the smoothness of the decision boundary and $\eta(x)$ than we employ in this paper.
5 An Efficient Pruning Algorithm
In this section we describe an algorithm to compute the penalized DCT efficiently. We switch notation, using $T$ to denote an arbitrary classification tree. Let $S \preceq T$ denote that $S$ is a pruned version of $T$ (possibly $T$ itself). For $\alpha \ge 0$, define
$$T(\alpha) = \arg\min_{S \preceq T} \left[ \hat{L}_n(S) + \alpha |S| \right]$$
and
$$T'(\alpha) = \arg\min_{S \preceq T} \left[ \hat{L}_n(S) + \alpha \sqrt{|S|}\, \right],$$
where $|S|$ denotes the number of leaf nodes of $S$. We are interested in computing $T'(\alpha)$ when $T$ is a complete dyadic tree and $\alpha > 0$.

Breiman et al. [1] showed the existence of weights $\alpha_1 < \alpha_2 < \cdots < \alpha_m$ and subtrees $T_1 \succeq T_2 \succeq \cdots \succeq T_m$ such that $T(\alpha) = T_k$ whenever $\alpha_k \le \alpha < \alpha_{k+1}$. Moreover, the weights $\alpha_k$ and subtrees $T_k$ may be found in $O(|T| \log |T|)$ operations [12, 13]. A similar result holds for the square-root penalty, and the trees produced are a subset of the trees produced by the additive penalty [5].

Theorem 5 For each $\alpha \ge 0$, there exists $k$ such that $T'(\alpha) = T_k$.

Therefore, pruning with the square-root penalty always produces one of the trees $T_1, \ldots, T_m$. We may then determine the pruned tree minimizing the penalized risk by minimizing $\hat{L}_n(T_k) + \Phi(|T_k|, n)$ over $k = 1, \ldots, m$. Thus, square-root pruning can be performed in $O(|T| \log |T|)$ operations.

In the context of constructing a penalized DCT, we start with an initial tree that is a complete DCT. For the classifiers in Theorems 2 and 3, this initial tree has size $O(n)$, and so pruning requires $O(n \log n)$ operations. Since the growing procedure also requires $O(n \log n)$ operations, the overall construction is $O(n \log n)$.

Table 1: Comparison of a greedy tree growing procedure, with model selection based on holdout error estimate, and two DCT based methods. Numbers shown are test errors.

                            CART-HOLD   DCT-HOLD   DCT-SRM
  Pima Indian Diabetes        26.8 %      27.2 %    33.0 %
  Wisconsin Breast Cancer      4.7 %       6.4 %     6.3 %
  Ionosphere                  12.88 %     18.6 %    18.8 %
  Waveform                    19.8 %      29.1 %    31.0 %
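The weakest-link construction of Breiman et al. can be sketched compactly. The version below is a naive quadratic-time illustration on a toy tree encoding of our own devising (the fast algorithms cited in Section 5 are more involved); `risk` stores the number of training errors a node would make as a leaf, and the final step selects the family member minimizing a square-root-penalized risk with an illustrative constant:

```python
import math

def leaves_and_risk(t):
    """Number of leaves and total leaf risk of the subtree rooted at t."""
    if 'left' not in t:
        return 1, t['risk']
    nl, rl = leaves_and_risk(t['left'])
    nr, rr = leaves_and_risk(t['right'])
    return nl + nr, rl + rr

def weakest_link(t):
    """Internal node minimizing g = (risk-as-leaf - subtree risk) / (leaves - 1)."""
    best = None
    def visit(node):
        nonlocal best
        if 'left' not in node:
            return
        k, r = leaves_and_risk(node)
        g = (node['risk'] - r) / (k - 1)
        if best is None or g < best[0]:
            best = (g, node)
        visit(node['left'])
        visit(node['right'])
    visit(t)
    return best

def pruning_family(root):
    """Weakest-link pruning (destructive): return [(leaves, training errors)]
    for the nested family, from the full tree down to the root-only tree."""
    family = []
    while True:
        family.append(leaves_and_risk(root))
        if 'left' not in root:
            return family
        _, node = weakest_link(root)
        node.pop('left')
        node.pop('right')   # collapse the weakest link into a leaf

def sqrt_penalized_prune(root, n, c=1.0):
    """Pick the family member minimizing empirical risk + c*sqrt(k/n)."""
    return min(pruning_family(root),
               key=lambda kr: kr[1] / n + c * math.sqrt(kr[0] / n))
```

On a small tree this reproduces the behavior the theory predicts: the square-root-penalized minimizer is always one of the additive-penalty family members, so the extra selection step is just a scan over the family.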
6 Experimental Comparison
To gain a rough idea of the usefulness of dyadic classification trees in practice, we compared two DCT based classifiers with a greedy tree growing procedure, similar to that used
by CART [1] or C4.5 [14], where each successive split is chosen to maximize an information gain defined in terms of an impurity function. We considered four two-class datasets,
available on the web at http://www.ics.uci.edu/ mlearn/MLRepository.html. For each
dataset, we randomly split the data into two halves to form training and testing datasets.
For the greedy growing scheme, we used half of the training data to grow the tree, and constructed every possible pruning of the initial tree with an additive penalty. The best pruned
tree was chosen to minimize the holdout error on the rest of the training data. We call this
classifier CART-HOLD. The second classifier, DCT-HOLD, was constructed in a similar
manner, except that the initial tree was a complete DCT, and all of the training data was
used for computing the holdout error estimate. Finally, we implemented the complexityregularized DCT, denoted DCT-SRM, with square-root penalty determined by Equation 1.
Table 1 shows the misclassification rate for each algorithm on each dataset.
From these experiments, we might conclude two things: (i) The greedily-grown partition
outperforms the dyadic partition; and (ii) Much of the discrepancy between CART-HOLD
and DCT-SRM comes from the partitioning, and not from the model selection method
(holdout versus SRM). Indeed, DCT-SRM beats or nearly equals DCT-HOLD on three of
the four datasets. Conclusion (i) may be premature, for it is shown in [4, Ch. 20] that
greedy partitioning based on impurity functions can perform arbitrarily poorly for some
distributions, while this is never the case for complexity-regularized DCTs. In light of (ii),
it may be possible to apply Nobel's pruning rules for data-grown trees [6], which can now
be implemented with our algorithm, to equal or surpass the performance of CART, while
avoiding the heuristic and computationally expensive cross-validation technique usually
employed by CART to determine the appropriately pruned tree.
7 Conclusion
Dyadic classification trees exhibit desirable theoretical properties (finite sample risk
bounds, consistency, near minimax-rate optimality) and can be trained extremely rapidly.
The minimax result demonstrates that other discrimination rules, such as neural networks
or support vector machines, cannot significantly outperform DCTs (in this minimax sense).
This minimax result is asymptotic, and considers worst-case distributions. From a practical standpoint, with finite samples and non-worst-case distributions, other rules may beat
DCTs, which our experiments on benchmark datasets confirm. The sequential dyadic partitioning scheme is especially susceptible when many of the features are irrelevant, since
it must cycle through all features before splitting a feature again. Several modifications to
the current dyadic partitioning scheme may be envisioned, such as free dyadic or median
splits.
Such modified tree induction strategies would still possess many of the desirable theoretical
properties of DCTs. Indeed, Nobel has derived risk bounds and consistency results for
classification trees grown according to data [6]. Our square-root pruning algorithm now
provides a means of implementing his pruning schemes for comparison with other model
selection techniques (e.g., holdout or cross-validation). It remains to be seen whether the
rate of convergence analysis presented here extends to his work.
Further details on this work, including full proofs, may be found in [5].
Acknowledgments
This work was partially supported by the National Science Foundation, grant no. MIP?
9701692, the Army Research Office, grant no. DAAD19-99-1-0349, and the Office of
Naval Research, grant no. N00014-00-1-0390.
References
[1] L. Breiman, J. Friedman, R. Olshen, and C. Stone, Classification and Regression Trees, Wadsworth, Belmont, CA, 1984.
[2] V. Vapnik, Estimation of Dependencies Based on Empirical Data, Springer-Verlag, New York,
1982.
[3] G. Lugosi and K. Zeger, "Concept learning using complexity regularization," IEEE Transactions on Information Theory, vol. 42, no. 1, pp. 48–54, 1996.
[4] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Springer,
New York, 1996.
[5] C. Scott and R. Nowak, "Complexity-regularized dyadic classification trees: Efficient pruning and rates of convergence," Tech. Rep. TREE0201, Rice University, 2002, available at http://www.dsp.rice.edu/~cscott.
[6] A. Nobel, "Analysis of a complexity based pruning scheme for classification trees," IEEE Transactions on Information Theory, vol. 48, no. 8, pp. 2362–2368, 2002.
[7] A. B. Tsybakov, "Optimal aggregation of classifiers in statistical learning," preprint, 2001,
available at http://www.proba.jussieu.fr/mathdoc/preprints/.
[8] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, Wiley, West
Sussex, England, 1990.
[9] J. S. Marron, "Optimal rates of convergence to Bayes risk in nonparametric discrimination," Annals of Statistics, vol. 11, no. 4, pp. 1142–1155, 1983.
[10] Y. Yang, "Minimax nonparametric classification – Part I: Rates of convergence," IEEE Transactions on Information Theory, vol. 45, no. 7, pp. 2271–2284, 1999.
[11] E. Mammen and A. B. Tsybakov, "Smooth discrimination analysis," Annals of Statistics, vol. 27, pp. 1808–1829, 1999.
[12] P. Chou, T. Lookabaugh, and R. Gray, "Optimal pruning with applications to tree-structured source coding and modeling," IEEE Transactions on Information Theory, vol. 35, no. 2, pp. 299–315, 1989.
[13] B. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge,
UK, 1996.
[14] R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, 1993.
Information Regularization with Partially
Labeled Data
Tommi Jaakkola
MIT AI Lab
Cambridge, MA 02139
[email protected]
Martin Szummer
MIT AI Lab & CBCL
Cambridge, MA 02139
[email protected]
Abstract
Classification with partially labeled data requires using a large number
of unlabeled examples (or an estimated marginal P (x)), to further constrain the conditional P (y|x) beyond a few available labeled examples.
We formulate a regularization approach to linking the marginal and the
conditional in a general way. The regularization penalty measures the
information that is implied about the labels over covering regions. No
parametric assumptions are required and the approach remains tractable
even for continuous marginal densities P (x). We develop algorithms for
solving the regularization problem for finite covers, establish a limiting
differential equation, and exemplify the behavior of the new regularization approach in simple cases.
1 Introduction
Many modern classification problems are rife with unlabeled examples. To benefit from
such examples, we must exploit either implicitly or explicitly the link between the marginal
density P (x) over examples x and the conditional P (y|x) representing the decision boundary for the labels y. High density regions or clusters in the data, for example, can be expected to fall solely in one or another class.
Most discriminative methods do not attempt to explicitly model or incorporate information
from the marginal density P (x). However, many discriminative algorithms such as SVMs
exploit the notion of margin that effectively relates P (x) to P (y|x); the decision boundary
is biased to fall preferentially in low density regions of P (x) so that only a few points fall
within the margin band.
The assumptions relating P (x) to P (y|x) are seldom made explicit. In this paper we appeal
to information theory to explicitly constrain P (y|x) on the basis of P (x) in a regularization
framework. The idea is in broad terms related to a number of previous approaches including
maximum entropy discrimination [1], data clustering by information bottleneck [2], and
minimum entropy data partitioning [3]. See also [4].
Figure 1: Mutual information I(x; y) measured in bits for four regions with different configurations of labels y= {+,-}. The marginal P (x) is discrete and uniform across the points.
The mutual information is low when the labels are homogenous in the region, and high
when labels vary. The mutual information is invariant to the spatial configuration of points
within the neighborhood.
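For the hard labels of Figure 1, P(y|x) is 0 or 1 at every point, so I_Q(x; y) reduces to the entropy of the label proportions in the region. A minimal sketch (assuming six equally weighted points per region, which is what the quoted values 0, 0.65 and 1 bits correspond to):

```python
from collections import Counter
from math import log2

def region_information(labels):
    """I_Q(x; y) in bits for equally weighted points with hard labels:
    H(y|x) = 0, so I_Q equals the entropy of the label mix within Q."""
    n = len(labels)
    h = -sum((c / n) * log2(c / n) for c in Counter(labels).values())
    return h + 0.0  # map IEEE -0.0 to 0.0 for the homogeneous case

print(round(region_information("++++++"), 2))  # homogeneous labels
print(round(region_information("+++++-"), 2))  # one dissenting label
print(round(region_information("+-+-+-"), 2))  # balanced labels
print(round(region_information("---+++"), 2))  # same proportions, same value
```

The last two calls illustrate the invariance noted in the caption: only the label proportions matter, not the spatial arrangement within the region.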
2 Information Regularization
We begin by showing how to regularize a small region of the domain X . We will subsequently cover the domain (or any chosen subset) with multiple small regions, and describe
criteria that ensure regularization of the whole domain on the basis of the individual regions.
2.1 Regularizing a Single Region
Consider a small contiguous region Q in the domain X (e.g., an ε-ball). We will regularize
the conditional probability P (y|x) by penalizing the amount of information the conditionals imply about the labels within the region.
The regularizer is a function of both P (y|x) and P (x), and will penalize changes in P (y|x)
more in regions with high P(x). Let L be the set of labeled points (size N_L) and L ∪ U be the set of labeled and unlabeled points (size N_LU). The marginal P(x) is assumed to be given, and may be available directly in terms of a continuous density, or as an empirical density $P(x) = \frac{1}{N_{LU}} \sum_{i \in L \cup U} \delta(x - x_i)$ corresponding to a set of points {x_i} that may not have labels (δ(·) is the Dirac delta function integrating to 1).
As a measure of information, we employ mutual information [5], which is the average
number of bits that x contains about the label in region Q (see Figure 1.) The measure
depends both on the marginal density P(x) (specifically its restriction to x ∈ Q, namely $P(x|Q) = P(x) / \int_Q P(x)\,dx$) and the conditional P(y|x). Equivalently, we can interpret mutual information as a measure of disagreement among P(y|x), x ∈ Q. The measure is zero for any constant P(y|x). More precisely, the mutual information in region Q is

$$I_Q(x; y) = \sum_y \int_{x \in Q} P(x|Q)\, P(y|x) \log \frac{P(y|x)}{P(y|Q)}\, dx, \qquad (1)$$

where $P(y|Q) = \int_{x \in Q} P(x|Q)\, P(y|x)\, dx$. The densities conditioned on Q are normalized
to integrate to 1 within the region Q. Note that the mutual information is invariant to
permutations of the elements of X within Q, which suggests that the regions must be small
enough to preserve locality.
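For an empirical marginal restricted to Q, eq. (1) becomes a finite sum over the points in the region. A minimal sketch (function and variable names are ours, and natural logarithms are used, so the result is in nats rather than bits):

```python
import numpy as np

def I_Q(p_x, p_y_given_x):
    """Eq. (1) for discrete points in Q: p_x[i] are (unnormalized) marginal
    weights, p_y_given_x[i, y] the conditionals; returns I_Q in nats."""
    p_x_Q = p_x / p_x.sum()                 # P(x|Q)
    p_y_Q = p_x_Q @ p_y_given_x             # P(y|Q)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = p_y_given_x * np.log(p_y_given_x / p_y_Q)
    t = np.where(p_y_given_x > 0, t, 0.0)   # 0 * log 0 := 0
    return float(p_x_Q @ t.sum(axis=1))

# A constant conditional carries no information about the label:
uniform = np.full((4, 2), 0.5)
print(I_Q(np.ones(4), uniform))
```

With deterministic, evenly split conditionals the same function returns log 2, matching the entropy interpretation of the previous section.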
The regularization penalty has to further scale with the number of points in the region (or
the probability mass). We introduce the following regularization principle:
Information regularization
penalize (M_Q / V_Q) · I_Q(x; y), which is the information about the labels within a local region Q, weighted by the overall probability mass M_Q in
the region, and normalized by a measure of variability VQ (variance) of
x in the region.
Here $M_Q = \int_{x \in Q} P(x)\,dx$. The mutual information I_Q(x; y) measures the information per point, and to obtain the total mutual information contained in a region, we must multiply by the probability mass M_Q. The regularization will be stronger in regions with high P(x).

V_Q is a measure of the variance of x restricted to the region, and is introduced to remove overall dependence on the size of the region. In one dimension, V_Q = var(x|Q). When the region is small, the marginal will be close to uniform over the region and V_Q ≈ R², where R is, e.g., the radius for spherical regions. We omit here the analysis of the d-dimensional case and only note that we may choose $V_Q = \operatorname{tr} \Sigma_Q$, where the covariance $\Sigma_Q = \int_{x \in Q} (x - E_Q(x))(x - E_Q(x))^T\, P(x|Q)\,dx$. The choice of V_Q is based on the
limiting argument discussed next.
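For an empirical marginal with equally weighted points, M_Q and V_Q have simple sample estimates; a small sketch (names and the toy data are ours):

```python
import numpy as np

def mass_and_variability(X_Q, n_total):
    """M_Q = fraction of all N_LU sample points falling in Q;
    V_Q = tr Sigma_Q, the total variance of x within Q."""
    M_Q = len(X_Q) / n_total
    centered = X_Q - X_Q.mean(axis=0)
    V_Q = float((centered ** 2).sum() / len(X_Q))
    return M_Q, V_Q

# 4 of 10 points fall inside Q:
M, V = mass_and_variability(np.array([[0.0], [0.1], [0.2], [0.3]]), 10)
print(M, round(V, 4))
```

In one dimension this V_Q is exactly var(x|Q), the quantity used in the limiting argument of the next section.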
2.2 Limiting Behavior for Vanishing Size Regions
When the size of the region is scaled down, the mutual information will go to zero for any
continuous P (y|x). We derive here the appropriate regularization penalty in the limit of
vanishing regions. For simplicity, we only consider the one-dimensional case.
Within a small region Q we can (under mild continuity assumptions) approximate P(y|x) by a Taylor expansion around the mean point x_0 ∈ Q, obtaining P(y|Q) ≈ P(y|x_0) to first order. By using log(1 + z) ≈ z − z²/2 and substituting the approximate P(y|x) and P(y|Q) into I_Q(x; y), we get the following first-order expression for mutual information:

$$I_Q(x; y) = \underbrace{\tfrac{1}{2}\,\mathrm{var}(x|Q)}_{\text{size-dependent}}\; \underbrace{\sum_y P(y|x_0) \left( \frac{d \log P(y|x)}{dx} \Big|_{x_0} \right)^{2}}_{\text{size-independent}} \qquad (2)$$
var(x|Q) is dependent on the size (and more generally shape) of region Q while the remaining parts are independent of the size (and shape). The regularization penalty should
not scale with the resolution at which we penalize information and we thus divide out the
size-dependent part.
The size-independent part is the Fisher information [5], where we think of P (y|x) as parameterized by x. The expression d log P (y|x)/dx is known as the Fisher score.
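Eq. (2) can be checked numerically. For a logistic conditional P(y=1|x) = σ(x) the Fisher-information factor works out to σ(x₀)(1 − σ(x₀)), and for a uniform restriction on a width-w interval var(x|Q) = w²/12. A sketch in nats, with test values of our choosing:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def exact_IQ(x0, w, n=200001):
    """Eq. (1) by numerical integration for P(y=1|x) = sigmoid(x)
    with P(x|Q) uniform on [x0 - w/2, x0 + w/2]; result in nats."""
    xs = np.linspace(x0 - w / 2, x0 + w / 2, n)
    p = sigmoid(xs)
    pQ = p.mean()                              # P(y=1|Q)
    t = p * np.log(p / pQ) + (1 - p) * np.log((1 - p) / (1 - pQ))
    return t.mean()

x0, w = 0.3, 0.05
var_xQ = w ** 2 / 12                           # variance of the uniform restriction
fisher = sigmoid(x0) * (1 - sigmoid(x0))       # sum_y P(y|x0) (d log P/dx)^2
approx = 0.5 * var_xQ * fisher                 # eq. (2)
print(abs(exact_IQ(x0, w) - approx) / approx < 0.01)
```

As the region shrinks, the relative gap between the exact integral and the first-order expression vanishes, which is what eq. (2) asserts.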
2.3 Regularizing the Domain
We want to regularize the conditional P (y|x) across the domain X (or any subset of interest). Since individual regions must be relatively small to preserve locality, we need multiple
regions to cover the domain. The cover is the set C of these regions. Since the regularization
penalty is assigned to each region, the regions must overlap to ensure that the conditionals
in different regions become functionally dependent. See Figure 2.
In general all areas with significant marginal density P (x) should be included in the cover
or will not be regularized (areas of zero marginal need not be considered). The cover should
generally be connected (with respect to neighborhood relations of the regions) so that labeled points have potential to influence all conditionals. The amount of overlap between
any two regions in the cover determines how strongly the corresponding conditionals are
tied to each other. On the other hand, the regions should be small to preserve locality.
The limit of a large number of small overlapping regions can be defined, and we ensure
continuity of P (y|x) when the offset between regions vanishes relative to their size (in all
dimensions).
3 Classification with Information Regularization
Information regularization across multiple regions can be performed, for example, by
minimizing the maximum information per region, subject to correct classification of the
labeled points. Specifically, we constrain each region in the cover (Q ∈ C) to carry at most γ units of information:

$$\min_{P(y|x_k),\,\gamma}\ \gamma \qquad (3a)$$
$$\text{s.t.}\quad (M_Q / V_Q) \cdot I_Q(x; y) \le \gamma \quad \forall Q \in C \qquad (3b)$$
$$P(y|x_k) = \delta(y, \hat{y}_k) \quad \forall k \in L \qquad (3c)$$
$$0 \le P(y|x_k) \le 1, \quad \textstyle\sum_y P(y|x_k) = 1 \quad \forall k \in L \cup U,\ \forall y. \qquad (3d)$$
We have incorporated the labeled points by constraining their conditionals to the observed
values (eq. 3c) (see below for other ways of incorporating labeled information). The
solution P (y|x) to this optimization problem is unique in regions that achieve the
information constraint with equality (as long as P (x) > 0). (Uniqueness follows from the
strict convexity of mutual information as a function of P (y|x) for nonzero P (x)).
Define an atomic subregion as a non-empty intersection of regions that cannot be further
intersected by any region (Figure 2). All unlabeled points in an atomic subregion belong
to the same set of regions, and therefore participate in exactly the same constraints. They
will be regularized the same way, and since mutual information is a convex function, it will
be minimized when the conditionals P (y|x) are equal in the atomic subregion. We can
therefore parsimoniously represent conditionals of atomic subregions, instead of individual
points, merely by treating such atomic subregions as "merged points" and weighting the
associated constraint by the probability mass contained in the subregion.
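Atomic subregions can be found by grouping points on their exact region membership; a minimal sketch for 1-D intervals (the helper and toy data are ours, not the paper's code):

```python
from collections import defaultdict

def atomic_subregions(points, regions):
    """Group point indices by the exact set of covering intervals they lie in;
    points in the same group share a single conditional P(y|x)."""
    groups = defaultdict(list)
    for i, x in enumerate(points):
        key = frozenset(q for q, (lo, hi) in enumerate(regions)
                        if lo <= x <= hi)
        if key:                      # ignore points outside the cover
            groups[key].append(i)
    return dict(groups)

# Two half-overlapping intervals induce three atomic subregions:
subs = atomic_subregions([0.1, 0.4, 0.6, 0.9], [(0.0, 0.6), (0.4, 1.0)])
print(sorted(len(v) for v in subs.values()))
```

Each group's weight in the constraints is simply the number of points it contains (its empirical probability mass), as described above.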
3.1 Incorporating Noisy Labels
Labeled points participate in the information regularization in the same way as unlabeled
points. However, their conditionals have additional constraints, which incorporate the label
information. In equation 3c we used the constraint P(y|x_k) = δ(y, ŷ_k) for all labeled
points. This constraint does not permit noise in the labels (and cannot be used when two
points at the same location have disagreeing labels.) Alternatively, we can apply either of
the constraints
(fix-lbl): $P(y|x_i) = (1-b)^{\delta(y,\hat{y}_i)}\, b^{\,1-\delta(y,\hat{y}_i)}$, ∀i ∈ L
(exp-lbl): $E_{P(i)}[P(\hat{y}_i|x_i)] \ge 1-b$. The expectation is over the labeled set L, where P(i) = 1/N_L.
The parameter b ∈ [0, 0.5) models the amount of label noise, and is determined from prior
knowledge or can be optimized via cross-validation.
Constraint (fix-lbl) is written out for the binary case for simplicity. The conditionals
of the labeled points are directly determined by their labels, and are treated as fixed constants. Since b < 0.5, the thresholded conditional classifies labeled points in the observed
class. In constraint (exp-lbl), the conditionals for labeled points can have an average
error at most b, where the averaged is over all labeled points. Thus, a few points may have
conditionals that deviate significantly from their observed labels, giving robustness against
mislabeled points and outliers.
To obtain classification decisions, we simply choose the class with the maximum posterior
$y_k = \arg\max_y P(y|x_k)$. Working with binary-valued P(y|x) ∈ {0, 1} directly would yield a
more difficult combinatorial optimization problem.
3.2 Continuous Densities
Information regularization is also computationally feasible for continuous marginal densities, known or estimated. For example, we may be given a continuous unlabeled data
distribution P (x) and a few discrete labeled points, and regularize across a finite set of
covering regions. The conditionals are uniform inside atomic subregions (except at labeled
points), requiring estimates of only a finite number of conditionals.
3.3 Implementation
Firstly, we choose appropriate regions forming a cover, and find the atomic subregions.
The choices differ depending on whether the data is all discrete or whether continuous
marginals P (x) are given. Secondly, we perform a constrained optimization to find the
conditionals.
If the data is all discrete, create a spherical region centered at every labeled and unlabeled
point (or over some reduced set still covering all the points). We have used regions of fixed
radius R, but the radius could also be set adaptively at each point to the distance of its Knearest neighbor. The union of such regions is our cover, and we choose the radius R (or
K) large enough to create a connected cover. The cover induces a set of atomic subregions,
and we merge the parameters P (y|x) of points inside individual atomic subregions (atomic
subregions with no observed points can be ignored). The marginal of each atomic subregion
is proportional to the number of (merged) points it contains.
If continuous marginals are given, they will put probability mass in all atomic subregions
where the marginal is non-zero. To avoid considering an exponential number of subregions,
we can limit the overlap between the regions by creating a sparser cover.
Given the cover, we now regularize the conditionals P (y|x) in the regions, according to
eq. 3a. This is a convex minimization problem with a global minimum, since mutual information is convex in P (y|x). It can be solved directly in the given primal form, using a
quasi-Newton BFGS method. For eq. 3a, the required gradients of the constraints for the
binary class (y = ±1) case (region Q, atomic subregion r) are:

$$\frac{M_Q}{V_Q}\,\frac{dI_Q(x;y)}{dP(y{=}1|x_r)} = \frac{M_Q}{V_Q}\,P(x_r|Q)\,\log\frac{P(y{=}1|x_r)\,P(y{=}{-}1|Q)}{P(y{=}{-}1|x_r)\,P(y{=}1|Q)}. \qquad (4)$$
The Matlab BFGS implementation fmincon can solve 100 subregion problems in a few
minutes.
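A small end-to-end sketch of problem (3) in Python. The paper used Matlab's fmincon; here we use scipy's SLSQP instead, with binary labels on a toy 1-D problem and fixed-radius regions, so V_Q is a common constant absorbed into γ — all of these are our simplifications, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

xs = np.linspace(-1, 1, 9)                       # 1-D points
labeled = {0: 0.0, 8: 1.0}                       # index -> fixed P(y=1|x), eq. (3c)
free = [i for i in range(len(xs)) if i not in labeled]
regions = [np.flatnonzero(np.abs(xs - c) <= 0.35) for c in xs]
regions = [r for r in regions if len(r) > 1]     # overlapping, connected cover

def conditionals(theta):
    p = np.empty(len(xs))
    p[free] = theta[:-1]
    for i, v in labeled.items():
        p[i] = v
    return p

def info(p, idx):                                # eq. (1), uniform P(x|Q)
    q = p[idx]
    pQ = q.mean()
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(q > 0, q * np.log(q / pQ), 0.0) \
          + np.where(q < 1, (1 - q) * np.log((1 - q) / (1 - pQ)), 0.0)
    return t.mean()

cons = [{"type": "ineq",                         # eq. (3b): M_Q * I_Q <= gamma
         "fun": lambda th, idx=idx:
             th[-1] - (len(idx) / len(xs)) * info(conditionals(th), idx)}
        for idx in regions]
theta0 = np.concatenate([np.full(len(free), 0.5), [1.0]])
bounds = [(1e-6, 1 - 1e-6)] * len(free) + [(0.0, None)]   # eq. (3d)
res = minimize(lambda th: th[-1], theta0, method="SLSQP",
               bounds=bounds, constraints=cons)
p = conditionals(res.x)
print(res.success, p.round(2))
```

The recovered conditional rises monotonically between the two labeled points, the qualitative behavior shown in the paper's figures.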
3.4 Minimize Average Information
An alternative regularization criterion minimizes the average mutual information across
regions. When calculating the average, we must correct for the overlaps of intersecting
regions to avoid doublecounting (in contrast, the previous regularization criterion (eq. 3b)
avoided doublecounting by restricting information in each region individually). The influence of a region is proportional to the probability mass MQ contained in it. However, a
point x may belong to N (x) regions. We define an adjusted density P ? (x) = P (x)/N (x)
?
to calculate an adjusted probability mass MQ
which discounts overlap. We can then
minimize average mutual information according to
min
P (y|xk )
?
X MQ
Q
VQ
IQ (x; y)
s.t. P (y|xk ) = ?(y, y?k )
?k ? L
P
0 ? P (y|xk ) ? 1,
P
(y|x
)
=
1
?k
? L ? U, ?y.
k
y
(5a)
(5b)
(5c)
with similar necessary adjustments to incorporate noisy labels.
3.4.1 Limiting Behavior
The above average information criterion is a discrete version of a continuous regularization
criterion. In the limit of a large number of small regions in the cover (where the spacing of
the regions vanishes relative to their size), we obtain a well-defined regularization criterion
resulting in continuous P(y|x):

$$\min_{P(y|x)\ \mathrm{s.t.}\ P(\hat{y}_k|x_k)=\delta(y,\hat{y}_k)\ \forall k \in L}\ \int_{x_0}\sum_y P(x_0)\,P(y|x_0)\left(\frac{d\log P(y|x)}{dx}\Big|_{x_0}\right)^{2} dx_0. \qquad (6)$$
The regularizer can also be seen as the average Fisher information (see section 2.2). More
generally, we can formulate the regularization problem as a Tikhonov regularization, where
the loss is the negative log-probability of labels:

$$\min_{P(y|x)}\ \frac{1}{N_L}\sum_{k \in L} -\log P(\hat{y}_k|x_k)\; +\; \lambda \int_{x_0}\sum_y P(x_0)\,P(y|x_0)\left(\frac{d\log P(y|x)}{dx}\Big|_{x_0}\right)^{2} dx_0. \qquad (7)$$
3.4.2 Differential Equation Characterizing the Solution
The optimization problem (eq. 6) can be solved using calculus of variations. Consider the one-dimensional binary-class case and write the problem as $\min_{P(y=1|x)} \int f\big(x, P(y{=}1|x), P'(y{=}1|x)\big)\,dx$, where $f(\cdot) = P(x)\,P'(y{=}1|x)^{2} / [P(y{=}1|x)(1 - P(y{=}1|x))]$. Necessary conditions for the solution P(y=1|x) are provided by the Euler-Lagrange equations [6]

$$\frac{\partial f}{\partial P(y{=}1|x)} - \frac{d}{dx}\,\frac{\partial f}{\partial P'(y{=}1|x)} = 0 \quad \forall x. \qquad (8)$$
(natural boundary conditions apply since we can assume P(x) = 0 and P'(y|x) = 0 at the boundary of the domain X). After substituting f and simplifying, we have

$$P''(y{=}1|x) = \frac{P'(y{=}1|x)^{2}\,\big(1 - 2P(y{=}1|x)\big)}{2P(y{=}1|x)\big(1 - P(y{=}1|x)\big)} \;-\; \frac{P'(x)\,P'(y{=}1|x)}{P(x)}. \qquad (9)$$
This differential equation governs the solution and we solve it numerically. The labeled
points provide boundary conditions, e.g. P(y = ŷ_k|x_k) = 1 − b for some small fixed b ≥ 0. We must search for initial values of P'(ŷ_k|x_k) to match the boundary conditions of P(ŷ_k|x_k). The solution is continuous and piecewise differentiable.
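For a uniform marginal the P'(x)/P(x) term in eq. (9) drops out, and the search over initial slopes described above is a standard shooting method. A sketch with b = 0.05 on [−1, 1] (the domain, b, and the bracket for the slope search are our choices; the bracket is picked so the trajectory stays strictly inside (0, 1)):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

b, xa, xb = 0.05, -1.0, 1.0

def rhs(x, s):
    """Eq. (9) for a uniform marginal: s = (p, p') with p = P(y=1|x)."""
    p, dp = s
    return [dp, dp ** 2 * (1 - 2 * p) / (2 * p * (1 - p))]

def miss(dp0):
    """Endpoint mismatch for initial slope dp0 at the left boundary."""
    sol = solve_ivp(rhs, (xa, xb), [b, dp0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] - (1 - b)

dp0 = brentq(miss, 0.1, 0.28)      # shooting: match P at the right boundary
sol = solve_ivp(rhs, (xa, xb), [b, dp0], dense_output=True, rtol=1e-8)
print(round(float(sol.sol(0.0)[0]), 3))   # midpoint value, 0.5 by symmetry
```

The resulting curve interpolates smoothly from b to 1 − b, the behavior shown for the uniform-marginal panel of Figure 4.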
4 Results and Discussion
We have experimentally studied the behavior of the regularizer with different marginal densities P (x). Figure 3 shows the one-dimensional case with a continuous marginal density
Figure 2: (Left) Three intersecting regions, and their atomic subregions (numbered).
P (y|x) for unlabeled points will be constant in atomic subregions.
Figure 3: (Right) The conditional (solid line) for a continuous marginal P (x) (dotted line)
consisting of a mixture of two continuous Gaussian and two labeled points at (x=-0.8,y=-1)
and (x=0.8,y=1). The row of circles at the top depicts the region structure used (a rendering
of overlapping one-dimensional intervals.)
Figure 4: Conditionals (solid lines) for two continuous marginals (dotted lines) plus two
labeled points. Left: the marginal is uniform, and the conditional approaches a straight
line. Right: the marginal is a mixture of two Gaussians (with lower variance and shifted
compared to Figure 3.) The conditional changes slowly in regions of high density.
(mixture of two Gaussians), and two discrete labeled points. We choose N_Q = 40 regions centered at uniform intervals of [−1, 1], overlapping each other half-way, creating N_Q + 1
atomic subregions. There are two labeled points. We show the solution attained by minimizing the maximum information (eq. 3a), and using the (fix-lbl) constraint with
label noise b = 0.05.
The conditional varies smoothly between the labeled points of opposite classes. Note the
dependence on the marginal density P (x). The conditional is smoother in high-density
regions, and changes more rapidly in low-density regions, as expected. Figure 4 shows
more examples, and Figure 5 illustrates solutions obtained via the differential equation
(eq. 6).
Figure 5: Conditionals for two other continuous marginals plus two labeled points (marked
as crosses and located at x=-1, 2 in the left figure and x=-2, 2 in the right), solved via the
differential equation (eq. 6). The conditionals are continuous but non-differentiable at the
two labeled points (marked as crosses).
5 Conclusion
We have presented an information theoretic regularization framework for combining conditional and marginal densities in a semi-supervised estimation setting. The framework
admits both discrete and continuous (known or estimated) densities. The tractability is
largely a function of the number of nonempty intersections of chosen covering regions.
The principle extends beyond the presented scope. It provides flexible means of tailoring
the regularizer to particular needs. The shape and structure of the regions give direct ways
of imposing relations between particular variables or values of those variables. The regions
can be easily defined on low-dimensional data manifolds.
In future work we will try the regularizer on large high-dimensional datasets and explore
theoretical connections to network information theory.
Acknowledgements
The authors gratefully acknowledge support from Nippon Telegraph & Telephone (NTT) and NSF
ITR grant IIS-0085836. Tommi Jaakkola also acknowledges support from the Sloan Foundation in
the form of the Sloan Research Fellowship. Martin Szummer would like to thank Thomas Minka for
valuable comments.
References
[1] Tommi Jaakkola, Marina Meila, and Tony Jebara. Maximum entropy discrimination. Technical
Report AITR-1668, Mass. Inst. of Technology AI lab, 1999. http://www.ai.mit.edu/.
[2] Naftali Tishby and Noam Slonim. Data clustering by markovian relaxation and the information
bottleneck method. In Advances in Neural Information Processing Systems (NIPS), volume 13,
pages 640–646. MIT Press, 2001.
[3] Stephen Roberts, C. Holmes, and D. Denison. Minimum-entropy data partitioning using reversible jump Markov chain Monte Carlo. IEEE Trans. Pattern Analysis and Mach. Intell.
(PAMI), 23(8):909–914, 2001.
[4] Matthias Seeger. Input-dependent regularization of conditional density models. Unpublished.
http://www.dai.ed.ac.uk/homes/seeger/, 2001.
[5] Thomas Cover and Joy Thomas. Elements of Information Theory. Wiley, 1991.
[6] Robert Weinstock. Calculus of Variations. Dover, 1974.
NEW HARDWARE FOR MASSIVE NEURAL NETWORKS
D. D. Coon and A. G. U. Perera
Applied Technology Laboratory
University of Pittsburgh
Pittsburgh, PA 15260.
ABSTRACT
Transient phenomena associated with forward-biased silicon p+ - n - n+ structures at 4.2 K show remarkable similarities with biological neurons. The devices play
a role similar to the two-terminal switching elements in Hodgkin-Huxley equivalent
circuit diagrams. The devices provide simpler and more realistic neuron emulation
than transistors or op-amps. They have such low power and current requirements
that they could be used in massive neural networks. Some observed properties of
simple circuits containing the devices include action potentials, refractory periods,
threshold behavior, excitation, inhibition, summation over synaptic inputs, synaptic
weights, temporal integration, memory, network connectivity modification based on
experience, pacemaker activity, firing thresholds, coupling to sensors with graded signal outputs and the dependence of firing rate on input current. Transfer functions
for simple artificial neurons with spiketrain inputs and spiketrain outputs have been
measured and correlated with input coupling.
INTRODUCTION
Here we discuss the simulation of neuron phenomena by electronic processes in
silicon from the point of view of hardware for new approaches to electronic processing
of information which parallel the means by which information is processed in intelligent organisms. Development of this hardware basis is pursued through exploratory
work on circuits which exhibit some basic features of biological neural networks. Fig. 1
shows the basic circuit used to obtain spiketrain outputs. A distinguishing feature
of this hardware basis is the spontaneous generation of action potentials as a device
physics feature.
Figure 1: Spontaneous, neuronlike spiketrain generating circuit. The spikes are nearly equal in amplitude, so that information is contained in the frequency and temporal pattern of the spiketrain generation.
© American Institute of Physics 1988
TWO-TERMINAL SWITCHING ELEMENTS
The use of transistor-based circuitry [1] is avoided because transistor electrical
characteristics are not similar to neuron characteristics. The use of devices with
fundamentally non-neuronlike character increases the complexity of artificial neural
networks. Complexity would be an important drawback for massive neural networks
and most neural networks in nature achieve their remarkable performance through
their massive size. In addition, transistors have three terminals, whereas the switching
elements of Hodgkin-Huxley equivalent circuits have two terminals. Motivated in
part by Hodgkin-Huxley equivalent circuit diagrams, we employ two-terminal p+ - n - n+ devices which execute transient switching between low conductance and high
conductance states. (See Fig. 2) We call these devices injection mode devices (IMDs).
In the "OFF-STATE", a typical current through the devices is ~100 fA/mm², and in the "ON-STATE" a typical current is ~10 mA/mm². Hence this device is an extremely good switch with an ON/OFF ratio of 10^11. As in real neurons [2], the current
in the device is a function of voltage and time, not only voltage. The devices require
cryogenic cooling but this results in an advantageously low quiescent power drain of
< 1 nanowatt/cm² of chip area and the very low leakage currents mentioned above.
In addition, the highly unique ability of the neural networks described here to operate
in a cryogenic environment is an important advantage for infrared image processing
at the focal plane (see Fig. 3 and further discussion below). Vision systems begin
processing at the focal plane and there are many benefits to be gained from the
vision system approach to IR image processing.
Figure 2: Switching element in Hodgkin-Huxley equivalent circuits.
Figure 3: Single-stage conversion of infrared intensity to spiketrain frequency with a neuron-like semiconductor device. No pre-amplifiers are necessary.
Coding of graded input signals (see Fig. 4) such as photocurrents into action potential spike trains with millimeter-scale devices has been experimentally demonstrated [3] with currents from 1 μA down to about 1 picoampere, with coding noise referred to input of < 10 femtoamperes. Coding of much smaller current levels should be possible with smaller devices. Figure 5 clearly shows the threshold behavior of the IMD. For devices studied to date, a transition from action potential output to graded signal output is observed for input currents of the order of 0.5 picoamperes.
Figure 4: Coding of NIR-VISIBLE-UV intensity into firing frequency of a spiketrain, and the experimentally determined firing rate vs. the input current for one device. Note that the dynamic range is about 10^7.
Figure 5: Illustration of the threshold firing of the device in response to input step functions (500 μs/div).
This transition is remarkably well described in von Neumann's discussion [5,6] of the mixed character of neural elements, which he relates to the concept of subliminal stimulation levels which are too low to produce the stereotypical all-or-nothing
response. Neural network modelers frequently adopt viewpoints which ignore this
interesting mixed character. The von Neumann viewpoint links the mixed character
to concepts of nonlinear dynamics in a way which is not apparent in recent neural
network modeling literature. The scaling down of IMD size should result in even
lower current requirements for all-or-nothing response.
DEVICE PHYSICS
Recently, neuronlike action potential transients in IMDs have been the subject of considerable research [3,4,7,8,9,10,11,12,13]. In the simple circuits of Fig. 1, the IMD gives rise to a spontaneous neuronlike spiketrain output. Between pulses, the IMD is polarized in the sense that it is in a low conductance state with a substantial voltage occurring across it, even though it is forward biased. The low conductance has been attributed to small interfacial work functions due to band offsets at the n+ -n and p+ -n interfaces [8].
Low temperatures inhibit thermionic injection of electrons and holes into the
n-region from the n+ -layer and p+ -layer impurity bands [14]. Pulses are caused by
204
switching to depolarized states with low diode potential drops and large injection
currents which are believed to be triggered by the slow buildup of a small thermionic
injection current from the n+ -layer into the n-region. The injection current can cause
impact ionization of n-region donor impurities resulting in an increasingly positive
space charge which further enhances the injection current to the point where the IMD
abruptly switches to the low conductance state with large injection current. Switching
times are typically under 100 ns. Charging of the load capacitance CL cuts off the
large injection current and resets the diode to its low conductance state. The load
capacitor CL then discharges through RL. During the CL discharging time constant
RLCL the voltage across the IMD itself is low and therefore the bias voltage would
have to be raised substantially to cause further firing. Thus, RLCL is analogous to
the refractory period of a neuron. The output pulses of an IMD generally have about
the same amplitude while the rate of pulsing varies over a wide range depending on
the bias voltage and the presence of electromagnetic radiation [7,8,10].
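The analogy between the load time constant RL·CL and a refractory period can be made concrete with a small calculation; the component values are illustrative assumptions, not values measured for the devices above.

```python
import math

R_L = 1e6    # load resistance in ohms (assumed)
C_L = 1e-9   # load capacitance in farads (assumed)

tau = R_L * C_L            # discharge time constant; the "refractory period"

# After firing, the voltage on C_L decays as V(t) = V0 * exp(-t / tau),
# and the diode cannot fire again until most of this charge has bled off.
V0 = 3.0                   # output pulse amplitude in volts (assumed)
V_after_tau = V0 * math.exp(-1.0)

print(f"tau = {tau * 1e3:.1f} ms, V(tau) = {V_after_tau:.2f} V")
```

Scaling R_L or C_L thus tunes the refractory period directly, just as biological refractory periods set a ceiling on firing rate.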
Figure 6: Illustrative laminar architecture showing stacked wafers in three dimensions (detector array, transient sensing, motion sensing and tracking, laminar neural network, 2-D parallel output).
REAL TIME PARALLEL ASYNCHRONOUS PROCESSING
The devices described here could form the hardware basis for a parallel asynchronous processor in much the same way that transistors form the basis for digital
computers. The devices could be used to construct networks which could perform real
time signal processing. Pulse propagation through silicon chips (parallel fire-through, see Fig. 7), as opposed to the lateral planar propagation in conventional integrated circuits, has been proposed [15]. This would permit the use of laminar, stacked wafer
architectures. See Fig. 6.
Such architectures would eliminate the serial processing limitations of standard processors which utilize multiplexing and charge transfer. There are additional
advantages in terms of elimination of pre-amplifiers and reduction in power consumption. The approach would utilize the low-power, low-noise devices [10] described here to perform input signal-to-frequency conversion in every processing channel.
POWER CONSUMPTION FOR A BRAIN SCALE SYSTEM
The low power and low current requirements together with the electronic simplicity (lower parts-count as compared with transistor and op-amp approaches) and
Figure 7: Schematic illustration of the signal flow pattern (inputs at top, outputs at bottom) through a real time parallel asynchronous processor consisting of stacked silicon wafers.
the natural emulation of neuron features means that the approach described here
would be especially advantageous for very large neural networks, e.g. systems comparable to supercomputers, in which power dissipation and system complexity are important considerations. The power consumption of large-scale analog [16] and digital [17] systems is always a major concern. For example, the power consumption of the CRAY X-MP/48 is of the order of 300 kilowatts. For the devices described here, the power consumption is very low: we have observed quiescent power drains of about 1 nW/cm² and pulse power consumption of about 500 nJ/pulse/cm². We estimate that a system with 10^11 active 10 μm × 10 μm elements (comparable to the number of neurons in the brain [18]), all firing with an average pulse rate of 1 kHz (corresponding to a high neuronal firing rate [5]), would consume about 50 watts. The quiescent power drain for this system would be 0.1 milliwatts. Thus, the power (P) requirements for such an artificial neural network with the size scale (10^11 pulse-generating elements) of the human brain, and a range of activity between zero and the maximum conceivable sustained activity for neurons in the brain, would be 0.1 milliwatts < P < 50 watts for 10-micron technology. For comparison, we note that von Neumann's estimate for the power dissipation of the brain is of order 10 to 25 watts [5,6]. Fabrication of a 10^11-element 10 μm artificial neural network would require processing of about 1500 four-inch wafers.
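The brain-scale power estimate can be checked directly from the figures quoted above (1 nW/cm² quiescent drain, 500 nJ/pulse/cm², 10 μm × 10 μm elements, 10^11 elements, 1 kHz average rate); the sketch below is just arithmetic on those stated numbers.

```python
n_elements = 1e11                 # pulse-generating elements (brain scale)
element_area = (10e-4) ** 2       # 10 um x 10 um expressed in cm^2
total_area = n_elements * element_area        # total active chip area, cm^2

quiescent_power = 1e-9 * total_area           # 1 nW/cm^2 quiescent drain
pulse_power = 500e-9 * total_area * 1e3       # 500 nJ/pulse/cm^2 at 1 kHz

print(f"quiescent: {quiescent_power * 1e3:.1f} mW")  # 0.1 mW
print(f"active:    {pulse_power:.0f} W")             # 50 W
```

Both of the paper's headline numbers (0.1 mW quiescent, 50 W fully active) fall out of the device-level figures with no free parameters.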
NETWORK CONNECTIVITY
For a network with coupling between many IMDs [3] we have shown that

    Ci dVi/dt = -Vi/Ri + Σ_j Tij F(Vj) + Ii    (1)

where Vi is the voltage across the diode and Ci is the input capacitance of the i-th network node, Ri represents a leakage resistance in parallel with Ci, and Ii represents an external current input to the i-th diode. i, j = 1, 2, 3, ... label different network nodes and Tij incorporates coupling between network elements. Equation 1 has the same form as equations which occur in the Hopfield model [20,21,22,23] for neural networks.
Sejnowski has also discussed similar equations in connection with skeleton filters in
Figure 8: a) Main features of a typical neuron, from Kandel and Schwartz [19]. b) Our artificial neuron, which shows the summation over synaptic inputs and fan-out.
the brain [24,25]. Nonlinear threshold behavior of IMDs enters through F(V) as it does in the neural network models.
In Fig. 8-b a range of input capacitances is possible. This range of capacitances
is related to the range of possible synaptic weights. The circuit in Fig. 8 accomplishes
pulse height discrimination and each pulse can contribute to the charge stored on
the central node capacitance C. The charge added to C during each input pulse is
linearly related to the input capacitance except at extreme limits. The range of input
capacitances for a particular experiment was 0.002 μF to 0.2 μF, which differ by a factor
of about 100. The effect of various input capacitance values (synaptic weights) on
input-output firing rates is shown in Fig. 9. Also the Fig. 8-b shows many capacitive
inputs/outputs to/from a single IMD. i.e. fan-in and fan-out. For pulses which arrive
at different inputs at about the same time) the effect of the pulses is additive. The
time within which inputs are summed is just the stored charge lifetime. Summation
over many inputs is an important feature of neural information processing.
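The charge-summation behavior of Fig. 8-b can be sketched as a pulse-driven integrator: each input pulse of amplitude Vp through coupling capacitance Ci deposits roughly ΔQ = Ci·Vp on the central node, and the node fires when its voltage crosses threshold. All values below are assumed for illustration.

```python
# Pulse summation with capacitive synaptic weights; all values are assumed.
C_node = 0.5e-6   # central node capacitance in farads (assumed)
V_p = 3.0         # input pulse amplitude in volts (assumed)
V_T = 1.0         # firing threshold in volts (assumed)

def output_spikes(n_pulses, C_in):
    """Count output firings for n_pulses input pulses through coupling cap C_in."""
    v, spikes = 0.0, 0
    for _ in range(n_pulses):
        v += (C_in * V_p) / C_node   # charge C_in * V_p deposited per pulse
        if v >= V_T:
            spikes += 1
            v = 0.0                  # firing discharges the central node
    return spikes

# Doubling the coupling capacitance (the synaptic weight) roughly doubles
# the output rate, as in the linear relation of Fig. 9.
print(output_spikes(100, 0.03e-6), output_spikes(100, 0.06e-6))
```

This reproduces the qualitative content of Fig. 9: output rate grows roughly linearly with both input rate and coupling capacitance.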
EXCITATION, INHIBITION, MEMORY
Both excitatory and inhibitory input circuits are shown in Fig. 10. Input pulses
cause the accumulation of charge on C in excitatory circuits and the depletion of
charge on C in inhibitory circuits. Charge associated with input spiketrains is integrated/stored on C. The temporally integrated charge is depleted by the firing of the
IMD. Thus, the storage time is related to the firing rate. After an input spiketrain raises the potential across C to a value above the firing threshold, the resulting IMD
Figure 9: Output pulse rate vs. the input pulse rate for different input capacitance values Ci (curves for 0.03 μF and 0.2 μF; input pulse rates up to 100 Hz).
Figure 10: Circuits which incorporate rectifying synaptic inputs. a) An excitatory input. b) An inhibitory input.
output spiketrain codes the input information. The output firing rate is linearly related to the input firing rate times the synaptic coupling strength (linearly related to
Ci). See Fig. 9. If the input ceases, then the potential across C relaxes back to a value
just below the firing threshold. When not firing, the IMD has a high impedance. If
there is negligible leakage of charge from C, then V can remain near VT (threshold
voltage) for a long time and a new input signal will quickly take the IMD over the
firing threshold. See Fig. 11. We have observed stored charge lifetimes of 56 days, and longer times may be achievable. The lifetime of charge stored on C can be reduced
by adding a resistance in parallel with C.
From the discussion of integration, we see that long term storage of charge on C
is equivalent to long term memory. The memory can be read by seeing if a new input
pulse or spiketrain produces a prompt output pulse or spiketrain. The read signal
input channel in Fig. 8-b can be the same as or different from the channel which
resulted in the charge storage. In either case memory would produce a change in the
pattern of connectivity if the circuit was imbedded in a neural network. Changes in
patterns of connectivity are similar to Hebb's rule considerations [26] in which memory
is associated with increases in the strength (weight) of synaptic couplings. Frequently,
Figure 11: Firing rate vs. the bias voltage. The region where the firing is negligible is associated with memory. The state of the memory is associated with the proximity to the firing threshold.
the increase in synaptic weights is modeled by increased conductance whereas in the
circuits in Figs. 10(a) and 8-b, memory is achieved by integration and charge storage.
Note that for these particular circuits, the memory is not erasable although volatile
(short term) memory can easily be constructed by adding a resistor in parallel with
C. Thus, a continuous range of memory lifetimes can be achieved.
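The read operation described above reduces to a threshold test: a node holding stored charge sits just below the firing threshold, so a single read pulse fires it promptly, while an unwritten node stays silent. The voltage values here are illustrative assumptions.

```python
V_T = 1.0        # firing threshold in volts (assumed)
dV_read = 0.2    # voltage step added by a single read pulse (assumed)

def read(v_stored):
    """True iff stored charge plus one read pulse crosses the firing threshold."""
    return v_stored + dV_read >= V_T

print(read(0.9), read(0.0))   # written node fires, unwritten node does not
```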
2-D PARALLEL ASYNCHRONOUS CHIP-TO-CHIP TRANSMISSION
For many IMD's the output pulse heights for a circuit like that in Fig. 1 are
>3 volts. Thus, output from the first stage or any later stage of the network could
easily be transmitted to other parts of an overall system. Two-dimensional arrays
of devices on different chips could be coupled by indium bump bonding to form
the laminar architecture described above. Planar technology could be used for local
lateral interconnections in the processor. (See Fig. 7) In addition to transmission of
electrical pulses, optical transmission is possible because the pulses can directly drive
LED's.
Emerging GaAs-on-Si technology is interesting as a means of fabricating two
dimensional emitter arrays. Optical transmission is not necessary but it might be
useful (A) for processed image data transfer, (B) for coupling to an optical processor, or (C) to provide 2-D optical interconnects between chips bearing 2-D arrays of
p+ - n - n+ diodes. Note that with optical interconnects between chips, the circuits
employed here would be internal receivers. The p-i-n diodes employed in the present
work would be well suited to the receiver role. An interesting possibility would entail the use of optical interconnects between chips to achieve local, lateral interaction.
This would be accomplished by having each optical emitter in a 2-D array broadcast
locally to multiple receivers rather than to a single receiver. Similarly, each receiver
would have a receptive field extending over multiple transmitters. It is also possible
that an optical element could be placed in the gap between parallel transmitter and
receiver planes to structure, control or alter 2-D patterns of interconnection. This
would be an alternative to a planar technology approach to lateral interconnection.
If the optical elements were active, then the system would constitute a hybrid optical/electronic processor, whereas if passive optical elements were employed, we would
regard the system as an optoelectronic processor. In either case, we picture the processing functions of temporal integration, spatial summation over inputs, coding and
pulse generation as residing on-chip.
ACKNOWLEDGEMENTS
The work was supported in part by U.S. DOE under contract # DE-AC02-80ER10667 and NSF under grant # ECS-8603075.
References
[1] L. D. Harmon, Kybernetik 1,89 (1961).
[2] A. L. Hodgkin and A. F. Huxley, J. Physiol. 117, 500 (1952).
[3] D. D. Coon and A. G. U. Perera, Int. J. Electronics 63, 61 (1987).
[4] K. M. S. V. Bandara, D. D. Coon and R. P. G. Karunasiri, Infrared Transient Sensing, to be published.
[5] J. von Neumann, The Computer and the Brain, Yale University Press, New
Haven and London, 1958.
[6] J. von Neumann, Collected Works, Pergamon Press, New York, 1961.
[7] D. D. Coon and A. G. U. Perera, Int. J. Infrared and Millimeter Waves 7, 1571
(1986).
[8] D. D. Coon and S. D. Gunapala, J. Appl. Phys. 57, 5525 (1985).
[9] D. D. Coon, S. N. Ma and A. G. U. Perera, Phys. Rev. Lett. 58, 1139 (1987).
[10] D. D. Coon and A. G. U. Perera, Applied Physics Letters 51, 1711 (1987).
[11] D. D. Coon and A. G. U. Perera, Solid-State Electronics 29, 929 (1986).
[12] D. D. Coon and A. G. U. Perera, Applied Physics Letters 51, 1086 (1987).
[13] K. M. S. V. Bandara, D. D. Coon and R. P. G. Karunasiri, Appl. Phys. Lett. 51, 961 (1987).
[14] Y. N. Yang, D. D. Coon and P. F. Shepard, Applied Physics Letters 45, 752
(1984).
[15] D. D. Coon and A. G. U. Perera, Int. J. IR and Millimeter Waves 8, 1037 (1987).
[16] M. A. Sivilotti, M. R. Emerling and C. A. Mead, VLSI Architectures for Implementation of Neural Networks, in Neural Networks for Computing, A.I.P., 1986, pp. 408-413.
[17] R. W. Keyes, Proc. IEEE 63, 740 (1975).
[18] E. R. Kandel and J. H. Schwartz, Principles of Neural Science, Elsevier, New
York, 1985.
[19] E. R. Kandel and J. H. Schwartz, Principles of Neural Science, Elsevier, New York, 1985, page 15. Reproduced by permission of Elsevier Science Publishing Co., N.Y.
[20] J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 81, 3088 (1984).
[21] J. J. Hopfield and D. W. Tank, Biol. Cybern. 52, 141 (1985).
[22] J. J. Hopfield and D. W. Tank, Science 233,625 (1986).
[23] D. W. Tank and J. J. Hopfield, IEEE Trans. Circuits Syst. CAS-33, 533 (1986).
[24] T. J. Sejnowski, J. Math. Biology 4, 303 (1977).
[25] T. J. Sejnowski, Skeleton Filters in the Brain, Lawrence Erlbaum, New Jersey,
1981, pp. 189-212, edited by G. E. Hinton and J. A. Anderson.
[26] J. L. McClelland, D. E. Rumelhart and the PDP research group, Parallel Distributed Processing, The MIT Press, Cambridge, Massachusetts, 1986, two volumes.
Harris-Warrick
MECHANISMS FOR NEUROMODULATION
OF BIOLOGICAL NEURAL NETWORKS
Ronald M. Harris-Warrick
Section of Neurobiology and Behavior
Cornell University
Ithaca, NY 14853
ABSTRACT
The pyloric Central Pattern Generator of the crustacean stomatogastric
ganglion is a well-defined biological neural network. This 14-neuron
network is modulated by many inputs. These inputs reconfigure the
network to produce multiple output patterns by three simple
mechanisms: 1) determining which cells are active; 2) modulating the
synaptic efficacy; 3) changing the intrinsic response properties of
individual neurons. The importance of modifiable intrinsic response
properties of neurons for network function and modulation is discussed.
1 INTRODUCTION
Many neural network models aim to understand how a particular process is accomplished
by a unique network in the nervous system. Most studies have aimed at circuits for
learning or sensory processing; unfortunately, almost no biological data are available on
the actual anatomical structure of neural networks serving these tasks, so the accuracy of
the theoretical models is unknown. Much more is known concerning the structure and
function of motor circuits generating simple rhythmic movements, especially in simpler
invertebrate nervous systems (Getting, 1988). Called Central Pattern Generators (CPGs),
these are rather small circuits of relatively well-defined composition. The output of the
network is easily measured by monitoring the motor patterns causing movement.
Research on cellular interactions in CPGs has shown that simple models of fixed circuitry
for fixed outputs are oversimplified. Instead, these neural networks have evolved with
maximal flexibility in mind, such that modulatory inputs to the circuit can reconfigure it
"on the fly" to generate an almost infinite variety of motor patterns. These modulatory
inputs, using slow transmitters such as monoamines and peptides, can change every
component of the network, thus constructing multiple functional circuits from a single
network (Harris-Warrick, 1988). In this paper, I will describe a model biological system
to demonstrate the types of flexibility that are built into real neural networks.
Mechanisms for Neuromodulation of Biological Neural Networks
2 THE CRUSTACEAN STOMATOGASTRIC GANGLION
The pyloric CPG in the stomatogastric ganglion (STG) of lobsters and crabs is the best-understood neural circuit (Selverston and Moulins, 1987). The STG is a tiny ganglion of
30 neurons that controls rhythmic movements of the foregut. The pyloric CPG controls
the peristaltic pumping and filtering movements of the pylorus, or posterior part of the
foregut. This network contains 14 neurons, each of which is unambiguously assignable
to one of 6 cell types (Figure 1A). Since each neuron can be identified from preparation
to preparation, detailed studies of the properties of each cell are possible. Thanks to the
careful work of Selverston and Marder and their colleagues, the anatomical synaptic
circuitry is completely known (Fig. 1A), and consists of chemical synaptic inhibition and
electrotonic coupling; there is no chemical excitation in the circuit (Miller, 1987).
Despite the complete knowledge of the synaptic connections within this network. the
major question of "how it works" is still an important topic of neurobiological research.
Early modelling efforts (summarized in Hartline, 1987) showed that, while the pattern of
mutual synaptic inhibition provided important insights into the phase relations of the
neurons active in the three-phase motor pattern, pure connectionist models with simple
threshold elements for neurons were insufficient to explain the motor pattern generated by
the network. It has been necessary to understand the intrinsic response properties of each
neuron in the circuit, which differ markedly from one another in their responses to
identical stimuli. Most importantly, as will be described below, all 14 neurons are
conditional oscillators, capable (under the appropriate conditions) of generating rhythmic
bursts of action potentials in the absence of synaptic input (Bal et al., 1988). This and
other intrinsic properties of the neurons, coupled with the pattern of mutual synaptic
inhibition within the circuitry, have generated relatively good models of the pyloric motor
pattern under a specified set of conditions (Hartline, 1987).
[Figure 1: panel A, synaptic wiring diagram of the pyloric circuit; panels B-F, spike rasters under the Combined, Sucrose block, Dopamine, Serotonin, and Octopamine conditions (PDN, LP-PY, MVN, and AB traces).]
Figure 1: Multiple motor patterns from the pyloric network in the presence of different
neurotransmitters. A. Synaptic wiring diagram of the pyloric CPG. B.-F. Motor
patterns observed under different conditions (see text). PDN, LP-PY, MVN traces:
extracellular recordings of action potentials from indicated neurons. AB: intracellular
recording from the AB interneuron. From Harris-Warrick and Flamm (1987a).
Harris-Warrick
3 MULTIPLE MOTOR PATTERNS PRODUCED BY AN
ANATOMICALLY FIXED NEURAL NETWORK
When the STG is dissected with intact inputs from other ganglia, the pyloric CPG
generates a stereotyped motor pattern (Miller, 1987). However, in vivo, the network generates a widely varying motor pattern, depending on the feeding state of the animal (Rezer
and Moulins, 1983). The motor pattern varies in the cycle frequency and regularity,
which cells are active, the intensity of cell firing, and phase relations.
This variability can be mimicked in vitro, where experimental control over the system is
better. Two major experimental approaches have been used. First, transmitters and
modulators that are present in the input nerve to the STG can be bath-applied, producing
unique variants on the basic motor theme. Second, identified modulatory neurons can be
selectively stimulated, activating and altering the ongoing motor pattern.
As an example, the effects of the monoamines dopamine (DA), serotonin (5HT) and
octopamine (OCT) on the pyloric motor pattern are shown in Figure 1. When modulatory inputs from other ganglia are present, the pyloric rhythm cycles strongly, with all
neurons active (Combined). Removal of these inputs usually causes the rhythm to cease,
and cells are either silent or fire tonically (Sucrose Block). Bath application of some of
the transmitters present in the input nerve can restore rhythmic cycling. However, the
motor pattern induced is different and unique for each transmitter tested: clearly the
patterns induced by DA, 5HT and OCT differ markedly in frequency, intensity, active cells
and phasing (Flamm and Harris-Warrick, 1986a). The conclusion is that an anatomically
fixed network can generate a variety of outputs in the presence of different modulatory inputs: the anatomy of the network does not determine its output.
4 MECHANISMS FOR ALTERATION OF NEURAL
NETWORK OUTPUT BY NEUROMODULATORS
We have studied the cellular mechanisms used by monoamines to modify the pyloric
rhythm. To do this, we isolate a single neuron or single synaptic interaction by selective
killing of other neurons or pharmacological blockade of synapses (Flamm and Harris-Warrick, 1986b). The amine is then added and its direct effects on the neuron or synapse
determined. Nearly every neuron in the network responded directly to all three amines we
tested. However, even in this simple 14-neuron circuit, different neurons responded differently to a single amine. For example, DA induced rhythmic oscillations and bursting in
one cell type, hyperpolarized and silenced two others, and depolarized the remaining cells
to fire tonically (Fig. 2). Thus, one cannot use the knowledge of the effects of a
transmitter on one neuron to infer its actions on other neurons in the same circuit. Our
studies of the actions of DA, 5HT and OCT on the pyloric network have demonstrated
three simple mechanisms for altering the output from a network.
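As a cartoon of these three mechanisms, consider a toy rate model of two mutually inhibitory cells (all parameters hypothetical, not fitted to STG data). The same fixed "anatomy" of mutual inhibition yields different steady outputs when a modulator silences one cell, weakens the inhibitory synapse, or raises intrinsic excitability:

```python
import numpy as np

def simulate(drive, w_inh, gain, steps=200):
    """Two mutually inhibitory rate units; returns their steady-state rates.

    drive : tonic input to each cell (mechanism 1: silence a cell)
    w_inh : strength of mutual inhibition (mechanism 2: synaptic efficacy)
    gain  : slope of each cell's f-I curve (mechanism 3: intrinsic excitability)
    """
    r = np.zeros(2)
    for _ in range(steps):
        inp = np.array([drive[0] - w_inh * r[1],
                        drive[1] - w_inh * r[0]])
        r = np.minimum(np.maximum(0.0, gain * inp), 10.0)  # rectified, saturating
    return r

baseline = simulate(drive=[1.0, 1.0], w_inh=0.5, gain=1.0)   # both cells at ~0.67
silenced = simulate(drive=[1.0, -1.0], w_inh=0.5, gain=1.0)  # 1) cell 2 shut off
weakened = simulate(drive=[1.0, 1.0], w_inh=0.1, gain=1.0)   # 2) inhibition weakened
excited = simulate(drive=[1.0, 1.0], w_inh=0.5, gain=1.5)    # 3) excitability raised
```

Each manipulation changes the firing pattern of the same two-cell circuit, loosely mirroring how modulators reconfigure a fixed anatomy.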
[Figure 2: Control and Dopamine recordings from isolated VD, LP, PY, and IC neurons.]
Figure 2: Actions of dopamine on isolated neurons from the pyloric network.
Control: Activity of each neuron when totally isolated from all synaptic input.
Dopamine: Activity of isolated cell during bath application of 10-4M dopamine.
4.1 ALTERATION OF THE NEURONS THAT ARE ACTIVE PARTICIPANTS
IN THE FUNCTIONAL CIRCUIT
By simply exciting a silent cell or inhibiting an active cell, a neuromodulator can determine which of the cells in a network will actively participate in the generation of the
motor pattern. Some cells thus are physiologically inactive, even though they are
anatomically present.
However, in some cases, unaffected cells can make a significant contribution to the motor
pattern. Hooper and Marder (1987) have shown that the peptide proctolin activates the
pyloric rhythm and induces rhythmic oscillations in one neuron. Proctolin has no effect
on three other neurons that are electrically coupled to the oscillating neuron; these cells
impose an electrical drag on the oscillator neuron, causing it to cycle more slowly than it
does when isolated from these cells. Thus, the unaffected cells cause the whole motor
pattern to cycle more slowly.
4.2 ALTERATION OF THE SYNAPTIC EFFICACY OF
CONNECTIONS WITHIN THE NETWORK
The flexibility of synaptic interactions is well-known and is used in virtually all models
of plasticity in neural networks. By changing the amount of transmitter released from the
pre-synaptic terminal or the post-synaptic responsiveness (either by altering the membrane resistance or the number of receptors), the strength of a synapse can be altered over
an order of magnitude. Obviously, this will have important effects on the phase relations
of neurons firing in the network.
In the STG, the situation is complicated by the fact that graded synapses are the primary
form of chemical communication: the cells release transmitter as a continuous function of
membrane potential, and do not require action potentials to trigger release (Graubard,
1978). Some neurons even release transmitter at rest and must be hyperpolarized to block
release. We have shown that graded synaptic transmission is also strongly modulated by
monoamines, which can completely eliminate some synapses while strengthening others
(Fig.3; Johnson and Harris-Warrick, 1990). Amines can change the apparent threshold for
transmitter release or the functional strength of the synapse. Modulation of graded transmission thus allows delicate adjustments of the phasing between cells in the
motor pattern, which is often determined by synaptic interactions. Graded synaptic
transmission occurs in many species, so this could turn out to be a general form of
plasticity.
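A minimal sketch of graded release, assuming a sigmoidal release curve and hypothetical numbers: the cell releases transmitter continuously as a function of membrane potential, and a modulator can shift the apparent release threshold or scale the maximal release:

```python
import math

def graded_release(v_pre, v_thresh=-50.0, gain=0.2, r_max=1.0):
    """Transmitter release as a continuous (sigmoidal) function of
    presynaptic membrane potential -- no action potential required."""
    return r_max / (1.0 + math.exp(-gain * (v_pre - v_thresh)))

v_rest = -60.0
control = graded_release(v_rest)                  # little release at rest
shifted = graded_release(v_rest, v_thresh=-65.0)  # threshold below rest: tonic release
weakened = graded_release(v_rest, r_max=0.1)      # synapse weakened 10-fold
```

Shifting the threshold below the resting potential produces release at rest (as observed for some STG neurons), while scaling the maximum mimics a modulator strengthening or nearly eliminating a synapse.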
[Figure 3: PD and LP traces under Control, 10-4 M DA, and 10-5 M Oct conditions, with voltage/current calibration bars.]
Figure 3: Modulation of graded synaptic transmission from the PD neuron to the LP
neuron by octopamine and dopamine. Experiment done in the presence of tetrodotoxin to
abolish action potentials. Other synaptic inputs to these cells have been eliminated.
In one case, modulation of graded transmission results in a sign reversal of the synaptic
interaction between two cells (Johnson and Harris-Warrick, 1990). In the pyloric CPG,
the PD neurons weakly inhibit the IC neuron by a graded chemical mechanism, but in
addition the two cells are weakly electrically coupled. This mixed synapse is weak and
variable. Dopamine weakens the chemical inhibition: the electrical coupling dominates
and the IC cell depolarizes upon PD depolarization. Octopamine strengthens the chemical
inhibition, and the IC cell hyperpolarizes upon PD depolarization. Combined chemical
and electrical synaptic interactions have been detected in many other preparations, and thus
can underlie flexibility in the strength and sign of synaptic interactions.
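The sign reversal can be sketched with a linearized toy model (hypothetical conductances, not the measured PD-to-IC values): the electrical component makes the postsynaptic cell follow a presynaptic depolarization, while graded chemical inhibition opposes it, so scaling the chemical term flips the net sign:

```python
def postsyn_deflection(dv_pre, g_elec, g_chem, v_thresh=5.0):
    """Net postsynaptic deflection (linearized, arbitrary units) when the
    presynaptic cell depolarizes by dv_pre from rest.

    The electrical component follows the presynaptic cell; graded chemical
    inhibition opposes it once dv_pre exceeds the release threshold."""
    electrical = g_elec * dv_pre
    chemical = -g_chem * max(0.0, dv_pre - v_thresh)
    return electrical + chemical

g_elec = 0.3  # weak electrical coupling (hypothetical value)
control = postsyn_deflection(10.0, g_elec, g_chem=0.8)     # net inhibition
dopamine = postsyn_deflection(10.0, g_elec, g_chem=0.2)    # inhibition weakened:
                                                           # cell now follows (+)
octopamine = postsyn_deflection(10.0, g_elec, g_chem=1.6)  # inhibition strengthened
```

With the chemical term weakened the net coupling becomes positive (the cell depolarizes with PD, as with dopamine), and with it strengthened the net inhibition deepens (as with octopamine).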
4.3 ALTERATION OF THE INTRINSIC RESPONSE PROPERTIES
OF THE NETWORK NEURONS
The physiological response properties of neurons within a network are not fixed, but can
be extensively altered by neuromodulators. As a consequence, the response to an identical
synaptic input can vary radically in the presence of different neuromodulators.
4.3.1
Induction of bistable firing properties
Many neurons in both vertebrates and invertebrates are capable of firing in "plateau
potentials", where a brief excitatory stimulus triggers a prolonged depolarized plateau,
with tonic spiking for many seconds, which can be prematurely truncated by a brief
hyperpolarizing input (Hartline et al, 1988). Thus, the neuron shows bistable properties:
brief synaptic inputs can step it between two relatively stable resting potentials which
differ markedly in spike frequency. This property is plastic, and can be induced or
suppressed by neuromodulatory inputs. For example, Fig. 4 shows the DG neuron in the
STG. Under control conditions, a brief depolarizing current injection causes a small
depolarization that is subthreshold for spike initiation. However, after stimulating a
serotonergic/cholinergic modulatory neuron (called GPR), the same brief current injection
induces a prolonged burst of spikes on a depolarized plateau potential (Katz and Harris-Warrick, 1989). Similar results have been obtained in turtle and cat spinal motor neurons
after application of monoamines such as serotonin or its biochemical precursor
(Hounsgaard et al., 1988; Hounsgaard and Kiehn, 1989). Stimulation of a modulatory
neuron can also disable the plateau potentials that are normally present in a neuron (Nagy
et al., 1988).
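A toy bistable membrane model illustrates the plateau behavior (illustrative equations and parameters, not a biophysical STG model): with the "modulated" cubic current the cell has two stable rest states, so a brief depolarizing pulse latches it into a plateau and a brief hyperpolarizing pulse resets it; without that current the cell is monostable:

```python
import numpy as np

def run_cell(pulses, plateau_enabled, t_end=400.0, dt=0.1):
    """Euler integration of a toy membrane equation.

    plateau_enabled=False : passive leak only (monostable, rests at v=0)
    plateau_enabled=True  : cubic current with stable states at v=0 and v=1,
                            a cartoon of modulator-induced bistability
    pulses : list of (t_start, t_stop, amplitude) current injections
    """
    v = 0.0
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        i_inj = sum(a for (t0, t1, a) in pulses if t0 <= t < t1)
        if plateau_enabled:
            dv = -v * (v - 0.4) * (v - 1.0) + i_inj  # bistable: wells at 0 and 1
        else:
            dv = -v + i_inj                          # leak back to 0
        v += dt * dv
        trace.append(v)
    return np.array(trace)

pulses = [(50.0, 60.0, 0.5), (250.0, 260.0, -0.5)]  # brief depolarizing, then
                                                    # brief hyperpolarizing pulse
without_mod = run_cell(pulses, plateau_enabled=False)
with_mod = run_cell(pulses, plateau_enabled=True)
# without_mod returns to rest after each pulse; with_mod latches onto the
# v=1 plateau after the first pulse and is reset to v=0 by the second.
```

The two traces capture the essential phenomenology: the same brief inputs produce transient responses in the unmodulated cell and prolonged, truncatable plateaus in the modulated one.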
[Figure 4: DG membrane potential responses to current injection before and after GPR stimulation (scale bars: 10 mV, 5 sec).]
Figure 4: Induction of plateau potential capability in DG neuron by stimulation of a
serotonergic/cholinergic sensory neuron, GPR.
4.3.2
Induction of endogenous rhythmic bursting
A more extreme form of modulation can occur where the modulatory stimulus induces
endogenous rhythmic oscillations in membrane potential underlying rhythmic bursts of
action potentials. For example, in Figure 5, the pyloric AB neuron shows no intrinsic
oscillatory capabilities when it is isolated from all synaptic input. Bath application of
monoamines such as DA, 5HT and OCT induces rhythmic bursting in this isolated cell
(Flamm and Harris-Warrick, 1986b). Brief stimulation of the serotonergic/cholinergic
GPR neuron can also induce or enhance rhythmic bursting that outlasts the stimulus by
[Figure 5 panels: Control and Dopamine traces from the isolated AB neuron.]
Figure 5: Induction of rhythmic bursting in a synaptically isolated AB neuron by bath
application of dopamine (10-4 M).
several minutes. The quantitative details of the bursting (cycle frequency, oscillation
amplitude, spike frequency, etc.) are different with each amine, due to different ionic
mechanisms for burst generation (Harris-Warrick and Flamm, 1987b). Since the AB
neuron is the major pacemaker in the pyloric CPG, these differences underlie the marked
differences in pyloric rhythm frequency seen with the amines in Fig. 1. Induction of
rhythmic bursting by neuromodulators has been observed in vertebrates (for example,
Dekin et al., 1985), and this is likely to be a general mechanism.
4.3.3
Modulation of post-inhibitory rebound
Most neurons show post-inhibitory rebound, a period of increased excitability following
strong inhibition. This is probably due in part to the activation of prolonged inward
currents during hyperpolarization (Angstadt and Calabrese, 1989). This property can be
modified by biochemical second messengers used by neuromodulators. For example,
elevation of cAMP by forskolin enhances post-inhibitory rebound in the pyloric LP
neuron (Figure 6; Flamm et al., 1987). As a consequence of this modulation, the cell's
response to a simple inhibitory input is radically changed to a biphasic response, with an
initial inhibition followed by delayed excitation.
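The biphasic response can be sketched with a toy leak-plus-slow-inward-current model (hypothetical kinetics; the real mechanism involves cAMP-dependent currents): the slow current activates during hyperpolarization and drives a depolarizing overshoot on release, and scaling its conductance, as a stand-in for forskolin, both depolarizes the cell and enhances the rebound:

```python
import numpy as np

def run(pulse, g_h, t_end=300.0, dt=0.1, tau_h=30.0):
    """Leaky cell plus a slowly activating, hyperpolarization-gated inward
    current (a toy I_h). Raising g_h stands in for cAMP elevation.

    pulse : (t_start, t_stop, amplitude) hyperpolarizing current injection
    """
    v, h = 0.0, 0.0
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        i_inj = pulse[2] if pulse[0] <= t < pulse[1] else 0.0
        h_inf = 1.0 / (1.0 + np.exp(v + 2.0))  # gate opens when v is hyperpolarized
        h += dt * (h_inf - h) / tau_h          # slow activation
        v += dt * (-v + 3.0 * g_h * h + i_inj)
        trace.append(v)
    return np.array(trace)

pulse = (100.0, 150.0, -5.0)          # strong hyperpolarizing step
control = run(pulse, g_h=0.05)
forskolin = run(pulse, g_h=1.0)       # stand-in for cAMP elevation
control_rebound = control[1500:].max() - control[999]
forskolin_rebound = forskolin[1500:].max() - forskolin[999]
```

In this sketch the high-g_h cell sits at a depolarized baseline and shows a much larger post-inhibitory overshoot, qualitatively matching the forskolin result.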
[Figure 6 panels: Control and Forskolin traces from the LP neuron.]
Figure 6: Induction of post-inhibitory rebound by forskolin, which elevates cAMP
levels, in the LP neuron. Control: Hyperpolarizing current injection does not induce
post-inhibitory rebound, measured at two different resting potentials. Forskolin:
Elevation of cAMP depolarizes LP and induces tonic spiking (left). At all membrane
potentials, a hyperpolarizing pulse is followed by an enhanced burst of action potentials.
5 ENDOGENOUS RELEASE OF NEUROMODULATORS
FROM IDENTIFIED NEURONS
Most of the results I have described were obtained with bath application of amines or peptides, a method that can be criticized as being non-physiological. To test this, a number
of neurons containing identified neuromodulators have been found, and the action of the
naturally released and bath-applied modulator directly compared. An immediate
complication arose from these studies: the majority of the known modulatory neurons
contain more than one transmitter. All possible combinations have been observed,
including a slow transmitter with a fast transmitter, two or more slow transmitters, and
multiple fast transmitters. To fully understand the complex changes in network function
induced by activity in these neurons, it is necessary to study the actions of all the cotransmitters on all the neurons in the network. This has been recently accomplished in
the STG. Here, serotonin is released by a set of sensory cells responding to muscle
stretch (Katz et al., 1989). These cells also contain and release acetylcholine (Katz et
al., 1989). In studying the actions of the two transmitters, remarkable flexibility was
uncovered (Katz and Harris-Warrick, 1989,1990). First, not all target neurons responded
to both released transmitters: some responded only to 5HT, while one cell responded only
to ACh. Second, the responses to released 5HT were all modulatory, but varied markedly
in different cells, mimicking the bath application studies described earlier. Finally, the
two transmitters acted over entirely different time scales. ACh induced rapid EPSPs lasting tens to hundreds of msec via nicotinic receptors, while 5HT induced slow prolonged
responses lasting many seconds to minutes (for example, Fig.4).
It is now clear that neural networks are targets for multiple neuronal inputs using many
different transmitters and modulators. For example, the STG contains only 30 neurons,
but is innervated by over 100 axons from other ganglia. Twelve neurotransmitters have
thus far been identified in these axons (Marder and Nusbaum, 1989), and these are probably
a minority of the total that are present. In recordings from the input nerve to the ganglion, many axons are spontaneously active. Thus, the pyloric network is continuously
bathed with a varying mixture of transmitters and modulators, allowing for very subtle
changes in the firing pattern. In vivo, we expect that each modulator plays a small role
in the overall mixture that determines the final motor pattern.
6 CONCLUSION
The work described here shows conclusively that an anatomically fixed neural network can
be modulated to produce a large variety of output patterns. The anatomical connections in
the network are necessary but not sufficient to understand the output of the network.
Indeed, it is best to think of these networks as libraries of potential components, which
are then selected and activated by the modulatory inputs. In addition to altering which
neurons are active and altering the synaptic strength in the circuits, I have emphasized the
important role of modulation of the intrinsic response properties of the network neurons
in determining the final pattern of output. Indeed, if this aspect of modulation is ignored,
predictions of the actions of modulators on the final motor pattern are grossly in error.
Many modellers claim that this emphasis on the intrinsic computational properties of
single neurons is unique to the invertebrates, which have few cells to work with. In the
vertebrates, they argue, the enormous increase in numbers of cells changes the computational rules such that each cell is a simple threshold element, and complex transformations only take place with changes in synaptic efficacy in the circuits. There are
absolutely no data to support this hypothesis of "simple cells" in vertebrates. In fact, a
great deal of careful work has shown that vertebrate neurons are dynamic elements that
show all the complex intrinsic response properties of invertebrate neurons (Llinas,1988).
These properties can be changed by neuromodulators, just as in the crustacean STG, such
that vertebrate cells can have radically different physiological "personalities" in the
presence of different modulators. Network models which ignore the complex
computational properties of single neurons thus do not reflect the richness and variability
of biological neural networks of both invertebrates and vertebrates alike.
Acknowledgments: Supported by NIH Grant NS17323 and Hatch Act NYC-19141O.
7 BIBLIOGRAPHY
Angstadt, J.D., Calabrese, R.L. (1989) A hyperpolarization-activated inward current in
heart interneurons of the medicinal leech. J. Neurosci. 9: 2846-2857.
Bal, T., Nagy, F., Moulins, M. (1988) The pyloric central pattern generator in Crustacea: a set of conditional neuronal oscillators. J. Comp. Physiol. A 163: 715-727.
Dekin, M.S., Richerson, G.B., Getting, P.A. (1985) Thyrotropin-releasing hormone induces rhythmic bursting in neurons of the nucleus tractus solitarius. Science 229: 67-69.
Flamm, R.E., Harris-Warrick, R.M. (1986a) Aminergic modulation in lobster stomatogastric ganglion. I. The effects on motor pattern and activity of neurons within the pyloric circuit. J. Neurophysiol. 55: 847-865.
Flamm, R.E., Harris-Warrick, R.M. (1986b) Aminergic modulation in lobster stomatogastric ganglion. II. Target neurons of dopamine, octopamine, and serotonin within the pyloric circuit. J. Neurophysiol. 55: 866-881.
Flamm, R.E., Fickbohm, D., Harris-Warrick, R.M. (1987) cAMP elevation modulates physiological activity of pyloric neurons in the lobster stomatogastric ganglion. J. Neurophysiol. 58: 1370-1386.
Getting, P.A. (1988) Comparative analysis of invertebrate central pattern generators. in: Cohen, A.H., Rossignol, S., Grillner, S. (eds.), Neural Control of Rhythmic Movements in Vertebrates. John Wiley and Sons, New York, pp. 101-127.
Graubard, K. (1978) Synaptic transmission without action potentials: input-output properties of a non-spiking presynaptic neuron. J. Neurophysiol. 41: 1014-1025.
Harris-Warrick, R.M. (1988) Chemical modulation of central pattern generators. in: Cohen, A.H., Rossignol, S., Grillner, S. (eds.), Neural Control of Rhythmic Movements in Vertebrates. John Wiley and Sons, New York, pp. 285-331.
Harris-Warrick, R.M., Flamm, R.E. (1987a) Chemical modulation of a small central pattern generator circuit. Trends in Neurosci. 9: 432-437.
Harris-Warrick, R.M., Flamm, R.E. (1987b) Multiple mechanisms of bursting in a conditional bursting neuron. J. Neurosci. 7: 2113-2128.
Hartline, D.K. (1987) Modeling stomatogastric ganglion. in: Selverston, A.I., Moulins, M. (eds.), The Crustacean Stomatogastric System. Springer-Verlag, Berlin, pp. 181-197.
Hartline, D.K., Russell, D.K., Raper, J.A., Graubard, K. (1988) Special cellular and synaptic mechanisms in motor pattern generation. Comp. Biochem. Physiol. 91C: 115-131.
Hooper, S.L., Marder, E. (1987) Modulation of the lobster pyloric rhythm by the peptide proctolin. J. Neurosci. 7: 2097-2112.
Hounsgaard, J., Kiehn, O. (1989) Serotonin-induced bistability of turtle motoneurones caused by a nifedipine-sensitive calcium plateau potential. J. Physiol. 414: 265-282.
Hounsgaard, J., Hultborn, H., Jespersen, B., Kiehn, O. (1988) Bistability of alpha-motoneurones in the decerebrate cat and in the acute spinal cat after intravenous 5-hydroxytryptophan. J. Physiol. 405: 345-367.
Jan, L.Y., Jan, Y.N. (1982) Peptidergic transmission in sympathetic ganglia of the frog. J. Physiol. 327: 219-246.
Johnson, B.R., Harris-Warrick, R.M. (1990) Aminergic modulation of graded synaptic transmission in the lobster stomatogastric ganglion. J. Neurosci., in press.
Katz, P.S., Eigg, M.H., Harris-Warrick, R.M. (1989) Serotonergic/cholinergic muscle receptor cells in the crab stomatogastric nervous system. I. Identification and characterization of the gastropyloric receptor cells. J. Neurophysiol. 62: 558-570.
Katz, P.S., Harris-Warrick, R.M. (1989) Serotonergic/cholinergic muscle receptor cells
in the crab stomatogastric nervous system. II. Rapid nicotinic and prolonged
modulatory effects on neurons in the stomatogastric ganglion. J. Neurophysiol. 62:
571-581.
Katz, P.S., Harris-Warrick, R. M. (1990) Neuromodulation of the crab pyloric central
pattern generator by serotonergic/cholinergic proprioceptive afferents. J. Neurosci.,
in press.
Llinas, R.R. (1988) The intrinsic electrophysiological properties of mammalian neurons:
insights into central nervous function. Science 242: 1654-1664.
Marder, E., Nusbaum, M.P. (1989) Peptidergic modulation of the motor pattern generators in the stomatogastric ganglion. in: Carew, T.I., Kelley, D.B. (eds.), Perspectives in Neural Systems and Behavior, Alan R. Liss, Inc., New York. pp 73-91.
Miller, J.P. (1987) Pyloric mechanisms. in: Selverston, A.I., Moulins, M. (eds.) The
Crustacean Stomatogastric System, Springer-Verlag, Berlin, pp. 109-136.
Nagy, F., Dickinson, P.S., Moulins, M. (1988) Control by an identified modulatory
neuron of the sequential expression of plateau properties of, and synaptic inputs to, a
neuron in a central pattern generator. J. Neurosci. 8:2875-2886.
Rezer, E., Moulins, M. (1983) Expression of the crustacean pyloric pattern generator in
the intact animal. J. Comp. Physiol. 153: 17-28.
Selverston, A.I., Moulins, M. (eds.) (1987) The Crustacean Stomatogastric System
Springer-Verlag, Berlin, 338 pp.
A Bilinear Model for Sparse Coding
David B. Grimes and Rajesh P. N. Rao
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350, U.S.A.
{grimes,rao}@cs.washington.edu
Abstract
Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from
natural images. However, these approaches do not take image transformations into account. As a result, they produce image codes that are
redundant because the same feature is learned at multiple locations. We
describe an algorithm for sparse coding based on a bilinear generative
model of images. By explicitly modeling the interaction between image features and their transformations, the bilinear approach helps reduce
redundancy in the image code and provides a basis for transformation-invariant vision. We present results demonstrating bilinear sparse coding
of natural images. We also explore an extension of the model that can
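The bilinear generative model referred to above can be sketched as follows (illustrative dimensions and a random basis, not the learned natural-image basis): each image patch is generated as a sum over products of feature coefficients x_i, transformation coefficients y_j, and basis vectors w_ij:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m-pixel patches, n features, k transformation dimensions
m, n, k = 64, 8, 4
W = rng.standard_normal((m, n, k))  # basis tensor: one m-vector w_ij per (i, j)

def generate(W, x, y):
    """Bilinear generative model: patch = sum_ij W[:, i, j] * x[i] * y[j].

    x : feature ("content") coefficients, encouraged to be sparse
    y : transformation ("style") coefficients
    """
    return np.einsum('mij,i,j->m', W, x, y)

x = np.zeros(n)
x[2] = 1.0                             # a single active feature (a sparse code)
y_a = np.array([1.0, 0.0, 0.0, 0.0])   # one transformation state
y_b = np.array([0.0, 1.0, 0.0, 0.0])   # another (e.g. a shifted version)
patch_a = generate(W, x, y_a)
patch_b = generate(W, x, y_b)
# Fixing x and varying y re-renders the same feature under different
# transformations; the model is linear in x for fixed y, and vice versa.
```

This separability of "what" (x) from "how it appears" (y) is what lets a single learned feature account for its transformed versions instead of duplicating it at every location.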
capture spatial relationships between the independent features of an object, thereby providing a new framework for parts-based object recognition.
1 Introduction
Algorithms for redundancy reduction and efficient coding have been the subject of considerable attention in recent years [6, 3, 4, 7, 9, 5, 11]. Although the basic ideas can be
traced to the early work of Attneave [1] and Barlow [2], recent techniques such as independent component analysis (ICA) and sparse coding have helped formalize these ideas and
have demonstrated the feasibility of efficient coding through redundancy reduction. These
techniques produce an efficient code by attempting to minimize the dependencies between
elements of the code by using appropriate constraints.
One of the most successful applications of ICA and sparse coding has been in the area of
image coding. Olshausen and Field showed that sparse coding of natural images produces
localized, oriented basis filters that resemble the receptive fields of simple cells in primary
visual cortex [6, 7]. Bell and Sejnowski obtained similar results using their algorithm
for ICA [3]. However, these approaches do not take image transformations into account.
As a result, the same oriented feature is often learned at different locations, yielding a
redundant code. Moreover, the presence of the same feature at multiple locations prevents
more complex features from being learned and leads to a combinatorial explosion when
one attempts to scale the approach to large image patches or hierarchical networks.
In this paper, we propose an approach to sparse coding that explicitly models the interaction between image features and their transformations. A bilinear generative model is used
to learn both the independent features in an image as well as their transformations. Our
approach extends Tenenbaum and Freeman's work on bilinear models for learning content and style [12] by casting the problem within a probabilistic sparse coding framework.
Thus, whereas prior work on bilinear models used global decomposition methods such as
SVD, the approach presented here emphasizes the extraction of local features by removing higher-order redundancies through sparseness constraints. We show that for natural
images, this approach produces localized, oriented filters that can be translated by different amounts to account for image features at arbitrary locations. Our results demonstrate
how an image can be factored into a set of basic local features and their transformations,
providing a basis for transformation-invariant vision. We conclude by discussing how the
approach can be extended to allow parts-based object recognition, wherein an object is
modeled as a collection of local features (or ?parts?) and their relative transformations.
2 Bilinear Generative Models
We begin by considering the standard linear generative model used in algorithms for ICA
and sparse coding [3, 7, 9]:
z = Σ_{i=1}^{m} w_i x_i    (1)

where z is a k-dimensional input vector (e.g. an image), w_i is a k-dimensional basis vector, and x_i is its scalar coefficient. Given the linear generative model above, the goal of ICA is to learn the basis vectors w_i such that the x_i are as independent as possible, while the goal in sparse coding is to make the distribution of the x_i highly kurtotic given Equation 1.
The linear generative model in Equation 1 can be extended to the bilinear case by using two independent sets of coefficients x_i and y_j (or equivalently, two vectors x and y) [12]:

z = Σ_i Σ_j w_ij x_i y_j    (2)

The coefficients x_i and y_j jointly modulate a set of basis vectors w_ij to produce an input vector z. For the present study, the coefficient x_i can be regarded as encoding the presence of object feature i in the image, while the y_j values determine the transformation present in the image. In the terminology of Tenenbaum and Freeman [12], x describes the "content" of the image while y encodes its "style."

Equation 2 can also be expressed as a linear equation in x for a fixed y:

z = Σ_i ( Σ_j w_ij y_j ) x_i = Σ_i w_i^y x_i    (3)

Likewise, for a fixed x, one obtains a linear equation in y. Indeed this is the definition of bilinear: given one fixed factor, the model is linear with respect to the other factor. The power of bilinear models stems from the rich non-linear interactions that can be represented by varying both x and y simultaneously.
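The fixed-factor linearity described above can be checked numerically. The following is a small illustrative sketch (toy dimensions and random values are assumptions for the demo, not the learned bases of the paper):

```python
import numpy as np

# Bilinear generative model (Eq. 2): z = sum_i sum_j w_ij x_i y_j.
# Fixing y collapses it to a linear model in x with effective basis
# w_i^y = sum_j w_ij y_j (Eq. 3).
rng = np.random.default_rng(1)
k, m, n = 9, 5, 4                      # toy sizes (pixels, features, styles)
W = rng.normal(size=(k, m, n))         # basis vectors w_ij (one k-vector each)
x = rng.normal(size=m)                 # "content" coefficients
y = rng.normal(size=n)                 # "style" coefficients

z_bilinear = np.einsum('kij,i,j->k', W, x, y)

W_y = np.einsum('kij,j->ki', W, y)     # effective linear basis for fixed y
z_linear = W_y @ x

print(np.allclose(z_bilinear, z_linear))  # True: linear in x once y is fixed
```

The same check with x held fixed shows linearity in y, which is exactly what "bilinear" means.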
3 Learning Sparse Bilinear Models
3.1 Learning Bilinear Models

Our goal is to learn from image data an appropriate set of basis vectors w_ij that effectively describe the interactions between the feature vector x and the transformation vector y.
A commonly used approach in unsupervised learning is to minimize the sum of squared pixel-wise errors over all images:

E = Σ ||z − ẑ||²    (4)

ẑ = Σ_i Σ_j w_ij x_i y_j    (5)

where ||·|| denotes the L2 norm of a vector and the sum runs over the training images. A standard approach to minimizing such a function is to use gradient descent and alternate between minimization with respect to (x, y) and minimization with respect to w. Unfortunately, the optimization problem as stated is underconstrained. The function E has many local minima, and results from our simulations indicate that convergence is difficult in many cases. There are many different ways to represent an image, making it difficult for the method to converge to a basis set that can generalize effectively.
A related approach is presented by Tenenbaum and Freeman [12]. Rather than using gradient descent, their method estimates the parameters directly by computing the singular value
decomposition (SVD) of a matrix containing input data corresponding to each content
class in every style. Their approach can be regarded as an extension of methods based
on principal component analysis (PCA) applied to the bilinear case. The SVD approach
avoids the difficulties of convergence that plague the gradient descent method and is much
faster in practice. Unfortunately, the learned features tend to be global and non-localized
similar to those obtained from PCA-based methods based on second-order statistics. As a
result, the method is unsuitable for the problem of learning local features of objects and
their transformations.
The underconstrained nature of the problem can be remedied by imposing constraints on x and y. In particular, we could cast the problem within a probabilistic framework and impose specific prior distributions on x and y with higher probabilities for values that achieve certain desirable properties. We focus here on the class of sparse prior distributions for several reasons: (a) by forcing most of the coefficients to be zero for any given input, sparse priors minimize redundancy and encourage statistical independence between the various x_i and between the various y_j [7]; (b) there is growing evidence for sparse representations in the brain: the distribution of neural responses in visual cortical areas is highly kurtotic, i.e. a cell exhibits little activity for most inputs but responds vigorously for a few inputs, causing a distribution with a high peak near zero and long tails; (c) previous approaches based on sparseness constraints have obtained encouraging results [7]; and (d) enforcing sparseness on the x_i encourages the parts and local features shared across objects to be learned, while imposing sparseness on the y_j allows object transformations to be explained in terms of a small set of basic transformations.
3.2 Bilinear Sparse Coding

We assume the following priors for x_i and y_j:

P(x) = (1/Z_α) exp( −α Σ_i g(x_i) )    (6)

P(y) = (1/Z_β) exp( −β Σ_j g(y_j) )    (7)

where Z_α and Z_β are normalization constants, α and β are parameters that control the degree of sparseness, and g is a "sparseness function." For this study, we used g(a) = log(1 + a²).
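As a side illustration (not from the paper), the prior implied by g(a) = log(1 + a²), namely p(a) ∝ (1 + a²)^(−α), is heavy-tailed; for α = 2 it is a rescaled Student-t density with 3 degrees of freedom, so it can be sampled directly and compared with a Gaussian of matched scale:

```python
import numpy as np

# Sparseness prior p(a) ∝ exp(-alpha * g(a)) = (1 + a^2)^(-alpha) with g(a) = log(1 + a^2).
# For alpha = 2 this equals a Student-t with 3 degrees of freedom scaled by 1/sqrt(3).
rng = np.random.default_rng(0)
n = 200_000
sparse = rng.standard_t(df=3, size=n) / np.sqrt(3.0)  # samples of p(a) ∝ (1 + a^2)^(-2)
gauss = rng.normal(0.0, sparse.std(), size=n)         # Gaussian with the same scale

frac_small_sparse = np.mean(np.abs(sparse) < 0.1)
frac_small_gauss = np.mean(np.abs(gauss) < 0.1)
print(frac_small_sparse > frac_small_gauss)  # sharper peak at zero for the sparse prior
```

The sparse prior concentrates more probability near zero while keeping long tails, which is the kurtotic shape the text describes.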
Within a probabilistic framework, the squared error function E summed over all images can be interpreted as representing the negative log likelihood of the data given the parameters, P(z|w, x, y) ∝ exp(−E) (see, for example, [7]). The priors P(x) and P(y) can be used to marginalize this likelihood to obtain the new likelihood function P(z|w). The goal then is to find the w that maximize P(z|w), or equivalently, minimize the negative log of P(z|w). Under certain reasonable assumptions (discussed in [7]), this is equivalent to minimizing the following optimization function over all input images:

E1 = Σ [ ||z − Σ_i Σ_j w_ij x_i y_j||² + α Σ_i g(x_i) + β Σ_j g(y_j) ]    (8)

where the outer sum runs over the training images.
Gradient descent can be used to derive update rules for the components x_i and y_j of the feature vector x and transformation vector y respectively for any image z, assuming a fixed basis w:

Δx_i = η [ 2 (Σ_j w_ij y_j)^T r − α g′(x_i) ]    (9)

Δy_j = η [ 2 (Σ_i w_ij x_i)^T r − β g′(y_j) ]    (10)

where r = z − Σ_i Σ_j w_ij x_i y_j denotes the residual error and η is the step size. Given a training set of inputs, the values for x and y for each image after convergence can be used to update the basis set w in batch mode according to:

Δw_ij = η Σ 2 r x_i y_j    (11)

where the sum is taken over all images in the batch. As suggested by Olshausen and Field [7], in order to keep the basis vectors from growing without bound, we adapted the L2 norm of each basis vector in such a way that the variances of the x_i and y_j were maintained at a fixed desired level.
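The learning scheme of Equations 8-11 can be sketched in a few lines. This is a minimal toy implementation with illustrative sizes, random data, and step sizes (assumptions for the demo, not the paper's settings); it alternates the coefficient updates for a single patch with the basis held fixed and checks that the objective E1 decreases:

```python
import numpy as np

# Bilinear sparse coding objective (Eq. 8) and gradient updates (Eqs. 9-10)
# with sparseness function g(a) = log(1 + a^2).
rng = np.random.default_rng(0)
k, m, n = 16, 4, 3                       # pixels, features, transformation dims
W = rng.normal(0.0, 0.1, (k, m, n))      # fixed basis w_ij for this demo
z = rng.normal(0.0, 1.0, k)              # one "image" patch
x = np.zeros(m)                          # feature ("content") coefficients
y = np.zeros(n); y[0] = 1.0              # style/transformation coefficients
alpha = beta = 0.1
eta = 0.05

def g(a):
    return np.log(1.0 + a ** 2)

def g_prime(a):
    return 2.0 * a / (1.0 + a ** 2)

def energy(x, y):
    r = z - np.einsum('kij,i,j->k', W, x, y)   # residual error
    return r @ r + alpha * g(x).sum() + beta * g(y).sum()

E0 = energy(x, y)
for _ in range(200):
    r = z - np.einsum('kij,i,j->k', W, x, y)
    x += eta * (2.0 * np.einsum('kij,j,k->i', W, y, r) - alpha * g_prime(x))  # Eq. 9
    r = z - np.einsum('kij,i,j->k', W, x, y)
    y += eta * (2.0 * np.einsum('kij,i,k->j', W, x, r) - beta * g_prime(y))   # Eq. 10
E1 = energy(x, y)
print(E1 < E0)  # alternating gradient descent lowers the objective
```

The basis update of Equation 11 would add a batch-averaged step Δw_ij ∝ Σ r x_i y_j on top of this inner loop.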
4 Results
4.1 Training Paradigm

We tested the algorithms for bilinear sparse coding on natural image data. The natural images we used are distributed by Olshausen and Field [7], along with the code for their algorithm. The training set consisted of image patches randomly extracted from ten source images. The images are pre-whitened to equalize large variances in frequency and thus speed convergence. We chose to use a complete basis, and we let the number of transformation dimensions be at least as large as the number of transformations (including the no-transformation case). The sparseness parameters α and β were set to fixed values. In order to assist convergence, all learning occurred in batch mode over a fixed number of image patches. The step size for gradient descent using Equation 11 was fixed. The transformations were chosen to be 2D translations over a small range of pixels in both axes. The style/content separation was enforced by learning a single vector x to describe an image patch regardless of its translation, and likewise a single vector y to describe a particular style given any image patch content.
4.2 Bilinear Sparse Coding of Natural Images
Figure 1 shows the results of training on natural image data. A comparison between the
learned features for the linear generative model (Equation 1) and the bilinear model is
provided in Figure 1 (a).

Figure 1: Representing natural images and their transformations with a sparse bilinear model. (a) A comparison of learned features between a standard linear model and a bilinear model, both trained with the same sparseness priors. The two rows for the bilinear case depict the translated object features w_i^y (see Equation 3) for translations of −3 to +3 pixels. (b) The representation of an example natural image patch, and of the same patch translated to the left. Note that the bar plot representing the x vector is indeed sparse, having only three significant coefficients. The code for the style vectors y, for both the canonical patch and the translated one, is likewise sparse. The w_ij basis images are shown for those dimensions which have non-zero coefficients for x or y.

Although both show simple, localized, and oriented features,
the bilinear method is able to model the same features under different transformations. In
this case, a range of horizontal translations was used in the training of the bilinear
model. Figure 1 (b) provides an example of how the bilinear sparse coding model encodes
a natural image patch and the same patch after it has been translated. Note that both the
x and y vectors are sparse.
Figure 2 shows how the model can account for a given localized feature at different locations by varying the y vector. As shown in the last column of the figure, the translated local
feature is generated by linearly combining a sparse set of basis vectors w_ij.
4.3 Towards Parts-Based Object Recognition
The bilinear generative model in Equation 2 uses the same set of transformation values y_j for all the features x_i. Such a model is appropriate for global transformations
that apply to an entire image region, such as a pixel shift of an image patch or a global illumination change.

Figure 2: Translating a learned feature to multiple locations. The two rows of eight images represent the individual basis vectors w_ij for two values of i. The values of two selected transformation vectors y for each are shown as bar plots. y(m, n) denotes a translation of (m, n) pixels in the Cartesian plane. The last column shows the resulting basis vectors after translation.
Consider the problem of representing an object in terms of its constituent parts. In this case, we would like to be able to transform each part independently of other parts in order to account for the location, orientation, and size of each part in the object image. The standard bilinear model can be extended to address this need as follows:

z = Σ_i Σ_j w_ij x_i y_j^(i)    (12)

Note that each object feature x_i now has its own set of transformation values y_j^(i). The double summation is thus no longer symmetric. Also note that the standard model (Equation 2) is a special case of Equation 12 where y^(i) = y for all i.
We have conducted preliminary experiments to test the feasibility of Equation 12 using
a set of object features learned for the standard bilinear model. Fig. 3 shows the results.
These results suggest that allowing independent transformations for the different features
provides a rich substrate for modeling images and objects in terms of a set of local features
(or parts) and their individual transformations.
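Equation 12 is easy to exercise numerically. In this sketch (toy sizes and random values assumed), each feature carries its own transformation vector, and copying one shared vector into every row recovers the standard model of Equation 2:

```python
import numpy as np

# Parts-based extension (Eq. 12): z = sum_i sum_j w_ij x_i y_j^(i),
# one transformation vector per feature.
rng = np.random.default_rng(2)
k, m, n = 9, 3, 4
W = rng.normal(size=(k, m, n))
x = rng.normal(size=m)
Y = rng.normal(size=(m, n))              # row i holds y^(i) for feature i

z_parts = np.einsum('kij,i,ij->k', W, x, Y)          # Eq. 12

y_shared = rng.normal(size=n)
Y_shared = np.tile(y_shared, (m, 1))                 # all features share one y
z_standard = np.einsum('kij,i,j->k', W, x, y_shared) # Eq. 2
print(np.allclose(np.einsum('kij,i,ij->k', W, x, Y_shared), z_standard))
```

The asymmetric double sum of Equation 12 is just an extra contraction index in the einsum, which is why the parts-based model costs little beyond the standard one.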
5 Summary and Conclusion
A fundamental problem in vision is to simultaneously recognize objects and their transformations [8, 10]. Bilinear generative models provide a tractable way of addressing this
problem by factoring an image into object features and transformations using a bilinear
equation. Previous approaches used unconstrained bilinear models and produced global
basis vectors for image representation [12]. In contrast, recent research on image coding
has stressed the importance of localized, independent features derived from metrics that
emphasize the higher-order statistics of inputs [6, 3, 7, 5]. This paper introduces a new
probabilistic framework for learning bilinear generative models based on the idea of sparse
coding.
Figure 3: Modeling independently transformed features. (a) shows the standard bilinear method of generating a translated feature by combining basis vectors w_ij using the same set of y values for two different features (x_57 and x_81). (b) shows four examples of images generated by allowing different values of y^(i) for the two different features. Note the significant differences between the resulting images, which cannot be obtained using the standard bilinear model.

Our results demonstrate that bilinear sparse coding of natural images produces localized, oriented basis vectors that can simultaneously represent features in an image and their transformation. We showed how the learned generative model can be used to translate a basis vector to different locations, thereby reducing the need to learn the same basis vector
at multiple locations as in traditional sparse coding methods. We also proposed an extension of the bilinear model that allows each feature to be transformed independently of
other features. Our preliminary results suggest that such an approach could provide a flexible platform for adaptive parts-based object recognition, wherein objects are described by
a set of independent, shared parts and their transformations. The importance of parts-based
methods has long been recognized in object recognition in view of their ability to handle
a combinatorially large number of objects by combining parts and their transformations.
Few methods, if any, exist for learning representations of object parts and their transformations directly from images. Our ongoing efforts are therefore focused on deriving efficient
algorithms for parts-based object recognition based on the combination of bilinear models
and sparse coding.
Acknowledgments
This research is supported by NSF grant no. 133592 and a Sloan Research Fellowship to
RPNR.
References
[1] F. Attneave. Some informational aspects of visual perception. Psychological Review, 61(3):183–193, 1954.
[2] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A. Rosenblith, editor, Sensory Communication, pages 217–234. Cambridge, MA: MIT Press, 1961.
[3] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[4] G. E. Hinton and Z. Ghahramani. Generative models for discovering sparse distributed representations. Philosophical Transactions of the Royal Society B, 352:1177–1190, 1997.
[5] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2):337–365, 2000.
[6] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[7] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[8] R. P. N. Rao and D. H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219–234, 1998.
[9] R. P. N. Rao and D. H. Ballard. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive field effects. Nature Neuroscience, 2(1):79–87, 1999.
[10] R. P. N. Rao and D. L. Ruderman. Learning Lie groups for invariant visual perception. In Advances in Neural Information Processing Systems 11, pages 810–816. Cambridge, MA: MIT Press, 1999.
[11] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[12] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
Binary Tuning is Optimal for Neural Rate
Coding with High Temporal Resolution
Matthias Bethge*, David Rotermund, and Klaus Pawelzik
Institute of Theoretical Physics
University of Bremen
28334 Bremen
{mbethge,davrot,pawelzik}@physik.uni-bremen.de
Abstract
Here we derive optimal gain functions for minimum mean square reconstruction from neural rate responses subjected to Poisson noise. The
shape of these functions strongly depends on the length T of the time
window within which spikes are counted in order to estimate the underlying firing rate. A phase transition towards pure binary encoding occurs
if the maximum mean spike count becomes smaller than approximately
three provided the minimum firing rate is zero. For a particular function
class, we were able to prove the existence of a second-order phase transition analytically. The critical decoding time window length obtained
from the analytical derivation is in precise agreement with the numerical
results. We conclude that under most circumstances relevant to information processing in the brain, rate coding can be better ascribed to a binary
(low-entropy) code than to the other extreme of rich analog coding.
1 Optimal neuronal gain functions for short decoding time windows
The use of action potentials (spikes) as a means of communication is the striking feature of
neurons in the central nervous system. Since the discovery by Adrian [1] that action potentials are generated by sensory neurons with a frequency that is substantially determined by
the stimulus, the idea of rate coding has become a prevalent paradigm in neuroscience [2].
In particular, today the coding properties of many neurons from various areas in the cortex
have been characterized by tuning curves, which describe the average firing rate response
as a function of certain stimulus parameters. This way of description is closely related to
the idea of analog coding, which constitutes the basis for many neural network models.
Reliable inference from the observed number of spikes about the underlying firing rate of
a neuronal response, however, requires a sufficiently long time interval, while integration
times of neurons in vivo [3] as well as reaction times of humans or animals when performing classification tasks [4, 5] are known to be rather short. Therefore, it is important
to understand how neural rate coding is affected by a limited time window available for
decoding.
While rate codes are usually characterized by tuning functions relating the intensity of the
* http://www.neuro.uni-bremen.de/~mbethge
neuronal response to a particular stimulus parameter, the question, how relevant the idea of
analog coding actually is does not depend on the particular entity represented by a neuron.
Instead it suffices to determine the shape of the gain function, which displays the mean firing rate as a function of the actual analog signal to be sent to subsequent neurons. Here we
seek optimal gain functions that minimize the minimum average squared reconstruction
error for a uniform source signal transmitted through a Poisson channel as a function of the
maximum mean number of spikes.
In formal terms, the issue is to optimally encode a real random variable x in the number
of pulses emitted by a neuron within a certain time window. Thereby, x stands for the
intended analog output of the neuron that shall be signaled to subsequent neurons. The
latter, however, can only observe a number of spikes k integrated within a time interval of
length T. The statistical dependency between x and k is specified by the assumption of
Poisson noise
p(k|μ(x)) = ( μ(x)^k / k! ) exp{ −μ(x) },    (1)

and the choice of the gain function f(x), which together with T determines the mean spike count μ(x) ≡ T f(x). An important additional constraint is the limited output range of the neuronal firing rate, which can be included by the requirement of a bounded gain function (f_min ≤ f(x) ≤ f_max, ∀x). Since inhibition can reliably prevent a neuron from firing, we will here consider the case f_min = 0 only. Instead of specifying f_max, we impose a bound directly on the mean spike count (i.e. μ(x) ≤ μ̄), because f_max constitutes a meaningful constraint only in conjunction with a fixed time window length T.
As objective function we consider the minimum mean squared error (MMSE) with respect to Lebesgue measure for x ∈ [0, 1],

χ²[μ(x)] = E[x²] − E[x̂²] = 1/3 − Σ_{k=0}^{∞} ( ∫₀¹ x p(k|μ(x)) dx )² / ∫₀¹ p(k|μ(x)) dx ,    (2)

where x̂(k) ≡ E[x|k] denotes the mean square estimator, which is the conditional expectation (see e.g. [6]).
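The MMSE of Equation 2 can be evaluated numerically for any gain function. The sketch below (grid resolution and the truncation of the spike-count sum are demo assumptions) computes it for the parabolic gain of Equation 3 and checks that it beats the prior variance 1/12 of the uniform source:

```python
import numpy as np
from math import factorial

# MMSE of Eq. 2 for x ~ U[0,1] seen through Poisson counts with
# mean mu(x) = mu_max * x**2 (the asymptotic gain of Eq. 3).
mu_max = 3.0
xs = np.linspace(0.0, 1.0, 10001)
mu = mu_max * xs ** 2

def chi2(mu, k_max=60):
    err = 1.0 / 3.0                        # E[x^2] for the uniform source
    for k in range(k_max):
        pk = mu ** k * np.exp(-mu) / factorial(k)   # p(k | mu(x))
        den = np.mean(pk)                  # approximates the integral over [0,1]
        if den > 0.0:
            err -= np.mean(xs * pk) ** 2 / den      # subtract one E[x_hat^2] term
    return err

e = chi2(mu)
print(0.0 < e < 1.0 / 12.0)  # informative counts beat the prior-mean error 1/12
```

The bound 1/12 is the error of the constant estimator x̂ = 1/2; any informative spike count strictly improves on it, which is what the check confirms.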
1.1 Tunings and errors
As derived in [7] on the basis of Fisher information, the optimal gain function for a single neuron in the asymptotic limit T → ∞ has a parabolic shape:

f_asymp(x) = f_max x².    (3)
For any finite μ̄, however, this gain function is not necessarily optimal, and in the limit T → 0, it is straightforward to show that the optimal tuning curve is a step function

f_step(x|ϑ) = f_max Θ(x − ϑ),    (4)

where Θ(z) denotes the Heaviside function that equals one if z > 0 and zero if z < 0.
The optimal threshold ϑ(μ̄) of the step tuning curve depends on μ̄ and can be determined analytically,

ϑ(μ̄) = 1 − ( 3 − √(8 e^{−μ̄} + 1) ) / ( 4 (1 − e^{−μ̄}) ),    (5)
as well as the corresponding MMSE [8]:

χ²[f_step] = (1/12) ( 1 − 3 ϑ²(μ̄) / ( [ (1 − ϑ(μ̄))(1 − e^{−μ̄}) ]^{−1} − 1 ) ).    (6)
Figure 1: The upper panel shows a bifurcation plot for ϑ(μ̄) − w and ϑ(μ̄) + w of the optimal gain function in S₁ as a function of μ̄, illustrating the phase transition from binary to continuous encoding. The dotted line separates the regions before and after the phase transition in all three panels. Left from this line (i.e. for μ̄ < μ̄^c) the step function given by Eqs. 4 and 5 is optimal. The middle panel shows the MMSE of this step function (dashed) and of the optimal gain function in S₂ (solid), which becomes smaller than the first one after the phase transition. The relative deviation between the minimal errors of S₁ and S₂ (i.e. (χ²_{S₁} − χ²_{S₂})/χ²_{S₂}) is displayed in the lower panel and has a maximum below 0.035.
The binary shape for small μ̄ and the continuous parabolic shape for large μ̄ imply that there has to be a transition from discrete to analog encoding with increasing μ̄. Unfortunately, it is not possible to determine the optimal gain function within the set of all bounded functions B := {f : [0, 1] → [0, f_max]} and hence one has to choose a certain parameterized function space S ⊂ B in advance that is feasible for the optimization. In [8], we investigated various such function spaces and for μ̄ < 2.9, we did not find any gain function with an error smaller than the MMSE of the step function. Furthermore, we always observed a phase transition from binary to analog encoding at a critical μ̄^c that depends only slightly on the function space. As one can see in Fig. 1 (upper), μ̄^c is approximately three.

In this paper, we consider two function classes S₁, S₂, which both contain the binary gain function as well as the asymptotically optimal parabolic function as special cases. Furthermore, S₁ is a proper subset of S₂. Our interest in S₁ results from the fact that we can analyze the phase transition in this subset analytically, while S₂ is the most general parameterization for which we have determined the optimal encoding numerically. The latter has six free parameters a ≤ b ≤ c ∈ [0, 1], f_mid ∈ (0, f_max), α, β ∈ [0, ∞), and the parameterization of the gain functions is given by
f^{S₂}(x | a, b, c, f_mid, α, β) =
    0                                                                , 0 ≤ x < a
    f_mid ( (x − a)/(b − a) )^α                                      , a ≤ x < b
    [ f_mid^{1/β} + (f_max^{1/β} − f_mid^{1/β}) (x − b)/(c − b) ]^β  , b ≤ x < c
    f_max                                                            , c ≤ x ≤ 1
                                                                       (7)
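A small sketch of this parameterization (with the middle piece written so that the β-th root of f interpolates linearly, matching the Gamma-function structure of Equations 8-9 below; parameter values are arbitrary demo choices) verifies that the four pieces join continuously at a, b and c:

```python
import numpy as np

# Piecewise gain f^{S2}(x | a, b, c, f_mid, alpha, beta), Eq. 7.
a, b, c = 0.2, 0.5, 0.8
f_mid, f_max, alpha, beta = 1.0, 3.0, 2.0, 1.5

def f_s2(x):
    x = np.asarray(x, dtype=float)
    u = np.clip((x - a) / (b - a), 0.0, 1.0)        # ramp position in [a, b)
    v = np.clip((x - b) / (c - b), 0.0, 1.0)        # ramp position in [b, c)
    mid_root = f_mid ** (1.0 / beta)
    max_root = f_max ** (1.0 / beta)
    rise1 = f_mid * u ** alpha
    rise2 = (mid_root + (max_root - mid_root) * v) ** beta
    return np.select([x < a, x < b, x < c], [0.0, rise1, rise2], default=f_max)

eps = 1e-9
print(np.allclose(f_s2([a - eps, a + eps]), [0.0, 0.0], atol=1e-6),
      np.allclose(f_s2([b - eps, b + eps]), [f_mid, f_mid], atol=1e-6),
      np.allclose(f_s2([c - eps, c + eps]), [f_max, f_max], atol=1e-6))
```

For w = 0 width and γ-type exponents this family contains both the hard step (Eq. 4) and, with suitable parameters, the parabolic gain (Eq. 3) as limiting cases.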
The integrals entering Eq. 2 for the MMSE in case of the gain function f^{S₂} then read

∫₀¹ x p(k|x) dx = (1/k!) { (a²/2) δ_{0,k}
    + a(b − a) Γ_{0,f_mid}(k + 1/α) / (α f_mid^{1/α})
    + (b − a)² Γ_{0,f_mid}(k + 2/α) / (α f_mid^{2/α})
    + [ b − f_mid^{1/β}(c − b)/(f_max^{1/β} − f_mid^{1/β}) ] (c − b) Γ_{f_mid,f_max}(k + 1/β) / ( β (f_max^{1/β} − f_mid^{1/β}) )
    + (c − b)² Γ_{f_mid,f_max}(k + 2/β) / ( β (f_max^{1/β} − f_mid^{1/β})² )
    + ((1 − c²)/2) f_max^k e^{−f_max} }    (8)

∫₀¹ p(k|x) dx = (1/k!) { a δ_{0,k}
    + (b − a) Γ_{0,f_mid}(k + 1/α) / (α f_mid^{1/α})
    + (c − b) Γ_{f_mid,f_max}(k + 1/β) / ( β (f_max^{1/β} − f_mid^{1/β}) )
    + (1 − c) f_max^k e^{−f_max} }    (9)

where Γ_{u,v}(z) ≡ ∫_u^v s^{z−1} e^{−s} ds denotes the truncated Gamma function. Numerical optimization leads to the minimal MMSE as a function of μ̄ as displayed in Fig. 1 (middle).
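The closed forms above can be sanity-checked numerically: summed over all spike counts k, the probabilities of Equation 9 must integrate to one. The sketch below evaluates the truncated Gamma functions by simple trapezoidal quadrature (parameter values and grid size are demo assumptions; α = 1 keeps the integrands finite at s = 0):

```python
import numpy as np
from math import factorial

# Normalization check of Eq. 9 for the piecewise gain f^{S2}.
a, b, c = 0.2, 0.5, 0.8
f_mid, f_max, alpha, beta = 1.0, 3.0, 1.0, 2.0

def gamma_trunc(u, v, z, n=20001):
    # Gamma_{u,v}(z) = int_u^v s^(z-1) e^(-s) ds, trapezoidal rule
    s = np.linspace(u, v, n)
    y = s ** (z - 1.0) * np.exp(-s)
    return ((y[0] + y[-1]) / 2.0 + y[1:-1].sum()) * (v - u) / (n - 1)

D = f_max ** (1.0 / beta) - f_mid ** (1.0 / beta)
total = 0.0
for k in range(60):
    pk = (a * (k == 0)
          + (b - a) * gamma_trunc(0.0, f_mid, k + 1.0 / alpha) / (alpha * f_mid ** (1.0 / alpha))
          + (c - b) * gamma_trunc(f_mid, f_max, k + 1.0 / beta) / (beta * D)
          + (1.0 - c) * f_max ** k * np.exp(-f_max)) / factorial(k)
    total += pk
print(abs(total - 1.0) < 1e-4)  # the spike-count distribution sums to one
```

Each of the four regions contributes exactly its length (a, b − a, c − b, 1 − c) to the total probability, which is why the sum collapses to one.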
The parameterization of the gain functions in S₁ is given by

f^{S₁}(x | w, γ) =
    0                                         , 0 ≤ x < ϑ(μ̄) − w
    f_max ( (x − ϑ(μ̄) + w)/(2w) )^γ          , ϑ(μ̄) − w ≤ x < ϑ(μ̄) + w
    f_max                                     , ϑ(μ̄) + w ≤ x ≤ 1
                                                (10)

with w ∈ [0, 1] and γ ∈ [0, ∞). The integrals entering Eq. 2 for the MMSE in case of the gain function f^{S₁} read
∫₀¹ x p(k|x) dx = (1/k!) { ( (ϑ(μ̄) − w)²/2 ) δ_{0,k}
    + 2w (ϑ(μ̄) − w) Γ_{0,f_max}(k + 1/γ) / (γ f_max^{1/γ})
    + 4w² Γ_{0,f_max}(k + 2/γ) / (γ f_max^{2/γ})
    + ( (1 − (ϑ(μ̄) + w)²)/2 ) f_max^k e^{−f_max} }    (11)

∫₀¹ p(k|x) dx = (1/k!) { (ϑ(μ̄) − w) δ_{0,k}
    + 2w Γ_{0,f_max}(k + 1/γ) / (γ f_max^{1/γ})
    + (1 − ϑ(μ̄) − w) f_max^k e^{−f_max} }    (12)
The minimal MMSE for these gain functions is only slightly worse than that for S₂. The relative difference between the two is plotted in Fig. 1 (lower), showing a maximum deviation of 3.2%. In particular, the relative deviation is extremely small around the phase transition. This comparison suggests that the restriction to S₁, which is a necessary simplification for the following analytical investigation, does not change the qualitative results.
2 A phase transition
The phase transition from binary to analog encoding corresponds to a structural change of
the objective function χ²(w, γ). In particular, the optimality of binary encoding for μ < μ_c
implies that χ²(w, γ) has a minimum at w = 0. The existence of a phase transition implies
that with increasing μ this minimum changes into a local maximum at a certain critical
point μ = μ_c. Therefore, the critical point can be determined by a local expansion of

    χ²(w, γ, μ) − χ²(0, γ, μ) = Σ_{k=1}^∞ g_k(γ, μ) w^k / k!                            (13)

around w = 0, because the sign of its leading coefficient A_γ(μ) (i.e. the coefficient g_k with
minimal k that does not vanish identically) determines whether χ²(w, γ, μ) has a local
minimum or maximum at w = 0. Accordingly, the critical point is given as the solution of
A_γ(μ) = 0.
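The role of the leading coefficient can be illustrated numerically: when the first derivative vanishes at w = 0 (as is proved below for χ²), the sign of the second derivative there decides between a local minimum and a local maximum. A minimal finite-difference sketch (ours, for illustration only):

```python
def leading_curvature(f, h=1e-4):
    """Central finite-difference second derivative of f at 0. Its sign plays
    the role of the leading expansion coefficient: positive means w = 0 is a
    local minimum, negative means a local maximum."""
    return (f(h) - 2.0 * f(0.0) + f(-h)) / (h * h)
```

Tracking the sign of this quantity while increasing μ is the numerical analogue of solving A_γ(μ) = 0.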
With quite a bit of effort one can prove that the first derivative of χ²(w, γ, μ) vanishes for
all μ. The second derivative, however, is a decreasing function of μ and hence constitutes
the wanted leading coefficient A_γ(μ), given in Eq. (14) as a lengthy closed-form combination
of the prefactor 1/(4(e^μ − 1)²), the exponentials e^μ, e^{2μ} and e^{3μ}, the recurring factor
√(1 + 8e^{−μ}), powers μ^{1/γ} and μ^{2/γ} multiplying the truncated Gamma function Γ_{0,μ},
and a remaining integral over the truncated Gamma function.

Figure 2: The critical maximum mean spike count μ_c is shown as a function of γ (numerical
evaluation at γ ∈ {0.5, 0.505, 0.51, ..., 3.5}). The minimum μ_c = 2.98291 ± 10⁻⁷ at
γ = 1.9 determines the phase transition in S1.
Obviously, it is not possible to write the zeros of A_γ(μ) in closed form. The numerical
evaluation of the critical point μ_c(γ) as a function of γ is displayed in Fig. 2. Note that we
have treated γ as a fixed parameter, which means that we determine the critical point of the
phase transition in all subsets S1(γ) of S1 that correspond to a fixed γ. It is straightforward
to show that the critical point μ_c with respect to the entire class S1 is given by the minimum
of μ_c(γ). We determined this value up to a precision of ±0.0001 to be μ_c = 2.9857.
3 Conclusion
Our study reveals that optimal encoding with respect to the minimum mean squared error
is binary for maximum mean spike counts smaller than approximately three. Within the
function class S1 we determined a second-order phase transition from binary to continuous
encoding analytically. With respect to mutual information the advantage of binary encoding
holds even up to a maximum mean spike count of about 3.5 (results not shown), and the
optimal encoding remains discrete also for larger μ. In a related work [9], Softky compared
the information capacity of the Poisson channel with the information rate of a (noiseless)
binary pulse code. The rate of the latter turned out to exceed the capacity of the former by a
factor of at least 72, demonstrating a clear superiority of binary coding over analog rate
coding. Our rate-distortion analysis of the Poisson channel differs from that comparison in
a twofold way: First, we do not change the noise model, and second, the MMSE is often
more appropriate to account for the coding efficiency than the channel capacity [10]. In
particular, the assumption of a real random variable to be encoded with minimal mean
squared error loss appears to introduce a bias for analog coding rather than for binary
coding. Nevertheless, assuming a high temporal precision (i.e. small integration times T),
our results hint in a similar direction, namely that binary coding seems to be a more
reasonable choice even if one supposes that the only means of neuronal communication
would be the transmission of Poisson distributed spike counts.
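The qualitative claim — that a binary gain function yields a lower MMSE than a graded one for small maximum mean spike counts — can be checked with a small numerical sketch (ours, not the paper's computation; it assumes x uniform on [0, 1] and absorbs the integration time T into the gain):

```python
import math

def mmse_poisson(gain, n_x=200, k_max=60):
    """Numerical MMSE for estimating x ~ U[0,1] from a Poisson spike count k
    with mean gain(x). Sums E[(x - E[x|k])^2, K=k] over counts k."""
    xs = [(i + 0.5) / n_x for i in range(n_x)]
    mmse = 0.0
    for k in range(k_max):
        # p(k|x) for each grid point x
        pk = [math.exp(-gain(x)) * gain(x) ** k / math.factorial(k) for x in xs]
        norm = sum(pk) / n_x                      # P(K = k)
        if norm == 0.0:
            continue
        xhat = sum(x * p for x, p in zip(xs, pk)) / n_x / norm   # E[x | k]
        mmse += sum(p * (x - xhat) ** 2 for x, p in zip(xs, pk)) / n_x
    return mmse
```

For example, at a maximum mean spike count of 2, a binary gain with threshold 0.5 already beats a linear gain with the same maximum, and both beat the prior variance 1/12.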
Methodologically, our analysis is similar to many theoretical studies of population coding
if f(x) = μ(x)/T is not interpreted as the neuron's gain function, but as a tuning function
with respect to a stimulus parameter x. Though conceptually different, some readers may
therefore wish to know whether binary coding is still advantageous if many neurons, say N,
together encode a single analog value. While the approach chosen in this paper is not
feasible in case of large N, a partial answer can be given: For the efficiency of population
coding, redundancy reduction is most important [7, 8, 11]. Smooth tuning curves, which
have a dynamic range at about the same size as the signal range, always lead to a large
amount of redundancy so that the MMSE cannot decrease faster than N⁻¹. In contrast, the
MMSE of binary tuning functions scales proportional to N⁻² or even faster. This holds
also true for tuning functions which are not perfectly binary, but have a dynamic range that
is at least smaller than the signal range divided by N. Independent of μ, this implies that
a small dynamic range is always advantageous in case of population coding.
In contrast, most experimental studies do not report binary or steep tuning functions,
but show smooth tuning curves only. However, the shape of a tuning function always
depends on the stimulus set used. Only recently, experimental studies under natural stimulus
conditions provided evidence for the idea that neuronal encoding is essentially binary [12].
Particularly striking is this observation for the H1 neuron of the fly [13], for which the
functional role is probably better understood than for most other neurons that have been
characterized by tuning functions.
While the noise level of the Poisson channel studied in this paper is rather large, the H1
neuron can respond very reliably under optimal stimulus conditions [13]. Another example
of a low-noise binary code has been found in the auditory cortex [14]. If we drop the
restriction to Poisson noise and impose a hard constraint on the maximum number of spikes
instead, optimal encoding is always discrete with μ(x) taking integer values only [15]. This
is easy to grasp, because any rational μ cannot serve to increase the entropy of the available
symbol set (i.e. the candidate spike counts), but only increases the noise entropy instead.
In other words, it is the simple fact that spike counts are discrete by nature which already
severely limits the possibility of graded rate coding. Clearly, this is not so obvious in case
of the Poisson channel, if there is no hard constraint imposed on the maximum spike count.
A remarkable aspect of the neuronal response of H1 shown in [13] is that it becomes the
more binary the less noisy the stimulus conditions are (the noise level is determined by
the different light conditions at midday, half an hour before, and half an hour after sunset).
This suggests an interesting hypothesis why choosing a binary code with very high
temporal precision might be advantageous even if the signal of interest by itself does not
change at that time scale: the sensory input may sometimes be too noisy, so that repeated,
independent samples from the signal of interest may sometimes lead to neuronal firing and
sometimes not. In other words, a binary code at the short time scale is useful independent
of the correlation time of the signal to be encoded, if uncertainties have to be taken into
account, because any surplus amount of available temporal precision is maximally used for
uncertainty representation in a self-adjusting manner. Furthermore, this Monte-Carlo type
of uncertainty representation features several computational advantages [16]. Finally, it is
a remarkable fact that this property is unique for a binary code, because the representation
of uncertainty is necessary for many information processing tasks solved by the brain.
Additional support for the potential relevance of a binary neural code comes from intracellular
recordings in vivo revealing that the subthreshold membrane potential of many cortical
cells switches between up and down states [17] depending on the stimulus. Furthermore,
the dynamics of bursting cells plays an important role for neuronal signal transmission [18]
and may also be seen as evidence for binary rate coding. In light of these experimental
facts, we conclude from our results that the idea of binary tuning constitutes an important
hypothesis for neural coding.
Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (SFB 517).
References
[1] E.D. Adrian. The impulses produced by sensory nerve endings: Part I. J. Physiol. (London), 61:49-72, 1926.
[2] D.H. Perkel and T.H. Bullock. Neural coding: a report based on an NRP work session. Neurosci. Research Prog. Bull., 6:220-349, 1968.
[3] W.R. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci., 13:334-350, 1993.
[4] C. Keysers, D. Xiao, P. Foldiak, and D. Perrett. The speed of sight. J. Cog. Neurosci., 13:90-101, 2001.
[5] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520-522, 1996.
[6] E.L. Lehmann and G. Casella. Theory of Point Estimation. Springer, New York, 1999.
[7] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Comput., 14(10):2317-2351, 2002.
[8] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal neural rate coding leads to bimodal firing rate distributions. Network: Comput. Neural Syst., 2002. In press.
[9] W.R. Softky. Fine analog coding minimizes information transmission. Neural Networks, 9:15-24, 1996.
[10] D.H. Johnson. Point process models of single-neuron discharges. J. Comput. Neurosci., 3:275-299, 1996.
[11] M. Bethge and K. Pawelzik. Population coding with unreliable spikes. Neurocomputing, 44-46:323-328, 2002.
[12] P. Reinagel. How do visual neurons respond in the real world. Curr. Op. Neurobiol., 11:437-442, 2001.
[13] G.D. Lewen, W. Bialek, and R.R. de Ruyter van Steveninck. Neural coding of natural stimuli. Network: Comput. Neural Syst., 12:317-329, 2001.
[14] M.R. DeWeese and A.M. Zador. Binary coding in auditory cortex. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, 2002.
[15] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer, Boston, 1992.
[16] P.O. Hoyer and A. Hyvarinen. Interpreting neural response variability as Monte Carlo sampling of the posterior. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, 2002.
[17] J. Anderson, I. Lampl, I. Reichova, M. Carandini, and D. Ferster. Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neurosci., 3:617-621, 2000.
[18] J.E. Lisman. Bursts as a unit of neural information processing: making unreliable synapses reliable. TINS, 20:38-43, 1997.
Kernel Design Using Boosting
Koby Crammer Joseph Keshet Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{kobics,jkeshet,singer}@cs.huji.ac.il
Abstract
The focus of the paper is the problem of learning kernel operators from
empirical data. We cast the kernel design problem as the construction of
an accurate kernel from simple (and less accurate) base kernels. We use
the boosting paradigm to perform the kernel construction process. To do
so, we modify the booster so as to accommodate kernel operators. We
also devise an efficient weak-learner for simple kernels that is based on
generalized eigen vector decomposition. We demonstrate the effectiveness of our approach on synthetic data and on the USPS dataset. On the
USPS dataset, the performance of the Perceptron algorithm with learned
kernels is systematically better than a fixed RBF kernel.
1 Introduction and Problem Setting
The last decade brought a voluminous amount of work on the design, analysis and experimentation of kernel machines. Algorithms based on kernels can be used for various machine learning tasks such as classification, regression, ranking, and principal component
analysis. The most prominent learning algorithm that employs kernels is the Support Vector Machine (SVM) [1, 2], designed for classification and regression. A key component
in a kernel machine is a kernel operator which computes for any pair of instances their
inner-product in some abstract vector space. Intuitively and informally, a kernel operator
is a means for measuring similarity between instances. Almost all of the work that employed kernel operators concentrated on various machine learning problems that involved
a predefined kernel. A typical approach when using kernels is to choose a kernel before
learning starts. Examples of popular predefined kernels are the Radial Basis Functions and
the polynomial kernels (see for instance [1]). Despite the simplicity required in modifying
a learning algorithm to a "kernelized" version, the success of such algorithms is not well
understood yet. More recently, special efforts have been devoted to crafting kernels for
specific tasks such as text categorization [3] and protein classification problems [4].
Our work attempts to give a computational alternative to predefined kernels by learning
kernel operators from data. We start with a few definitions. Let X be an instance space.
A kernel is an inner-product operator K : X × X → ℝ. An explicit way to describe K
is via a mapping Φ : X → H from X to an inner-product space H such that K(x, x′) =
Φ(x) · Φ(x′). Given a kernel operator and a finite set of instances S = {x_i, y_i}_{i=1}^m, the kernel
matrix (a.k.a the Gram matrix) is the matrix of all possible inner-products of pairs from S,
Ki,j = K(xi , xj ). We therefore refer to the general form of K as the kernel operator and
to the application of the kernel operator to a set of pairs of instances as the kernel matrix.
The specific setting of kernel design we consider assumes that we have access to a
base kernel learner and we are given a target kernel K* manifested as a kernel matrix on a
set of examples. Upon calling the base kernel learner it returns a kernel operator denoted
K_j. The goal thereafter is to find a weighted combination of kernels K̂(x, x′) = Σ_j α_j K_j(x, x′)
that is similar, in a sense that will be defined shortly, to the target kernel, K̂ ≈ K*.
Cristianini et al. [5] in their pioneering work on kernel target alignment employed as the
notion of similarity the inner-product between the kernel matrices, ⟨K, K′⟩_F =
Σ_{i,j=1}^m K(x_i, x_j) K′(x_i, x_j). Given this definition, they defined the kernel-similarity, or
alignment, to be the above inner-product normalized by the norm of each kernel,
A(S, K̂, K*) = ⟨K̂, K*⟩_F / √(⟨K̂, K̂⟩_F ⟨K*, K*⟩_F), where S
is, as above, a finite sample of m instances. Put another way, the kernel alignment Cristianini et al. employed is the cosine of the angle between the kernel matrices where each
matrix is "flattened" into a vector of dimension m². Therefore, this definition implies that
the alignment is bounded above by 1 and can attain this value iff the two kernel matrices
are identical. Given a (column) vector of m labels y where y_i ∈ {−1, +1} is the label
of the instance x_i, Cristianini et al. used the outer-product of y as the target kernel,
K* = yy^T. Therefore, an optimal alignment is achieved if K̂(x_i, x_j) = y_i y_j. Clearly,
if such a kernel is used for classifying instances from X, then the kernel itself suffices to
construct an excellent classifier f : X → {−1, +1} by setting f(x) = sign(y_i K(x_i, x))
where (xi , yi ) is any instance-label pair. Cristianini et al. then devised a procedure that
works with both labelled and unlabelled examples to find a Gram matrix which attains a
good alignment with K* on the labelled part of the matrix. While this approach can clearly
construct powerful kernels, a few problems arise from the notion of kernel alignment they
employed. For instance, a kernel operator such that sign(K(x_i, x_j)) is equal to y_i y_j
but whose magnitude, |K(x_i, x_j)|, is not necessarily 1, might achieve a poor alignment score
while it can constitute a classifier whose empirical loss is zero. Furthermore, the task of
finding a good kernel when it is not always possible to find a kernel whose sign on each
pair of instances is equal to the products of the labels (termed the soft-margin case in [5, 6])
becomes rather tricky. We thus propose a different approach which attempts to overcome
some of the difficulties above.
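The alignment of Cristianini et al. is straightforward to compute; a short illustrative sketch (ours, with kernel matrices represented as nested lists, assumed square):

```python
import math

def frobenius_inner(K1, K2):
    # <K1, K2>_F = sum over all entries of the elementwise product
    m = len(K1)
    return sum(K1[i][j] * K2[i][j] for i in range(m) for j in range(m))

def alignment(K1, K2):
    """Kernel alignment: the cosine of the angle between the two kernel
    matrices viewed as m^2-dimensional vectors."""
    return frobenius_inner(K1, K2) / math.sqrt(
        frobenius_inner(K1, K1) * frobenius_inner(K2, K2))
```

Scaling a kernel matrix by a positive constant leaves its alignment unchanged — which is exactly why a kernel with the right signs but mismatched magnitudes can score poorly, as discussed above.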
Like Cristianini et al. we assume that we are given a set of labelled instances S =
{(x_i, y_i) | x_i ∈ X, y_i ∈ {−1, +1}, i = 1, ..., m}. We are also given a set of unlabelled
examples S̃ = {x̃_i}_{i=1}^{m̃}. If such a set is not provided we can simply use the labelled
instances (without the labels themselves) as the set S̃. The set S̃ is used for constructing the
primitive kernels that are combined to constitute the learned kernel K̂. The labelled set is
used to form the target kernel matrix and its instances are used for evaluating the learned
kernel K̂. This approach, known as transductive learning, was suggested in [5, 6] for kernel
alignment tasks when the distribution of the instances in the test data is different from that
of the training data. This setting becomes in particular handy in datasets where the test data
was collected in a different scheme than the training data. We next discuss the notion of
kernel goodness employed in this paper. This notion builds on the objective function that
several variants of boosting algorithms maintain [7, 8]. We therefore first discuss in brief
the form of boosting algorithms for kernels.
2 Using Boosting to Combine Kernels
Numerous interpretations of AdaBoost and its variants cast the boosting process as a procedure that attempts to minimize, or make small, a continuous bound on the classification
error (see for instance [9, 7] and the references therein). A recent work by Collins et al. [8]
unifies the boosting process for two popular loss functions, the exponential-loss (denoted
henceforth as ExpLoss) and logarithmic-loss (denoted as LogLoss) that bound the empir-
Input: Labelled and unlabelled sets of examples: S = {(x_i, y_i)}_{i=1}^m ; S̃ = {x̃_i}_{i=1}^{m̃}
Initialize: K ← 0 (all-zeros matrix)
For t = 1, 2, ..., T:
  • Calculate the distribution over pairs 1 ≤ i, j ≤ m:
      D_t(i, j) = exp(−y_i y_j K(x_i, x_j))              (ExpLoss)
      D_t(i, j) = 1/(1 + exp(−y_i y_j K(x_i, x_j)))      (LogLoss)
  • Call the base-kernel-learner with (D_t, S, S̃) and receive K_t
  • Calculate:
      S_t⁺ = {(i, j) | y_i y_j K_t(x_i, x_j) > 0} ;  S_t⁻ = {(i, j) | y_i y_j K_t(x_i, x_j) < 0}
      W_t⁺ = Σ_{(i,j)∈S_t⁺} D_t(i, j) |K_t(x_i, x_j)| ;  W_t⁻ = Σ_{(i,j)∈S_t⁻} D_t(i, j) |K_t(x_i, x_j)|
  • Set: α_t = (1/2) ln(W_t⁺ / W_t⁻) ;  K ← K + α_t K_t
Return: kernel operator K : X × X → ℝ

Figure 1: The skeleton of the boosting algorithm for kernels.
ical classification error. Given the prediction of a classifier f on an instance x and a label
y ∈ {−1, +1}, the ExpLoss and the LogLoss are defined as

    ExpLoss(f(x), y) = exp(−y f(x))
    LogLoss(f(x), y) = log(1 + exp(−y f(x))) .
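A single iteration of the booster in Fig. 1 can be sketched as follows (an illustrative snippet with hypothetical data structures: kernel values are stored in dicts keyed by index pairs, and each pair carries the product of its two labels):

```python
import math

def booster_step(K, K_t, pairs, loss="exp"):
    """One boosting step over instance pairs (sketch).
    pairs: list of (i, j, yi_yj); K, K_t: dicts mapping (i, j) -> kernel value.
    Returns the weight alpha_t for the candidate kernel K_t."""
    w_pos = w_neg = 0.0
    for i, j, yy in pairs:
        margin = yy * K[(i, j)]
        # distribution weight for this pair (ExpLoss or LogLoss variant)
        d = math.exp(-margin) if loss == "exp" else 1.0 / (1.0 + math.exp(-margin))
        if yy * K_t[(i, j)] > 0:
            w_pos += d * abs(K_t[(i, j)])
        else:
            w_neg += d * abs(K_t[(i, j)])
    return 0.5 * math.log(w_pos / w_neg)
```

Note that α_t > 0 whenever K_t agrees in sign with the label products on most of the distribution's mass; the sketch omits a guard for W_t⁻ = 0.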
Collins et al. described a single algorithm for the two losses above that can be used within
the boosting framework to construct a strong-hypothesis which is a classifier f (x). This
classifier is a weighted combination of (possibly very simple) base classifiers. (In the
boosting framework, the base classifiers are referred to as weak-hypotheses.) The strong-hypothesis
is of the form f(x) = Σ_{t=1}^T α_t h_t(x). Collins et al. discussed a few ways to
select the weak-hypotheses h_t and to find a good set of weights α_t. Our starting point in this
paper is the first sequential algorithm from [8] that enables the construction or creation of
weak-hypotheses on-the-fly. We would like to note however that it is possible to use other
variants of boosting to design kernels.
In order to use boosting to design kernels we extend the algorithm to operate over pairs of
instances. Building on the notion of alignment from [5, 6], we say that the inner-product
of x1 and x2 is aligned with the labels y1 and y2 if sign(K(x1 , x2 )) = y1 y2 . Furthermore,
we would like to make the magnitude of K(x, x0 ) to be as large as possible. We therefore
use one of the following two alignment losses for a pair of examples (x 1 , y1 ) and (x2 , y2 ),
    ExpLoss(K(x₁, x₂), y₁y₂) = exp(−y₁y₂ K(x₁, x₂))
    LogLoss(K(x₁, x₂), y₁y₂) = log(1 + exp(−y₁y₂ K(x₁, x₂))) .
Put another way, we view a pair of instances as a single example and cast the pairs of
instances that attain the same label as positively labelled examples while pairs of opposite
labels are cast as negatively labelled examples. Clearly, this approach can be applied to both
losses. In the boosting process we therefore maintain a distribution over pairs of instances.
The weight of each pair reflects how difficult it is to predict whether the labels of the two
instances are the same or different. The core boosting algorithm follows similar lines to
boosting algorithms for classification. The pseudo-code of the booster is given in
Fig. 1. The pseudo-code is an adaptation to the problem of kernel design of the sequential-update algorithm from [8]. As with other boosting algorithms, the base-learner, which in
our case is in charge of returning a good kernel with respect to the current distribution, is
left unspecified. We therefore turn our attention to the algorithmic implementation of the
base-learning algorithm for kernels.
3 Learning Base Kernels
The base kernel learner is provided with a training set S and a distribution D_t over pairs
of instances from the training set. It is also provided with a set of unlabelled examples S̃.
Without any knowledge of the topology of the space of instances a learning algorithm is
likely to fail. Therefore, we assume the existence of an initial inner-product over the input
space. We assume for now that this initial inner-product is the standard scalar product
over vectors in ℝⁿ. We later discuss a way to relax the assumption on the form of the
inner-product. Equipped with an inner-product, we define the family of base kernels to be
the possible outer-products K_w = ww^T between a vector w ∈ ℝⁿ and itself.
Using this definition we get,
    K_w(x_i, x_j) = (x_i · w)(x_j · w) .

Therefore, the similarity between two instances x_i and x_j is high iff both x_i and x_j
are similar (w.r.t. the standard inner-product) to a third vector w. Analogously, if both
x_i and x_j seem to be dissimilar to the vector w then they are similar to each other.
Despite the restrictive form of the inner-products, this family is still too rich for our
setting and we further impose two restrictions on the inner products. First, we assume
that w is restricted to a linear combination of vectors from S̃. Second, since the scaling of
the base kernels is performed by the booster, we constrain the norm of w to be 1. The
resulting class of kernels is therefore, C = {K_w = ww^T | w = Σ_{r=1}^{m̃} β_r x̃_r, ‖w‖ = 1}.

Input: A distribution D_t. Labelled and unlabelled sets: S = {(x_i, y_i)}_{i=1}^m ; S̃ = {x̃_i}_{i=1}^{m̃}.
Compute:
  • Calculate: A ∈ ℝ^{m×m̃},  A_{i,r} = x_i · x̃_r
               B ∈ ℝ^{m×m},  B_{i,j} = D_t(i, j) y_i y_j
               K ∈ ℝ^{m̃×m̃},  K_{r,s} = x̃_r · x̃_s
  • Find the generalized eigenvector v ∈ ℝ^{m̃} for the problem AᵀBA v = λKv which attains the largest eigenvalue λ
  • Set: w = (Σ_r v_r x̃_r) / ‖Σ_r v_r x̃_r‖.
Return: Kernel operator K_w = wwᵀ.

Figure 2: The base kernel learning algorithm.
In the boosting process we need to choose a specific base-kernel K_w from C. We therefore
need to devise a notion of how good a candidate base kernel is given a labelled set S and
a distribution function D_t. In this work we use the simplest version suggested by Collins et
al. This version can be viewed as a linear approximation of the loss function. We define
the score of a kernel K_w w.r.t. the current distribution D_t to be,

    Score(K_w) = Σ_{i,j} D_t(i, j) y_i y_j K_w(x_i, x_j) .                              (1)
The higher the value of the score is, the better K_w fits the training data. Note that if
D_t(i, j) = 1/m² (as is D₀) then Score(K_w) is proportional to the alignment since ‖w‖ = 1.
Under mild assumptions the score can also provide a lower bound of the loss function. To
see that, let c be the derivative of the loss function at margin zero, c = Loss′(0). If all the
training examples x_i ∈ S lie in a ball of radius c, we get that Loss(K_w(x_i, x_j), y_i y_j) ≥
1 − c K_w(x_i, x_j) y_i y_j ≥ 0, and therefore,

    Σ_{i,j} D_t(i, j) Loss(K_w(x_i, x_j), y_i y_j) ≥ 1 − c Σ_{i,j} D_t(i, j) K_w(x_i, x_j) y_i y_j .
Using the explicit form of K_w in the Score function (Eq. (1)) we get, Score(K_w) =
Σ_{i,j} D(i, j) y_i y_j (w · x_i)(w · x_j). Further developing the above equation using the
constraint that w = Σ_{r=1}^{m̃} β_r x̃_r we get,

    Score(K_w) = Σ_{r,s} β_s β_r Σ_{i,j} D(i, j) y_i y_j (x_i · x̃_r)(x_j · x̃_s) .
To compute efficiently the base kernel score without an explicit enumeration we exploit
the fact that if the initial distribution D₀ is symmetric (D₀(i, j) = D₀(j, i)) then all the
distributions generated along the run of the boosting process, D_t, are also symmetric. We
now define a matrix A ∈ ℝ^{m×m̃} where A_{i,r} = x_i · x̃_r and a symmetric matrix B ∈ ℝ^{m×m}
with B_{i,j} = D_t(i, j) y_i y_j. Simple algebraic manipulations yield that the score function
can be written as the following quadratic form, Score(β) = βᵀ(AᵀBA)β, where β is an
m̃-dimensional column vector. Note that since B is symmetric so is AᵀBA. Finding a
good base kernel is equivalent to finding a vector β which maximizes this quadratic form
under the norm equality constraint ‖w‖² = ‖Σ_{r=1}^{m̃} β_r x̃_r‖² = βᵀKβ = 1 where K_{r,s} =
x̃_r · x̃_s. Finding the maximum of Score(β) subject to the norm constraint is a well-known
maximization problem known as the generalized eigenvector problem (cf. [10]). Applying
simple algebraic manipulations it is easy to show that the matrix AᵀBA is positive semidefinite.
Assuming that the matrix K is invertible, the vector β which maximizes the
quadratic form is proportional to the eigenvector of K⁻¹AᵀBA which is associated with the
largest generalized eigenvalue. Denoting this vector by v we get that w ∝ Σ_{r=1}^{m̃} v_r x̃_r.
Adding the norm constraint we get that w = (Σ_{r=1}^{m̃} v_r x̃_r)/‖Σ_{r=1}^{m̃} v_r x̃_r‖. The skeleton
of the algorithm for finding a base kernel is given in Fig. 2. To conclude the description of
the kernel learning algorithm we describe how to extend the algorithm to be employed
with general kernel functions.
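The generalized eigenvector step above can be sketched in pure Python with power iteration — an illustrative sketch under the stated assumptions (K invertible, a unique largest eigenvalue), not the authors' implementation; the matrix names A, B, K follow the text:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def solve(K, b):
    """Gaussian elimination with partial pivoting (K assumed invertible)."""
    n = len(K)
    M = [row[:] + [b[i]] for i, row in enumerate(K)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def top_generalized_eigvec(A, B, K, iters=200):
    """Power iteration for (A^T B A) v = lambda K v: iterate v <- K^{-1} A^T B A v."""
    AtBA = matmul(matmul(transpose(A), B), A)
    v = [1.0] * len(AtBA)
    for _ in range(iters):
        w = [sum(AtBA[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        v = solve(K, w)
        norm = max(abs(c) for c in v) or 1.0
        v = [c / norm for c in v]
    return v
```

With K equal to the identity this reduces to ordinary power iteration on AᵀBA.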
Kernelizing the Kernel: As described above, we assumed that the standard scalar-product
constitutes the template for the class of base-kernels C. However, since the procedure for
choosing a base kernel depends on S and S̃ only through the inner-products matrix
A, we can replace the scalar-product itself with a general kernel operator κ : X × X → ℝ,
where κ(x_i, x_j) = φ(x_i) · φ(x_j). Using a general kernel function κ we cannot, however,
compute the vector w explicitly. We therefore need to show that the norm of w, and the
evaluation of K_w on any two examples, can still be performed efficiently.
First note that given the vector $v$ we can compute the norm of $w$ as follows,
$$\|w\|^2 = \Big(\sum_r v_r\, \bar{x}_r\Big)^T \Big(\sum_s v_s\, \bar{x}_s\Big) = \sum_{r,s} v_r v_s\, \kappa(\bar{x}_r, \bar{x}_s)\;.$$
Next, given two vectors $x_i$ and $x_j$, the value of their inner-product is,
$$K_w(x_i, x_j) = \sum_{r,s} v_r v_s\, \kappa(x_i, \bar{x}_r)\, \kappa(x_j, \bar{x}_s)\;.$$
Therefore, although we cannot compute the vector w explicitly we can still compute its
norm and evaluate any of the kernels from the class C.
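These two identities are straightforward to implement; a small sketch (illustrative only — `kappa`, the templates, and the coefficients are stand-ins, with an RBF kernel playing the role of the general kernel operator):

```python
import numpy as np

def kappa(a, b, gamma=0.5):
    """A Gaussian RBF kernel standing in for a general kernel operator."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

rng = np.random.default_rng(1)
Xbar = rng.normal(size=(6, 4))    # template vectors xbar_r
v = rng.normal(size=6)            # coefficients from the generalized eigenproblem

# ||w||^2 = sum_{r,s} v_r v_s kappa(xbar_r, xbar_s)
Kbar = np.array([[kappa(a, b) for b in Xbar] for a in Xbar])
w_norm = np.sqrt(v @ Kbar @ v)

def K_w(xi, xj):
    """K_w(xi, xj) = sum_{r,s} v_r v_s kappa(xi, xbar_r) kappa(xj, xbar_s);
    the double sum factorizes into a product of two single sums."""
    ki = np.array([kappa(xi, b) for b in Xbar])
    kj = np.array([kappa(xj, b) for b in Xbar])
    return (v @ ki) * (v @ kj)
```

The factorization into two single sums is what keeps evaluation linear in the number of templates.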
4 Experiments
Synthetic data: We generated binary-labelled data using as input space the vectors in $\mathbb{R}^{100}$. The labels, in $\{-1, +1\}$, were picked uniformly at random. Let $y$ designate the label of a particular example. Then, the first two components of each instance were drawn from a two-dimensional normal distribution $N(\mu, \Lambda \Sigma \Lambda^{-1})$ with the following parameters,
$$\mu = y \begin{pmatrix} 0.1 \\ 0.03 \end{pmatrix}, \qquad \Lambda = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 0.01 \end{pmatrix} .$$
That is, the label of each example determined the mean of the distribution from which the first two components were generated. The rest of the components in the vector (98
Figure 3: Results on a toy data set prior to learning a kernel (first and third from left)
and after learning (second and fourth). For each of the two settings we show the first two
components of the training data (left) and the matrix of inner products between the train
and the test data (right).
altogether) were generated independently using the normal distribution with a zero mean
and a standard deviation of 0.05. We generated 100 training and test sets of size 300 and
200 respectively. We used the standard dot-product as the initial kernel operator.
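A sketch of this generator (the mean and covariance values follow one plausible reading of the garbled parameter display above and should be treated as placeholders, not the paper's exact numbers):

```python
import numpy as np

def make_synthetic(m, rng):
    """Two informative dimensions plus 98 pure-noise dimensions."""
    y = np.where(rng.random(m) < 0.5, -1.0, 1.0)   # uniform labels in {-1, +1}
    Lam = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)  # rotation
    Sigma = np.diag([1.0, 0.01])
    cov = Lam @ Sigma @ Lam.T                      # rotated, elongated covariance
    informative = (y[:, None] * np.array([0.1, 0.03])
                   + rng.multivariate_normal(np.zeros(2), cov, size=m))
    noise = rng.normal(0.0, 0.05, size=(m, 98))    # irrelevant components
    return np.hstack([informative, noise]), y

rng = np.random.default_rng(0)
X_train, y_train = make_synthetic(300, rng)
X_test, y_test = make_synthetic(200, rng)
```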
On each experiment we first learned a linear classifier that separates the classes using the
Perceptron [11] algorithm. We ran the algorithm for 10 epochs on the training set. After
each epoch we evaluated the performance of the current classifier on the test set. We then
used the boosting algorithm for kernels with the LogLoss for 30 rounds to build a kernel
for each random training set. After learning the kernel we re-trained a classifier with the
Perceptron algorithm and recorded the results. A summary of the online performance is
given in Fig. 4. The plot on the left-hand-side of the figure shows the instantaneous error
(achieved during the run of the algorithm). Clearly, the Perceptron algorithm with the
learned kernel converges much faster than the original kernel. The middle plot shows the
test error after each epoch. The plot on the right shows the test error on a noisy test set
in which we added a Gaussian noise of zero mean and a standard deviation of 0.03 to
the first two features. In all plots, each bar indicates a 95% confidence level. It is clear
from the figure that the original kernel is much slower to converge than the learned kernel.
Furthermore, though the kernel learning algorithm was not exposed to the test set noise, the
learned kernel reflects better the structure of the feature space which makes the learned
kernel more robust to noise.
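The classifier used throughout is the classical Perceptron in its kernel (dual) form; a minimal sketch on hypothetical, linearly separable data (not the experimental code):

```python
import numpy as np

def kernel_perceptron(K, y, epochs=10):
    """Kernel Perceptron trained from a Gram matrix K; returns the dual
    coefficients alpha and the number of mistakes per epoch."""
    m = len(y)
    alpha = np.zeros(m)
    mistakes = []
    for _ in range(epochs):
        errs = 0
        for i in range(m):
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:  # mistake (ties count)
                alpha[i] += 1.0
                errs += 1
        mistakes.append(errs)
    return alpha, mistakes

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))
X = X[np.abs(X[:, 0]) > 0.3][:50]   # keep a margin around the boundary
y = np.sign(X[:, 0])                # separable by the first coordinate
K = X @ X.T                         # standard dot-product kernel
alpha, mistakes = kernel_perceptron(K, y)
preds = np.sign((alpha * y) @ K)    # predictions via the kernel expansion
```

Swapping `K` for a learned kernel matrix is all it takes to rerun the comparison.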
Fig. 3 further illustrates the benefits of using a boutique kernel. The first and third plots
from the left correspond to results obtained using the original kernel and the second and
fourth plots show results using the learned kernel. The left plots show the empirical distribution of the two informative components on the test data. For the learned kernel we took
each input vector and projected it onto the two eigenvectors of the learned kernel operator matrix that correspond to the two largest eigenvalues. Note that the distribution after
the projection is bimodal and well separated along the first eigen direction (x-axis) and
shows rather little deviation along the second eigen direction (y-axis). This indicates that
the kernel learning algorithm indeed found the most informative projection for separating
the labelled data with large margin. It is worth noting that, in this particular setting, any
algorithm which chooses a single feature at a time is prone to failure since both the first
and second features are mandatory for correctly classifying the data.
The two plots on the right-hand side of Fig. 3 use a gray-level color-map to designate the value of the inner-product between each pair of instances, one from the training set (y-axis) and the other from the test set. The examples were ordered such that the first group consists of the positively labelled instances while the second group consists of the negatively labelled instances. Since most of the features are non-relevant, the original inner-products are noisy and do not exhibit any structure. In contrast, the inner-products using the learned kernel yield a 2 x 2 block matrix, indicating that the inner-products between instances sharing the same label obtain large positive values. Similarly, for instances of opposite
Figure 4: The online training error (left), test error (middle) on clean synthetic data using
a standard kernel and a learned kernel. Right: the online test error for the two kernels on a
noisy test set.
labels, the inner-products are large and negative. The form of the inner-products matrix of the learned kernel indicates that the learning problem itself becomes much easier. Indeed, the Perceptron algorithm with the standard kernel required around 94 training examples on average before converging to a hyperplane which perfectly separates the training data, while the Perceptron algorithm with the learned kernel required a single example to reach a perfect separation on all 100 random training sets.
USPS dataset: The USPS (US Postal Service) dataset is known as a challenging classification problem in which the training set and the test set were collected in a different manner. The USPS contains 7,291 training examples and 2,007 test examples. Each example is represented as a 16 x 16 matrix where each entry in the matrix is a pixel that can take values in {0, ..., 255}. Each example is associated with a label in {0, ..., 9} which is the digit content of the image. Since the kernel learning algorithm is designed for binary problems, we broke the 10-class problem into 45 binary problems by comparing all pairs of classes. The interesting question of how to learn kernels for multiclass problems is beyond the scope of this short paper. We thus concentrate on the binary error results for the 45 binary problems described above. For the original kernel we chose an RBF kernel with $\sigma = 1$, which is the value employed in the experiments reported in [12]. We used the kernelized version of the kernel design algorithm to learn a different kernel operator for each of the binary problems. We then used a variant of the Perceptron [11] with the original RBF kernel and with the learned kernels. One of the motivations for using the Perceptron is its simplicity, which can underscore differences in the kernels. We ran the kernel learning algorithm with LogLoss and ExpLoss, using both the training set and the test set as $\bar{S}$. Thus, we obtained four different sets of kernels, where each set consists of 45 kernels. By examining the training loss, we set the number of rounds of boosting to 30 for the LogLoss and 50 for the ExpLoss when using the training set. When using the test set, the number of rounds of boosting was set to 100 for both losses. Since the algorithm exhibits a slower rate of convergence with the test data, we chose a higher value without attempting to optimize the actual value. The left plot of Fig. 5 is a scatter plot comparing the test error of each of the binary classifiers when trained with the original RBF kernel versus the performance achieved on the same binary problem with a learned kernel. The kernels were built using boosting with the LogLoss, and $\bar{S}$ was the training data. In almost all of the 45 binary classification problems, the learned kernels yielded lower error rates when combined with the Perceptron algorithm. The right plot of Fig. 5 compares two learned kernels: the first was built using the training instances as the templates constituting $\bar{S}$, while the second used the test instances. Although the difference between the two versions is not as significant as the difference on the left plot, we still achieve an overall improvement in about 25% of the binary problems by using the test instances.
Figure 5: Left: a scatter plot comparing the error rate of 45 binary classifiers trained using an RBF kernel (x-axis) and a learned kernel built from training instances. Right: a similar scatter plot comparing a learned kernel built from training instances (x-axis) with one built from test instances.
5 Discussion
In this paper we showed how to use the boosting framework to design kernels. Our approach is especially appealing in transductive learning tasks where the test data distribution is different from the distribution of the training data. For example, in speech recognition tasks the training data is often clean and well recorded, while the test data often passes through a noisy channel that distorts the signal. An interesting and challenging question that stems from this research is how to extend the framework to accommodate more complex decision tasks such as multiclass and regression problems. Finally, we would like to note that alternative approaches to the kernel design problem have been devised in parallel and independently. See [13, 14] for further details.
Acknowledgements: Special thanks to Cyril Goutte and to John Shawe-Taylor for pointing out the connection to the generalized eigenvector problem. Thanks also to the anonymous reviewers for constructive comments.
References
[1] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[2] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge
University Press, 2000.
[3] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. Text
classification using string kernels. Journal of Machine Learning Research, 2:419?444, 2002.
[4] C. Leslie, E. Eskin, and W. Stafford Noble. The spectrum kernel: A string kernel for svm
protein classification. In Proceedings of the Pacific Symposium on Biocomputing, 2002.
[5] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandola. On kernel target alignment. In Advances in Neural Information Processing Systems 14, 2001.
[6] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix
with semi-definite programming. In Proc. of the 19th Intl. Conf. on Machine Learning, 2002.
[7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337?374, April 2000.
[8] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, adaboost and
bregman distances. Machine Learning, 47(2/3):253?285, 2002.
[9] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Functional gradient techniques
for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 1999.
[10] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[11] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization
in the brain. Psychological Review, 65:386?407, 1958.
[12] B. Sch?olkopf, S. Mika, C.J.C. Burges, P. Knirsch, K. M?uller, G. R?atsch, and A.J. Smola. Input
space vs. feature space in kernel-based methods. IEEE Trans. on NN, 10(5):1000?1017, 1999.
[13] O. Bousquet and D.J.L. Herrmann. On the complexity of learning the kernel matrix. NIPS, 2002.
[14] C.S. Ong, A.J. Smola, and R.C. Williamson. Hyperkernels. NIPS, 2002.
Pascal Vincent and Yoshua Bengio
Dept. IRO, Universit? de Montr?al
C.P. 6128, Montreal, Qc, H3C 3J7, Canada
{vincentp,bengioy}@iro.umontreal.ca
http://www.iro.umontreal.ca/ vincentp
Abstract
The similarity between objects is a fundamental element of many learning algorithms. Most non-parametric methods take this similarity to be
fixed, but much recent work has shown the advantages of learning it, in
particular to exploit the local invariances in the data or to capture the
possibly non-linear manifold on which most of the data lies. We propose
a new non-parametric kernel density estimation method which captures
the local structure of an underlying manifold through the leading eigenvectors of regularized local covariance matrices. Experiments in density
estimation show significant improvements with respect to Parzen density
estimators. The density estimators can also be used within Bayes classifiers, yielding classification rates similar to SVMs and much superior to
the Parzen classifier.
1 Introduction
In [1], while attempting to better understand and bridge the gap between the good performance of the popular Support Vector Machines and the more traditional K-NN (K Nearest
Neighbors) for classification problems, we had suggested a modified Nearest-Neighbor
algorithm. This algorithm, which was able to slightly outperform SVMs on several realworld problems, was based on the geometric intuition that the classes actually lived ?close
to? a lower dimensional non-linear manifold in the high dimensional input space. When
this was not properly taken into account, as with traditional K-NN, the sparsity of the data
points due to having a finite number of training samples would cause ?holes? or ?zig-zag?
artifacts in the resulting decision surface, as illustrated in Figure 1.
Figure 1: A local view of the decision surface, with ?holes?, produced by the Nearest Neighbor
when the data have a local structure (horizontal direction).
The present work is based on the same underlying geometric intuition, but applied to the
well known Parzen windows [2] non-parametric method for density estimation, using Gaussian kernels.
Most of the time, Parzen Windows estimates are built using a ?spherical Gaussian? with
a single scalar variance (or width) parameter . It is also possible to use a ?diagonal
Gaussian?, i.e. with a diagonal covariance matrix, or even a ?full Gaussian? with a full
covariance matrix, usually set to be proportional to the global empirical covariance of the
training data. However these are equivalent to using a spherical Gaussian on preprocessed,
normalized data (i.e. normalized by subtracting the empirical sample mean, and multiplying by the inverse sample covariance). Whatever the shape of the kernel, if, as is customary,
a fixed shape is used, merely centered on every training point, the shape can only compensate for the global structure (such as global covariance) of the data.
Now if the true density that we want to model is indeed ?close to? a non-linear lower dimensional manifold embedded in the higher dimensional input space, in the sense that most
of the probability density is concentrated around such a manifold (with a small noise component away from it), then using Parzen Windows with a spherical or fixed-shape Gaussian
is probably not the most appropriate method, for the following reason.
While the true density mass, in the vicinity of a particular training point , will be mostly
concentrated in a few local directions along the manifold, a spherical Gaussian centered on
that point will spread its density mass equally along all input space directions, thus giving
too much probability to irrelevant regions of space and too little along the manifold. This
is likely to result in an excessive ?bumpyness? of the thus modeled density, much like the
?holes? and ?zig-zag? artifacts observed in KNN (see Fig. 1 and Fig. 2).
If the true density in the vicinity of is concentrated along a lower dimensional manifold,
then it should be possible to infer the local direction of that manifold from the neighborhood of , and then anchor on a Gaussian ?pancake? parameterized in such a way that
it spreads mostly along the directions of the manifold, and is almost flat along the other
directions. The resulting model is a mixture of Gaussian ?pancakes?, similar to [3], mixtures of probabilistic PCAs [4] or mixtures of factor analyzers [5, 6], in the same way that
the most traditional Parzen Windows is a mixture of spherical Gaussians. But it remains a
memory-based method, with a Gaussian kernel centered on each training points, yet with a
differently shaped kernel for each point.
2 The Manifold Parzen Windows algorithm
In the following we formally define and justify in detail the proposed algorithm. Let be
an -dimensional random variable with values in
, and an unknown probability density
function
. Our training set contains samples of that random variable, collected in a
matrix whose row is the -th sample. Our goal is to estimate the density .
Our estimator
has the form of a mixture of Gaussians, but unlike the Parzen density
estimator, its covariances are not necessarily spherical and not necessarily identical
everywhere:
&21 + -
"
!
$#
%
&('*),+ -)
.0/
(1)
. is the multivariate Gaussian density with mean vector 3 and covariance
' 1EDF -HG A ' 1ED
& 1 +B8A C
C
(2)
. 4
65879;:9< =<>@?
?
?
where < =< is the determinant of . How should we select the individual covariances ?
where
matrix :
From the above discussion, we expect that if there is an underlying ?non-linear principal
manifold?, those gaussians would be ?pancakes? aligned with the plane locally tangent
to this underlying manifold. The only available information (in the absence of further
prior knowledge) about this tangent plane can be gathered from the training samples int the
neighborhood of . In other words, we are interested in computing the principal directions
of the samples in the neighborhood of .
For
I generality, we can define a soft neighborhood of with a neighborhood kernel
.
J that will associate an influence weight to any point in the neighborhood of
. We can then compute the weighted covariance matrix
! + I
,
)
#
% #
J ,0
. ,
6
! + I
(3)
#
% #
J ,
.
denotes the outer product.
where
I
.
/ could be a spherical Gaussian centered on for instance, or any other positive
definite kernel, possibly incorporating priorI knowledge as to what constitutes a reasonable
)
neighborhood for point . Notice that if
9/ , is a constant (uniform kernel),
is
the global training sample covariance. As an important special case, we can define a hard
k-neighborhood for training sample by assigning a weight of to any point no further
than the -th nearest neighbor of among the training set, according to some metric such
as the Euclidean distance in input ) space, and assigning a weight of to points further than
the -th neighbor. In that case, is the unweighted covariance of the nearest neighbors
of .
Notice what is happening here: we start with a possibly rough prior notion of neighborhood,
such as one based on the ordinary Euclidean distance in input space, and use this to compute
a local covariance matrix, which implicitly defines a refined local notion of neighborhood,
taking into account the local direction observed in the training samples.
Now that we have a way of computing a local covariance matrix for each training point, we
might be tempted to use this directly in equations 2 and 1. But a number of problems must
first be addressed:
)
Equation 2 requires the inverse covariance matrix, whereas is likely to be illconditioned. This situation will definitely arise if we use a hard k-neighborhood with
. In this case we get a Gaussian that is totally flat outside of the affine subspace
spanned by and its neighbors, and it does not constitute a proper density in . A
common way to deal with this problem is to add a small isotropic (spherical) Gaussian
noise of variance in all directions,
is done by simply adding to the diagonal of
) which
.
the covariance matrix:
Even if we regularize by adding , when we deal with high dimensional spaces,
it would be prohibitive in computation time and storage to keep and use the full inverse
covariance matrix as expressed in 2. This would in effect multiply both the time and storage
requirement of the already expensive ordinary Parzen Windows by
. So instead, we
use a different, more compact representation of the inverse Gaussian, by storing only the
eigenvectors associated with the first few largest eigenvalues of , as described below.
The eigen-decomposition of a covariance matrix $C$ can be expressed as $C = V \Lambda V^T$, where the columns of $V$ are the orthonormal eigenvectors and $\Lambda$ is a diagonal matrix with the eigenvalues $\lambda_1 \geq \dots \geq \lambda_n$, that we will suppose sorted in decreasing order, without loss of generality.
The first $d$ eigenvectors with largest eigenvalues correspond to the principal directions of the local neighborhood, i.e. the high variance local directions of the supposed underlying $d$-dimensional manifold (but the true underlying dimension is unknown and may actually vary across space). The last few eigenvalues and eigenvectors are but noise directions with a small variance. So we may, without too much risk, force those last few components to the same low noise level $\sigma^2$. We have done this by zeroing the last $n - d$ eigenvalues (by considering only the first $d$ leading eigenvalues) and then adding $\sigma^2$ to all eigenvalues. This allows us to store only the first $d$ eigenvectors, and to later compute $\mathcal{N}(x;\, x_i, C_i)$ in time $O(d \cdot n)$ instead of $O(n^2)$. Thus both the storage requirement and the computational cost when estimating the density at a test point are only about $d + 1$ times that of ordinary Parzen.
It can easily be shown that such an approximation of the covariance matrix yields the following computation of $\mathcal{N}(x;\, x_i, C_i)$:

Algorithm LocalGaussian($x$, $x_i$, $\lambda_i$, $V_i$, $d$, $\sigma^2$)
Input: test vector $x$, training vector $x_i$, the $d$ leading eigenvalues $\lambda_{i,k}$, the $d$ leading eigenvectors in the columns of $V_i$, dimension $d$, and the regularization hyper-parameter $\sigma^2$.
(1) $r \leftarrow n \log(2\pi) + (n - d) \log \sigma^2 + \sum_{k=1}^{d} \log \lambda_{i,k}$
(2) $q \leftarrow \frac{1}{\sigma^2} \| x - x_i \|^2 + \sum_{k=1}^{d} \left( \frac{1}{\lambda_{i,k}} - \frac{1}{\sigma^2} \right) \big( V_i(\cdot, k) \cdot (x - x_i) \big)^2$
Output: Gaussian density $e^{-\frac{1}{2}(r + q)}$

In the case of the hard k-neighborhood, the training algorithm pre-computes the local principal directions of the $k$ nearest neighbors of each training point (in practice we compute them with a SVD rather than an eigen-decomposition of the covariance matrix, see below). Note that with $d = 0$, we trivially obtain the traditional Parzen windows estimator.
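A NumPy sketch of this log-domain computation (illustrative; it evaluates the density of the regularized low-rank Gaussian $C_i = \sigma^2 I + \sum_k (\lambda_k - \sigma^2) v_k v_k^T$, and the sanity check compares it against a dense-covariance evaluation):

```python
import numpy as np

def local_gaussian(x, xi, lam, V, sigma2):
    """Density N(x; xi, Ci) with Ci = sigma2*I + sum_k (lam_k - sigma2) v_k v_k^T.
    lam: (d,) leading eigenvalues; V: (n, d) orthonormal eigenvectors."""
    n, d = V.shape
    diff = x - xi
    proj = V.T @ diff                   # components along principal directions
    r = n * np.log(2 * np.pi) + (n - d) * np.log(sigma2) + np.sum(np.log(lam))
    q = diff @ diff / sigma2 + np.sum((1.0 / lam - 1.0 / sigma2) * proj ** 2)
    return np.exp(-0.5 * (r + q))

# sanity check against a dense-covariance evaluation
rng = np.random.default_rng(0)
n, d = 5, 2
Q, _ = np.linalg.qr(rng.normal(size=(n, d)))    # orthonormal directions
lam, sigma2 = np.array([2.0, 0.5]), 0.01
C = sigma2 * np.eye(n) + Q @ np.diag(lam - sigma2) @ Q.T
x, xi = rng.normal(size=n), np.zeros(n)
```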
Algorithm MParzen::Train($M$, $d$, $k$, $\sigma^2$)
Input: training set matrix $M$ with rows $x_i$, chosen number of principal directions $d$, chosen number of neighbors $k$, and regularization hyper-parameter $\sigma^2$.
(1) For each training point $x_i$ ($i = 1, \dots, l$):
(2) Collect the $k$ nearest neighbors of $x_i$, and put $x_j - x_i$ in the rows of matrix $U$.
(3) Perform a partial singular value decomposition of $U$, to obtain the leading $d$ singular values $s_k$ and singular column vectors $v_k$ of $U$.
(4) For $k = 1, \dots, d$, let $\lambda_{i,k} = \sigma^2 + s_k^2 / k$.
Output: the model $\Theta = (M, \lambda, V, \sigma^2)$, where $V$ is an $l \times n \times d$ tensor that collects all the eigenvectors and $\lambda$ is an $l \times d$ matrix with all the eigenvalues.

Algorithm MParzen::Test($x$, $\Theta$)
Input: test point $x$ and model $\Theta = (M, \lambda, V, \sigma^2)$.
(1) $s \leftarrow 0$
(2) For $i = 1, \dots, l$:
(3) $s \leftarrow s + \mathrm{LocalGaussian}(x, x_i, \lambda_i, V_i, d, \sigma^2)$
Output: manifold Parzen estimator $\hat{p}(x) = s / l$.
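Putting the pieces together, a compact, self-contained NumPy sketch of the train/test pair (an illustration of the reconstruction above, not the authors' code; the noisy circle is an assumed toy dataset):

```python
import numpy as np

def mparzen_train(M, d, k, sigma2):
    """For each training point, keep the d leading principal directions
    (via SVD) of its k nearest neighbors' offsets."""
    l, n = M.shape
    lams = np.empty((l, d))
    V = np.empty((l, n, d))
    for i in range(l):
        dist = np.linalg.norm(M - M[i], axis=1)
        dist[i] = np.inf                       # exclude the point itself
        U = M[np.argsort(dist)[:k]] - M[i]     # k x n matrix of offsets
        _, s, Vt = np.linalg.svd(U, full_matrices=False)
        lams[i] = sigma2 + s[:d] ** 2 / k      # regularized local eigenvalues
        V[i] = Vt[:d].T
    return lams, V

def mparzen_test(x, M, lams, V, sigma2):
    """Average the per-point low-rank-plus-noise Gaussians (Eq. 1)."""
    l, n = M.shape
    d = lams.shape[1]
    total = 0.0
    for i in range(l):
        diff = x - M[i]
        proj = V[i].T @ diff
        r = n * np.log(2 * np.pi) + (n - d) * np.log(sigma2) + np.sum(np.log(lams[i]))
        q = diff @ diff / sigma2 + np.sum((1.0 / lams[i] - 1.0 / sigma2) * proj ** 2)
        total += np.exp(-0.5 * (r + q))
    return total / l

# noisy circle: a 1-D manifold embedded in 2-D
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
M = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(100, 2))
lams, V = mparzen_train(M, d=1, k=8, sigma2=1e-4)
p_on = mparzen_test(np.array([1.0, 0.0]), M, lams, V, 1e-4)   # on the manifold
p_off = mparzen_test(np.array([0.0, 0.0]), M, lams, V, 1e-4)  # off the manifold
```

As expected, the estimated density is far larger on the manifold than at the circle's center.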
3 Related work
As we have already pointed out, Manifold Parzen Windows, like traditional Parzen Windows and so many other density estimation algorithms, results in defining the density as
a mixture of Gaussians. What differs is mostly how those Gaussians and their parameters
are chosen. The idea of having a parameterization of each Gaussian that orients it along
the local principal directions also underlies the already mentioned work on mixtures of
Gaussian pancakes [3], mixtures of probabilistic PCAs [4], and mixtures of factor analysers [5, 6]. All these algorithms typically model the density using a relatively small number
of Gaussians, whose centers and parameters must be learnt with some iterative optimisation
algorithm such as EM (procedures which are known to be sensitive to local minima traps).
By contrast our approach is, like the original Parzen windows, heavily memory-based. It
avoids the problem of optimizing the centers by assigning a Gaussian to every training
point, and uses simple analytic SVD to compute the local principal directions for each.
Another successful memory-based approach that uses local directions and inspired our
work is the tangent distance algorithm [7]. While this approach was initially aimed at
solving classification tasks with a nearest neighbor paradigm, some work has already been
done in developing it into a probabilistic interpretation for mixtures with a few gaussians,
as well as for full-fledged kernel density estimation [8, 9]. The main difference between
our approach and the above is that the Manifold Parzen estimator does not require prior
knowledge, as it infers the local directions directly from the data, although it should be
easy to also incorporate prior knowledge if available.
We should also mention similarities between our approach and the Local Linear Embedding and recent related dimensionality reduction methods [10, 11, 12, 13]. There are also
links with previous work on locally-defined metrics for nearest-neighbors [14, 15, 16, 17].
Lastly, it can also be seen as an extension along the line of traditional variable and adaptive
kernel estimators that adapt the kernel width locally (see [18] for a survey).
4 Experimental results
Throughout this whole section, when we mention Parzen windows (sometimes abbreviated Parzen), we mean ordinary Parzen windows using a spherical Gaussian kernel with a single hyper-parameter σ, the width of the Gaussian.
When we mention Manifold Parzen Windows (sometimes abbreviated MParzen), we used a hard k-neighborhood, so that the hyper-parameters are: the number of neighbors k, the number of retained principal components d, and the additional isotropic Gaussian noise parameter σ².
When measuring the quality of a density estimator p̂, we used the average negative log likelihood: ANLL = −(1/m) Σ_{i=1}^{m} log p̂(x_i), with the m examples x_i taken from a test set.
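In code, the criterion is a one-liner; here `logpdf` stands for any density estimator's log-density function (the name is ours):

```python
import numpy as np

def anll(logpdf, test_X):
    """Average negative log likelihood over a test set (lower is better)."""
    return -np.mean([logpdf(x) for x in test_X])
```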
4.1 Experiment on 2D artificial data
A training set of 300 points, a validation set of 300 points and a test set of 10000 points were generated from a distribution of two-dimensional points concentrated near a one-dimensional manifold: a parameter t is drawn uniformly in an interval, mapped through a smooth curve, and corrupted by additive Gaussian noise (normal density) with standard deviation 0.01.
We trained an ordinary Parzen, as well as MParzen with d = 1 and d = 2, on the training set, tuning the hyper-parameters to achieve best performance on the validation set. Figure 2 shows the training set and gives a good idea of the densities produced by both kinds of algorithms (as the visual representations for MParzen with d = 1 and d = 2 did not appear very different, we show only the case d = 1). The graphic reveals the anticipated "bumpyness" artifacts of ordinary Parzen, and shows that MParzen is indeed able to better concentrate the probability density along the manifold, even when the training data is scarce.
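The exact generating curve did not survive in this copy; a generator in the same spirit (a smooth one-dimensional curve, a uniformly drawn parameter, and small isotropic Gaussian noise — the unit circle is our arbitrary choice of curve) can be written as:

```python
import numpy as np

def sample_near_curve(n, noise=0.01, seed=None):
    """Sample 2-D points concentrated near a one-dimensional manifold:
    a point on a smooth curve (here the unit circle), parameterized by a
    uniformly drawn t, plus additive isotropic Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, 2.0 * np.pi, size=n)
    curve = np.stack([np.cos(t), np.sin(t)], axis=1)
    return curve + noise * rng.standard_normal((n, 2))
```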
Quantitative comparative results of the two models are reported in Table 1.

Table 1: Comparative results on the artificial data (standard errors are in parentheses).

Algorithm       ANLL on test set
Parzen          -1.183 (0.016)
MParzen (d=1)   -1.466 (0.009)
MParzen (d=2)   -1.419 (0.009)
Several points are worth noticing:
- Both MParzen models seem to achieve a lower ANLL than ordinary Parzen (even though the underlying manifold really has dimension d = 1), and with more consistency over the test sets (lower standard error).
- The optimal width for ordinary Parzen is much larger than the noise parameter of the true generating model (0.01), probably because of the finite sample size.
- The optimal regularization parameter for MParzen with d = 1 (i.e. supposing a one-dimensional underlying manifold) is very close to the actual noise parameter of the true generating model. This suggests that it was able to capture the underlying structure quite well. Also it is the best of the three models, which is not surprising, since the true model is indeed a one-dimensional manifold with an added isotropic Gaussian noise.
- The optimal additional noise parameter for MParzen with d = 2 (i.e. supposing a two-dimensional underlying manifold) is close to 0, which suggests that the model was able to capture all the noise in the second "principal direction".
Figure 2: Illustration of the density estimated by ordinary Parzen windows (left) and Manifold Parzen windows (right). The two images on the bottom are a zoomed area of the corresponding image at the top. The 300 training points are represented as black dots and the area where the estimated density is above 1.0 is painted in gray. The excessive "bumpyness" and holes produced by the ordinary Parzen windows model can clearly be seen, whereas the Manifold Parzen density is better aligned with the underlying manifold, allowing it to even successfully "extrapolate" in regions with few data points but high true density.
4.2 Density estimation on OCR data
In order to compare the performance of both algorithms for density estimation on a real-world problem, we estimated the density of one class of the MNIST OCR data set, namely the "2" digit. The available data for this class was divided into 5400 training points, 558
validation points and 1032 test points. Hyper-parameters were tuned on the validation
set. The results are summarized in Table 2, using the performance measures introduced
above (average negative log-likelihood). Note that the improvement with respect to Parzen
windows is extremely large and of course statistically significant.
Table 2: Density estimation of class "2" in the MNIST data set. Standard errors in parentheses.

Algorithm   Validation ANLL   Test ANLL
Parzen      -197.27 (4.18)    -197.19 (3.55)
MParzen     -695.15 (5.21)    -696.42 (5.94)
4.3 Classification performance

When measuring the quality of a probabilistic classifier p̂(c | x), we used the average negative conditional log likelihood: ANCLL = −(1/m) Σ_{i=1}^{m} log p̂(c_i | x_i), with the examples (c_i, x_i) (correct class, input) taken from a test set.

To obtain a probabilistic classifier from a density estimator, we train an estimator p̂_c for each class c, and apply Bayes' rule to obtain p̂(c | x) = p̂_c(x) P(c) / Σ_{c′} p̂_{c′}(x) P(c′).
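A sketch of this construction (function names are ours): given one log-density per class and the class priors, Bayes' rule yields the posterior, computed in a numerically stable way:

```python
import numpy as np

def bayes_posterior(x, class_logpdfs, priors):
    """Combine per-class density estimates p_c(x) with priors P(c) via
    Bayes' rule: p(c|x) = p_c(x) P(c) / sum_c' p_c'(x) P(c')."""
    scores = np.array([lp(x) + np.log(pr)
                       for lp, pr in zip(class_logpdfs, priors)])
    post = np.exp(scores - scores.max())   # shift before exponentiating
    return post / post.sum()
```

The predicted class is then `np.argmax(bayes_posterior(...))`, and ANCLL is the negative mean log of the posterior assigned to the correct class.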
This method was applied to both the Parzen and the Manifold Parzen density estimators,
which were compared with state-of-the-art Gaussian SVMs on the full USPS data set. The
original training set (7291) was split into a training (first 6291) and validation set (last
1000), used to tune hyper-parameters. The classification errors for all three methods are
compared in Table 3, where the hyper-parameters are chosen based on validation classification error. The log-likelihoods are compared in Table 4, where the hyper-parameters are
chosen based on validation ANCLL. Hyper-parameters for SVMs are the box constraint C and the Gaussian kernel width σ. MParzen has the lowest classification error and ANCLL of the
three algorithms.
Table 3: Classification error obtained on USPS with SVM, Parzen windows and Manifold Parzen windows classifiers.

Algorithm   Validation error   Test error
SVM         1.2%               4.68%
Parzen      1.8%               5.08%
MParzen     0.9%               4.08%
Table 4: Comparative negative conditional log likelihood obtained on USPS.

Algorithm   Validation ANCLL   Test ANCLL
Parzen      0.1022             0.3478
MParzen     0.0658             0.3384
5 Conclusion
The rapid increase in computational power now allows us to experiment with sophisticated non-parametric models such as those presented here. They have allowed us to show the usefulness of learning the local structure of the data through a regularized covariance matrix estimated for each data point. By taking advantage of local structure, the new kernel density estimation method outperforms the Parzen windows estimator. Classifiers built from this density estimator yield state-of-the-art knowledge-free performance, which is remarkable for a classifier that is not discriminatively trained. Besides, in some applications, the accurate estimation of probabilities can be crucial, e.g. when the classes are highly imbalanced.
Future work should consider other alternative methods of estimating the local covariance
matrix, for example as suggested here using a weighted estimator, or taking advantage of
prior knowledge (e.g. the Tangent distance directions).
References
[1] P. Vincent and Y. Bengio. K-local hyperplane and convex distance nearest neighbor algorithms.
In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information
Processing Systems, volume 14. The MIT Press, 2002.
[2] E. Parzen. On the estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1064-1076, 1962.
[3] G.E. Hinton, M. Revow, and P. Dayan. Recognizing handwritten digits using mixtures of linear
models. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural Information
Processing Systems 7, pages 1015-1022. MIT Press, Cambridge, MA, 1995.
[4] M.E. Tipping and C.M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443-482, 1999.
[5] Z. Ghahramani and G.E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical
Report CRG-TR-96-1, Dpt. of Comp. Sci., Univ. of Toronto, 1996.
[6] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers.
In Advances in Neural Information Processing Systems 12, Cambridge, MA, 2000. MIT Press.
[7] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern
recognition - tangent distance and tangent propagation. Lecture Notes in Computer Science,
1524, 1998.
[8] D. Keysers, J. Dahmen, and H. Ney. A probabilistic view on tangent distance. In 22nd Symposium of the German Association for Pattern Recognition, Kiel, Germany, 2000.
[9] J. Dahmen, D. Keysers, M. Pitz, and H. Ney. Structured covariance matrices for statistical image
object recognition. In 22nd Symposium of the German Association for Pattern Recognition,
Kiel, Germany, 2000.
[10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, Dec. 2000.
[11] Y. Whye Teh and S. Roweis. Automatic alignment of local representations. In S. Becker,
S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems,
volume 15. The MIT Press, 2003.
[12] V. de Silva and J.B. Tenenbaum. Global versus local approaches to nonlinear dimensionality
reduction. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information
Processing Systems, volume 15. The MIT Press, 2003.
[13] M. Brand. Charting a manifold. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances
in Neural Information Processing Systems, volume 15. The MIT Press, 2003.
[14] R. D. Short and K. Fukunaga. The optimal distance measure for nearest neighbor classification.
IEEE Transactions on Information Theory, 27:622-627, 1981.
[15] J. Myles and D. Hand. The multi-class measure problem in nearest neighbour discrimination
rules. Pattern Recognition, 23:1291-1297, 1990.
[16] J. Friedman. Flexible metric nearest neighbor classification. Technical Report 113, Stanford
University Statistics Department, 1994.
[17] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification and regression. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in
Neural Information Processing Systems, volume 8, pages 409-415. The MIT Press, 1996.
[18] A.J. Izenman. Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86(413):205-224, 1991.
Learning to Take Concurrent Actions
Khashayar Rohanimanesh
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Sridhar Mahadevan
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Abstract
We investigate a general semi-Markov Decision Process (SMDP)
framework for modeling concurrent decision making, where agents
learn optimal plans over concurrent temporally extended actions.
We introduce three types of parallel termination schemes (all, any and continue) and theoretically and experimentally compare them.
1 Introduction
We investigate a general framework for modeling concurrent actions. The notion of
concurrent action is formalized in a general way, to capture both situations where a
single agent can execute multiple parallel processes, as well as the multi-agent case
where many agents act in parallel. Concurrency clearly allows agents to achieve
goals more quickly: in making breakfast, we interleave making toast and coffee
with other activities such as getting milk; in driving, we search for road signs while
controlling the wheel, accelerator and brakes.
Most previous work on concurrency has focused on parallelizing primitive (unit
step) actions. Reiter developed axioms for concurrent planning using the situation
calculus framework [4]. Knoblock [3] and Boutilier [1] modify the STRIPS representation of actions to allow for concurrent actions. These approaches assume
deterministic effects. Prior work in decision-theoretic planning includes work on
multi-dimensional vector action spaces [2], and models based on dynamic merging
of multiple MDPs [6]. There is also a massive literature on concurrent processes,
dynamic logic, and temporal logic. Parts of these lines of research deal with the
specification and synthesis of concurrent actions, including probabilistic ones [8].
In contrast, we focus on parallelizing temporally extended actions. The concurrency
framework described below significantly extends our previous work [5]. We provide
a detailed analysis of three termination schemes for composing parallel action structures. The three schemes ? any, all, and continue ? are illustrated in Figure 1. We
characterize the class of policies under each scheme. We also theoretically compare
the optimality of the concurrent policies under each scheme with that of the typical
sequential case. The theoretical results are complemented by an experimental study,
which illustrate the trade-offs between optimality and convergence speed, and the
advantages of concurrency over sequentiality.
2 Concurrent Action Model
Building on SMDPs, we introduce the Concurrent Action Model (CAM)
(S, A, T, R), where S is a set of states, A is a set of primary actions, T is a transition probability distribution S × ℘(A) × S × ℕ → [0, 1], where ℘(A) is the power set of the primary actions and ℕ is the set of natural numbers, and R is the reward function mapping S → ℝ. Here, a concurrent action is simply represented as a set of primary actions (hereafter called a multi-action), where each primary action is either a single-step action, or a temporally extended action (e.g., modeled as a closed-loop policy over single-step actions [7]).
We denote the set of multi-actions that can be executed in a state s by A(s). In
practice, this function can capture resource constraints that limit how many actions
an agent can execute in parallel. Thus, the transition probability distribution in
practice may be defined over a much smaller subset than the power-set of primary
actions (e.g., in the grid world example in Figure 3, the power set has more than 100 elements, but the set of concurrent actions is only about 10).
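One way to make this concrete (our own encoding, not from the paper): represent a multi-action as a frozenset of primary actions, and let A(s) enumerate only the subsets that satisfy a resource constraint, such as a cap on how many processes can run in parallel:

```python
from itertools import combinations

def admissible_multiactions(primary_actions, max_parallel=2):
    """A(s): all non-empty subsets of the primary actions whose size
    respects a simple resource constraint (at most max_parallel at once)."""
    result = []
    for r in range(1, max_parallel + 1):
        result.extend(frozenset(c) for c in combinations(primary_actions, r))
    return result
```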
Figure 1: Left: Tany termination scheme. Middle: Tall termination scheme. Right:
Tcontinue termination scheme.
A principal goal of this paper is to understand how to define decision epochs for
concurrent processes, since the primary actions in a multi-action may not terminate
at the same time. The event of termination of a multi-action can be defined in many
ways. Three termination schemes are illustrated in Figure 1. In the Tany termination
scheme (Figure 1, left), the next decision epoch is when the first primary action
within the multi-action currently being executed terminates, where the rest of the
primary actions that did not terminate naturally are interrupted (the notion of
interruption is similar to [7]). In the Tall termination scheme (Figure 1, middle),
the next decision epoch is the earliest time at which all the primary actions within
the multi-action currently being executed have terminated.
We can design other termination schemes by combining Tany and Tall : for example,
another termination scheme called continue is one that always terminates based on
the Tany termination scheme, but lets those primary actions that did not terminate
naturally continue running, while initiating new primary actions if they are going
to be useful (Figure 1, right).
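The difference between the schemes is easy to state operationally. Given the completion times of the primary actions launched together, the next decision epoch is their minimum under Tany and their maximum under Tall; under continue, the epoch is the Tany one, and the unfinished actions are carried over instead of interrupted. A sketch with our own function names:

```python
def epoch_any(durations):
    """T_any: the epoch ends when the first primary action terminates.
    Returns (epoch length k, set of primary actions still running at t+k)."""
    k = min(durations.values())
    still_running = frozenset(a for a, d in durations.items() if d > k)
    return k, still_running

def epoch_all(durations):
    """T_all: the epoch ends only when every primary action has terminated."""
    return max(durations.values()), frozenset()
```

Under Tany the `still_running` set is interrupted; under continue it becomes the continue-set carried into the next decision epoch.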
A deterministic Markovian (memoryless) policy in CAMs is defined as the mapping π : S → ℘(A). Note that even though the mapping is defined independent of the termination scheme, the behavior of a multi-action policy depends on the termination scheme that is used in the model. To illustrate this, let <π, τ> (called a policy-termination construct) denote the process of executing the multi-action policy π using the termination scheme τ ∈ {Tany, Tall}. To simplify notation, we only use this form whenever we want to explicitly point out what termination scheme is being used for executing the policy π. For a given Markovian policy, we can write the value of that policy in an arbitrary state given the termination mechanism used in the model. Let ε(π, st, τ) denote the event of initiating the multi-action π(st) at time t and terminating it according to the τ ∈ {Tany, Tall} termination scheme. Also let π*τ denote the optimal multi-action policy within the space of policies over multi-actions that terminate according to the τ ∈ {Tany, Tall} termination scheme. To simplify notation, we may alternatively use *τ to denote optimality with respect to the τ termination scheme. Then the optimal value function can be written as:

V^*τ(st) = E{ rt+1 + γ rt+2 + ... + γ^(k-1) rt+k + γ^k max_{a ∈ A(st+k)} Q^*τ(st+k, a) | ε(π*τ, st, τ) }

where Q^*τ(st+k, a) denotes the multi-action value of executing a in state st+k (terminated using τ) and following the optimal policy π*τ thereafter.
The policy associated with the continue termination scheme is a history-dependent policy, since for a given state st, the continue policy will select a multi-action such that it includes the set of all the primary actions of the multi-action executed in the previous decision epoch that did not terminate naturally in the current state st (we refer to this set as the continue-set, represented by ht). The continue policy is defined as the mapping πcont : S × H → ℘(A), in which H is a set of continue-sets ht. Note that the value function definition for the continue policy should be defined over both the state st and the continue-set ht (represented by ⟨st, ht⟩), i.e., V^πcont(⟨st, ht⟩). Let the function A(st, ht) return the set of multi-actions that can be executed in state st and include the continuing primary actions in ht. Then the continue policy is formally defined as: πcont(⟨st, ht⟩) = argmax_{a ∈ A(st, ht)} Q^πcont(⟨st, ht⟩, a).

To illustrate this, assume that the current state is st and the multi-action at = {a1, a2, a3, a4} is executed in state st. Also, assume that the primary action a1 is the first action that terminates, after k steps, in state st+k. According to the definition of the continue termination scheme (which terminates based on Tany), the multi-action at is terminated at time t + k and we need to select a new multi-action to execute in state st+k (with the continue-set ht+k = {a2, a3, a4}). The continue policy will select the best multi-action at+k that includes the primary actions {a2, a3, a4}, since they did not terminate in state st+k (see Figure 1, right).
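The selection rule just described can be sketched directly (the names are ours; `q` stands for a learned table of multi-action values):

```python
def continue_policy(state, continue_set, multiactions, q):
    """Greedy continue policy: among the multi-actions executable in `state`
    that contain every still-running primary action in `continue_set`,
    pick the one with the highest Q-value."""
    candidates = [a for a in multiactions if continue_set <= a]
    return max(candidates, key=lambda a: q[(state, a)])
```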
3 Theoretical Results
In this section we present some of our theoretical results comparing the optimality of various policies under the different termination schemes introduced in the previous section. In all of these theorems we use the partial ordering relation π1 ≥ π2 ⟺ V^π1 ≥ V^π2 in order to compare different policies. For lack of space, we abbreviated the proofs. Note that in Theorems 1 and 3, which compare the continue policy with the π*any and π*all policies, the value function is written over the pair ⟨st, ht⟩ to be consistent with the definition of the continue policy. This does not influence the original definition of the value function for the optimal policies under the Tany and Tall termination schemes, since those are independent of the continue-set ht. First, we compare the optimal multi-action policies based on the Tany termination scheme with the continue policy.

Theorem 1: For every state st ∈ S, and every continue-set ht ∈ H,
V^πcont(⟨st, ht⟩) ≤ V^π*any(⟨st, ht⟩).

Proof: By writing the value function definition for each case we have:

V^πcont(⟨st, ht⟩) = max_{a ∈ A(st, ht)} Q^πcont(⟨st, ht⟩, a)
                  ≤ max_{a ∈ A(st)} Q^πcont(⟨st, ht⟩, a)
                  ≤ max_{a ∈ A(st)} Q^π*any(⟨st, ht⟩, a) = V^π*any(⟨st, ht⟩)

The first inequality holds since the maximization in πcont is over a smaller set (i.e., A(st, ht)), which is a subset of the larger set A(st) maximized over in the π*any case; the second follows from the optimality of π*any.
Next, we show that optimal plans with multi-actions that terminate according to the Tany termination scheme are at least as good as optimal plans with multi-actions that terminate according to the Tall termination scheme:

Theorem 2: For every state s ∈ S, V^π*all(s) ≤ V^π*any(s).

Proof: The proof is based on the following lemma, which states that if we alter the execution of the optimal multi-action policy based on Tall (i.e., π*all) in such a way that at every decision epoch the next multi-action is still selected by π*all, but is terminated based on Tany, then the new policy-termination construct, represented by <π*all, any>, is better than the π*all policy. Intuitively this makes sense: if we interrupt π*all(s) when the first primary action ai ∈ a = π*all(s) terminates in some future state s′, then, due to the optimality of π*all, executing π*all(s′) is always better than or equal to continuing some other policy such as the one in progress (i.e., π*all(s)). Note that the proof is not as simple as in the first theorem, since the two policies discussed here (i.e., π*any and π*all) are not executed using the same termination method.

Lemma 1: For every state s ∈ S, V^π*all(s) ≤ V^<π*all, any>(s).

Proof: Let V^π*all_{n,any}(s) denote the value of following the optimal π*all policy in state s, where for the first n decision epochs we use the Tany termination scheme and for the rest we use the Tall termination scheme. By induction on n, we can show that V^π*all(s) ≤ V^π*all_{n,any}(s), for all s ∈ S and for all n. This suggests that if we always terminate a multi-action π*all(st) according to the Tany termination scheme, we achieve a better return; or mathematically, V^π*all(s) ≤ lim_{n→∞} V^π*all_{n,any}(s) = V^<π*all, any>(s).

Using Lemma 1, and the optimality of π*any in the space of policies terminated according to Tany, it follows that V^π*all(s) ≤ V^<π*all, any>(s) ≤ V^π*any(s).
Next, we show that if we execute the continue policy, in which at every decision epoch we execute the best set of primary actions along with those executed in the previous decision epoch that have not yet terminated, we achieve a better return than in the case where we execute the best set of primary actions but always wait until all of the primary actions terminate before making a new decision:

Theorem 3: For every state st ∈ S, and every continue-set ht ∈ H,
V^π*all(⟨st, ht⟩) ≤ V^πcont(⟨st, ht⟩).

Proof: In π*all policies, multi-actions are executed until all of the primary actions of the multi-action terminate. The continue policy, however, may also initiate new useful primary actions in addition to those already running, which may achieve a better return. Let V^π*all_{n,cont}(⟨st, ht⟩) denote the value of the altered policy that works as follows: for a given state and continue-set ⟨st, ht⟩, the policy π*all is executed, while for the first n decision epochs we use the continue termination scheme (which means terminating according to Tany, and selecting the next multi-action according to the continue policy) and for the rest we use the Tall termination scheme. By induction on n, it can be shown that V^π*all(⟨st, ht⟩) ≤ V^π*all_{n,cont}(⟨st, ht⟩) for all n. This suggests that as we increase n, the altered policy behaves more like the continue policy, and thus in the limit we have V^π*all(⟨st, ht⟩) ≤ lim_{n→∞} V^π*all_{n,cont}(⟨st, ht⟩) = V^πcont(⟨st, ht⟩), which proves the theorem.
Finally, we show that the optimal multi-action policies based on the Tall termination scheme are at least as good as the case where the agent always executes a single primary action at a time, as in standard SMDPs. Note that this theorem does not state that concurrent plans are always better than sequential ones; it simply says that if, in a problem, the sequential execution of the primary actions is the best policy, CAM is able to represent and find that policy. Let π*seq represent the optimal policy in the sequential case, where only one primary action can be executed at a time:

Theorem 4: For every state s ∈ S, V^π*seq(s) ≤ V^π*all(s), in which V^π*seq(s) is the value of the optimal policy when the primary actions are executed one at a time, sequentially.

Proof: It suffices to show that sequential policies are within the space of concurrent policies. This holds since a single primary action can be considered a multi-action containing only one primary action, whose termination is consistent with either of the multi-action termination schemes (i.e., in the sequential case the Tany and Tall termination schemes coincide).
Corollary 1 summarizes our theoretical results. It shows how different policies in a concurrent action model using different termination schemes compare to each other in terms of optimality.

Corollary 1: In a concurrent action model with the set of termination schemes {Tany, Tall, continue}, the following partial ordering holds among the optimal policy based on Tany, the optimal policy based on Tall, the continue policy and the optimal sequential policy: π*seq ≤ π*all ≤ πcont ≤ π*any.

Proof: This follows immediately from the above theorems.
Figure 2 visually summarizes the results presented in Corollary 1. According to this figure, the optimal multi-action policies based on Tany and Tall, and also the continue multi-action policies, dominate (with respect to the partial ordering relation defined over policies) the optimal policies of the sequential case. Furthermore, policies based on continue multi-actions dominate the optimal multi-action policies based on the Tall termination scheme, while themselves being dominated by the optimal multi-action policies based on the Tany termination scheme.
[Figure 2 shows nested policy classes, from largest to smallest: multi-action policies using Tany ⊃ continue multi-action policies ⊃ multi-action policies using Tall ⊃ policies over sequential actions.]
Figure 2: Comparison of policies over multi-actions and sequential primary actions
using different termination schemes.
4 Experimental Results
In this section we present experimental results using a grid world task comparing
various termination schemes (see Figure 3). Each hallway connects two rooms, and
has a door with two locks. An agent has to retrieve two keys and hold both keys
at the same time in order to open both locks. The process of picking up keys is
modeled as a temporally extended action that takes a different amount of time for each key. Moreover, keys cannot be held indefinitely, since the agent may drop a key occasionally. The agent therefore needs to find an efficient solution for picking up the keys in parallel with navigation in order to act optimally. This is an episodic task, in which at the beginning of each episode the agent is placed in a fixed position (upper left corner) and its goal is to navigate to a fixed goal position (hallway H3).
Figure 3: A navigation problem that requires concurrent plans. There are two locks on each door, which need to be opened simultaneously, and retrieving each key takes a different amount of time. The agent starts in the upper-left corner and the goal is hallway H3; the rooms are connected by hallways H0-H3. The annotations in the figure list the agent's actions: 4 stochastic primitive navigation actions (Up, Down, Left and Right) that fail 10% of the time, in which case the agent moves randomly to one of the neighboring cells; 8 multi-step navigation actions (to each room's 2 hallways); one primitive no-op action; 3 stochastic primitive key actions (get-key, key-nop and putback-key); and 2 multi-step key actions (pickup-key), one for each key. Each key is dropped 30% of the time while it is held.
The agent can execute two types of action concurrently: (1) navigation actions, and
(2) key actions. Navigation actions include a set of one-step stochastic navigation
actions (Up, Left, Down and Right) that move the agent in the corresponding
direction with probability 0.9 and fail with probability 0.1. Upon failure the agent moves instead in one of the other three directions, each with probability 1/30. There
is also a set of temporally extended actions defined over the one step navigation
actions that transport the agent from within the room to one of the two hallway cells
leading out of the room (Figure 4 (left)). Key actions are defined to manipulate each
key (get-key, putback-key, pickup-key, etc). Among them pickup-key is a temporally
extended action (Figure 4 (right)). Note that each key has its own set of actions.
Figure 4: Left: the policy associated with one of the hallway temporally extended actions (the multi-step hallway action can be taken inside the room but not outside it, leading to the target hallway; whether the door is open or closed depends on both keys being ready). Right: representation of the key pickup actions for each key process: the primitive actions get-key, key-nop and putback-key, and the multi-step pickup-key action, move each key process through a chain of states ending in Key Ready, with a stochastic transition to Key Dropped.
In this example, navigation actions can be executed concurrently with key actions.
Actions that manipulate different keys can also be executed concurrently. However,
the agent is not allowed to execute more than one navigation action, or more than
one key action (from the same key action set) concurrently. In order to properly
handle concurrent execution of actions, we have used a factored state space defined
by state variables position (104 positions), key1-state (11 states) and key2-state (7
states).
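The factored state space and the concurrency constraints just described can be sketched as follows; the state-variable sizes come from the text, while the action lists and their composition into multi-actions are our own illustrative encoding:

```python
from itertools import product

# Factored state space: position x key1-state x key2-state (sizes from the text).
N_POSITIONS, N_KEY1, N_KEY2 = 104, 11, 7
n_states = N_POSITIONS * N_KEY1 * N_KEY2  # 104 * 11 * 7 = 8008 joint states

# At most one navigation action and one action per key may run concurrently,
# so a multi-action is one choice from each per-resource action set.
NAV_ACTIONS = ["Up", "Down", "Left", "Right"] + [f"to-hallway-{i}" for i in range(8)]
KEY1_ACTIONS = ["get-key", "key-nop", "putback-key", "pickup-key"]
KEY2_ACTIONS = ["get-key", "key-nop", "putback-key", "pickup-key"]

# The space of multi-actions is the cross-product of the per-resource choices.
multi_actions = list(product(NAV_ACTIONS, KEY1_ACTIONS, KEY2_ACTIONS))

print(n_states)            # 8008
print(len(multi_actions))  # 12 * 4 * 4 = 192
```

This makes concrete why concurrency enlarges the decision space: 192 multi-actions versus 20 primary actions executed one at a time.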
In our previous work we showed that concurrent actions formed an SMDP over primitive actions [5], which turns out to hold for all the termination schemes described above. Thus, we can use SMDP Q-learning to compare concurrent policies over different termination schemes with the use of this method for purely sequential policy learning [7]. After each decision epoch in which the multi-action a, taken in some state s, terminates in state s', the following update rule is used:

Q(s, a) ← Q(s, a) + α [ r + γ^k max_{a' ∈ A(s')} Q(s', a') − Q(s, a) ] ,

where k denotes the number of time steps between the initiation of the multi-action a at state s and its termination at state s', and r denotes the cumulative discounted reward over this period.
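A minimal sketch of this SMDP Q-learning backup; the tabular representation, function names and parameter values are our assumptions, not the paper's:

```python
from collections import defaultdict

def smdp_q_update(Q, s, a, r, s_next, k, actions_next, alpha=0.1, gamma=0.95):
    """One SMDP Q-learning backup: the multi-action a, initiated in s,
    terminated in s_next after k primitive steps, earning cumulative
    discounted reward r along the way."""
    best_next = max(Q[(s_next, a2)] for a2 in actions_next) if actions_next else 0.0
    Q[(s, a)] += alpha * (r + gamma ** k * best_next - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)
# Example backup: a multi-action that ran 3 steps, earning -1 per primitive step.
smdp_q_update(Q, s=0, a="nav+pickup", r=-3.0, s_next=1, k=3,
              actions_next=["nav+pickup", "noop"])
```

Note that the discount is raised to the power k, which is exactly what distinguishes the SMDP backup from the one-step Q-learning rule.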
The agent is punished by −1 for each primitive action. Figure 5 (left) compares the number of primitive actions taken until success, and Figure 5 (right) shows the median number of decision epochs per trial, where for trial n it is the median over all trials from 1 to n. These data are averaged over 10 episodes, each consisting of 500,000 trials. As shown in Figure 5 (left), concurrent actions under any termination scheme yield a faster plan than sequential execution. Moreover, the policies learned based on T_any (i.e., both π*_any and π_cont) are also faster than those based on T_all. Also, π*_any achieves higher optimality than π_cont, although the difference is small.
We conjecture that sequential execution and T_all converge faster than T_any due to the frequency with which multi-actions are terminated. As shown in Figure 5 (right), T_all makes fewer decisions than T_any. This is intuitive since T_all terminates only when all of the primary actions in a multi-action are completed, and hence it involves less interruption than learning based on T_any. Note that π_cont converges faster than π*_any and is nearly as good as T_any. We can think of
Figure 5: Left: moving median of the number of steps to the goal. Right: moving median of the number of multi-action-level decision epochs taken to the goal. (Both panels plot, against trial number up to 500,000, curves for sequential actions, optimal concurrent actions with T_all, optimal concurrent actions with T_any, and continue concurrent actions.)
π_cont as a blend of T_all and T_any. Even though it uses the T_any termination scheme, it continues executing the primary actions that did not terminate naturally when the first primary action terminates, making it similar to T_all.
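The three termination schemes can be sketched as simple predicates over which primary actions within a multi-action have finished (our own illustrative encoding):

```python
def t_any(done):
    # T_any: the multi-action ends as soon as any primary action finishes.
    return any(done)

def t_all(done):
    # T_all: the multi-action ends only when every primary action has finished.
    return all(done)

def continue_carryover(multi_action, done):
    # continue: a new decision is made when the first primary action terminates
    # (as in T_any), but the still-running primary actions are carried over
    # into the next multi-action instead of being interrupted.
    return [a for a, d in zip(multi_action, done) if not d]

ma = ["navigate-to-H0", "pickup-key-1"]
done = [False, True]                  # the key action finished first
print(t_any(done), t_all(done))       # True False
print(continue_carryover(ma, done))   # ['navigate-to-H0']
```

The carry-over list is what makes π_cont behave like the blend described above: decision epochs follow T_any, but unfinished actions persist as under T_all.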
5 Future Work
Even though specifying the A(s) set of applicable multi-actions might significantly
reduce the set of choices, we still may need additional mechanisms for efficiently
searching the space of multi-actions that can run in parallel. Also, we can additionally exploit the hierarchical structure of multi-actions to compile them into an
effective policy over primary actions. These are some of the practical issues that we
will investigate in future work.
References
[1] Craig Boutilier and Ronen Brafman. Planning with concurrent interacting actions. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI '97), 1997.
[2] P. Cichosz. Learning multidimensional control actions from delayed reinforcements. In Eighth International Symposium on System-Modelling-Control (SMC-8), Zakopane, Poland, 1995.
[3] C. A. Knoblock. Generating parallel execution plans with a partial-order planner. In Proceedings of the Second International Conference on Artificial Intelligence Planning Systems, Chicago, IL, 1994.
[4] Ray Reiter. Natural actions, concurrency and continuous time in the situation calculus. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR '96), Cambridge, MA, November 5-8, 1996.
[5] Khashayar Rohanimanesh and Sridhar Mahadevan. Decision-theoretic planning with concurrent
temporally extended actions. In Proceedings of the 17th Conference on Uncertainty in Artificial
Intelligence, 2001.
[6] S. Singh and David Cohn. How to dynamically merge Markov decision processes. In Proceedings of NIPS 11, 1998.
[7] R. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
[8] Glynn Winskel. Topics in Concurrency, Part II. Computer Science lecture notes, University of Cambridge, 2002.
Spikernels:
Embedding Spiking Neurons
in Inner-Product Spaces
Lavi Shpigelman Yoram Singer Rony Paz Eilon Vaadia
School of computer Science and Engineering
Interdisciplinary Center for Neural Computation
Dept. of Physiology, Hadassah Medical School
The Hebrew University Jerusalem, 91904, Israel
{shpigi,singer}@cs.huji.ac.il
{ronyp,eilon}@hbf.huji.ac.il
Abstract
Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of
this paper is the construction of biologically-motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences
into an abstract vector space in which we can perform various prediction tasks.
We discuss in detail the derivation of Spikernels and describe an efficient algorithm for computing their value on any two sequences of neural population
spike counts. We demonstrate the merits of our modeling approach using the
Spikernel and various standard kernels for the task of predicting hand movement velocities from cortical recordings. In all of our experiments, all the kernels we tested outperform the standard scalar product used in regression, with the Spikernel consistently achieving the best performance.
1 Introduction
Neuronal activity in primary motor cortex (MI) during multi-joint arm reaching movements in 2D and 3-D [1, 2] and drawing movements [3] has been used extensively as a test bed for gaining
understanding of neural computations in the brain. Most approaches assume that information is
coded by firing rates, measured on various time scales. The tuning curve approach models the
average firing rate of a cortical unit as a function of some external variable, like the frequency
of an auditory stimulus or the direction of a planned movement. Many studies of motor cortical
areas [4, 2, 5, 3, 6] showed that while single units are broadly tuned to movement direction,
a relatively small population of cells (tens to hundreds) carries enough information to allow
for accurate prediction. Such broad tuning can be found in many parts of the nervous system,
suggesting that computation by distributed populations of cells is a general cortical feature. The
population-vector method [4, 2] describes each cell?s firing rate as the dot product between that
cell?s preferred direction and the direction of hand movement. The vector sum of preferred
directions, weighted by the measured firing rates is used both as a way of understanding what
the cortical units encode and as a means for estimating the velocity vector.
Several recent studies [7, 8, 9] propose that neurons can represent or process multiple parameters
simultaneously, suggesting that it is the dynamic organization of the activity in neuronal populations that may represent temporal properties of behavior such as the computation of transformation from ?desired action? in external coordinates to muscle activation patterns. Some studies
[10, 11, 12] support the notion that neurons can associate and dissociate rapidly to functional
groups in the process of performing a computational task. The concepts of simultaneous encoding of multiple parameters and dynamic representation in neuronal populations, could together
explain some of the conundrums in motor system physiology. These concepts also invite usage
of increasingly complex models for relating neural activity to behavior. Advances in computing power and recent developments of physiological recording methods allow recording of ever
growing numbers of cortical units that can be used for real-time analysis and modeling. These
developments and new understandings have recently been used to reconstruct movements on the
basis of neuronal activity in real time, in an effort to facilitate the development of hybrid brain-machine interfaces that allow interaction between living brain tissue and artificial electronic or
mechanical devices to produce brain controlled movements [13, 6, 14, 15, 11, 16, 17]. Current attempts at predicting movement from cortical activity rely on modeling techniques such as
cosine-tuning estimation (pop. vector) [18], linear regression [15, 19] and artificial neural nets
[15] (though this study reports getting better results by linear regression). A major deficiency
of standard approaches is poor ability to extract the relevant information from monitored brain
activity in an efficient manner that will allow reducing the number of recorded channels and
recording time.
The paper is organized as follows. In Sec. 2 we describe the problem setting that this paper
is concerned with. In Sec. 3 we introduce and explain the main mathematical tool that we
use, namely, the kernel operator. In Sec. 4 we discuss the design and implementation of a
biologically-motivated kernel for neural activities. We report experimental results in Sec. 5 and
give conclusions in Sec. 6.
2 Problem setting
Consider the case where we monitor instantaneous spike rates from cortical units during physical motor behavior of a subject. Our goal is to learn a predictive model of some behavior
parameter with the cortical activity as the input. Formally speaking, let s be a sequence of instantaneous firing rates from q cortical units, consisting of T samples altogether. We use s, v to denote sequences of firing rates and denote by |s| the length of a sequence s. Let s_i be the i-th sample (i.e., the vector of instantaneous firing rates) of a sequence s. We also use s·t to denote the concatenation of s with one more sample t. We refer to the instantaneous firing rate of a unit j by s^j. We also need to employ a notation for sub-sequences: the t-long prefix of s is denoted s_{1:t}. Finally, throughout the work we need to examine sub-strings of sequences. We denote by i a vector of indices into the sequence s, where i = (i_1, ..., i_n) and 1 ≤ i_1 < i_2 < ... < i_n ≤ |s|.
We also need to introduce some notation for the target variables we would like to predict. Let y_t denote some parameter of the movement that we would like to predict (e.g., the movement velocity in the x direction, v_x). Our goal is to learn an approximation ŷ of the form f : s → y from neural firing rates to the movement parameter. In general, information about a movement can be found in neural activity both before and after the time of the movement itself. Our plan, though, is to design a model that can be used for controlling a neural prosthesis. We therefore confine ourselves to causal predictors that use s_{1:t} to predict y_t, and we would like to make ŷ_t = f(s_{1:t}) as close as possible (in a sense that is explained in the sequel) to y_t.
3 Kernel methods for regression
A major mathematical notion employed in this paper is kernel operators. Kernel operators allow algorithms whose interface to the data is limited to scalar products to employ complicated
premappings of the data into feature spaces by use of kernels. Formally, a kernel is an inner-product operator K : X × X → R, where X is some arbitrary vector space. An explicit way to describe K is via a mapping φ : X → H from X to an inner-product space H such that K(x, x') = ⟨φ(x), φ(x')⟩. Given a kernel operator we can use it to perform various statistical
learning tasks. One such task is support vector regression (SVR) [20] which attempts to find a
regression function for target values that is linear if observed in the (typically very large) feature
space mapped by the kernel. We give here a brief description of SVR for the sake of clarity.
Support Vector Regression minimizes Vapnik's [21] ε-insensitive loss function

|y − f(x)|_ε = max(0, |y − f(x)| − ε),

which defines a tube of width ε around the estimate. Examples that fall within its boundaries are considered well estimated and do not contribute to the error. Examples outside the tube contribute linearly to the loss. Say φ(x) is the feature vector implemented by kernel K(x, x'). To estimate a regression f(x) = ⟨w, φ(x)⟩ + b (linear in feature space) with precision ε, one minimizes

(1/2)‖w‖² + C Σ_{i=1}^{m} |y_i − f(x_i)|_ε .
This can be written as the constrained minimization problem

minimize    (1/2)‖w‖² + C Σ_{i=1}^{m} (ξ_i + ξ_i*)
subject to  y_i − ⟨w, φ(x_i)⟩ − b ≤ ε + ξ_i ,
            ⟨w, φ(x_i)⟩ + b − y_i ≤ ε + ξ_i* ,
            ξ_i , ξ_i* ≥ 0 ,   i = 1, ..., m.
By switching to the dual of this optimization problem, it is possible to incorporate the kernel function, achieving a mapping that may not be feasible by explicitly calculating the (possibly infinite) feature vectors φ(x). For C and ε chosen a priori, the dual problem is

maximize    −ε Σ_{i=1}^{m} (α_i + α_i*) + Σ_{i=1}^{m} (α_i − α_i*) y_i
            − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} (α_i − α_i*)(α_j − α_j*) K(x_i, x_j)
subject to  α_i, α_i* ∈ [0, C] for i = 1, ..., m,  and  Σ_{i=1}^{m} (α_i − α_i*) = 0.
The solution of the regression estimate takes the form

f(x) = Σ_{i=1}^{m} (α_i − α_i*) K(x_i, x) + b.

In summary, SVM regression solves a quadratic optimization problem to find a hyperplane in the kernel-induced feature space that best estimates the data under an ε-insensitive linear loss function.
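As a sketch of how such a kernel plugs into an off-the-shelf SVR solver, scikit-learn's SVR accepts a precomputed Gram matrix; the linear Gram and toy data below are stand-ins for any kernel (the paper itself predates scikit-learn):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))              # stand-in inputs (rows = examples)
y = X[:, 0] + 0.1 * rng.normal(size=40)   # stand-in targets

def gram(A, B):
    # Any kernel K(x, x') can be substituted here; a plain scalar product
    # is used purely for illustration.
    return A @ B.T

model = SVR(kernel="precomputed", C=1.0, epsilon=0.1)
model.fit(gram(X, X), y)                  # train on the m x m Gram matrix
y_hat = model.predict(gram(X, X))         # predict via K(test, train)
print(y_hat.shape)                        # (40,)
```

The point of the precomputed-kernel interface is exactly the one made above: the learner only ever sees inner products, so an arbitrarily rich feature map can be used without materializing φ(x).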
4 Spikernels
The quality of SVM learning is highly dependent on how the data is embedded in the feature
space via the kernel operator. For this reason, several studies have been devoted lately to developing new kernels [22, 23, 24]. In fact, for classification problems, a good kernel would render
the work of the classification algorithm trivial. With this in mind, we develop a kernel for neural
spiking activity.
4.1 Motivation
Our goal in developing a kernel for spike trains is to map similar patterns to nearby areas of the
feature space. Current methods for predicting response variables from neural activities use standard linear regression techniques (see for instance [15]) or even replace the time pattern with
mean firing rates. A notable example is the population vector method [18]. Other approaches use
off-the-shelf learning algorithms, intended for general purpose. In the description of our kernel
we attempt to capture some well accepted notions on similarities between spike trains. We make
the following assumptions regarding similarities between spike patterns:
Figure 1: Illustrative examples of pattern similarities (each panel plots the firing rate of a pattern A and a pattern B against time). Left: bin-by-bin comparison yields small differences. Middle: patterns with large bin-by-bin differences that can be eliminated with some time warping. Right: patterns whose suffix (the time of interest) is similar and whose prefix is different.
The most commonly made assumption is that similar firing patterns may have small differences
in a bin-by-bin comparison. This type of variation is due to inherent noise of any physical system
but also responses to external factors that were not recorded and are not directly related the to
the task performed. On the left-hand side of Fig. 1 we show an example of two patterns that are
bin-wise similar though clearly not identical.
A cortical population may display highly specific patterns to represent specific information. It
is conceivable that some features of external stimuli are represented by population dynamics that
would be best described as ?temporal? coding.
Two patterns may be quite different in a simple bin-wise comparison but if they are aligned
by some non-linear time distortion or shifting, the similarity becomes apparent. An illustration
of such patterns is given in the middle plots of Fig. 1. In comparing patterns we would like to
induce a higher score when the time-shifts are small.
Patterns that are associated with identical values of an external stimulus at time t may be similar at that time but very different at earlier times, when the values of the external stimulus for these patterns are no longer similar (as illustrated on the right-hand side of Fig. 1).
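The middle-panel situation can be illustrated numerically: a one-bin shift produces a large bin-wise squared distance even though the patterns are identical up to time alignment (the toy rate profile and the shift are our own):

```python
import numpy as np

rate_a = np.array([0, 1, 4, 9, 4, 1, 0, 0], dtype=float)  # a firing-rate "bump"
rate_b = np.roll(rate_a, 1)                                # same bump, one bin later

binwise = np.sum((rate_a - rate_b) ** 2)                # large: bins misaligned
aligned = np.sum((rate_a - np.roll(rate_b, -1)) ** 2)   # zero after realignment

print(binwise, aligned)  # 70.0 0.0
```

A kernel that tolerates small shifts should score these two patterns as near-identical, which is precisely what the gapped sub-sequence construction below achieves.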
4.2 Kernel definition
We describe the kernel by specifying the features that make up the feature space. Our construction of the feature space builds on the work of Lodhi et al. [24]. First, we need to introduce a few
more notations. Let u be a sequence of length n. The set of all possible n-long index vectors defining a sub-sequence of s is I_n(s) = { i = (i_1, ..., i_n) : 1 ≤ i_1 < ... < i_n ≤ |s| }. Let d(t, r) denote a bin-wise distance over a pair of samples (firing rates). We also overload notation and denote by d(s_i, u) = Σ_{k=1}^{n} d(s_{i_k}, u_k) a distance between sequences: the sequence distance is the sum over the samples constituting the two sequences. Let λ, μ ∈ (0, 1). The u component of our (infinite) feature vector φ^n is defined as

φ^n_u(s) = (1/C) Σ_{i ∈ I_n(s)} μ^{d(s_i, u)} λ^{|s| − i_1 + 1} ,    (1)

where C is a normalization constant that simplifies the calculation and i_1 is the first index of i. In words, φ^n_u(s) is a sum over all n-long sub-sequences of s. Each sub-sequence s_i is compared to u (the feature coordinate) and is weighted according to its similarity to u. In particular, part of the weight of each sub-sequence of s reflects how concentrated the sub-sequence is toward the end of s. Put another way, the entry indexed by u measures how close u is to the time series s near its end.
This definition seems to fit our assumptions on neural coding for the following reasons:
It allows for complex patterns: small values of λ and μ (i.e., concentrated measures) mean that each coordinate tends toward being either 1 or 0, depending on whether u is almost identical to a suffix of s or not.
Patterns that are piece-wise similar to u contribute to the feature coordinate with a weight that decays as the sample-by-sample difference between the sequences grows large.
We allow gaps in the indexes defining sub-sequences, thus allowing for time warping.
Patterns that begin further from the required prediction time are penalized by an exponentially decaying weight.
4.3 Efficient kernel calculation

The definition of φ given by Eq. (1) requires the manipulation of an infinite feature space. Straightforward calculation of the feature values and of the induced inner-product is clearly impossible. Based on ideas from [24] we developed an indirect method for evaluating the kernel through a recursion which can be performed efficiently using dynamic programming. We now describe the recursion.

Denote by t the last sample of the sequence s·t. We now describe two recursive equations for φ, with respect to the length of the time series and the sub-sequence length. Due to the lack of space we skip some of the algebraic manipulations that are needed to derive the recursions. The first equation is

φ^n_u(s·t) = λ φ^n_u(s) + λ μ^{d(t, u_n)} φ^{n−1}_{u_{1:n−1}}(s).    (2)

Eq. (2) simply separates the sum over sub-sequences of s·t into two subsets: one where the last sample t is not specified by the index vectors, and the latter where i_n specifies t. The second recursive equation for φ is, again, with respect to both the length of the sub-sequence (n) and the length of the sequence:

φ^n_u(s) = Σ_{j=n}^{|s|} λ^{|s|−j+1} μ^{d(s_j, u_n)} φ^{n−1}_{u_{1:n−1}}(s_{1:j−1}).    (3)

The last equation simply states that the feature is a sum over all possible values of the last index i_n. Note that for |s| < n, I_n(s) is empty and φ^n_u(s) = 0. Eqs. (2) and (3) are now used for computing the recursion equation for K_n(s·t, v) = ⟨φ^n(s·t), φ^n(v)⟩. We plug Eq. (2) into φ^n(s·t) and Eq. (3) into φ^n(v). Using algebraic manipulations we replace integrals over scalar products of φ by the proper kernels and get the following recursive function:

K_n(s·t, v) = λ K_n(s, v) + Σ_{j=n}^{|v|} λ^{|v|−j+2} K_{n−1}(s, v_{1:j−1}) ∫ μ^{d(t,u) + d(v_j,u)} du ,    (4)

with the initial conditions K_0(s, v) = 1 and K_n(s, v) = 0 if min(|s|, |v|) < n. Assuming that the computation of the integral in Eq. (4) takes constant time, computing the entire recursion requires O(n |s| |v|²) time. We can achieve a speed-up by a factor of |v| if we cache the second term on the right-hand side of Eq. (4) as follows. Define

K′_n(s·t, v) = Σ_{j=n}^{|v|} λ^{|v|−j+2} K_{n−1}(s, v_{1:j−1}) ∫ μ^{d(t,u) + d(v_j,u)} du .    (5)

Separating the above sum into its two parts (one for j = |v| and one for the rest), and using the definition of K′ from Eq. (5), we get the following recursive equation for K′:

K′_n(s·t, v·r) = λ K′_n(s·t, v) + λ² K_{n−1}(s, v) ∫ μ^{d(t,u) + d(r,u)} du ,    (6)

with the initial conditions K′_n(s·t, v) = 0 if min(|s·t|, |v|) < n. Finally, the recursive equation for K is

K_n(s·t, v) = λ K_n(s, v) + K′_n(s·t, v),

yielding an O(n |s| |v|) dynamic-programming solution for K_n(s, v).
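For intuition about this kind of suffix/prefix dynamic program, here is the analogous recursion for the discrete string-subsequence kernel of Lodhi et al. [24], on which the Spikernel construction builds. This is the classic kernel over character sequences (exact symbol matches with a gap penalty λ), not the Spikernel itself:

```python
from functools import lru_cache

def ssk(s, t, n, lam=0.5):
    """Order-n string-subsequence kernel (Lodhi et al.) with gap penalty lam."""
    @lru_cache(maxsize=None)
    def Kp(i, ls, lt):
        # K'_i evaluated on the prefixes s[:ls], t[:lt]
        if i == 0:
            return 1.0
        if min(ls, lt) < i:
            return 0.0
        x = s[ls - 1]
        total = lam * Kp(i, ls - 1, lt)       # last symbol of s unused
        for j in range(1, lt + 1):            # last symbol matched at t[j-1]
            if t[j - 1] == x:
                total += Kp(i - 1, ls - 1, j - 1) * lam ** (lt - j + 2)
        return total

    @lru_cache(maxsize=None)
    def K(ls, lt):
        if min(ls, lt) < n:
            return 0.0
        x = s[ls - 1]
        total = K(ls - 1, lt)
        for j in range(1, lt + 1):
            if t[j - 1] == x:
                total += Kp(n - 1, ls - 1, j - 1) * lam ** 2
        return total

    return K(len(s), len(t))

print(ssk("ab", "ab", 2))  # 0.0625
```

For order n = 2 and s = t = "ab", the only common subsequence "ab" carries weight λ² in each string, so the kernel value is λ⁴ = 0.0625 for λ = 0.5. The Spikernel replaces the exact-match test with the soft similarity μ^{d(·,·)} and adds the suffix-favoring decay.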
4.4 Spikernel variants.
The kernel defined by Eq. (1) considers only patterns of a fixed length n. It makes sense to look at sub-sequences of various lengths. Since a linear combination of kernels is also a kernel, we can define our kernel to be

K(s, v) = Σ_{i=1}^{n} c_i K_i(s, v).

The kernel summation can be interpreted as a concatenation of the feature vectors that these kernels represent; weighted summation is concatenation of the feature vectors after first multiplying them by the square roots of the weights.

Different choices of the bin-wise distance d result in kernels that differ in the way two rate values are compared. Say we assign d to be the squared norm, d(t, r) = Σ_{j=1}^{q} (t^j − r^j)². The integral in the kernel recursion Eq. (6) then becomes

∫ μ^{d(t,u) + d(r,u)} du = c_μ μ^{d(t,r)/2} .

Note that the constant c_μ goes to infinity as μ goes to 1, which has a multiplicative gain effect on K. This gain results in a kernel whose computation is numerically unstable. However, we can easily cancel it with the normalization constant C. Substituting this result back into the recursion we get

K′_n(s·t, v·r) = λ K′_n(s·t, v) + λ² K_{n−1}(s, v) μ^{d(t,r)/2} .

We show results for the squared norm.

5 Experimental results
Data collection: The data used in this work was recorded from the primary motor cortex of a
rhesus (Macaca mulatta) monkey (~4.5 kg). The animal's care and surgical procedures accorded
with The NIH Guide for the Care and Use of Laboratory Animals (rev. 1996) and with the
Hebrew University guidelines supervised by the institutional committee for animal care and use.
The monkey sat in a dark chamber, and 8 electrodes were introduced into each hemisphere.
The electrode signals were amplified, filtered and sorted (MCP-PLUS, MSD, Alpha-Omega,
Nazareth, Israel). The data used in this report includes 31 single units and 16 multi-unit channels
(MUA) that were recorded in one session by 16 microelectrodes. The monkey used two planar-movement manipulanda to control 2 cursors (X and + shapes) on the screen to perform a center-out task. Each trial began when the monkey centered both cursors on a central circle for 1.0-1.5 s.
Either cursor could turn green, indicating the hand to be used in the trial (X for right arm and
+ for the left). Then, (after an additional hold period of 1.0-1.5s) one of eight targets appeared
at a distance of 4 cm from the origin and monkey had to move and reach the target in less than
2s to receive liquid reward. At the end of each session, we examined the activity of neurons
evoked by passive manipulation of the limbs and applied intracortical microstimulation (ICMS)
to evoke movements. The data presented here was recorded in penetration sites where ICMS
evoked shoulder and elbow movements. Penetration locations were verified by MRI (Biospec
Bruker 4.7 Tesla).
Data preprocessing and modeling: The movements and spike data were preprocessed to create a labeled corpus. We used only the data from trials on which the monkey succeeded in the
movement task and examined only the right hand movements. We partitioned the movement and
spike trains into 200 ms-long bins to get the spike counts and average hand movement velocities
in each segment. We then normalized the spike counts to achieve a zero mean and a unit variance
for each cortical unit. A labeled example for time t consisted of the v_x or v_y velocity as
the target label and the preceding 2 seconds (i.e. 10 segments) of spike counts from all cortical
units as the input sequence. In our experiments the number of cortical units was 47 (31 single units plus 16 MUA channels); hence
the matrix of spike counts is of size 47 x 10.
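The binning, per-unit normalization, and windowing described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, bin length, and history length are assumptions chosen to match the description (200 ms bins, 10-segment history).

```python
import numpy as np

def build_examples(spikes, velocity, bin_len=10, hist_bins=10):
    """Bin spike trains, z-score each unit, and pair each time step with the
    preceding window of population activity (hypothetical sketch; parameter
    values are illustrative)."""
    # spikes: (n_units, n_samples) array of 0/1 spike indicators
    n_units, n_samples = spikes.shape
    n_bins = n_samples // bin_len
    counts = spikes[:, :n_bins * bin_len].reshape(n_units, n_bins, bin_len).sum(axis=2)
    vel = velocity[:n_bins * bin_len].reshape(n_bins, bin_len).mean(axis=1)
    # normalize each cortical unit to zero mean and unit variance
    counts = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)
    X, y = [], []
    for t in range(hist_bins, n_bins):
        X.append(counts[:, t - hist_bins:t])  # (n_units, hist_bins) pattern
        y.append(vel[t])
    return np.array(X), np.array(y)
```

Each example is therefore a small matrix of recent population activity, matching the "matrix of spike counts" representation that the kernels below operate on.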
Each kernel employs a few parameters, and the SVM regression setup requires setting
two more parameters, C and epsilon. Therefore, the learning task is performed in two stages. First,
we used cross-validation to choose the best parameters using a validation set. Then, we learned to
predict the response variable using SVR. Overall we had several minutes of clean cortical recordings,
of which we used the first 2 minutes as our validation set for tuning the parameters. The second
half was used for training and testing.
and testing. The kernels that we tested are the exponential kernel
(GQ *
, the homogeneous polynomial kernel (GQ *, W , * ), the
standard scalar product kernel (GQ#* W ) which boils down to a linear regression, and the
Spikernel.
4 +3- 4 1
)
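For concreteness, the three baseline Gram matrices can be computed directly from the flattened spike-count patterns. This is a minimal sketch with illustrative (untuned) parameter values:

```python
import numpy as np

def standard_kernels(S, gamma=1e-6, degree=2):
    """Gram matrices for the three baseline kernels compared against the
    Spikernel; S holds one flattened spike-count pattern per row."""
    lin = S @ S.T                              # scalar product (linear regression)
    poly = lin ** degree                       # homogeneous polynomial (s.t)^d
    sq = np.sum(S ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * lin   # squared Euclidean distances
    expk = np.exp(-gamma * d2)                 # exponential (RBF-type) kernel
    return lin, poly, expk
```

All three are symmetric positive-semidefinite matrices that can be handed directly to an SVR with a precomputed kernel.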
Accuracy results were obtained by performing 5-fold cross-validation for each kernel. The 5
folds were produced by randomly splitting the data into 5 groups: four of the groups were
used for training and the rest of the data was used for evaluation. This process was repeated
5 times, using each fifth of the data once as the test set. We computed the correlation coefficient
per fold for each kernel. The per-fold results are shown in Fig. 2A as a scatter plot. Each point
compares the Spikernel score versus one of the adversaries. The Spikernel out-performed the
rest in every single test set. We found that predicting the v_y signal was more difficult than
predicting the v_x signal. This may be the result of sampling a population of cortical units that
are tuned more to the left-right directions. The mean results are summarized in Fig. 2B. The
linear regression method (scalar-product kernel) came in last. It seems that both re-mapping
the data by standard kernels and by the Spikernel allow for better prediction models. The ordering of the kernels by their mean score is consistent when looking at per-test results, except
for the exponential kernel, which is out-performed by linear regression in some of the tests.
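The evaluation protocol (epsilon-SVR on a precomputed Gram matrix, 5-fold cross-validation, correlation coefficient per fold) can be sketched with scikit-learn. The function name and the C and epsilon values here are illustrative assumptions, not the tuned values from the experiments:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold

def cv_correlation(K, y, C=1.0, eps=0.1, n_folds=5, seed=0):
    """Mean per-fold correlation coefficient for epsilon-SVR with a
    precomputed Gram matrix K (illustrative evaluation sketch)."""
    folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    scores = []
    for train, test in folds.split(y):
        model = SVR(kernel="precomputed", C=C, epsilon=eps)
        model.fit(K[np.ix_(train, train)], y[train])
        pred = model.predict(K[np.ix_(test, train)])
        scores.append(np.corrcoef(pred, y[test])[0, 1])
    return float(np.mean(scores))
```

Any of the Gram matrices above (or a Spikernel Gram matrix) can be plugged in as K, so the comparison between kernels uses exactly the same regression machinery.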
Figure 2: The Spikernel is compared to (color & shape coded) standard kernels.
A - Scatter plot of correlation coefficient results in all cross-validation folds; each point compares the Spikernel score versus one of the standard embeddings. The Spikernel out-performs in all folds.
B - Mean correlation coefficient values for each kernel type:

  Kernel                   Mean r (vx)   Mean r (vy)   Parameters
  Spikernel                0.70          0.49          lambda=0.99, mu=0.7, N=5, C=0.01
  (s.t)^2                  0.62          0.36          C=10
  (s.t)^3                  0.56          0.29          C=10
  exp(-gamma||s-t||^2)     0.47          0.25          gamma=1e-6, C=1
  Lin. (s.t)               0.44          0.21          C=0.01
Summary
In this paper we described an approach based on recent advances in kernel-based learning for
predicting response variables from neural activities. On the data we collected, all the kernels we
devised outperform the standard scalar product that is used in linear regression. Furthermore,
the Spikernel, a biologically motivated kernel operator for spike counts, outperforms all the other
kernels. Our current research is focused on two directions. First, we are investigating the adaptation of the Spikernel to other neural activities such as Local Field Potentials (LFP). Our second
and more challenging goal is to devise statistical learning algorithms that use the Spikernel as
part of a dynamical system that may incorporate bio-feedback. We believe that such extensions
are important and necessary steps toward operational neural prostheses.
Acknowledgments: Supported in part by the German-Israeli-Foundation for Scientific Research and Development (GIF) and by the German-Israeli Project Cooperation (DIP) established
by BMBF.
Martin J. Wainwright,
Department of EECS,
UC Berkeley,
Berkeley, CA 94720
[email protected]
Tommi S. Jaakkola and Alan S. Willsky,
Department of EECS,
Massachusetts Institute of Technology,
Cambridge, MA, 02139
{tommi,willsky}@mit.edu
Abstract
We describe a method for computing provably exact maximum a posteriori (MAP) estimates for a subclass of problems on graphs with cycles.
The basic idea is to represent the original problem on the graph with cycles as a convex combination of tree-structured problems. A convexity
argument then guarantees that the optimal value of the original problem
(i.e., the log probability of the MAP assignment) is upper bounded by the
combined optimal values of the tree problems. We prove that this upper
bound is met with equality if and only if the tree problems share an optimal configuration in common. An important implication is that any such
shared configuration must also be the MAP configuration for the original
problem. Next we develop a tree-reweighted max-product algorithm for
attempting to find convex combinations of tree-structured problems that
share a common optimum. We give necessary and sufficient conditions
for a fixed point to yield the exact MAP estimate. An attractive feature
of our analysis is that it generalizes naturally to convex combinations of
hypertree-structured distributions.
1 Introduction
Integer programming problems arise in various fields, including machine learning, statistical physics, communication theory, and error-correcting coding. In many cases, such
problems can be formulated in terms of undirected graphical models [e.g., 1], in which the
cost function corresponds to a graph-structured probability distribution, and the problem of
interest is to find the maximum a posteriori (MAP) configuration.
In previous work [2], we have shown how to use convex combinations of tree-structured
distributions in order to upper bound the log partition function. In this paper, we apply
similar ideas to upper bound the log probability of the MAP configuration. As we show,
this upper bound is met with equality whenever there is a configuration that is optimal
for all trees, in which case it must also be a MAP configuration for the original problem.
The work described here also makes connections with the max-product algorithm [e.g.,
3, 4, 5], a well-known method for attempting to compute the MAP configuration, one
which is exact for trees but approximate for graphs with cycles. In the context of coding
problems, Frey and Koetter [4] developed an attenuated version of max-product, which is
guaranteed to find the MAP codeword if it converges. One contribution of this paper is
to develop a tree-reweighted max-product algorithm that attempts to find a collection of
tree-structured problems that share a common optimum. This algorithm, though similar to
both the standard and attenuated max-product updates [4], differs in key ways.
The remainder of this paper is organized as follows. The next two subsections provide
background on exponential families and convex combinations. In Section 2, we introduce
the basic form of the upper bounds on the log probability of the MAP assignment, and then
develop necessary and sufficient conditions for it to be tight (i.e., met with equality). In Section 3, we develop tree-reweighted max-product algorithms for attempting to find a convex
combination of trees that yields a tight bound. We prove that for positive compatibility
functions, the algorithm always has at least one fixed point; moreover, if a key uniqueness
condition is satisfied, the configuration specified by a fixed point must be MAP optimal.
We also illustrate how the algorithm, like the standard max-product algorithm [5], can fail if
the uniqueness condition is not satisfied. We conclude in Section 4 with pointers to related
work, and extensions of the current work.
1.1 Notation and set-up
Consider an undirected (simple) graph G = (V, E). For each vertex s in V, let x_s be a
random variable taking values in the discrete space X_s = {0, 1, ..., m-1}. We use the
letters j, k to denote particular elements of the sample space X_s. The overall random vector
x = {x_s | s in V} takes values in the Cartesian product space X^N = X_1 x ... x X_N, where
N = |V|. We make use of the following exponential representation of a graph-structured
distribution p(x). For some index set A, we let phi = {phi_alpha | alpha in A} denote a collection of
potential functions defined on the cliques of G, and let theta = {theta_alpha | alpha in A} be a vector of
real-valued weights on these potential functions. The exponential family determined by phi
is the collection of distributions p(x; theta) proportional to exp{ sum_{alpha in A} theta_alpha phi_alpha(x) }.

In a minimal exponential representation, the functions {phi_alpha} are affinely independent. For
example, one minimal representation of a binary process (i.e., X_s = {0, 1} for all s in V) using
pairwise potential functions is the usual Ising model, in which the collection of potentials
is {x_s | s in V} together with {x_s x_t | (s,t) in E}. In this case, the index set is given by
A = V union E.

In most of our analysis, we use an overcomplete representation, in which there
are linear dependencies among the potentials {phi_alpha}. In particular, we use indicator
functions as potentials:

    phi_{s;j}(x_s) = delta(x_s = j),                          s in V, j in X_s,               (1a)
    phi_{st;jk}(x_s, x_t) = delta(x_s = j) delta(x_t = k),    (s,t) in E, (j,k) in X_s x X_t, (1b)

where the indicator function delta(x_s = j) is equal to one if x_s = j, and zero otherwise. In this
case, the index set A consists of the union of the vertex indices {(s; j)} with the
edge indices {(st; jk)}.

Of interest to us is the maximum a posteriori configuration x_MAP = arg max_{x in X^N} p(x; theta-bar).
Equivalently, we can express this MAP configuration as the solution of the integer program
max_{x in X^N} F(x; theta-bar), where

    F(x; theta) = sum_{s in V} sum_{j in X_s} theta_{s;j} phi_{s;j}(x_s)
                + sum_{(s,t) in E} sum_{(j,k)} theta_{st;jk} phi_{st;jk}(x_s, x_t).          (2)

Note that the function F(theta) := max_{x in X^N} F(x; theta) is the maximum of a collection of linear functions, and hence
is convex [6] as a function of theta, which is a key property for our subsequent development.
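As a concrete illustration of the integer program above, the following sketch evaluates the objective of Eq. (2) for a small pairwise model and finds the MAP configuration by exhaustive search. The helper names are hypothetical, and brute-force enumeration is of course feasible only for tiny graphs:

```python
import itertools
import numpy as np

def F(x, theta_s, theta_st, edges):
    """Objective of Eq. (2): sum of node terms theta_s[s][x_s] and
    edge terms theta_st[(s, t)][x_s, x_t]."""
    val = sum(theta_s[s][x[s]] for s in range(len(x)))
    val += sum(theta_st[(s, t)][x[s], x[t]] for (s, t) in edges)
    return val

def brute_force_map(m, theta_s, theta_st, edges):
    """Exhaustive maximization of F over X^N (tiny graphs only)."""
    n = len(theta_s)
    best = max(itertools.product(range(m), repeat=n),
               key=lambda x: F(x, theta_s, theta_st, edges))
    return best, F(best, theta_s, theta_st, edges)
```

This exhaustive maximizer serves as the ground truth against which tree-based upper bounds and message-passing fixed points can be checked on small examples.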
1.2 Convex combinations of trees
Let theta-bar be a particular parameter vector for which we are interested in computing F(theta-bar). In
this section, we show how to derive upper bounds via the convexity of F. Let T denote
a particular spanning tree of G, and let T-set denote the set of all spanning trees.
For each spanning tree T, let theta(T) be an exponential parameter vector of the same
dimension as theta-bar that respects the structure of T. To be explicit, if T is defined by an edge
set E(T), then theta(T) must have zeros in all elements corresponding to edges not in
E(T). However, given an edge belonging to two trees T1 and T2, the quantity theta_st(T1) can be different than
theta_st(T2). For compactness, let theta = {theta(T) | T in T-set} denote the
full collection, where the notation theta(T) specifies those subelements of theta corresponding to
spanning tree T.

In order to define a convex combination, we require a probability distribution mu over the
set of spanning trees; that is, a vector mu = {mu(T), T in T-set | mu(T) >= 0} such that sum_T mu(T) = 1.
For any distribution mu, we define its support, denoted by supp(mu), to be the
set of trees to which it assigns strictly positive probability. In the sequel, we will
also be interested in the probability mu_e = Pr_mu{e in T} that a given edge e in E appears in
a spanning tree T chosen randomly under mu. We let mu_e = {mu_e | e in E} represent a vector
of edge appearance probabilities, which must belong to the spanning tree polytope [see 2].
We say that a distribution mu (or the vector mu_e) is valid if mu_e > 0 for every edge e in E.

A convex combination of exponential parameter vectors is defined via the weighted sum
sum_T mu(T) theta(T), which we denote compactly as E_mu[theta(T)]. Of particular importance are
collections of exponential parameters for which there exists a convex combination that
is equal to theta-bar. Accordingly, we define the set A(theta-bar) = { (theta, mu) | E_mu[theta(T)] = theta-bar }. For any
valid distribution mu, it can be seen that there exist pairs (theta, mu) in A(theta-bar).

Example 1 (Single cycle). To illustrate these definitions, consider a binary distribution
(X_s = {0, 1} for all nodes s in V) defined by a single cycle on 4 nodes. Consider a target
distribution in the minimal Ising form p(x; theta-bar) proportional to exp{ x1 x2 + x2 x3 + x3 x4 + x4 x1 };
otherwise stated, the target distribution is specified by the minimal parameter
theta-bar = [0 0 0 0 | 1 1 1 1], where the zeros represent the fact that theta-bar_s = 0 for all s in V. The
four possible spanning trees {T_i | i = 1, ..., 4} on a single cycle on four nodes are
illustrated in Figure 1.

Figure 1. A convex combination of four distributions p(x; theta(T_i)), each defined by a spanning tree T_i, is used to approximate the target distribution p(x; theta-bar) on the single-cycle graph.

We define a set of associated exponential parameters theta = {theta(T_i)} as follows:

    theta(T1) = (4/3) [0 0 0 0 | 1 1 1 0]
    theta(T2) = (4/3) [0 0 0 0 | 1 1 0 1]
    theta(T3) = (4/3) [0 0 0 0 | 1 0 1 1]
    theta(T4) = (4/3) [0 0 0 0 | 0 1 1 1]

Finally, we choose mu(T_i) = 1/4 for all T_i. With this uniform distribution over trees,
we have mu_e = 3/4 for each edge, and E_mu[theta(T)] = theta-bar, so that (theta, mu) in A(theta-bar).
2 Optimal upper bounds
With the set-up of the previous section, the basic form of the upper bounds follows by
applying Jensen's inequality [6]. In particular, for any pair (theta, mu) in A(theta-bar), we have the
upper bound F(theta-bar) <= sum_T mu(T) F(theta(T)). The goal of this section is to examine this bound, and
understand when it is met with equality. In more explicit terms, the upper bound can be
written as:

    F(theta-bar) = F( sum_T mu(T) theta(T) ) <= sum_T mu(T) F(theta(T)) = sum_T mu(T) max_{x in X^N} F(x; theta(T)).   (3)

Now suppose that there exists an x-hat in X^N that attains the maximum defining F(theta(T)) for
each tree T in supp(mu). In this case, it is clear that the bound (3) is met with equality. An
important implication is that the configuration x-hat also attains the maximum defining F(theta-bar),
so that it is an optimal solution to the original problem.

In fact, as we show below, the converse to this statement also holds. More formally, for any
exponential parameter vector theta(T), let OPT(theta(T)) be the collection of configurations x in X^N
that attain the maximum defining F(theta(T)), defined as follows:

    OPT(theta(T)) = { x in X^N | F(x; theta(T)) >= F(x'; theta(T)) for all x' in X^N }.   (4)

With this notation, the critical property is that the intersection
OPT(theta) = intersection over T in supp(mu) of OPT(theta(T)) of configurations
optimal for all tree-structured problems is non-empty.
We thus have the following result:

Proposition 1 (Tightness of bound). The bound of equation (3) is tight if and only if there
exists a configuration x-hat in X^N that for each T in supp(mu) achieves the maximum defining
F(theta(T)). In other words, x-hat is in OPT(theta).

Proof. Consider some pair (theta, mu) in A(theta-bar). Let x-hat be a configuration that attains the maximum defining F(theta-bar). We write the difference of the RHS and the LHS of equation (3) as
follows:

    sum_T mu(T) F(theta(T)) - F(theta-bar) = sum_T mu(T) F(theta(T)) - F(x-hat; theta-bar)
                                           = sum_T mu(T) [ F(theta(T)) - F(x-hat; theta(T)) ],

where the second line uses the admissibility condition E_mu[theta(T)] = theta-bar, which implies
F(x-hat; theta-bar) = sum_T mu(T) F(x-hat; theta(T)).
Now for each T in supp(mu), the term F(theta(T)) - F(x-hat; theta(T)) is non-negative, and equal
to zero only when x-hat belongs to OPT(theta(T)). Therefore, the bound is met with equality if
and only if x-hat achieves the maximum defining F(theta(T)) for all trees T in supp(mu).

Proposition 1 motivates the following strategy: given a spanning tree distribution mu, find
a collection of exponential parameters theta* = {theta*(T)} such that the following holds:

(a) Admissibility: The pair (theta*, mu) satisfies sum_T mu(T) theta*(T) = theta-bar.
(b) Mutual agreement: The intersection over T of OPT(theta*(T)) of tree-optimal configurations is non-empty.

If (for a fixed mu) we are able to find a collection theta* satisfying these two properties, then Proposition 1 guarantees that all configurations in the (non-empty) intersection
of the sets OPT(theta*(T)) achieve the maximum defining F(theta-bar). As discussed above, assuming that mu assigns
strictly positive probability to every edge in the graph, satisfying the admissibility condition is not difficult. It is the second condition of mutual optimality on all trees that
poses the challenge.
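The bound (3) and its tightness can be illustrated on the single-cycle example. The sketch below compares F(theta-bar) with the convex combination of tree maxima by brute force over the 16 binary configurations; helper names are illustrative, and for this attractive model the bound turns out to be tight:

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def F_ising(x, w):
    """F(x; theta) for the Ising cycle: sum over edges of w_e * x_s * x_t."""
    return sum(w_e * x[s] * x[t] for w_e, (s, t) in zip(w, edges))

def F_max(w):
    """max_x F(x; theta), by exhaustive enumeration of {0,1}^4."""
    return max(F_ising(x, w) for x in itertools.product([0, 1], repeat=4))

theta_bar = [1.0] * 4
trees = [(4.0 / 3.0) * np.array(m, dtype=float) for m in
         ([1, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 1])]

lhs = F_max(theta_bar)                        # F(theta_bar)
rhs = sum(0.25 * F_max(t) for t in trees)     # sum_T mu(T) F(theta(T))
```

Here every tree problem is maximized by the all-ones configuration, so the trees share an optimum and the bound holds with equality, exactly as Proposition 1 predicts.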
3 Mutual agreement via equal max-marginals
We now develop an algorithm that attempts to find, for a given spanning tree distribution mu,
a collection theta* = {theta*(T)} satisfying both of these properties. Interestingly, this algorithm
is related to the ordinary max-product algorithm [3, 5], but differs in several key ways.
While this algorithm can be formulated in terms of reparameterization [e.g., 5], here we
present a set of message-passing updates.
3.1 Max-marginals
The foundation of our development is the fact [1] that any tree-structured distribution
p(x; theta(T)) can be factored in terms of its max-marginals. In particular, for each node s in V, the
corresponding single-node max-marginal is defined as follows:

    nu_s(x_s) = kappa max_{ x' : x'_s = x_s } p(x'; theta(T)).   (5)

In words, for each x_s in X_s, nu_s(x_s) is the maximum probability over the subset of configurations x' with element x'_s fixed to x_s. For each edge (s,t) in E(T), the pairwise max-marginal
is defined analogously as nu_st(x_s, x_t) = kappa max_{ x' : (x'_s, x'_t) = (x_s, x_t) } p(x'; theta(T)). With these
definitions, the max-marginal tree factorization [1] is given by:

    p(x; theta(T)) proportional to prod_{s in V} nu_s(x_s) prod_{(s,t) in E(T)} [ nu_st(x_s, x_t) / ( nu_s(x_s) nu_t(x_t) ) ].   (6)

One interpretation of the ordinary max-product algorithm for trees, as shown in our related
work [5], is as computing this alternative representation.

Suppose moreover that for each node s in V, the following uniqueness condition holds:

Uniqueness Condition: For each s in V, the max-marginal nu_s has a unique optimum x*_s.

In this case, the vector x* = {x*_s | s in V} is the MAP configuration for the tree-structured
distribution [see 5].
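The max-marginal factorization (6) can be checked by brute force on a tiny tree. The sketch below computes the single-node and pairwise max-marginals of a 3-node chain distribution exhaustively and verifies that the factorization reproduces the distribution; array shapes and names are illustrative, and exhaustive enumeration is only feasible for tiny models:

```python
import numpy as np

def max_marginals(p):
    """Brute-force max-marginals of a table p[x0, x1, x2] on the chain 0-1-2
    (Eq. 5): maximize over all variables except the ones being held fixed."""
    nu_s = [p.max(axis=tuple(a for a in range(3) if a != s)) for s in range(3)]
    nu_01 = p.max(axis=2)   # pairwise max-marginal on edge (0, 1)
    nu_12 = p.max(axis=0)   # pairwise max-marginal on edge (1, 2)
    return nu_s, nu_01, nu_12

# A random positive chain distribution p(x) proportional to psi01 * psi12.
rng = np.random.default_rng(0)
psi01 = rng.uniform(0.5, 2.0, (2, 2))
psi12 = rng.uniform(0.5, 2.0, (2, 2))
p = np.einsum('ab,bc->abc', psi01, psi12)
p /= p.sum()

nu_s, nu_01, nu_12 = max_marginals(p)
# Factorization (6) on the chain: p(x) = nu_01 * nu_12 / nu_1
q = np.einsum('ab,bc,b->abc', nu_01, nu_12, 1.0 / nu_s[1])
```

Under the Uniqueness Condition, the componentwise argmax of the single-node max-marginals also recovers the MAP configuration, which the test checks as well.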
3.2 Tree-reweighted max-product
t A ??
8D (FE A ?5 ?
l \ 8D F( E A n?5
5
?
)
+! xZ l\ )
_ p ` ?
ACBEDGF*HTBKJ"NPNaA BEDGF N !#"$%'&)( $ +B * $ N , $- .+/" %'0 , / $ ( B+$ * . B+$ * N $21 . * B+. * N . N
(
(
(
where 3 is a constant independent of . As long as satisfies the Uniqueness Condition,
A ^n) ?
.bThis
mutual
the configuration (
must be the MAP configuration for each treestructured distribution 8D (FE
agreement on trees, in conjunction with
(
the admissibility of , implies that is also the MAP configuration for 8D (FE A?y .
For each valid ! , there exists a tree-reweighted max-product algorithm designed to find
the requisite set of max-marginals via a sequence of message-passing operations. For
each edge
_ !Y , let 4v\n=nWK be the message passed from node _ to node
. It is
<
a vector of length ! , with one element for each state %/Y . We use ?gS E A?y as a
The tree-reweighted max-product method is a message-passing algorithm, with fixed points
that specify a collection of tree exponential parameters
satisfying the ad
missibility condition. The defining feature of is that the associated tree distributions
all share a common set
of max-marginals. In particular, for
a given tree with edge set
, the distribution
is specified compactly by
the subcollection
as follows:
(7)
1
1
We use this notation throughout the paper, where the value of
may change from line to line.
N f A?y e f < e f?gSK , with the quantity < \KgW? R\ E A?y l\o similarly defined. We use
4Xl\ to specify a set of functions a l\ as follows:
$ B+* $ N
$ B+* $ F HS $ N
$ B+* $ N
"
(
% , $ /
$ B+* $ N
. B+* . N
% , $ / .
% , . / $
$. B+* $ 1 * . N
$. B+* $ 1 * . FYHTS N
,
,
/
/
. $ B+* $ N
$. B+* . N
(
0 < \KgW? R\ E A?y l\o3? < nW E A?y 3? A=y \ < gB\ E A=y \o .
where l\KgS= B\ E Ay IHKJ@L
5 can be used to define a tree-structured distriFor each tree ? , the subcollection
F
(
E
bution 8 , in a manner analogous to equation (7). By expanding the expectation
'lowing:
&(
8 (FE 5*) and making use of the definitions of and l \ , we can prove the folLemma 1 (Admissibility). Given any collection l\ defined by a set of messages
?
^38 (FE 5 is equivalent
as in equations (8a) and (8b), the convex combination N
F
(
E
to
^89 Asy up to an additive constant.
set of max-marginals for
We now need to ensure that
l\ are[1, a5]consistent
to
impose,
for each edge
_ , the
each tree-distribution 8 (FE 5 . It is sufficient
O?
edgewise consistency condition ?k}?J l\nW=
\ 3 =gW . In order to enforce this
condition, we update the messages in the following manner:
shorthand for
the messages
!
(8a)
(8b)
#
"
w l\
P , update the messages as follows:
2. For iterations `
. B+* . N
% , .+/ $
$
.
$
1
$
.
.
.
.
.
. $ B+* $ N
,
+B * * F HS N
+B * F HS N
/
$.
%
$. B+* . N
Using the definitions of \nu_s and \nu_{st}, as well as the message update equation (9), the following result can be proved:

Lemma 2 (Edgewise consistency). Let M* be a fixed point of the message update equation (9), and let \nu*_s and \nu*_{st} be defined via M* as in equations (8a) and (8b), respectively. Then the edgewise consistency condition is satisfied.
The message update equation (9) is similar to the standard max-product algorithm [3, 5]. Indeed, if the graph is actually a tree, then we must have \rho_{st} = 1 for every edge (s,t), in which case equation (9) is precisely equivalent to the ordinary max-product update. However, if the graph has cycles, then it is impossible to have \rho_{st} = 1 for every edge (s,t), so that the updates in equation (9) differ from ordinary max-product in some key ways. First of all, the weight on the potential function \theta_{st} is scaled by the inverse 1/\rho_{st} of the edge appearance probability \rho_{st} \le 1. Secondly, for each neighbor v \in \Gamma(t) \setminus s, the incoming message M_{vt} is scaled by the corresponding edge appearance probability \rho_{vt} \le 1. Third of all, in sharp contrast to standard [3] and attenuated [4] max-product updates, the update of message M_{ts} (that is, from t to s along edge (s,t)) depends on the reverse direction message M_{st} from s to t along the same edge. Despite these differences, the messages
Algorithm 1 (Tree reweighted max-product).

1. Initialize the messages M^0 = \{M^0_{st}\} with arbitrary positive real numbers.

2. For iterations n = 0, 1, 2, \ldots, update each message M^{n+1}_{ts} from M^n via equation (9).
can be updated synchronously as in ordinary max-product. It is also possible to perform
reparameterization updates over spanning trees, analogous to but distinct from those for
ordinary max-product [5]. Such tree-based updates can be terminated once the trees agree
on a common configuration, which may happen prior to message convergence [7].
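As a concrete illustration of the update (9) and the pseudo-max-marginals (8a), the following is a minimal sketch on a binary 3-cycle with the uniform spanning-tree distribution, so every edge has appearance probability \rho = 2/3. The random test problem, variable names, and brute-force comparison are our own illustration (feasible only because the graph is tiny), not code from the paper.

```python
import itertools
import numpy as np

# Minimal sketch of tree-reweighted max-product on a binary 3-cycle.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
rho = 2.0 / 3.0   # each edge appears in 2 of the 3 spanning trees (uniform rho)

rng = np.random.default_rng(0)
theta_node = {s: rng.normal(size=2) for s in nodes}
theta_edge = {e: rng.normal(size=(2, 2)) for e in edges}

def edge_pot(s, t):
    """Edge potential indexed as [x_s, x_t]."""
    return theta_edge[(s, t)] if (s, t) in theta_edge else theta_edge[(t, s)].T

neighbors = {s: [t for t in nodes if (s, t) in edges or (t, s) in edges]
             for s in nodes}

# M[(t, s)] is the message from t to s, a length-2 positive vector.
M = {(t, s): np.ones(2) for t in nodes for s in neighbors[t]}

for _ in range(200):                      # synchronous updates, as in eq. (9)
    M_new = {}
    for (t, s) in M:
        out = np.empty(2)
        for xs in range(2):
            vals = []
            for xt in range(2):
                v = edge_pot(s, t)[xs, xt] / rho + theta_node[t][xt]
                for u in neighbors[t]:
                    if u != s:            # incoming messages scaled by rho
                        v += rho * np.log(M[(u, t)][xt])
                v -= (1.0 - rho) * np.log(M[(s, t)][xt])  # reverse message
                vals.append(v)
            out[xs] = max(vals)
        out = np.exp(out - out.max())     # normalize for numerical stability
        M_new[(t, s)] = out / out.sum()
    M = M_new

# Pseudo-max-marginals at each node, as in (8a), and the resulting labeling.
x_star = []
for s in nodes:
    log_nu = theta_node[s].copy()
    for t in neighbors[s]:
        log_nu += rho * np.log(M[(t, s)])
    x_star.append(int(np.argmax(log_nu)))

# Brute-force MAP for comparison (only feasible on a tiny graph).
def energy(x):
    return (sum(theta_node[s][x[s]] for s in nodes)
            + sum(theta_edge[(a, b)][x[a], x[b]] for (a, b) in edges))

x_map = max(itertools.product(range(2), repeat=3), key=energy)
print("TRW labeling:", x_star, " brute-force MAP:", list(x_map))
```

When the Uniqueness Condition holds at the resulting fixed point, Theorem 1 below guarantees that the labeling read off from the pseudo-max-marginals is an exact MAP configuration.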
3.3 Analysis of fixed points
In related work [5], we established the existence of fixed points for the ordinary max-product algorithm for positive compatibility functions on an arbitrary graph. The same proof can be adapted to show that the tree-reweighted max-product algorithm also has at least one fixed point M*. Any such fixed point defines pseudo-max-marginals \nu* via equations (8a) and (8b), which (by design of the algorithm) have the following property:
Theorem 1 (Exact MAP). If \nu* satisfies the Uniqueness Condition, then the configuration x* with elements x*_s = \arg\max_{x_s} \nu*_s(x_s) is a MAP configuration for p(x; \theta).
Proof. For each spanning tree T \in \mathrm{supp}(\rho), the fixed point M* defines a tree-structured distribution p(x; \theta(T; \nu*)) via equation (7). By Lemma 2, the elements of \nu* are edgewise consistent. By the equivalence of edgewise and global consistency for trees [1], the subcollection \nu*(T) \equiv \{\nu*_s, s \in V\} \cup \{\nu*_{st}, (s,t) \in E(T)\} are exact max-marginals for the tree-structured distribution p(x; \theta(T; \nu*)). As a consequence, the configuration x* must belong to the set of maximizers of p(x; \theta(T; \nu*)) for each tree T, so that mutual agreement is satisfied. By Lemma 1, the convex combination \sum_T \rho(T)\, \theta(T; \nu*) is equal to \theta up to an additive constant, so that admissibility is satisfied. Proposition 1 then implies that x* is a MAP configuration for p(x; \theta).
3.4 Failures of tree-reweighted max-product
In all of our experiments so far, the message updates of equation (9), if suitably relaxed, have always converged.^2 Rather than convergence problems, the breakdown of the algorithm appears to stem primarily from failure of the Uniqueness Condition. If this assumption is not satisfied, we are no longer guaranteed that the mutual agreement condition is satisfied (i.e., the set of agreeing configurations may be empty). Indeed, a configuration x* belongs to this set if and only if the following conditions hold:

Node optimality: The element x*_s must achieve \max_{x_s} \nu*_s(x_s) for every s \in V.

Edge optimality: The pair (x*_s, x*_t) must achieve \max_{(x_s, x_t)} \nu*_{st}(x_s, x_t) for all (s,t) \in E.

For a given fixed point \nu* that fails the Uniqueness Condition, it may or may not be possible to satisfy these conditions, as the following example illustrates.
Example 2. Consider the single cycle on three vertices, as illustrated in Figure 2. We define a distribution p(x; \theta) in an indirect manner, by first defining a set of pseudo-max-marginals \nu in panel (a). Here \beta \in (0, 1) is a parameter to be specified. Observe that the symmetry of this construction ensures that \nu satisfies the edgewise consistency condition (Lemma 2) for any \beta. For each of the three spanning trees of this graph, the collection \nu defines a tree-structured distribution p(x; \theta(T; \nu)) as in equation (7). We define the underlying distribution via \theta = \sum_T \rho(T)\, \theta(T; \nu), where \rho is the uniform distribution (weight \rho(T) = 1/3 on each tree).

In the case \beta > 1/2, illustrated in panel (b), it can be seen that two configurations, namely x* = [1 1 1] and x* = [0 0 0], satisfy the node and edgewise optimality conditions. Therefore, each of these configurations is a global maximum of the cost function \sum_T \rho(T)\, \theta(T; \nu), and hence of p(x; \theta). On the other hand, when \beta < 1/2, as illustrated in panel (c), any configuration that is edgewise optimal for all three edges must satisfy x_s \ne x_t for every edge (s,t). This is clearly impossible on a cycle of length three, so that the fixed point cannot be used to specify a MAP assignment.
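The failure mode in Example 2 is easy to check numerically. The sketch below uses a hypothetical symmetric specification of the edge pseudo-max-marginals (\beta on the diagonal, 1 - \beta off it, with uniform node pseudo-max-marginals, so node optimality is trivial); the exact numbers in the paper's panel (a) may differ, but the symmetry and the conclusion are the same.

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # the single cycle on three vertices

def optimal_configs(beta):
    """Binary configurations that are node- and edgewise-optimal for a
    hypothetical symmetric set of pseudo-max-marginals: node entries all
    equal (so node optimality holds trivially), edge entries beta on the
    diagonal and 1 - beta off the diagonal."""
    nu_edge = [[beta, 1.0 - beta], [1.0 - beta, beta]]
    top = max(beta, 1.0 - beta)
    return [x for x in product(range(2), repeat=3)
            if all(nu_edge[x[a]][x[b]] == top for (a, b) in edges)]

print(optimal_configs(0.6))   # beta > 1/2: [(0, 0, 0), (1, 1, 1)]
print(optimal_configs(0.4))   # beta < 1/2: [] -- no MAP assignment available
```

For \beta < 1/2 every edge demands disagreement, which no labeling of an odd cycle can deliver, exactly as argued for panel (c).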
Of course, it should be recognized that this example was contrived to break down the algorithm. It should also be noted that, as shown in our related work [5], the standard max-
^2 In a relaxed message update, we take an \alpha-step towards the new (log) message, where \alpha is the step size parameter. To date, we have not been able to prove that relaxed updates will always converge.
Figure 2. Cases where the Uniqueness Condition fails. (a) Specification of the pseudo-max-marginals \nu. (b) For \beta > 1/2, both [1 1 1] and [0 0 0] are node and edgewise optimal. (c) For \beta < 1/2, no configurations are node and edgewise optimal on the full graph.
product algorithm can also break down when this Uniqueness Condition is not satisfied.
4 Discussion
This paper demonstrated the utility of convex combinations of tree-structured distributions
in upper bounding the log probability of the MAP configuration. We developed a family of
tree-reweighted max-product algorithms for computing optimal upper bounds. In certain
cases, the optimal upper bound is met with equality, and hence yields an exact MAP configuration for the original problem on the graph with cycles. An important open question
is to characterize the range of problems for which the upper bound is tight. For problems
involving a binary-valued random vector, we have isolated a class of problems for which
the upper bound is guaranteed to be tight. We have also investigated the Lagrangian dual
associated with the upper bound (3). The dual has a natural interpretation as a tree-relaxed
linear program, and has been applied to turbo decoding [7]. Finally, the analysis and upper bounds of this paper can be extended in a straightforward manner to hypertrees of higher width. In this context, hypertree-reweighted forms of generalized max-product updates [see 5] can again be used to find optimal upper bounds, which (when they are tight)
again yield exact MAP configurations.
References
[1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probablistic Networks and
Expert Systems. Statistics for Engineering and Information Science. Springer-Verlag, 1999.
[2] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log
partition function. In Proc. Uncertainty in Artificial Intelligence, volume 18, pages 536?543,
August 2002.
[3] W. T. Freeman and Y. Weiss. On the optimality of solutions of the max-product belief propagation
algorithm in arbitrary graphs. IEEE Trans. Info. Theory, 47:736?744, 2001.
[4] B. J. Frey and R. Koetter. Exact inference using the attenuated max-product algorithm. In
Advanced mean field methods: Theory and Practice. MIT Press, 2000.
[5] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the max-product algorithm and its generalizations. LIDS Tech. report P-2554, MIT; available online at http://www.eecs.berkeley.edu/~martinw, July 2002.
[6] D.P. Bertsekas. Nonlinear programming. Athena Scientific, Belmont, MA, 1995.
[7] J. Feldman, M. J. Wainwright, and D. R. Karger. Linear programming-based decoding and its
relation to iterative approaches. In Proc. Allerton Conf. Comm. Control and Computing, October
2002.
Spike-Triggered Analysis Techniques
Liam Paninski
Center for Neural Science
New York University
New York, NY 10003
liam@cns.nyu.edu
http://www.cns.nyu.edu/~liam
Abstract
We analyze the convergence properties of three spike-triggered data
analysis techniques. All of our results are obtained in the setting of a (possibly multidimensional) linear-nonlinear (LN) cascade
model for stimulus-driven neural activity. We start by giving exact
rate of convergence results for the common spike-triggered average
(STA) technique. Next, we analyze a spike-triggered covariance
method, variants of which have been recently exploited successfully
by Bialek, Simoncelli, and colleagues. These first two methods suffer from extraneous conditions on their convergence; therefore, we
introduce an estimator for the LN model parameters which is designed to be consistent under general conditions. We provide an
algorithm for the computation of this estimator and derive its rate
of convergence. We close with a brief discussion of the efficiency
of these estimators and an application to data recorded from the
primary motor cortex of awake, behaving primates.
1 Introduction
Systems-level neuroscientists have a few favorite problems, the most prominent of
which is the "what" part of the neural coding problem: what makes a given neuron
in a particular part of the brain fire? In more technical language, we want to know
about the conditional probability distributions P(spike|X = x), the probability
that our cell emits a spike, given that some observable signal X in the world takes
value x. Because data is expensive, neuroscientists typically postulate a functional
form for this collection of conditional distributions, and then fit experimental data to
these functional models, in lieu of attempting to directly estimate P(spike|X = x)
for each possible x. In this paper, we analyze one such phenomenological model
whose popularity seems to be on the rise:
    p(spike|x) = f(\langle k_1, x \rangle, \langle k_2, x \rangle, \ldots, \langle k_m, x \rangle).    (1)

Here f is some arbitrary nonconstant, \mathbb{R}^m-measurable, [0,1]-valued function, and \{k_i\} are some linearly independent elements of the dual space, X', of some topological vector space, X - the space of possible "input signals." Interpret f as a regular
conditional distribution. Roughly, then, the neuron projects the signal onto some
m-dimensional subspace spanned by \{k_i\}_{1 \le i \le m} (call this subspace K), then looks up its probability of firing based only on this projection. This model is often called
a "linear-nonlinear," or "LN," cascade model. It is also a probabilistic analog of a
certain type of "Wiener cascade" model; this class of models has received extensive
study in the systems identification literature. (Note that this model is not the same
as a Volterra series model; these two classes of systems have very different uniform
approximation properties.)
The LN model has two important features. First, the spike trains of the cell are
given by a conditionally (inhomogeneous) Poisson process given x; that is, there are no dynamics in this model beyond those induced by x and K. Second, equation (1) implies:

    p(spike|x) = p(spike|x + y)   \forall y \perp K.    (2)
In other words, the conditional probability of firing is constant along (hyper)planes
in the input space. (The natural generalization of this is a model for which these
surfaces of constant firing probability are manifolds of low codimension; however,
we will stick to the linear case here.) This model is semiparametric in the sense
that it separates the problem of learning p(spike|x) into two pieces: 1) learning the
finite-dimensional parameter K, and 2) learning the infinite-dimensional parameter
f. If K is given, the problem of learning f reduces to a density estimation problem,
about which much is known. The problem of estimating K seems to be less wellunderstood, and we focus primarily on this problem here.
We start with some notation. Let N, as usual, denote the number of available
samples, drawn from the fixed stimulus distribution p(x) (in practice, of course, the
samples from p(x) are not independent; for simplicity, we will stick to the i.i.d. case
here, but most of our methods can be extended to the more general case). Then
our basic results will take the following form:
    E(\mathrm{Error}(\hat{K})) \sim \alpha N^{-\lambda} + \beta,    (3)
as N becomes large. The estimator \hat{K} is a deterministic map taking N observations of stimulus and spike data (where spikes are binary random variables, conditionally independent given the stimulus) into an estimate of the true underlying K:

    \hat{K} : (X \times \{0,1\})^N \to \mathcal{G}_m(X)    (4)

    (X^N, S^N) \mapsto \hat{K}(X^N, S^N),    (5)

where (X^N, S^N) denotes the N-sample data. \mathcal{G}_m(X) is the m-Grassmann manifold of X, the space of all m-dimensional subspaces of X; the natural error metric, then, is the geodesic distance on \mathcal{G}_m(X) (the "canonical angle") between the true subspace K and the estimated subspace \hat{K}. For brevity, we will present most of our results in the m = 1 case only; here the metric takes the simple form
    \mathrm{Error}(\hat{K}) = \cos^{-1} \frac{|\langle \hat{K}, k_1 \rangle|}{\|\hat{K}\|\, \|k_1\|}.    (6)
The scalar terms \lambda, \alpha, and \beta in (3) each depend on f, K, and p(x); \lambda is a constant giving the order of magnitude of convergence (usually, but not always, equal to 1/2), \alpha gives the precise convergence rate, and \beta gives the asymptotic error. We will be mostly concerned with giving exact values for \alpha and \lambda, and simply indicating when \beta is zero or positive (i.e., when \hat{K} is consistent in probability or not, respectively). As usual, rate-of-convergence results clarify why a given estimator works well (in the sense that only a small number of samples is needed for reliable estimates) in certain cases and poorly (sometimes not at all) in others.
We will discuss three estimators here; the first two are well-known, while the third
is novel, and is consistent under much more general conditions. The first part of
the paper will indicate how to derive representation (3), including the constants
\alpha, \beta, and \lambda, for these three estimators. In the final two sections, we discuss lower bounds on the convergence rates of any possible K-estimator (these kinds of bounds provide a rigorous measure of the difficulty of this estimation problem), and then
give a brief illustration of the new estimator applied to data recorded in the primary
motor cortex of awake, behaving monkeys.
2 Convergence rates
All three of the estimators considered here can be naturally written as "M-estimators," that is,

    \hat{K}(X^N, S^N) = \arg\max_{V \in \mathcal{G}_m(X)} M_{(X^N, S^N)}(V),
for some data-dependent function M_N \equiv M_{(X^N, S^N)} on \mathcal{G}_m(X). Most of the mathematical labor in this section comes down to an application of the standard "delta method" from the theory of M-estimators [5]: typically the data-dependent (i.e., random) functions M_N converge in some suitable sense, as N \to \infty, to some limit function M. The asymptotics of the M-estimator are then reduced to a study of 1)
the variability of M N around the limit M and 2) the local differential structure of
M in a neighborhood of the true value of the underlying parameter K. This program can be carried out trivially for the first two estimators but is more interesting
for the third (the first two require only the multivariate CLT; the third requires an
infinite-dimensional CLT).
2.1 Spike-triggered averaging
The first estimator, the spike-triggered average, is classical and very intuitive: \hat{K}_{STA} is defined as the sample mean of the spike-conditional stimulus distribution p(x|spike); since the spike signal is binary, this is the same as the cross-correlation between the spike and the stimulus signal. (We assume throughout, without loss of generality, that p(x) is centered, that is, E(x) = 0.) We will also consider the following "linear regression" modification:

    \hat{K}_{LR} = A \hat{K}_{STA},

where A is an operator chosen to "divide out" correlations in the stimulus distribution p(x) (A is typically the (pseudo-)inverse of the stimulus correlation matrix, which we will denote as \sigma^2(p(x))). The analysis for \hat{K}_{STA} and \hat{K}_{LR} depends only on a straightforward application of the multivariate central limit theorem (CLT).
We begin with necessary and sufficient conditions for consistency. We assume
throughout this paper that the stimulus distribution p(x) has finite second moments; this assumption seems entirely reasonable on physical grounds. Let q be a
random variable with distribution given by
    P(q) = p(\langle x, k_1 \rangle \,|\, \mathrm{spike}) = \frac{f(\langle x, k_1 \rangle)\, p(\langle x, k_1 \rangle)}{\int_{\mathbb{R}} f(\langle x, k_1 \rangle)\, p(\langle x, k_1 \rangle)\, d\langle x, k_1 \rangle},    (7)
with f as defined in (1) and p( < X, k1 ? denoting the one-dimensional projection of
p(x). The expectation of this random variable exists by the finite-variance assumption on p(x). Finally, as usual, we say p(x) is radially symmetric if p(B) = p(UB) for all Borel sets B and all unitary transformations U.
Theorem 1 (\beta(\hat{K}_{STA})). If p(x) (resp. p(A^{1/2} x)) is radially symmetric and E(q) \ne 0, then \beta(\hat{K}_{STA}) = 0 (resp. \beta(\hat{K}_{LR}) = 0). Conversely, if p(x) is radially symmetric and E(q) = 0, then \beta > 0, and if p(x) is not radially symmetric, then there exists an f for which \beta > 0.
(Note that f is not required to be smooth, or even continuous.) The above sufficiency conditions seem to be somewhat well-known; for example, most of the sufficiency statement appeared (albeit in somewhat less precise form) in [1]. On the other hand, the converse is novel, to our knowledge, and is perhaps surprisingly stringent. The first part of the necessity statement will be obvious from the following discussion of \alpha (and in fact appears implicitly in [1]), while the second part is a little harder, and seems to require (rather elementary) characteristic function techniques. The proof proceeds by showing that a distribution is radially symmetric iff it has the property that the conditional mean of x is zero on all planar "slices" \langle k, x \rangle \in B, for some k \in X' and real Borel set B.
Next we have the rate of convergence:
Theorem 2 (\alpha(\hat{K}_{STA})). Assume p(x) is symmetric normal, with standard deviation \sigma(p). If \beta(\hat{K}_{STA}) = 0, then N^{1/2}(\hat{K}_{STA} - K) is asymptotically normal with mean zero (considered as a distribution on the tangent plane of \mathcal{G}_m(X) at the true underlying value K), and

    \alpha(\hat{K}_{STA}) = \frac{\sigma(p)\, (\dim X - 1)^{1/2}}{E(q)}.
Thus the performance of the spike-triggered average scales directly with the dimension of the ambient space and inversely with E(q), a measure of the asymmetry of the spike-triggered distribution along k_1. Note that we stated the result under the much stronger condition that p(x) is Gaussian. In this case, the form of \alpha becomes quite simple, depending on the nonlinearity f only through E(q). The general case is proven by identical methods but results in a slightly more complicated (f-dependent) term in place of \sigma(p). The proof follows by applying the multivariate central limit theorem to the sample mean of random vectors drawn i.i.d. from the spike-conditional stimulus distribution, p(x|spike). The proof also supplies the asymptotic distribution of Error(\hat{K}_{STA}) (a noncentral F), which might be useful for hypothesis testing. The details are quite easy once the mean of this distribution is identified (as in [1], under the above sufficiency conditions), and we skip them to save room for more interesting results.
One final note: in stating the above two results, we have been assuming implicitly that K is one-dimensional (since \hat{K}_{STA} clearly returns a single vector, that is, a one-dimensional subspace of X). Nevertheless, the two theorems extend easily to the more general case, after Error(\hat{K}_{STA}) is redefined to measure angles between m- and 1-dimensional subspaces. (Of course, now E(\hat{K}_{STA}) and \lim_{N \to \infty} \hat{K}_{STA} depend strongly on the input distribution p(x), even for radially symmetric p(x); see, e.g., [3] for an analysis of a special case of this effect.)
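To make the estimator concrete, here is a minimal simulation sketch (our own illustration, not code from the paper): Gaussian stimuli, a one-dimensional LN neuron with an asymmetric sigmoidal nonlinearity (so that E(q) \ne 0 and the sufficiency part of Theorem 1 applies), and the STA computed as the sample mean of the spike-conditional stimuli. The particular dimension, sample size, and nonlinearity are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 20, 50_000
k = rng.normal(size=d)
k /= np.linalg.norm(k)                        # true (one-dimensional) K

X = rng.normal(size=(N, d))                   # radially symmetric Gaussian stimuli
f = lambda u: 1.0 / (1.0 + np.exp(-3.0 * u))  # asymmetric f, so E(q) != 0
spikes = rng.random(N) < f(X @ k)             # conditionally independent spikes

k_sta = X[spikes].mean(axis=0)                # sample mean of p(x | spike)
err = np.arccos(abs(k_sta @ k) / np.linalg.norm(k_sta))
print("angle between K_STA and true k (radians):", err)
```

With these settings the canonical angle comes out small, and rerunning with larger N shows the expected N^{-1/2} shrinkage of the error.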
2.2 Covariance-based methods
The next estimator was introduced in an effort to extend spike-triggered analysis to the m > 1 case (see, e.g., [3], and references therein). Where \hat{K}_{STA} was based on the first moment of the spike-conditional stimulus distribution p(x|spike), \hat{K}_{CORR} is based on the second moment. We define

    \hat{K}_{CORR} = (\sigma^2)^{-1} \mathrm{eig}(\Delta\hat{\sigma}^2),

where eig(A) denotes the significantly non-zero eigenspace of the operator A, and \Delta\hat{\sigma}^2 is some estimate (typically the usual sample covariance estimate) of the "difference-covariance" matrix \Delta\sigma^2, defined by

    \Delta\sigma^2 = \sigma^2(p(x|\mathrm{spike})) - \sigma^2(p(x)).
Again, we start with \beta:

Theorem 3 (\beta(\hat{K}_{CORR})). If p(x) is Gaussian and

    \mathrm{Var}_{p(x|\mathrm{spike})}(\langle k, x \rangle) \ne \mathrm{Var}_{p(x)}(\langle k, x \rangle) \quad \forall k \in E_K,

for some orthogonal basis E_K of K, then \beta(\hat{K}_{CORR}) = 0. Conversely, if p(x) is Gaussian and the variance condition is not satisfied for f, then \beta > 0, and if p(x) is non-Gaussian, then there exists an f for which \beta > 0.
As before, the sufficiency is fairly well-known, while the necessity appears to be novel and relies on characteristic function arguments. It is perhaps surprising that
the conditions on p for the consistency of this estimator are even stricter than for the
spike-triggered average. The essential fact here turns out to be that a distribution
is normal iff, after a suitable change of basis, the conditional variance on all planar
"slices" of the distribution is constant.
We have, with Odelia Schwartz, developed a striking inconsistency example which is worth mentioning here:

Example (Inconsistency of \hat{K}_{CORR}). There is a nonempty open set of nonconstant f and radially symmetric p(x) such that \hat{K}_{CORR} is asymptotically orthogonal to K almost surely as N \to \infty. (In fact, the f and p in this set can be taken to be infinitely differentiable.)

The basic idea is that, for non-normal p, the spike-triggered variance of \langle v, x \rangle depends on f even for v \perp k_1; we leave the details to the reader.
We can derive a similar rate of convergence for these covariance-based methods. To reduce the notational load, we state the result for m = 1 only; in this case, we can define \lambda_{\Delta\sigma^2} to be the (unique and nonzero by assumption) eigenvalue of \Delta\sigma^2.

Theorem 4 (\alpha(\hat{K}_{CORR})). Assume p(x) is independent normal. If \beta(\hat{K}_{CORR}) = 0, then N^{1/2}(\hat{K}_{CORR} - K) is asymptotically normal with mean zero and

    \alpha(\hat{K}_{CORR}) = \frac{\sigma^2(p)\, (\dim X - 1)^{1/2}}{|\lambda_{\Delta\sigma^2}|}.
(Again, while \lambda_{\Delta\sigma^2} will not be exactly zero in practice, it can often be small enough that the error remains prohibitively large for physiologically reasonable values of N.) The proof proceeds by applying the multivariate central limit theorem to the covariance matrix estimator, then examining the first-order Taylor expansion of the eigenspace map at \Delta\sigma^2; see the longer draft of this paper at http://www.cns.nyu.edu/~liam for the more general statement and proof.
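A minimal sketch of the covariance method follows (our own illustration, with arbitrary dimensions and an arbitrary quadratic nonlinearity). Because f here is symmetric, E(q) = 0 and the spike-triggered average is useless, but the spike-conditional variance along the true filter changes, so the dominant eigenvector of the difference-covariance recovers k.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 15, 100_000
k = np.zeros(d)
k[0] = 1.0                                   # true filter

X = rng.normal(size=(N, d))                  # white Gaussian stimuli, cov = I
u = X @ k
f = np.clip(0.1 + 0.2 * u ** 2, 0.0, 1.0)    # symmetric f: E(q) = 0, STA fails
spikes = rng.random(N) < f

d_cov = np.cov(X[spikes].T) - np.eye(d)      # difference-covariance estimate
w, V = np.linalg.eigh(d_cov)
k_corr = V[:, np.argmax(np.abs(w))]          # dominant eigenvector

err = np.arccos(abs(k_corr @ k))
print("angle between K_CORR and true k (radians):", err)
```

Note the whitening step is trivial here only because the stimulus covariance is the identity; for correlated stimuli the (pseudo-)inverse correlation matrix must be applied, as in the definition of \hat{K}_{CORR}.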
2.3 Empirical processes techniques
We have seen that the two most common K-estimators are not consistent in general; that is, the asymptotic error \beta is bounded away from zero for many (non-pathological) combinations of p(x), f, and K. We now introduce a new estimator for which \beta = 0 under very general conditions (without, say, any symmetry or normality assumptions on p or any symmetry assumptions on f).
The basic idea is that Kx is in a sense a sufficient statistic for x (that is, x \to Kx \to spike forms a Markov chain). The data processing inequality suggests that we could estimate K by maximizing

    M_N(V) = D_\phi\big( q_N(Vx, \mathrm{spike});\; q_N(Vx)\, q_N(\mathrm{spike}) \big),

where D_\phi is a functional with suitable convexity properties, and q_N is some estimate of p. For example, we could let D_\phi be an information divergence and q_N some kernel estimate, that is, a filtered version of the empirical measure (see [4] for an independent approach along these lines). This doesn't quite work, however, because the kernel induces an arbitrary scale; if this scale is larger than the natural scale of f and p(\langle V, x \rangle) for some V but not others, our estimate will be biased away from K. Therefore, D_\phi and q_N have to be asymptotically scale-free
in some sense.
The simplest approach is to let the kernel width tend to zero as N becomes large; it
is even possible to calculate the optimal rate of kernel shrinkage in N, depending on
the smoothness of f. It also turns out to be helpful to use a bias-corrected version
of M_N(V); a standard jackknife correction is sufficient to obtain an estimator which converges at the standard \sqrt{N} rate. We have:
Theorem 5 (\beta(\hat{K}_\phi)). If p has a nonzero density with respect to Lebesgue measure, f is not constant a.e., and the kernel width goes to zero more slowly than N^{r-1}, for some r > 0, then \beta = 0 for the kernel estimator \hat{K}_\phi.
In other words, this new estimator \hat{K}_\phi works for very general neurons f and stimulus distributions p; in particular, \hat{K}_\phi is suitable for application to natural signal data. Clearly, the condition on f is minimal; we ask only that the neuron be tuned. The
condition on p is quite weak (and can be relaxed further); we are simply ensuring
that we are sampling from all of X, and in particular, the part of X on which the
cell is tuned.
Next we have the rate of convergence; in the following, the "approximation error" measures the difference between the true information divergence M_\phi(V) and its kernel-smoothed version, defined in the obvious way.

Theorem 6 (\lambda and \alpha for \hat{K}_\phi). If the approximation error is of order a^r in the kernel bandwidth a, for some r > 1, then the jackknifed kernel or histogram versions of \hat{K}_\phi, with bandwidth a = N^s, -1 < s < -1/r, converge at an N^{-1/2} rate. Moreover, N^{1/2}(\hat{K}_\phi - K) is asymptotically normal, with mean zero and easily calculable \alpha(\hat{K}_\phi).
The methods follow, e.g., example 3.2.12 of [5] - basically, a generalization of the classical theorem on the asymptotic distribution of the maximum likelihood estimator in regular parametric families. Again, see the longer draft at http://www.cns.nyu.edu/~liam for the precise definition of the approximation error and the full expression for \alpha(\hat{K}_\phi).
We have developed an algorithm for the computation of \arg\max_V M_N(V), and numerical results show that \hat{K}_\phi can be competitive with spike-triggered average or covariance techniques even in cases in which \beta(\hat{K}_{STA}) and \beta(\hat{K}_{CORR}) are zero. We present a brief application of \hat{K}_\phi in section 4.
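A crude version of this idea can be sketched in a few lines; this is our own illustration, not the paper's algorithm. It uses equal-mass histogram binning as the density estimate, a plug-in divergence between the joint and product distributions of (binned projection, spike), and a grid search over one-dimensional projections in two dimensions; the binning choices and stimulus distribution are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40_000
true_angle = 0.7
k = np.array([np.cos(true_angle), np.sin(true_angle)])

X = rng.laplace(size=(N, 2))                  # non-Gaussian stimuli
f = lambda u: 0.05 + 0.9 * (np.abs(u) > 1.0)  # symmetric, non-smooth neuron
spikes = rng.random(N) < f(X @ k)
p_spike = spikes.mean()

def info(v, n_bins=16):
    """Plug-in estimate of the divergence between the joint and product
    distributions of (binned projection, spike)."""
    u = X @ v
    edges = np.quantile(u, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.digitize(u, edges[1:-1])         # equal-mass bins, values 0..n_bins-1
    total = 0.0
    for b in range(n_bins):
        m = idx == b
        p_b = m.mean()
        for s, p_s in ((True, p_spike), (False, 1.0 - p_spike)):
            p_joint = (m & (spikes == s)).mean()
            if p_joint > 0.0:
                total += p_joint * np.log(p_joint / (p_b * p_s))
    return total

angles = np.linspace(0.0, np.pi, 181)         # crude grid search over projections
best = max(angles, key=lambda a: info(np.array([np.cos(a), np.sin(a)])))
err = min(abs(best - true_angle), np.pi - abs(best - true_angle))
print("angular error of the information-maximizing projection:", err)
```

This toy problem combines a non-Gaussian stimulus ensemble with a symmetric, discontinuous nonlinearity, a regime in which neither of the moment-based estimators is guaranteed to work; the divergence-maximizing projection nevertheless lands near the true filter. The fixed bin count stands in for the shrinking-bandwidth and jackknife corrections required by Theorems 5 and 6.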
3 Lower bounds
Lower bounds for convergence rates provide a rigorous measure of the difficulty of
a given estimation problem, or of the efficiency of a given estimator. We give a few
such results below. The first lower bound is local, in the sense that we assume that
the true parameter is known a priori to be in some small neighborhood of parameter
space. For simplicity, assume for the moment that p(x) is radially symmetric. Recall
that the Hellinger metric between any two densities is defined as (half of) the L_2 distance between the square roots of the densities.
Theorem 7 (Local (Hellinger) lower bound). For simplicity, let p be standard normal. For any fixed differentiable f, uniformly bounded away from 0 and 1 and with a uniformly bounded derivative f', and any Hellinger ball F around the true parameter (f, K),

    \liminf_{N \to \infty} N^{1/2} \inf_{\hat{K}} \sup_F E(\mathrm{Error}(\hat{K})) \ge \left( \sigma(p) \Big( E_p\Big( \frac{f'^2}{f(1-f)} \Big) \Big)^{1/2} \right)^{-1} \sqrt{\dim X - 1}.
The second infimum above is taken over all possible estimators \hat{K}. The right-hand side plays the role of the inverse Fisher information in the Cramer-Rao bound and is derived using a similarly local analysis; see [2] for details.
Global bounds are more subtle. We want to prove something like:
liminf_{N→∞} a_N inf_{K̂} sup_{F(ε)} E(Error(K̂)) ≥ C(ε),
where F(ε) is some large parameter set containing, say, all K and all f for which
some relevant measure of tuning is greater than ε, a_N is the corresponding convergence rate, and C(ε) plays the role of σ(K̂) from the previous sections. So far, our
most interesting results in this direction are negative:
Theorem 8 (Information divergences are poor indices of K-difficulty). Let
F(ε) be the set of all (K, f) for which the φ-divergence ("information") between x and
spike is greater than ε, that is,

D_φ( p(Kx, spike) ; p(spike)p(Kx) ) > ε.

Then, for ε > 0 small enough, for any putative convergence rate a_N,

liminf_{N→∞} a_N inf_{K̂} sup_{F(ε)} E(Error(K̂)) = ∞.
In other words, strictly information-theoretic measures of tuning do not provide a
useful index of the difficulty of the K-learning problem; the intuitive explanation of
this result is that purely measure-theoretic distance functions, like φ-divergences,
ignore the topological and vector space structure of the underlying probability measures, and it is exactly this structure that determines the convergence rates of any
efficient K-estimator. To put it more simply, the learnability of K depends on the
smoothness of f, just as we saw in the last section.
4
Application to primary motor cortex data
We have applied these new spike-triggered analysis techniques to data collected in
the primary motor cortex (MI) of awake, behaving monkeys in an effort to elucidate
the neural encoding of time-varying hand position signals in MI. This analysis has
led to several interesting findings on the encoding properties of these neurons, with
immediate applications to the design of neural prosthetic devices. Here, we have
room to mention only one result: the relevant K for MI cells appears to be largely
one-dimensional. In other words, the conditional firing rate of these neurons, given
a specific time-varying hand path, is well captured by the following model (Fig. 1):
p(spike | x) = f(⟨k₀, x⟩), where x represents the two-dimensional hand position
signal in a temporal neighborhood of the current time, k₀ is a cell-specific affine
functional, and f is a cell-independent scalar function.
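The model says the spike probability depends on the hand-position signal only through one projection. A sketch of the rate computation, with a logistic f chosen purely for illustration (the paper estimates f nonparametrically; all names here are ours):

```python
import math

def firing_rate(x, k0, bias, f=lambda u: 1.0 / (1.0 + math.exp(-u))):
    """Conditional spike probability f(<k0, x> + bias) of a
    one-dimensional cascade model; k0 and bias play the role of
    the cell-specific affine functional."""
    u = sum(ki * xi for ki, xi in zip(k0, x)) + bias
    return f(u)
```

Because the rate depends on x only through u, its contour lines in stimulus space are flat, which is exactly the structure visible in Fig. 1.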
Figure 1: Example f(Kx) functions, computed from two different MI cells, with
rank K = 2; the x- and y-axes index ⟨k₁, x⟩ and ⟨k₂, x⟩, respectively, while
the color axis indicates the value of f (the conditional firing rate given Kx), in Hz.
The scale on the x- and y-axes is arbitrary and has been omitted. K was computed
using the φ-divergence estimator, and f was estimated using an adaptive kernel
within the circular region shown (where sufficient data was available for reliable
estimates). Note that the contours of this function are approximately linear; that
is, f(Kx) ≈ f₀(⟨k₀, x⟩), where k₀ is the vector orthogonal to the contour lines
and f₀ is a suitably chosen scalar function on the line.
Acknowledgements
We thank the Simoncelli lab for interesting discussions, and N. Rust and T. Sharpee
for preliminary discussions of [4]. The MI experiments were done with M. Fellows, N.
Hatsopoulos, and J. Donoghue. LP is supported by a HHMI predoctoral fellowship.
References
[1] Chichilnisky, E. Network 12: 199-213 (2001).
[2] Gill, R. & Levit, B. Bernoulli, 1/2: 59-79 (1995).
[3] Schwartz, 0., Chichilnisky, E. & Simoncelli, E. NIPS 14 (2002).
[4] Sharpee, T., Bialek, W. & Rust, N. This volume (2003).
[5] van der Vaart, A. & Wellner, J. Weak convergence and empirical processes.
Springer-Verlag, New York (1996).
Convergent Combinations of
Reinforcement Learning with Linear
Function Approximation
Ralf Schoknecht
ILKD
University of Karlsruhe, Germany
ralf.schoknecht@ilkd.uni-karlsruhe.de
Artur Merke
Lehrstuhl Informatik 1
University of Dortmund, Germany
arturo [email protected]
Abstract
Convergence for iterative reinforcement learning algorithms like
TD(0) depends on the sampling strategy for the transitions. However, in practical
data from arbitrary sources without losing convergence. In this
paper we investigate the problem of repeated synchronous updates
based on a fixed set of transitions. Our main theorem yields sufficient conditions of convergence for combinations of reinforcement
learning algorithms and linear function approximation. This allows
to analyse if a certain reinforcement learning algorithm and a certain function approximator are compatible. For the combination of
the residual gradient algorithm with grid-based linear interpolation
we show that there exists a universal constant learning rate such
that the iteration converges independently of the concrete transition data.
1
Introduction
The strongest convergence guarantees for reinforcement learning (RL) algorithms
are available for the tabular case, where temporal difference algorithms for both
policy evaluation and the general control problem converge with probability one
independently of the concrete sampling strategy as long as all states are sampled
infinitely often and the learning rate is decreased appropriately [2]. In large, possibly continuous, state spaces a tabular representation and adaptation of the value
function is not feasible with respect to time and memory considerations. Therefore,
linear feature-based function approximation is often used. However, it has been
shown that synchronous TD(0), i.e. dynamic programming, diverges for general linear function approximation [1]. Convergence with probability one for TD(λ) with
general linear function approximation has been proved in [12]. They establish the
crucial condition of sampling states according to the steady-state distribution of
the Markov chain in order to ensure convergence. This requirement is reasonable
for the pure prediction task but may be disadvantageous for policy improvement
as shown in [6] because it may lead to bad action choices in rarely visited parts
of the state space. When transition data is taken from arbitrary sources a certain
sampling distribution cannot be assured which may prevent convergence.
An alternative to such iterative TD approaches are least-squares TD (LSTD) methods [4, 3, 6, 8]. They eliminate the learning rate parameter and carry out a matrix
inversion in order to compute the fixed point of the iteration directly. In [4] a least-squares approach for TD(0) is presented, which is generalised to TD(λ) in [3]. Both
approaches still sample the states according to the steady-state distribution. In
[6, 8] arbitrary sampling distributions are used such that the transition data could
be taken from any source. This may yield solutions that are not achievable by
the corresponding iterative approach because this iteration diverges. All the LSTD
approaches have the problem that the matrix to be inverted may be singular. This
case can occur if the basis functions are not linearly independent or if the Markov
chain is not recurrent. In order to apply the LSTD approach the problem would
have to be preprocessed by sorting out the linearly dependent basis functions and
the transient states of the Markov chain. In practice one would like to save this
additional work.
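Concretely, a least-squares TD method solves the linear system A_TD w = −b_TD in one step instead of iterating, and the singular case discussed above is exactly where the solve fails. A minimal sketch on a toy two-state problem (our own example, not from the paper):

```python
def lstd_fixed_point(A, b):
    """Solve A w = -b for a 2x2 matrix A by Cramer's rule.
    Raises on a singular matrix -- the failure case discussed above."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if abs(det) < 1e-12:
        raise ValueError("singular matrix: LSTD solution undefined")
    w0 = (-b[0] * A[1][1] + b[1] * A[0][1]) / det
    w1 = (-A[0][0] * b[1] + A[1][0] * b[0]) / det
    return [w0, w1]

# Toy data: transitions s1 -> s2 (reward 1) and s2 -> s1 (reward 0),
# discount 0.5, tabular features, so A_TD = gamma*P - I and b_TD = rewards.
gamma = 0.5
A = [[-1.0, gamma], [gamma, -1.0]]
b = [1.0, 0.0]
w = lstd_fixed_point(A, b)  # the true values are V = (4/3, 2/3)
```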
Thus, the least-squares TD algorithm can fail due to matrix singularity and the
iterative TD(0) algorithm can fail if the sampling distribution is different from the
steady-state distribution. Hence, there are problems for which neither an iterative
nor a least-squares TD solution exists. The actual reason for the failure of the
iterative TD(0) approach lies in an incompatible combination of the RL algorithm
and the function approximator. Thus, the idea is that either a change in the RL
algorithm or a change in the approximator may yield a convergent iteration. Here,
a change in the TD(0) algorithm is not meant to completely alter the character
of the algorithm. We require that only modifications of the TD(0) algorithm be
considered that are consistent according to the definition in the next section.
In this paper we propose a unified framework for the analysis of a whole class of
synchronous iterative RL algorithms combined with arbitrary linear function approximation. For the sparse iteration matrices that occur in RL such an iterative
approach is superior to a method that uses matrix inversion as the LSTD approach
does [5]. Our main theorem states sufficient conditions under which combinations
of RL algorithms and linear function approximation converge. We hope that these
conditions and the convergence analysis, that is based on the eigenvalues of the iteration matrix, bring new insight in the interplay of RL and function approximation.
For an arbitrary linear function approximator and for arbitrary fixed transition data
the theorem allows to predict the existence of a constant learning rate such that
the synchronous residual gradient algorithm [1] converges. Moreover, in combination with interpolating grid-based function approximators we are able to specify
a formula for a constant learning rate such that the synchronous residual gradient algorithm converges independently of the transition data. This is very useful
because otherwise the learning rate would have to be decreased which slows down
convergence.
2
A Framework for Synchronous Iterative RL Algorithms
For a Markov decision process (MDP) with N states S = {s₁, ..., s_N}, action space
A, state transition probabilities p : S × S × A → [0,1] and stochastic reward function
r : S × A → ℝ, policy evaluation is concerned with solving the Bellman equation

V^π = γ P^π V^π + R^π   (1)

for a fixed policy π : S → A. V_i^π denotes the value of state s_i, P_{ij}^π = p(s_i, s_j, π(s_i)),
R_i^π = E{r(s_i, π(s_i))} and γ is the discount factor. As the policy π is fixed we will
omit it in the following to make notation easier.
If the state space S gets too large the exact solution of equation (1) becomes very
costly with respect to both memory and computation time. Therefore, often linear
feature-based function approximation is applied. The value function V is represented as a linear combination of basis functions {φ₁, ..., φ_F}, which can be written
as V = Φw, where w ∈ ℝ^F is the parameter vector describing the linear combination
and Φ = (φ₁ | ... | φ_F) ∈ ℝ^{N×F} is the matrix with the basis functions as columns.
The rows of Φ are the feature vectors φ(s_i) ∈ ℝ^F for the states s_i.
A popular algorithm for updating the parameter vector w after a single transition
x_i → z_i with reward r_i is the TD(0) algorithm [11]

w^{n+1} = w^n + α φ(x_i)[r_i + γ φ(z_i)ᵀ w^n − φ(x_i)ᵀ w^n] = (I_F + α A_i) w^n + α b_i,   (2)

where α is the learning rate, A_i = φ(x_i)[γ φ(z_i) − φ(x_i)]ᵀ, b_i = φ(x_i) r_i and I_F is
the identity matrix in ℝ^F. In the following we investigate the synchronous update
for a fixed set of m transitions T = {(x_i, z_i, r_i) | i = 1, ..., m}. The start states
x_i are sampled with respect to the probability distribution p, the next states z_i
are sampled according to P(x_i, ·) and the rewards r_i are sampled from r(x_i). The
synchronous update for the transition set T can then be written in matrix notation
as

w^{n+1} = (I_F + α A_TD) w^n + α b_TD   (3)

with A_TD = A₁ + ··· + A_m and b_TD = b₁ + ··· + b_m. Let X ∈ ℝ^{m×N} with X_{i,j} = 1
if x_i = s_j and 0 otherwise. Then, Φ_X = XΦ ∈ ℝ^{m×F} is the matrix with feature
vector φ(x_i) as its i-th row. Define Z and Φ_Z accordingly for the states z_i. With
the vector of obtained rewards r = (r₁, ..., r_m)ᵀ we have A_TD = Φ_Xᵀ(γΦ_Z − Φ_X)
and b_TD = Φ_Xᵀ r.
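The repeated synchronous update can be iterated directly. A minimal sketch on a toy two-state chain with tabular features (our own example, not from the paper); with γ = 0.5 the matrix A_TD here has eigenvalues −0.5 and −1.5, so a constant learning rate such as 0.5 converges:

```python
def synchronous_td(A, b, alpha, n_iters, w=None):
    """Iterate w <- (I + alpha*A) w + alpha*b for a small dense
    matrix A given as a list of lists."""
    n = len(b)
    if w is None:
        w = [0.0] * n
    for _ in range(n_iters):
        Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = [w[i] + alpha * (Aw[i] + b[i]) for i in range(n)]
    return w

# Toy data: transitions s1 -> s2 (reward 1), s2 -> s1 (reward 0),
# gamma = 0.5, tabular features, so A_TD = gamma*P - I, b_TD = rewards.
A_TD = [[-1.0, 0.5], [0.5, -1.0]]
b_TD = [1.0, 0.0]
w = synchronous_td(A_TD, b_TD, alpha=0.5, n_iters=200)
# w approaches the true values V = (4/3, 2/3)
```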
The synchronous TD(0) algorithm is an instance of a much broader class of RL
algorithms. The residual gradient algorithm [1], for example, minimises the Bellman error by gradient descent. In the following, let Ξ = γΦ_Z − Φ_X. The matrix
D̂ = XᵀX ∈ ℝ^{N×N} is diagonal, and (1/m)D̂ denotes the relative frequency of state s_i
as start state in the transition data T. Let D̄ be the diagonal matrix with the
inverse entries of D̂. For D̂_{i,i} = 0 set D̄_{i,i} = 0. The matrix of the relative frequencies for the state transitions from s_i to s_j is given by P̂ = D̄XᵀZ and the
vector of the average reward in the different states s_i is given by R̂ = D̄Xᵀr.
It can be shown that the weighted Bellman error for the synchronous update,

Ê_B(w) = ½ [(γP̂ − I_N)Φw + R̂]ᵀ (1/m)D̂ [(γP̂ − I_N)Φw + R̂],

with the estimated entities P̂, R̂ and D̂ instead of the unknown expected values P, R and D, is equivalent
to the expression Ê_B(w) = (1/2m) [Ξw + r]ᵀ X D̄ Xᵀ [Ξw + r]. Thus, for the residual
gradient algorithm the update rule (3) becomes w^{n+1} = (I_F + α A_RG) w^n + α b_RG
with A_RG = −ΞᵀXD̄XᵀΞ and b_RG = −ΞᵀXD̄Xᵀr. The synchronous TD(0)
and the residual gradient algorithm can be analysed in a unified framework with
A = ΨᵀΞ and b = Ψᵀr. By setting Ψ_TD = Φ_X and Ψ_RG = −XD̄XᵀΞ, for example,
one obtains the TD(0) algorithm and the residual gradient algorithm, respectively.
Moreover, varying Ψ yields a whole class of algorithms. We denote such algorithms
as consistent RL algorithms if two conditions are fulfilled. First, for a tabular representation the algorithm converges to an optimal solution w* with Bellman error
zero. And second, if the algorithm converges with a linear function approximator
it achieves the same Bellman error independently of the initial value w⁰. This class
of RL algorithms includes the Kaczmarz rule [9], which is similar to the NTD(0)
rule [4], or the uniform update rule described in [7]. In general, these algorithms
yield different solutions when function approximation is used. For the TD(0) and
the residual gradient algorithm this is shown in [10]. However, a general assessment
of the solution quality of the different algorithms is still missing.
3
Convergence Results
The convergence properties of RL algorithms for synchronous updates in the general
framework presented in the last section are described in the following main theorem
of our paper. It generalises the case of repeated single-transition updates [7] to
repeated multi-transition updates. For the following let [M] be the span of the
columns of a matrix M and [M]^⊥ the orthogonal complement of [M].
Theorem 1 Let w^{n+1} = (I_F + αA)w^n + αb be the synchronous update rule for the
transition data T. Let A ∈ ℝ^{F×F} be representable as A = CᵀD with some C, D ∈
ℝ^{k×F} and b ∈ ℝ^F be representable as b = Cᵀv with some v ∈ ℝ^k. Let K = DCᵀ ∈ ℝ^{k×k}
and p(x) = (−1)^k (x − λ₁)^{β₁} ··· (x − λ_l)^{β_l} be the characteristic polynomial of K over
ℂ with |λ₁| > ··· > |λ_l|. Also, let E_{λ_i}^K be the eigenspace corresponding to eigenvalue
λ_i and H = max_i { |λ_i|² / |Re(λ_i)| }.

If the following assumptions hold

(a) ∀i: (Re(λ_i) < 0) ∨ λ_i = 0
(b) dim(E_{λ_i}^K) = β_i for λ_i = 0
(c) [Cᵀ] ∩ [Dᵀ]^⊥ = {0}

then the limit w* = lim_{n→∞} w^n exists for all learning rates 0 < α < α_L, where the
limit learning rate α_L satisfies α_L = 2/H. The limit w* may depend on the initial
value w⁰. Note, if the λ_i leading to the maximum of H is real then H = |λ_i|.
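The limit learning rate of Theorem 1 is easy to evaluate once the eigenvalues of K are known. A sketch (the eigenvalue list is assumed given; for a complex λ with negative real part, the condition |1 + αλ| < 1 holds exactly for α < 2|Re λ|/|λ|²):

```python
def limit_learning_rate(eigenvalues):
    """alpha_L = 2/H with H = max_i |lambda_i|^2 / |Re(lambda_i)|,
    taken over the nonzero eigenvalues; condition (a) of Theorem 1
    requires Re(lambda_i) < 0 for these."""
    H = 0.0
    for lam in eigenvalues:
        lam = complex(lam)
        if lam == 0:
            continue
        if lam.real >= 0:
            raise ValueError("nonzero eigenvalue with Re >= 0: no convergence")
        H = max(H, abs(lam) ** 2 / abs(lam.real))
    return 2.0 / H
```

For a single real eigenvalue −4 this gives α_L = 0.5; for −1 + 2i it gives α_L = 0.4, and indeed |1 + α(−1 + 2i)| dips below one exactly for α below 0.4.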
A proof of this theorem can be found in the appendix. General convergence conditions of iterations have been examined in numerical mathematics. A standard
result states that if the absolute value of the largest eigenvalue of the iteration
matrix I_F + αA, i.e. the spectral radius, is smaller than one, then the iteration
converges to the unique fixed point w* = −A⁻¹b [5] (Theorem 2.1.1). In our case,
however, the matrix A may not be invertible. This happens, for example, if the
features φ_i in the feature matrix Φ are linearly dependent. If A is not invertible it
has eigenvalue zero and, thus, I_F + αA has eigenvalue one. Conditions (b) and (c)
in the above theorem are needed in order to compensate for the singularity of A
and to assure convergence. If the iteration converges for singular A the fixed point
depends on the initial value w⁰ and is no longer unique. Therefore, for consistent
RL algorithms we require that the Bellman error of all fixed points be the same.
Thus, the quality of the obtained solution to the policy evaluation problem is independent of the initial value. However, the suitability of different w* for a policy
improvement step can vary but this question is not addressed here.
An important implication of Theorem 1 concerns the choice of the learning rate.
If sampling were involved in the update rule the learning rate would have to be
decreased in the standard manner (Σ_t α_t = ∞, Σ_t α_t² < ∞) in order to fulfil the
condition for stochastic approximation algorithms. However, for a fixed set of updates and certain synchronous RL algorithms with linear feature-based function
approximation Theorem 1 predicts the existence of a constant learning rate. In
general the computation of this learning rate would require knowledge of the eigenvalues of K which may not be directly available. As the following proposition shows,
for certain combinations of RL algorithms and linear function approximation a universal constant learning rate exists such that the iteration in Theorem 1 converges.
The proof can be found in the appendix.
Proposition 1 For an appropriate constant choice of the learning rate a the residual gradient algorithm will converge independently of the linear function approximation scheme when applied to the problem of repeated synchronous multi-transition
updates. The residual gradient algorithm is a consistent RL algorithm. If the residual gradient algorithm is combined with grid-based linear interpolation over an arbitrary triangulation of the state space and the transition set contains m transitions
then the iteration converges for all α < 2/(m(1 + γ)²).
A choice of the learning rate α < 2/H according to Theorem 1 yields a convergent
iteration. However, this might not be the best choice with respect to asymptotic
convergence rate. The asymptotic convergence rate is better for matrices with lower
spectral radius [5], which yields a criterion for the choice of an optimal learning rate
α*. If K has only real eigenvalues then we can deduce a particularly simple formula
for α*. Assume that all nonzero eigenvalues of K satisfy λ_i ∈ [λ_max, λ_min], where
λ_min is the largest eigenvalue smaller than zero and λ_max is the eigenvalue with
largest absolute value. It can be shown that the asymptotic convergence rate is
determined by the eigenvalues of I_k + αK that are unequal to one. The eigenvalues
λ_i of K are related to the eigenvalues λ̃_i of I_k + αK by λ̃_i = 1 + αλ_i. Hence, the
interval [λ_max, λ_min] is mapped to [λ̃_max, λ̃_min] = [1 + αλ_max, 1 + αλ_min]. In order to
obtain a low spectral radius of I_k + αK this interval should lie symmetrically around
zero, which is equivalent to λ̃_min = −λ̃_max. This yields α* = 2/(|λ_min| + |λ_max|) < 2/H with
H = |λ_max|. Thus, α* leads to convergence according to Theorem 1. Note also that
a larger learning rate does not necessarily lead to a faster asymptotic convergence
of the iteration.
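For a real negative spectrum both rates reduce to one-line formulas. A sketch (helper names are ours; λ_min and λ_max as defined above):

```python
def learning_rates(eigenvalues):
    """Return (alpha_L, alpha_star) for a real negative spectrum:
    alpha_L = 2/|lambda_max|, alpha_star = 2/(|lambda_min| + |lambda_max|),
    where lambda_max has the largest and lambda_min the smallest absolute
    value among the nonzero eigenvalues."""
    mags = sorted(abs(lam) for lam in eigenvalues if lam != 0)
    lam_min, lam_max = mags[0], mags[-1]
    return 2.0 / lam_max, 2.0 / (lam_min + lam_max)

def spectral_radius(eigenvalues, alpha):
    """Spectral radius of I + alpha*K restricted to the nonzero part."""
    return max(abs(1 + alpha * lam) for lam in eigenvalues if lam != 0)
```

For eigenvalues {−0.5, −1.5}, for instance, α_L = 4/3 and α* = 1; the spectral radius at α* is 0.5, and any smaller or larger α increases it.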
4
Counterexample of Baird - Revisited
In this section we analyse the counterexample given by Baird in [1], and show how
Theorem 1 and Proposition 1 can be applied to obtain explicit bounds for the
learning rate α and the discount factor γ for which the residual gradient and TD(0)
algorithms converge. The matrices Φ, X and Z are given by

Φ = ( 1 2 0 0 0 0 0 0
      1 0 2 0 0 0 0 0
      1 0 0 2 0 0 0 0
      1 0 0 0 2 0 0 0
      1 0 0 0 0 2 0 0
      1 0 0 0 0 0 2 0
      2 0 0 0 0 0 0 1 ),

X = I_7, the 7×7 identity matrix (every state occurs exactly once as a start state), and
Z is the 7×7 matrix all of whose rows equal (0 0 0 0 0 0 1), i.e. every transition ends in state s_7,
which corresponds to the synchronous update of every state transition. In
the residual gradient case we have K_RG = −(γZ − X)Φ((γZ − X)Φ)ᵀ,
which has only negative eigenvalues

σ_RG = { −4, ½( −15 + 34γ − 35γ² ± √(1225γ⁴ − 2380γ³ + 2102γ² − 812γ + 121) ) }.

Using Theorem 1 and Proposition 1
we can find a constant learning rate α such that the iteration converges for every γ ∈ [0,1). For example, for γ = 0.9 the eigenvalues of K_RG are σ_RG =
{−0.0204, −4, −12.7296} and Theorem 1 yields α < 0.1571, which is also almost
equal to the optimal learning rate α* ≈ 0.1569.
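These closed-form eigenvalues can be checked numerically; a sketch evaluating the quadratic factor of the characteristic polynomial (the remaining eigenvalue −4 has multiplicity five in this 7×7 example):

```python
import math

def rg_eigenvalues(gamma):
    """The two non-degenerate eigenvalues of K_RG for Baird's example."""
    disc = (1225 * gamma**4 - 2380 * gamma**3
            + 2102 * gamma**2 - 812 * gamma + 121)
    root = math.sqrt(disc)
    base = -15 + 34 * gamma - 35 * gamma**2
    return 0.5 * (base - root), 0.5 * (base + root)

lam_big, lam_small = rg_eigenvalues(0.9)            # about -12.7296 and -0.0204
alpha_L = 2.0 / abs(lam_big)                        # about 0.1571
alpha_star = 2.0 / (abs(lam_big) + abs(lam_small))  # about 0.1569
```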
In the TD(0) case we have to analyse the matrix K_TD = −(γZ − X)Φ(XΦ)ᵀ, which
has the eigenvalues

σ_TD = { −4, ½( −15 + 17γ ± √(289γ² − 406γ + 121) ) }.

There are
eigenvalues of K_TD with positive real part for γ ≳ 0.89. In such cases we have
divergence for every α > 0 as described in [1] for γ = 0.9. However, contradicting
the argument in [1], the TD(0) algorithm converges for all γ ≤ 0.88 if the learning
rate is chosen appropriately. For example, for γ = 0.4 all eigenvalues are negative
(σ_TD = {−3.0, −4, −5.2}), so conditions (a) and (b) of Theorem 1 are trivially
fulfilled. Condition (c) can also be shown by simple computation, and therefore
using Theorem 1 we obtain convergence for α < 0.384 and optimal asymptotic
convergence for α* ≈ 0.244, which is much smaller.
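The same numerical check works for the TD(0) branch; for mid-range γ the discriminant turns negative and the eigenvalue pair becomes complex, which is where the positive real parts responsible for the divergence at γ = 0.9 appear:

```python
import cmath

def td_eigenvalues(gamma):
    """Non-degenerate eigenvalues of K_TD for Baird's example
    (the third eigenvalue is -4); complex for mid-range gamma."""
    disc = 289 * gamma**2 - 406 * gamma + 121
    root = cmath.sqrt(disc)
    base = -15 + 17 * gamma
    return 0.5 * (base - root), 0.5 * (base + root)

l1, l2 = td_eigenvalues(0.4)              # -5.2 and -3.0, both real
alpha_L = 2.0 / abs(l1)                   # about 0.3846
alpha_star = 2.0 / (abs(l1) + abs(l2))    # about 0.2439
# at gamma = 0.9 the complex pair has positive real part: divergence
assert max(td_eigenvalues(0.9), key=lambda z: z.real).real > 0
```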
5
Conclusions
For the problem of repeated synchronous updates based on a fixed set of transitions
we have proved sufficient conditions of convergence for arbitrary combinations of
reinforcement learning algorithms and linear function approximation. Our main
theorem yields a rule for determining a problem dependent learning rate such that
the algorithm converges. For a combination of the residual gradient algorithm with
grid-based linear interpolation we have deduced a constant learning rate such that
the algorithm converges independently of the concrete transition data. Moreover,
we have derived a general formula for an optimal learning rate with respect to
asymptotic convergence. Finally we have applied our main theorem to fully analyse
the example Baird gives for the divergence of TD(0) [1].
Appendix
Lemma 1 Let D be a real m × F matrix and Cᵀ a real F × m matrix, where
m > F. Then K = DCᵀ has the same eigenvalues as A = CᵀD and additionally
the eigenvalue zero with multiplicity (m − F). Let H_λ^K be the generalised eigenspace
of K corresponding to the eigenvalue λ and H_λ^A the generalised eigenspace of A
corresponding to the eigenvalue λ. Then, CᵀH_λ^K ⊆ H_λ^A and DH_λ^A ⊆ H_λ^K. For
λ ≠ 0 it even holds that CᵀH_λ^K = H_λ^A and DH_λ^A = H_λ^K.
Proof: The generalised eigenspace H_λ^K has index s_λ^K if s_λ^K is the smallest number
for which ker(K − λI_m)^{s_λ^K} = ker(K − λI_m)^{s_λ^K + 1} holds, where I_m denotes the identity
in ℝ^{m×m}. Let x ∈ H_λ^K, i.e. (K − λI_m)^{s_λ^K} x = 0. With CᵀK^i = A^i Cᵀ we have

Cᵀ(K − λI_m)^{s_λ^K} x = Cᵀ( Σ_{i=0}^{s_λ^K} (s_λ^K choose i) K^i (−λ)^{s_λ^K − i} ) x = (A − λI_F)^{s_λ^K} Cᵀx.   (4)

Thus, Cᵀx ∈ H_λ^A. And with the same argument we obtain Dx ∈ H_λ^K from x ∈
H_λ^A. Therefore, CᵀH_λ^K ⊆ H_λ^A and DH_λ^A ⊆ H_λ^K. Let λ ≠ 0 and B_λ^K a basis in
H_λ^K. As the Jordan block of K corresponding to H_λ^K is invertible, the vectors
CᵀB_λ^K are linearly independent and therefore form a basis of the span [CᵀB_λ^K].
With the above consideration we have [CᵀB_λ^K] ⊆ H_λ^A. If this is a proper subset,
CᵀB_λ^K can be completed to form a basis B_λ^A of H_λ^A with |B_λ^K| < |B_λ^A|. Then we
have that DB_λ^A is linearly independent and [DB_λ^A] ⊆ H_λ^K. Moreover, we have
dim(H_λ^K) = |B_λ^K| < |B_λ^A| = dim([DB_λ^A]) ≤ dim(H_λ^K), which is a contradiction.
Therefore, CᵀH_λ^K = [CᵀB_λ^K] = H_λ^A. Similarly, we obtain DH_λ^A = H_λ^K. Thus, the
multiplicities of the eigenvalues λ ≠ 0 of A and K are the same. The multiplicity
of the eigenvalue zero of matrix K is by (m − F) larger than that of matrix A. □
Proof of Theorem 1: Due to assumption (a) and Lemma 1 every eigenvalue of
A is either zero or has a real part less than zero. If the real part of every eigenvalue
of A is less than zero, A is invertible. For invertible matrices Theorem 2.1.1 from
[5] states that the iteration converges if and only if the spectral radius ρ(I_F + αA),
i.e. the largest absolute eigenvalue, is less than 1. For every eigenvalue λ_i of A obviously
1 + αλ_i is an eigenvalue of I_F + αA. With H = max_i { |λ_i|² / |Re(λ_i)| } we obtain for α > 0

ρ(I_F + αA) < 1  ⟺  ∀i: |1 + αλ_i| < 1  ⟺  α < 2/H.   (5)

This completes the proof if all eigenvalues of A have a negative real part.
In the following let A have the eigenvalue Al = O. The vector space IRF can be
represented as the direct sum of the generalised eigenspaces IRF = H~ EB H12 EB
? .. EB Htl ? In the following we write ilt = Ht2 EB ... EB Htl because this is a
complementary space of Ht. As the generalised eigenspaces of A are invariant
against A, i.e. \::Ix E Ht. : Ax E Ht., the iteration wn+1 = (IF + aA)w n + ab can
be decomposed in two parts, one in the generalised eigenspace Ht and the other in
the com.Qlem~ntary space ilt. Let wn = wn + wn and b = b+ b, where wn , bE Ht
and wn , b E Ht. Then we have
wn+1 = wn + a(Aw n + b) = ~n + a(Awn + b~ +~n + a(Awn + b~
(6)
Thus, the convergence analysis can be carried out separately for the two iterations.
The matrix A in iteration wn+1 = wn + a(Awn + b) is not invertible. However, the
iteration takes place in the subspace ilt. In this subspace the mapping associated
with A is invertible. Therefore, A can be replaced by an invertible matrix A that
does not ~lter the iteration in ilt. The matrix A can be constructed such that
e(IF + aA) = e(IF + aA). Therefore, according to the considerations above the
iteration converges for 0 < a <
it.
In the following we show that the iteration in H_{λ_1} is the identity and therefore
trivially converges. According to assumption (b), H_K^0 = E_K^0. All v ∈ R^m can be
represented as v = v̄ + ṽ with v̄ ∈ E_K^0 and ṽ ∈ H̃_K^0, the direct sum of the
remaining generalised eigenspaces of K. According to Lemma 1, C^T H̃_K^0 = H̃_{λ_1}
and C^T H_K^0 ⊆ H_{λ_1} hold. Therefore, for b̄ + b̃ = b = C^T v we
have b̄ = C^T v̄ and b̃ = C^T ṽ. Let E_K^0 ≠ {0}. Then, for all v̄ ∈ E_K^0

0 = K v̄ = D C^T v̄  ⟹  C^T v̄ ∈ [C^T] ∩ [D^T]^⊥  ⟹  C^T v̄ = 0.

For E_K^0 = {0} we also obtain C^T v̄ = 0 because v̄ = 0. Therefore, we have
C^T E_K^0 = {0} and, as a consequence, b̄ = C^T v̄ = 0. The last thing that remains to show
is that Aw̄ = 0 for all w̄ ∈ H_{λ_1}. According to Lemma 1 we know that Dw̄ ∈ H_K^0.
Assumption (b) says that H_K^0 = E_K^0, and from the above considerations we know
that C^T E_K^0 = {0}. Therefore, Aw̄ = C^T(Dw̄) = 0. Thus, the iteration in H_{λ_1} is the
identity. As both parts of the iteration converge, the overall iteration also converges,
which completes that part of the proof.
The limit w̃* of w̃^{n+1} = w̃^n + α(Ãw̃^n + b̃) is unique and we have w̃* = -Ã^{-1}b̃. The
limit of w̄^{n+1} = w̄^n + α(Aw̄^n + b̄) is not unique, but depends on the initial value
w̄^0. It holds that w̄* = w̄^0. Therefore, the limit w* = w̄* + w̃* depends on the
initial value w^0.
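The behaviour of the synchronous iteration can be illustrated numerically; the following is a hypothetical sketch (the matrix A, vector b and step size α are our own choices, not from the paper) showing convergence to the unique fixed point -A^{-1}b when every eigenvalue of A has a negative real part:

```python
import numpy as np

# Hypothetical illustration (our own A, b and alpha, not from the paper):
# the synchronous iteration w_{n+1} = w_n + alpha*(A w_n + b) converges to
# the unique fixed point w* = -A^{-1} b when every eigenvalue of A has a
# negative real part and alpha is small enough.
A = np.array([[-2.0, 0.5],
              [0.0, -1.0]])
b = np.array([1.0, 2.0])
alpha = 0.1

w = np.zeros(2)
for _ in range(2000):
    w = w + alpha * (A @ w + b)

w_star = -np.linalg.solve(A, b)   # fixed point satisfies A w* + b = 0
assert np.allclose(w, w_star, atol=1e-8)
```

With a zero eigenvalue, the fixed point is no longer unique and the component of the initial value w^0 in the corresponding eigenspace persists, which is exactly the situation handled in the proof above.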
Proof of Proposition 1: For the residual gradient algorithm we have A_RG =
-Φ^T X D X^T Φ and b_RG = -Φ^T X D X^T r. In order to apply Theorem 1 this is
decomposed as A_RG = C^T D̂ and b_RG = C^T v with C = -D̂ = √D X^T Φ and
v = -√D X^T r, where D̂ denotes the matrix from Theorem 1 (to distinguish it from
the diagonal weight matrix D). As the diagonal entries of D are positive we can write √D for the
diagonal matrix whose entries are the square roots of the entries of D. Thus [C^T] = [D̂^T], which
yields condition (c) of Theorem 1. Moreover, the matrix K = D̂C^T = -CC^T
is symmetric and therefore diagonalisable. Hence, condition (b) is fulfilled and
all eigenvalues are real. Let now λ ≠ 0 be an eigenvalue of K and let x be a
corresponding eigenvector. Then 0 > -(C^T x)^T(C^T x) = x^T K x = λ x^T x, which
yields λ < 0. Thus, all requirements are fulfilled and for an appropriate choice of
α the residual gradient algorithm converges independently of the concrete form of
the function approximation scheme.
The consistency of the residual gradient algorithm can be shown formally but due to
space limitations we only give the following informal proof. The algorithm minimises
the Bellman error, which is a quadratic objective function. Hence, there are no local
optima and if the global optimum is not unique, the values of all global optima are
identical. Due to its gradient descent property the residual gradient algorithm
converges to such a global optimum independently of the initial value. In case of a
tabular representation a global minimum has Bellman error zero and corresponds
to an optimal solution. Thus, the residual gradient algorithm is consistent.
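The residual gradient update analysed above can be sketched as follows; this is an illustrative implementation under our own assumptions (a single deterministic transition, our own variable names), not the paper's exact algorithm:

```python
import numpy as np

# Hedged sketch of a residual-gradient step for linear value-function
# approximation V(s) = phi(s)^T w: it performs true gradient descent on the
# squared Bellman residual delta = r + gamma*V(s') - V(s). Variable names
# and the toy transition are our own, not the paper's.
def residual_gradient_step(w, phi_s, phi_s2, r, gamma, alpha):
    delta = r + gamma * (phi_s2 @ w) - phi_s @ w   # Bellman residual
    grad = delta * (gamma * phi_s2 - phi_s)        # gradient of 0.5*delta**2
    return w - alpha * grad

w = np.zeros(3)
phi_s = np.array([1.0, 0.0, 0.0])    # features of s
phi_s2 = np.array([0.0, 1.0, 0.0])   # features of s'
for _ in range(500):
    w = residual_gradient_step(w, phi_s, phi_s2, r=1.0, gamma=0.9, alpha=0.1)

# On this single deterministic transition the residual is driven to zero,
# i.e. the quadratic Bellman-error objective reaches a global optimum.
delta = 1.0 + 0.9 * (phi_s2 @ w) - phi_s @ w
assert abs(delta) < 1e-6
```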
A detailed description of how grid-based linear interpolation works in combination
with RL can be found in [7]. Important for us is that in a d-dimensional grid each
feature vector φ(x) satisfies 0 ≤ φ_i(x) ≤ 1 and Σ_i φ_i(x) = 1. With ⟨·,·⟩ denoting
the standard scalar product and ‖·‖_2 denoting the corresponding Euclidean norm,
we have |K_{i,j}| = |⟨(C^T)_i, (C^T)_j⟩| ≤ max_l ‖(C^T)_l‖_2^2. According to
the definition, C_{l,j} = (√D)_{l,l} Σ_{k=1}^m X_{k,l}(γ φ_j(z_k) - φ_j(x_k)) holds. Moreover, from
N = X^T X it follows that N_{l,l} = Σ_{k=1}^m X_{k,l}^2 = Σ_{k=1}^m X_{k,l} because X_{k,l} is either zero
or one. And besides that we have N_{l,l} D_{l,l} = 1. Altogether we obtain

|K_{i,j}| ≤ (D_{l,l} Σ_{k=1}^m X_{k,l} γ Σ_i φ_i(z_k))^2 + (D_{l,l} Σ_{k=1}^m X_{k,l} Σ_i φ_i(x_k))^2 ≤ γ^2 + 1.
It is well known that the spectral radius ρ of the matrix K satisfies ρ(K) ≤ ‖K‖
for every norm ‖·‖. Then, for the maximum norm of K we obtain ‖K‖_∞ =
max_{1≤i≤m} Σ_{j=1}^m |K_{i,j}| ≤ m(1 + γ^2). With H = m(1 + γ^2) this yields ρ(K) ≤
‖K‖_∞ ≤ H. Thus we have a bound for the absolute value of the largest eigenvalue
of K. According to Theorem 1 the iteration converges for α < 2/H. □
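The feature-vector property used in this bound (each φ(x) is non-negative and sums to one) can be illustrated with a minimal 1-D grid interpolation sketch; the function name and grid below are our own assumptions:

```python
import numpy as np

# Hedged sketch of grid-based linear interpolation features on a 1-D grid
# (function name and grid are our own): each state x yields phi(x) with
# 0 <= phi_i(x) <= 1 and sum_i phi_i(x) = 1, the property the |K_{i,j}|
# bound above relies on.
def grid_features(x, grid):
    phi = np.zeros(len(grid))
    j = np.searchsorted(grid, x) - 1           # index of the left grid node
    j = int(np.clip(j, 0, len(grid) - 2))
    t = (x - grid[j]) / (grid[j + 1] - grid[j])
    phi[j], phi[j + 1] = 1.0 - t, t            # barycentric weights
    return phi

grid = np.linspace(0.0, 1.0, 5)                # nodes 0, 0.25, 0.5, 0.75, 1
phi = grid_features(0.3, grid)
assert np.all(phi >= 0.0) and np.all(phi <= 1.0)
assert np.isclose(phi.sum(), 1.0)
```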
References
[1] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning, 1995.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
[3] J. A. Boyan. Least-squares temporal difference learning. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 49-56, 1999.
[4] S. J. Bradtke and A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33-57, 1996.
[5] A. Greenbaum. Iterative Methods for Solving Linear Systems. SIAM, 1997.
[6] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 326-334, 2000.
[7] A. Merke and R. Schoknecht. A necessary condition of convergence for reinforcement learning with function approximation. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 411-418, Sydney, Australia, 2002.
[8] M. G. Lagoudakis and R. Parr. Model-free least-squares policy iteration. In Advances in Neural Information Processing Systems, volume 14, 2002.
[9] S. Pareigis. Adaptive choice of grid and time in reinforcement learning. In Advances in Neural Information Processing Systems, 1998.
[10] R. Schoknecht. Optimality of reinforcement learning algorithms with linear function approximation. In Advances in Neural Information Processing Systems, volume 15, 2003.
[11] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44, 1988.
[12] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997.
Kernel-based Extraction of Slow Features:
Complex Cells Learn Disparity and Translation Invariance from Natural Images
Alistair Bray and Dominique Martinez*
CORTEX Group, LORIA-INRIA, Nancy, France
[email protected], [email protected]
Abstract
In Slow Feature Analysis (SFA [1]), it has been demonstrated that
high-order invariant properties can be extracted by projecting inputs into a nonlinear space and computing the slowest changing
features in this space; this has been proposed as a simple general
model for learning nonlinear invariances in the visual system. However, this method is highly constrained by the curse of dimensionality which limits it to simple theoretical simulations. This paper
demonstrates that by using a different but closely-related objective
function for extracting slowly varying features ([2, 3]), and then exploiting the kernel trick, this curse can be avoided. Using this new
method we show that both the complex cell properties of translation invariance and disparity coding can be learnt simultaneously
from natural images when complex cells are driven by simple cells
also learnt from the image.
The notion of maximising an objective function based upon the temporal predictability of output has been progressively applied in modelling the development
of invariances in the visual system. Földiák used it indirectly via a Hebbian trace
rule for modelling the development of translation invariance in complex cells [4]
(closely related to many other models [5,6,7]); this rule has been used to maximise
invariance as one component of a hierarchical system for object and face recognition
[8]. On the other hand, similar functions have been maximised directly in networks
for extracting linear [2] and nonlinear [9, 1] visual invariances. Direct maximisation
of such functions have recently been used to model complex cells [10] and as an
alternative to maximising sparseness/independence in modelling simple cells [11].
Slow Feature Analysis [1] combines many of the best properties of these methods to
provide a good general nonlinear model. That is, it uses an objective function that
minimises the first-order temporal derivative of the outputs; it provides a closed-form
solution which maximises this function by projecting inputs into a nonlinear
space; it exploits sphering (or PCA-whitening) of the data to ensure that all outputs
have unit variance and are uncorrelated. However, the method suffers from the curse
of dimensionality in that the nonlinear feature space soon becomes very large as the
input dimension grows, and yet this feature space must be represented explicitly in
order for the essential sphering to occur.

*http://www.loria.fr/equipes/cortex/
The alternative that we propose here is to use the objective function of Stone [2, 9],
that maximises output variance over a long period whilst minimising variance over a
shorter period; in the linear case, this can be implemented by a biologically plausible
mixture of Hebbian and anti-Hebbian learning on the same synapses [2]. In recent
work, Stone has proposed a closed-form solution for maximising this function in
the linear domain of blind source separation that does not involve data-sphering.
This paper describes how this method can be kernelised. The use of the "kernel
trick" allows projection of inputs into a nonlinear kernel induced feature space
of very high (possibly infinite) dimension which is never explicitly represented or
accessed. This leads to an efficient method that maps to an architecture that could
be biologically implemented either by Sigma-Pi neurons, or fixed REF networks (as
described for SFA [1]). We demonstrate that using this method to extract features
that vary slowly in natural images leads to the development of both the complex-cell
properties of translation invariance and disparity coding simultaneously.
1 Finding Slow Features with kernels
Given l time-series vectors x_i, i ≤ l, where each n-dimensional vector x_i is a linear
mixture of n unknown but temporally predictable parameters at time i, the problem
in [3] is to find an n-dimensional weight vector w so that the output y_i = w^T x_i at
each i is a scaled version of a particular parameter. Many quasi-invariant parameters
underlying perceptual data exhibit these properties of short-term predictability and
long-term variability. Accordingly, an objective function F can be defined as the
ratio between the long-term variance V and the short-term variance S of the output
sequence i.e.
F = V/S = (Σ_i ȳ_i^2) / (Σ_i ỹ_i^2)    (1)

where ȳ_i and ỹ_i represent the output at i centered using the long- and short-term means, respectively.
The aim is to find the parameters that maximize F, which can be rewritten as:
F = (w^T C̄ w) / (w^T C̃ w),  where  C̄ = (1/l) Σ_i x̄_i x̄_i^T  and  C̃ = (1/l) Σ_i x̃_i x̃_i^T,

where C̄ and C̃ are n×n covariance matrices estimated from the l inputs. F is a
version of the Rayleigh quotient and the problem to be solved is, in analogy to
PCA, the right-handed generalized symmetric eigenproblem:
C̄ w = λ C̃ w    (2)
where λ is the largest eigenvalue and w the corresponding eigenvector. In this case,
the component extracted, y = w^T x, corresponds to the most predictable component,
with F = λ. Most importantly, more than one component can be extracted by
considering successive eigenvalues and eigenvectors, which are orthogonal in the
metrics C̄ and C̃, i.e. w_i^T C̄ w_j = 0 and w_i^T C̃ w_j = 0 for i ≠ j.
To make this algorithm nonlinear we can first project the data x into some high-dimensional
feature space via a nonlinear mapping φ, and then find the weight
vector w that maximizes F in this space. In this case, to optimise Eq. (2) the
covariance matrices must be estimated in the feature space as

C̄_φ = (1/l) Σ_i φ̄(x_i) φ̄(x_i)^T  and  C̃_φ = (1/l) Σ_i φ̃(x_i) φ̃(x_i)^T,

where φ̄(x_i) and φ̃(x_i) represent the data centered in the feature space. The problem
with this straight-forward approach is that the dimensionality of the feature space
quickly becomes huge as the input dimension increases [1]. To prevent this we use
the kernel trick: to avoid working with the mapped data directly, we assume that
the solution w can be written as an expansion in terms of mapped training data:
w = Σ_{i=1}^l α_i φ(x_i). We can now rewrite the numerator (likewise the denominator) in
F as

w^T C̄_φ w = (1/l) α^T K̄ K̄^T α,

where α = (α_1, ..., α_l)^T and K is an (l×l) matrix with entries defined as K_{ij} =
φ(x_i)^T φ(x_j). F can now be written as:
F = (α^T K̄ K̄^T α) / (α^T K̃ K̃^T α)    (3)
To avoid explicitly computing dot products in the feature space, we introduce kernel
functions defined as k(x, y) = φ(x)^T φ(y), which means we just have to evaluate
kernels in the input space. Any kernel involved in Support Vector Machines can be
used, e.g. linear, polynomial, RBF or sigmoid. By now defining the centred kernel
matrices K̄ and K̃ with entries

K̄_{ij} = φ̄(x_i)^T φ̄(x_j)  and  K̃_{ij} = φ̃(x_i)^T φ̃(x_j),    (4)

computed from k(·,·) by centring with the long- and short-term means, we can
arrive at the corresponding eigenproblem:

K̄ K̄^T α = λ K̃ K̃^T α    (5)
where λ is again the corresponding largest eigenvalue, equal to F. As for the linear
case, more than one source can be extracted by considering successive eigenvalues
and eigenvectors. In order to recover a temporal component, we need only
compute the nonlinear projection y = w^T φ(x) of a new input x onto w, which is
equivalent to y = Σ_{i=1}^l α_i k(x_i, x).
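The final nonlinear readout can be sketched as follows; the basis vectors, coefficients and kernel degree below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hedged sketch of the nonlinear readout y = sum_i alpha_i k(x_i, x): once
# the expansion coefficients are known, new inputs are projected without
# ever representing the feature space explicitly. Basis vectors, alphas and
# the kernel degree are our own toy choices.
def poly_kernel(x, y, degree=2):
    return float(x @ y) ** degree          # k(x, y) = (x^T y)^d

def project(x_new, basis, alpha, degree=2):
    return sum(a * poly_kernel(xb, x_new, degree)
               for a, xb in zip(alpha, basis))

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
alpha = [0.5, -0.25]
x = np.array([2.0, 2.0])
y = project(x, basis, alpha)               # 0.5*4 - 0.25*4 = 1.0
assert np.isclose(y, 1.0)
```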
Finding a sparse solution
If the eigenproblem is solved on the entire training set then this algorithm also
suffers from the curse of dimensionality, since the (l×l) matrices easily become
computationally intractable. A sparse solution using a small subset p of the training
data in the expansion is therefore essential: this is called the basis set BS. The
output is now y = Σ_{i∈BS} α_i k(x_i, x), and the solution must lie in the subspace spanned
by BS. The kernel elements K_{ij} are computed between the p basis vectors x_i and the
l training data x_j. Thus, K, K̄ and K̃ are rectangular (p×l), but the covariance
matrices (K̄K̄^T) and (K̃K̃^T) used in the eigenproblem are only (p×p). This approach
can effectively solve very large problems, provided p << l. The question of course
is how to choose the basis vectors: it is both necessary and sufficient that they span
the space of the solution in the kernel induced feature space. In a recent version
of the algorithm [12] we use the sparse greedy method of [13] as a preprocessing
step. This efficiently finds a small basis set that minimises the least-squares error
between data points in feature space and those reconstructed in the feature space
defined by the basis set. In the simulations below we used a less efficient greedy
algorithm that performed equally well here, but requires a considerably larger basis
setl.
The complete online algorithm requires minimal memory, making it ideal for very
large data sets. The implementation estimates the long- and short-term kernel
means online using exponential time averages parameterised by half-lives λ_s, λ_l
(as in [9]). Likewise, the covariance matrices K̄K̄^T and K̃K̃^T are updated online at
each time step, e.g. K̄K̄^T is updated to K̄K̄^T + k̄k̄^T, where k̄ is the column vector
of kernel values centred using the long-term mean and computed for the current
time step; there is therefore no need to explicitly compute or store full kernel matrices.
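The online bookkeeping described above can be sketched as follows; the dimension p and the random "kernel columns" are placeholders of our own:

```python
import numpy as np

# Hedged sketch of the online updates: exponential averages with given
# half-lives track the long- and short-term kernel means, and each centred
# kernel column adds a rank-one term to the p x p covariances. The dimension
# p and the random kernel columns are placeholders, not the paper's data.
def decay(half_life):
    return 0.5 ** (1.0 / half_life)

p, lam_l, lam_s = 4, 200.0, 2.0
mean_l = np.zeros(p)                  # long-term kernel mean
mean_s = np.zeros(p)                  # short-term kernel mean
C_l = np.zeros((p, p))                # accumulates Kbar Kbar^T
C_s = np.zeros((p, p))                # accumulates Ktilde Ktilde^T

rng = np.random.default_rng(0)
for _ in range(1000):
    k = rng.standard_normal(p)        # kernel column for the current input
    mean_l = decay(lam_l) * mean_l + (1.0 - decay(lam_l)) * k
    mean_s = decay(lam_s) * mean_s + (1.0 - decay(lam_s)) * k
    kbar, ktil = k - mean_l, k - mean_s
    C_l += np.outer(kbar, kbar)       # rank-one update, no matrix storage
    C_s += np.outer(ktil, ktil)

assert np.allclose(C_l, C_l.T) and np.allclose(C_s, C_s.T)
```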
2 Simulation Results
The simulation was performed using a grey-level stereo pair of resolution 128x128,
shown in Figure 1[a]. A new 2D direction 0° ≤ θ < 360° was selected every 64
time steps, and the image was translated by one pixel per time step in this direction
(with toroidal wrap-around).
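The stimulus generation described above can be sketched with np.roll; the toy image and angle below are our own choices:

```python
import numpy as np

# Hedged sketch of the stimulus generation: translate an image by one pixel
# per time step in a fixed 2-D direction, with toroidal wrap-around. The
# toy image and angle are our own choices, not the paper's stereo pair.
def translate_sequence(image, theta_deg, n_steps):
    dx = np.cos(np.deg2rad(theta_deg))
    dy = np.sin(np.deg2rad(theta_deg))
    frames = []
    for t in range(n_steps):
        shift = (int(round(t * dy)), int(round(t * dx)))  # (rows, cols)
        frames.append(np.roll(image, shift, axis=(0, 1)))
    return frames

img = np.arange(16.0).reshape(4, 4)
frames = translate_sequence(img, theta_deg=0.0, n_steps=3)
assert np.array_equal(frames[2], np.roll(img, (0, 2), axis=(0, 1)))
```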
A set of 20 monocular simple cells was learnt using the algorithm described in [11]
that maximises a nonlinear measure of temporal correlation (TRS) between the
¹Vectors x are added to BS if, for y ∈ BS, |k(x, y)| ≤ τ, where the threshold τ is slowly
annealed from τ_0 = 1, and the size of BS is set at 400.
Figure 1: Training on natural images. [a] Stereo Pair. [b] Linear filters that maximise TRS [11]. [c] Output of filters for left image. [d] Output of nonlinear complex
cells in binocular simulation. [e] Output of complex cells in monocular simulation.
present and a previous output, based upon the transfer function g(y) = ln cosh(y).
We chose this algorithm since it is based on a nonlinear measure of temporal correlation and yet provides a linear sparse-distributed coding, very similar to that
of ICA for describing simple cells [14]. We did not use the objective function described above since in the linear case it yields filters similar to the local Fourier
series². The filters were optimised for this particular stereo pair; simulations using
a greater variation of more natural images resulted in more spatially localised filters
very similar to those in [14, 11]. We used only the 20 most predictable filters since
results did not improve through use of the full set. The simple cell receptive field
was 8x8, and during learning data was provided by both eyes at one position in the
image³. The oriented Gabor-like weight vectors for the 20 cells contributing most
to the TRS objective function are shown in Figure 1[b], and the result of processing
the left image with these linear filters is shown in Figure 1[c].
The complex cells received input from these 20 types of simple cells when processing
both the left and right eye images. Complex cells had a spatial receptive field of 4x4;
²An intuitive explanation for the necessity of nonlinearity in the objective function
is provided in [11]; in brief, the temporal correlation of the output of a Gabor-like linear
filter is low, whilst a similar correlation for a measure of the power in the filter is high.
³The dimension of the PCA-whitened space was reduced from 63 to 40, and Δt = 1, η = 10^-3, δ = 10^-1; 10^5 input vectors were used.
Figure 2: Testing on the simulated pair used in [9]. [a] Artificial stereo pair. [b]
Underlying disparity function. [c] Output of most predictable complex cell trained
on Figure 1[a].
each cell therefore received 320 simple cell inputs (2x4x4x20); these were normalised
to have unit variance and zero mean. The most predictable features were extracted
for this input vector over 10^5 time steps, using the kernel-based method described
above, using data at just one position in the image. The basis set was made up of 400
input vectors, and a polynomial kernel of degree 2 was used. The temporal half-lives
for estimating the short- and long-term means in S and V were λ_s = 2, λ_l = 200.
The algorithm therefore extracts 400 outputs; we display the outputs for the 8 most
predictable (determined by highest eigenvalues) in Figure 1[d]; further values were
hard to interpret. Below this, in Figure 1[e], we show the complex outputs obtained
if we substitute the right image with the left one in the stereo pair, so making the
simulation monocular.
Consider first the monocular simulation in [e] . It is visually apparent how the most
predictable units are strongly selective for regions of iso-orientation (looking quite
different to any simple cell response in [c]). In this particular image, it results
in different "T" -shaped parts of the Pentagon of considerable size being distinctly
isolated. Since in our network the complex cell receptive field size in the image is
only 50% greater than that for the simple cells, this implies translation invariance:
over the time (or space) that a simple cell of the correct orientation gives a strong
but transitory response, the complex cells provides a strong continuous response.
That is, its response is invariant to the phase that determines the profile of the
simple cell response.
Consider now the stereo simulation in [d]. This tendency is still present (e.g. the
3rd output), but it is confounded with another parameter that isolates the complete
shape of the Pentagon from the background. This is most striking in the output
provided by the first feature; that is, this parameter is the most predictable in
the image (providing an eigenvalue λ = V/S = 7.28, as opposed to λ ≈ 4 for the
"T"-shapes in [e]). This parameter is binocular disparity, generated by the
variation in depth of the Pentagon roof compared to the ground. The proof of this
lies in Figure 2. Here we have taken the artificial stereo pair used in [9], shown in
Figure 2[a] , that has been generated using the known eggshell disparity function
shown in Figure 2[b]. We presented this to the network trained wholly on the
Pentagon stereo pair; it can be seen that the most predictable component, shown
in Figure 2[c], replicates the disparity function of [b]⁴.
⁴The output is somewhat noisy, partly because the image has few linear features like
those in Figure 1[b]; if we train the simple and complex cells on this image we get a much
cleaner result.
3 Discussion
The simulation above confirms that the linear properties of simple cells, and two
of the nonlinear properties of complex cells (translation invariance and disparity
coding) can be extracted simutaneously from natural images through maximising
functions of temporal coherence in the input. Although these properties have been
dealt with in others' work discussed above, they have been considered either in
isolation or through theoretical simulation. It is only because the kernel-based
method we present allows us to work efficiently with large amounts of data in a
nonlinear feature space derived from high dimensional input that we have been able
to extract both complex cell properties together from realistic image data.
The method described above is computationally efficient. It is also biologically plausible in as much as [a] it uses a reasonable objective function based on temporal
coherence of output, and [b] the final computation required to extract these most
predictable outputs could be performed either by Sigma-Pi neurons, or fixed RBF
networks (as in SFA [1]) . However, we do not claim either that the precise formulation of the objective function is biologically exact, or that a biological system would
use the same means to arrive at the final architecture that computes the optimal
solution: the learning algorithm is certainly different. Our approach is therefore
focussed on the constraints provided by [a] and [b].
The method also exploits a distributed representation for maximising the objective
function that results from the generalised eigenvector solution. Is this plausible
given the emphasis that has been laid on sparse-coding early in the visual system
[15]? Sparse representations are often the result of constraining different outputs to
be uncorrelated, or stronger, independent. However, as one ascends the perceptual
pathway generating more reduced nonlinear representations, even the constraint of
uncorrelated output may be too strong, or unnecessary, to create the highly robust
representations exploited by the brain. For example, Rolls reports and defends a
highly distributed coding of faces in infero-temporal cortical areas with cells responding to a large proportion of stimuli to some degree ([16], chapter 5). Our
method enforces the constraint that successive eigenvectors are orthogonal in the
metrics C and C and can result in the partly correlated output expected in the
robust distributed coding Rolls proposes. However, this would not be the case if
the long-term means used for C are estimated with a temporal half-life sufficiently
large that these means do not differ from the true expected values.
Finally, although maximising the sparseness of representation may be inappropriate
in deeper cortex, one might suggest that the coding of parameters we obtain in our
simulation is not highly distributed across outputs: in reality each complex cell
responds to a limited range of disparity and orientation. However, it can be seen
in Figure 1[d] that there is a clear separation of orientation, and some mixing of
disparity and orientation-sensitivity. It is a feature of our method that different
outputs must have different measures of predictability (i.e. eigenvalues) . In the
case of sparse coding of translation invariance, for example, there is no obvious
reason why this assumption should be met by cells coding different orientations
alone; it can however be enforced by coding different mixtures of orientation and
disparity parameters leading to distinct eigenvalues. There is certainly no practical
or biological reason why these parameters should be carried separately in the visual
system (see [1] for discussion).
In conclusion, this work provides further support for the fruitful approach of extracting non-trivial parameters through maximisation of objective functions based
on temporal properties of perceptual input. One of the challenges here is to extend
current linear models into the nonlinear domain whilst limiting the extra complexity
they bring, which can lead to excess degrees of freedom and computational problems. We have described here a kernel-based method that goes some way towards
this, extracting disparity and translation simultaneously for complex cells trained
on natural images.
References
[1] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4), 2002.
[2] J. V. Stone and A. J. Bray. A learning rule for extracting spatio-temporal invariances. Network: Computation in Neural Systems, 6(3):429-436, 1995.
[3] James V. Stone. Blind source separation using temporal predictability. Neural Computation, (13):1559-1574, 2001.
[4] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991.
[5] H. G. Barrow and A. J. Bray. A model of adaptive development of complex cortical cells. In I. Aleksander and J. Taylor, editors, Artificial Neural Networks II: Proceedings of the International Conference on Artificial Neural Networks. Elsevier Publishers, 1992.
[6] K. Fukushima. Self-organisation of shift-invariant receptive fields. Neural Networks, 12:826-834, 1999.
[7] M. Stewart Bartlett and T. J. Sejnowski. Learning viewpoint invariant face representations from visual experience in an attractor network. Network: Computation in Neural Systems, 9(3):399-417, 1998.
[8] E. T. Rolls and T. Milward. A model of invariant object recognition in the visual system: Learning rules, activation functions, lateral inhibition, and information-based performance measures. Neural Computation, 12:2547-2572, 2000.
[9] J. V. Stone. Learning perceptually salient visual parameters using spatiotemporal smoothness constraints. Neural Computation, 8(7):1463-1492, October 1996.
[10] K. Kayser, W. Einhäuser, O. Dümmer, P. König, and K. Körding. Extracting slow subspaces from natural videos leads to complex cells. In ICANN 2001, LNCS 2130, pages 1075-1080. Springer-Verlag Berlin Heidelberg, 2001.
[11] J. Hurri and A. Hyvärinen. Simple-cell-like receptive fields maximise temporal coherence in natural video. Submitted, http://www.cis.hut.fi/jarmo/publications, 2002.
[12] D. Martinez and A. Bray. Nonlinear blind source separation using kernels. IEEE Trans. Neural Networks, 14(1):228-235, Jan. 2003.
[13] G. Baudat and F. Anouar. Kernel-based methods and function approximation. International Joint Conference on Neural Networks IJCNN, pages 1244-1249, 2001.
[14] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37:3327-3338, 1997.
[15] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[16] E. T. Rolls and G. Deco. Computational Neuroscience of Vision. Oxford University Press, 2002.
92
Cowan and Friedman
Development and Regeneration of Eye-Brain
Maps: A Computational Model
J.D. Cowan and A.E. Friedman
Department of Mathematics, Committee on
Neurobiology, and Brain Research Institute,
The University of Chicago, 5734 S. Univ. Ave.,
Chicago, Illinois 60637
ABSTRACT
We outline a computational model of the development and regeneration of specific eye-brain circuits. The model comprises a self-organizing map-forming network which uses local Hebb rules, constrained by
molecular markers. Various simulations of the development of eyebrain maps in fish and frogs are described.
1 INTRODUCTION
The brain is a biological computer of immense complexity comprising highly specialized
neurons and neural circuits. Such neurons are interconnected with high specificity in
many regions of the brain, if not in all. There are also many observations which indicate
that there is also considerable circuit plasticity. Both specificity and plasticity are found
in the development and regeneration of eye-brain connections in vertebrates. Sperry
(1944) frrst demonstrated specificity in the regeneration of eye-brain connections in frogs
following optic nerve section and eye rotation; and Gaze and Sharma (1970) and Yoon
(1972) found evidence for plasticity in the expanded and compressed maps which
regenerate following eye and brain lesions in goldfish. There are now many experiments
which indicate that the formation of connections involves both specificity and plasticity.
1.1 EYE-BRAIN MAPS AND MODELS
Fig. 1 shows the retinal map found in the optic lobe or tectum of fish and frog. The map
is topological, i.e., neighborhood relationships in the retina are preserved in the optic
tectum. How does such a map develop? Initially there is considerable disorder in the
[Figure 1 here: schematic of the left and right retinae (nasal and temporal poles marked) projecting to the contralateral left and right optic tecta (rostral and caudal poles marked).]
Figure 1: The normal retino-tectal map in fish and frog. Temporal
retina projects to (contralateral) rostral tectum; nasal retina to
(contralateral) caudal tectum.
pathway: retinal ganglion cells make contacts with many widely dispersed tectal neurons.
However the mature pathway shows a high degree of topological order. How is such an
organized map achieved? One answer was provided by Prestige & Willshaw (1975):
retinal axons and tectal neurons are polarized by contact adhesion molecules distributed
such that axons from one end of the retina are stickier than those from the other end, and
neurons at one end of the tectum are (correspondingly) stickier than those at the other
end. Of course this means that isolated retinal axons will all tend to stick to one end of
the tectum. However if such axons compete with each other for tectal terminal sites (and
if tectal sites compete for retinal axon terminals), less sticky axons will be displaced, and
eventually a topological map will form. The Prestige-Willshaw theory explains many observations indicating neural specificity. It does not provide for plasticity: the ability of
retino-tectal systems to adapt to changed target conditions, and vice versa. Willshaw and
von der Malsburg (1976, 1977) provided a theory for the plasticity of map
reorganization, by postulating that synaptic growth in development is Hebbian. Such a
mechanism provides self-organizing properties in retino-tectal map formation and reorganization. Whitelaw & Cowan (1981) combined both sticky molecules and Hebbian synaptic growth to provide a theory which explains both the specificity and plasticity of
map formation and reorganization in a reasonable fashion.
There are many experiments, however, which indicate that such theories are too simple.
Schmidt & Easter (1978) and Meyer (1982) have shown that retinal axons interact with
each other in a way which influences map formation. It is our view that there are
(probably) at least two different types of sticky molecules in the system: those described
above which mediate retino-tectal interactions. and an additional class which mediates
axo-axonal interactions in a different way. In what follows we describe a model which
incorporates such interactions. Some aspects of our model are similar to those introduced
by Willshaw & von der Malsburg (1979) and Fraser (1980). Our model can simulate
almost all experiments in the literature, and provides a way to titrate the relative strengths
of intrinsic polarity markers mediating retino-tectal interactions, (postulated) positional
markers mediating axo-axonal interactions, and stimulus-driven Hebbian synaptic
changes.
2 MODELS OF MAP FORMATION AND REGENERATION
2.1. THE WHITELAW-COWAN MODEL
Let Sij be the strength or weight of the synapse made by the ith retinal axon with the jth
tectal cell. Then the following differential equation expresses the changes in s_ij:

    ds_ij/dt = c_ij (r_i − α) t_j − ½ [ N_r⁻¹ Σ_{i'} c_{i'j} (r_{i'} − α) t_j + N_t⁻¹ Σ_{j'} c_{ij'} (r_i − α) t_{j'} ]      (1)
where N_r is the number of retinal ganglion cells and N_t the number of tectal neurons. c_ij
is the "stickiness" of the ijth contact, r_i denotes retinal activity and t_j = Σ_i s_ij r_i is the corresponding tectal activity, and α is a constant measuring the rate of receptor destabilization (see Whitelaw & Cowan (1981) for details). In addition both retinal and tectal elements have fixed lateral inhibitory contacts. The dynamics described by eqn. 1 is such
that both Σ_i s_ij and Σ_j s_ij tend to constant values T and R respectively, where T is the total
amount of tectal receptor material available per neuron, and R is the total amount of axonal material available per retinal ganglion cell: thus if sij increases anywhere in the net,
other synapses made by the ith axon will decrease, as will other synapses on the jth tectal
neuron. In the current terminology, this process is referred to as "winner-take-all".
For purposes of illustration consider the problem of connecting a line of Nr retinal
ganglion cells to a line of Nt tectal cells. The resulting maps can then be represented by
two-dimensional matrices, in which the area of the square at the ijth intersection
represents the weight of the synapse between the ith retinal axon and the jth tectal cell.
The normal retino-tectal map is represented by large squares along the matrix diagonal.,
(see Whitelaw & Cowan (1981) for terminology and further details). It is fairly obvious
that the only solutions to eqn. (1) lie along the matrix diagonal, or the anti-diagonal. as
shown in fig. 2. These solutions correspond, respectively, to normal and inverted
topological maps. It follows that if the affmity Cij of the ith retinal ganglion cell for the
jth tectal neuron is constant, a map will form consisting of normal and inverted local
patches. To obtain a globally normal map it is necessary to bias the system. One way to
do this is to suppose that c_ij = β a_i a_j, where a_i and a_j are, respectively, the concentrations
Figure 2: Diagonal and anti-diagonal solutions to eqn.1. Such
solutions correspond, respectively, to normal and inverted maps.
of sticky molecules on the tips of retinal axons and on the surfaces of tectal neurons, and
β is a constant. A good candidate for such a molecule is the recently discovered
toponymic or TOP molecule found in chick retina and tectum (Trisler & Collins, 1987).
If ai and aj are distributed in the graded fashion shown in fig. 3, then the system is
biased in favor of the normally oriented map.
[Figure 3 here: graph of marker concentration rising from 0 to 1 along the retinal position axis i.]
Figure 3: Postulated distribution of sticky molecules in the retina. A
similar distribution is supposed to exist in the tectum.
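The bias induced by these graded markers can be illustrated with a small computation (our illustration, not from the paper; β = 1 and the linear marker gradients are assumed for concreteness). Competition fixes the amount of synaptic material per axon and per tectal cell, so at a fixed point each axon effectively claims one tectal site; among all such one-to-one maps, the dynamics favor the one with the largest total stickiness Σ_i c_{i,σ(i)}, and for graded markers that is the normally oriented (diagonal) map:

```python
import itertools
import numpy as np

# Graded markers as in Fig. 3: stickiness rises from one end to the other.
N = 6
a_ret = np.linspace(0.5, 1.5, N)   # retinal marker a_i on axon tips
a_tec = np.linspace(0.5, 1.5, N)   # tectal marker a_j on cell surfaces
c = np.outer(a_ret, a_tec)         # c_ij = a_i * a_j  (beta = 1)

# Brute-force search over all one-to-one maps for the stickiest assignment.
best = max(itertools.permutations(range(N)),
           key=lambda p: sum(c[i, p[i]] for i in range(N)))
print(best)   # -> (0, 1, 2, 3, 4, 5): the normally oriented map
```

By the rearrangement inequality, pairing the two increasing gradients in order uniquely maximizes the sum, so the diagonal map wins; the inverted (anti-diagonal) map is the worst case, which is why the gradient removes the normal/inverted ambiguity of Fig. 2.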
2.2 INADEQUACIES
The Whitelaw-Cowan model simulates the normal development of monocular retinotectal maps, starting from either diffuse or scrambled initial maps, or from no map. In addition it simulates the compressed, expanded, translocated, mismatched and rotated maps
following respects: a. Although tetrodotoxin (TTX) blocks the refinement of retinotopic
maps in salamanders, a coarse map can still develop in the absence of retinal activity
(Harris, 1980). The model will not simulate this effect. b. Although the model simulates
the formation of double maps in "classical" compound eyes {made from a half-left and a
half right eye} (Gaze, Jacobson, & Szekely, 1963), it fails to account for the
reprogramming observed in "new" compound eyes {made by cutting a slit down the
middle of a tadpole eye} (Hunt & Jacobson, 1974), and fails to simulate the forming of a
normal retinotopic map to a compound tectum {made from two posterior halves}
(Sharma, 1975).
[Figure 4 here: the right retina (positions 10 to 1) mapping onto the right tectum (positions 1 to 10), drawn twice: once as the normal map and once as the expanded map.]
Figure 4: The normal and expanded maps which form after the prior
expansion ofaxons from a contralateral half-eye. The two maps are
actually superposed, but for ease of exposition are shown separately.
[Figure 5 here: the left half-retina (positions 1 to 5) and the right half-retina (positions 5 to 1, reversed) both projecting onto the right tectum (positions 1 to 10).]
Figure 5: Results of Meyer's experiment. Fibers from the right halfretina fail to contact their normal targets and instead make contact with
available targets, but with reversed polarity.
c. More significantly, it fails to account for the apparent retinal induction reported by
Schmidt, Cicerone & Easter (1978) in which following the expansion of retinal axons
from a goldfish half-eye over an entire (contralateral) tectum, and subsequent sectioning
of the axons, diverted retinal axons from the other (intact) eye are found to expand over
the tectum, as if they were also from a half-eye. This has been interpreted to imply that
the tectum has no intrinsic markers, and that all its markers come from the retina (Chung
& Cooke, 1978). However Schmidt et al. also found that the diverted axons also map
normally. Fig. 4 shows the result. d. There is also an important mismatch experiment
carried out by Meyer (1979) which the model cannot simulate. In this experiment the
left half of an eye and its attached retinal axons are surgically removed, leaving an intact
normal half-eye map. At the same time the right half the other eye and its attached axons
are removed, and the axons from the remaining half eye are allowed to innervate the
tectum with the left-half eye map. The result is shown in fig. 5. e. Finally. there are
now a variety of chemical assays of the nature of the affinities which retinal axons have
for each other. and for tectal target sites. Thus Bonhoffer and Huff (1980) found that
growing retinal axons stick preferentially to rostral tectum. This is consistent with the
model. However, using a different assay Halfter, Claviez & Schwarz (1981) also found
that tectal fragments tend to stick preferentially to that part of the retina which
corresponds to caudal tectum, i.e.; to nasal retina. This appears to contradict the model,
and the first assay.
3 A NEW MODEL FOR MAP FORMATION
The Whitelaw-Cowan model can be modified and extended to replicate much of the data
described above. The first modification is to replace eqn.1 by a more nonlinear equation.
The reason for this is that the above equation has no threshold below which contacts cannot get established. In practice Whitelaw and I modified the equations to incorporate a
small threshold effect. Another way is to make synaptic growth and decay exponential
rather than linear. An equation expressing this can be easily formulated, which also incorporates axo-axonal interactions, presumed to be produced by neural contact
adhesion molecules (nCAM) of the sort discovered by Edelman (1983) which seem to
mediate the axo-axonal adhesion observed in tissue cultures by Boenhoffer & Huff
(1985). The resulting equations take the form:
    ds_ij/dt = {A_j + c_ij [μ_ij + (r_i − α) t_j]} s_ij
             − ½ s_ij [ T⁻¹ Σ_{i'} {A_j + c_{i'j} [μ_{i'j} + (r_{i'} − α) t_j]} s_{i'j}
                      + R⁻¹ Σ_{j'} {A_{j'} + c_{ij'} [μ_{ij'} + (r_i − α) t_{j'}]} s_{ij'} ]      (2)
where Aj represents a general nonspecific growth of retinotectal contacts, presumed to
be controlled and modulated by nerve growth factor (Campenot, 1982). The main
difference between eqns. 1 and 2, however, lies in the coefficients c_ij. In eqn. 1, c_ij =
β a_i a_j. In eqn. 2, c_ij expresses several different effects: (a). Instead of just one molecular
species on the tips of retinal axons and on corresponding tectal cell surfaces, as in eqn.l,
two molecular species or two states of one species can be postulated to exist on these
sites. In such a case the term β a_i a_j is replaced by Σ_{ab} β_ab a_i b_j, where a and b are the
different species, and the sum is over all possible combinations aa, ab, etc. A number of
possibilities exist in the choice of β_ab. One possibility that is consistent with most of the
biochemical assays described earlier is β_aa β_bb < β_ab β_ba, in which each species
prefers the other, the so-called heterophilic case. (b) The mismatch experiment cited
earlier (Meyer, 1979) indicates that existing axon projections tend to exclude other axons,
especially inappropriate ones, from innervating occupied areas. One way to incorporate
such geometric effects is to suppose that each axon which establishes contact with a
tectal neuron occludes tectal markers there by a factor proportional to its synaptic
weight s_ij. Thus we subtract from the coefficient c_ij a fraction proportional to η Σ'_k s_kj,
where Σ'_k means the sum over k ≠ i. (c) The mismatch experiment also indicates that map formation depends in part on a tendency for axons to stick to their retinal neighbors, in addition to their tendency to stick to tectal cell surfaces. We therefore append to c_ij the term
Σ'_k s̄_kj f_ik, where s̄_kj is a local average of s_kj and its nearest tectal neighbors, and where
f_ik measures the mutual stickiness of the ith and kth retinal axons: non-zero only for
nearest retinal neighbors. (Again we suppose this stickiness is produced by the
interaction of two molecular species etc.; specifically the neuronal CAMs discovered by
Edelman, but we do not go into the details). (d) With the introduction of occlusion
effects and axo-axonal interactions, it becomes apparent that debris in the form of
degenerating axon fragments adhering to tectal cells, following optic nerve sectioning,
can also influence map formation. Incoming nerve axons can stick to debris, and debris
can occlude markers. There are in fact four possibilities: debris can occlude tectal
markers, markers on other debris, or on incoming axons; and incoming axons can
occlude markers on debris. All these possibilities can be included in the dependence of
c_ij on s_ij, s_kj, etc.
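The ingredients (a)-(c) can be sketched numerically. The following toy computation is ours, not the paper's: all shapes, constants (η, the affinity matrix, the smoothing kernel), and the nearest-neighbour structure are illustrative stand-ins. It assembles a modified coefficient c_ij from two-species marker affinities, an occlusion term, and an axo-axonal neighbour term:

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt = 5, 5

# (a) Two marker species per cell; zeta[a, b] is the affinity of species a on a
# retinal axon tip for species b on a tectal surface. Heterophilic case:
# cross-species affinities exceed like-species affinities.
zeta = np.array([[0.2, 1.0],
                 [1.0, 0.2]])
a_ret = rng.uniform(size=(2, Nr))     # species concentrations on axon tips
a_tec = rng.uniform(size=(2, Nt))     # species concentrations on tectal cells
c0 = np.einsum('ab,ai,bj->ij', zeta, a_ret, a_tec)   # sum over species pairs

s = rng.uniform(size=(Nr, Nt))        # current synaptic weights s_ij
eta = 0.1

# (b) Occlusion: markers at tectal site j are hidden in proportion to the
# synaptic weight that *other* axons already have there.
occlusion = eta * (s.sum(axis=0, keepdims=True) - s)

# (c) Axo-axonal adhesion: f[i, k] is nonzero only for nearest retinal
# neighbours; s_bar is a local tectal average of s.
f = 0.05 * (np.abs(np.subtract.outer(np.arange(Nr), np.arange(Nr))) == 1)
kernel = np.array([0.25, 0.5, 0.25])
s_bar = np.apply_along_axis(lambda row: np.convolve(row, kernel, 'same'), 1, s)
neighbour = f @ s_bar

c = c0 - occlusion + neighbour        # modified stickiness coefficient
```

Because the occlusion term grows with the weight other axons hold at a site, an established projection suppresses c_ij for intruding axons there, which is the mechanism invoked below to explain the debris effects in the induction experiments.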
The model which results from all these modifications and extensions is much more complex in its mathematical structure than any of the previous models. However computer
simulation studies show it to be capable of correctly reproducing the observed details of
almost all the experiments cited above. Fig. 6, for example, shows a simulation of the
retinal "induction" experiments of Schmidt et al.
[Figure 6 here: simulated map plotted as retinal index i (1 to N_r) against tectal index j.]
Figure 6:
Simulation of the Schmidt et.al. retinal induction
experiment. A nearly normal map is intercalated into an expanded map.
This simulation generated both a patchy expanded and a patchy nearly normal map.
These effects occur because some incoming retinal axons stick to debris left over from
the previous expanded map, and other axons stick to non-occluded tectal markers. The
axo-axonal positional markers control the formation of the expanded map, whereas the
retino-tectal polarity markers control the formation of the nearly normal map.
4 CONCLUSIONS
The model we have outlined combines Hebbian plasticity with intrinsic, genetic eyebrain and axo-axonic markers, to generate correctly oriented retinotopic maps. It permits
the simulation of a large number of experiments, and provides a consistent explanation of
almost all of them. In particular it shows how the apparent induction of central markers
by peripheral effects, as seen in the Schmidt-Cicerone-Easter experiment (Schmidt et al.,
1978), can be produced by the effects of debris; and the polarity reversal seen in Meyer's
experiment (Meyer 1979), can be produced by axo-axonal interactions.
Acknowledgements
We thank the System Development Foundation, Palo Alto, California, and The
University of Chicago Brain Research Foundation for partial support of this work.
References
Boenhoffer, F. & Huf, J. (1980), Nature, 288, 162-164; (1985), Nature, 315, 409-411.
Campenot, R.B. (1982), Develop. Biol., 93, 1.
Chung, S.-H. & Cooke, J.E. (1978), Proc. Roy. Soc. Lond. B, 201, 335-373.
Edelman, G.M. (1983), Science, 219, 450-454.
Fraser, S. (1980), Develop. Biol., 79, 453-464.
Gaze, R.M. & Sharma, S.C. (1970), Exp. Brain Res., 10, 171-181.
Gaze, R.M., Jacobson, M. & Szekely, T. (1963), J. Physiol. (Lond.), 165, 484-499.
Halfter, W., Claviez, M. & Schwarz, U. (1981), Nature, 292, 67-70.
Harris, W.A. (1980), J. Comp. Neurol., 194, 303-323.
Hubel, D.H. & Wiesel, T.N. (1974), J. Comp. Neurol., 158, 295-306.
Hunt, R.K. & Jacobson, M. (1974), Devel. Biol., 40, 1-15.
Malsburg, Ch.v.d. & Willshaw, D.J. (1977), PNAS, 74, 5176-5178.
Meyer, R.L. (1979), Science, 205, 819-821; (1982), Curr. Top. Develop. Biol., 17, 101-145.
Prestige, M. & Willshaw, D.J. (1975), Proc. Roy. Soc. B, 190, 77-98.
Schmidt, J.T. & Easter, S.S. (1978), Exp. Brain Res., 31, 155-162.
Schmidt, J.T., Cicerone, C.M. & Easter, S.S. (1978), J. Comp. Neurol., 177, 257-288.
Sharma, S.C. (1975), Brain Res., 93, 497-501.
Sperry, R.W. (1944), J. Neurophysiol., 7, 57-69.
Trisler, D. & Collins, F. (1987), Science, 237, 1208-1210.
Whitelaw, V.A. & Cowan, J.D. (1981), J. Neurosci., 1(12), 1369-1387.
Willshaw, D.J. & Malsburg, Ch.v.d. (1976), Proc. Roy. Soc. B, 194, 431-445; (1979), Phil. Trans. Roy. Soc. (Lond.) B, 287, 203-254.
Yoon, M. (1972), Amer. Zool., 12, 106.
Multiple Cause Vector Quantization
David A. Ross and Richard S. Zemel
Department of Computer Science
University of Toronto
{dross,zemel}@cs.toronto.edu
Abstract
We propose a model that can learn parts-based representations of highdimensional data. Our key assumption is that the dimensions of the data
can be separated into several disjoint subsets, or factors, which take on
values independently of each other. We assume each factor has a small
number of discrete states, and model it using a vector quantizer. The
selected states of each factor represent the multiple causes of the input.
Given a set of training examples, our model learns the association of
data dimensions with factors, as well as the states of each VQ. Inference
and learning are carried out efficiently via variational algorithms. We
present applications of this model to problems in image decomposition,
collaborative filtering, and text classification.
1 Introduction
Many collections of data exhibit a common underlying structure: they consist of a number
of parts or factors, each of which has a small number of discrete states. For example, in a
collection of facial images, every image contains eyes, a nose, and a mouth (except under
occlusion), each of which has a range of different appearances. A specific image can be
described as a composite sketch: a selection of the appearance of each part, depending on
the individual depicted.
In this paper, we describe a stochastic generative model for data of this type. This model
is well-suited to decomposing images into parts (it can be thought of as a Mr. Potato Head
model), but also applies to domains such as text and collaborative filtering in which the
parts correspond to latent features, each having several alternative instantiations. This representational scheme is powerful due to its combinatorial nature: while a standard clustering/VQ method containing N states can represent at most N items, if we divide the N
into j-state VQs, we can represent j N/j items. MCVQ is also especially appropriate for
high-dimensional data in which many values may be unspecified for a given input case.
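As a quick illustration of the combinatorial claim (the numbers are ours, not from the paper): with the same budget of N = 30 states, a flat VQ represents 30 items, while ten 3-state VQs represent 3^10 items.

```python
# Representational capacity: one N-state VQ versus (N/j) VQs of j states each.
N, j = 30, 3
flat_capacity = N                   # a flat VQ picks one of its N states
factored_capacity = j ** (N // j)   # one state per VQ, chosen independently
print(flat_capacity, factored_capacity)   # -> 30 59049
```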
2 Generative Model
In MCVQ we assume there are K factors, each of which is modeled by a vector quantizer
with J states. To generate an observed data example of D dimensions, x ∈ R^D, we
stochastically select one state for each VQ, and one VQ for each dimension. Given these
selections, a single state from a single VQ determines the value of each data dimension x d .
[Figure 1 here: graphical model with priors a_d → selections r_d → data x_d, priors b_k → selections s_k → x_d, and per-state Gaussian parameters μ_kj, σ_kj, drawn inside plates of size D, J, and K.]
Figure 1: Graphical model representation of MCVQ. We let rd=1 represent all the variables rd=1,k , which together select a VQ for x1 . Similarly, sk=1 represents all sk=1,j ,
which together select a state of VQ 1. The plates depict repetitions across the appropriate
dimensions for each of the three variables: the K VQs, the J states (codebook vectors) per
VQ, and the D input dimensions.
The selections are represented as binary latent variables, S = {skj }, R = {rdk }, for
d = 1...D, k = 1...K, and j = 1...J. The variable skj = 1 if and only if state j has been
selected from VQ k. Similarly rdk = 1 when VQ k has been selected for data dimension
d. These variables can be described equivalently as multinomials, s k ? 1...J, rd ? 1...K;
their values are drawn according to their respective priors, ak and bd . The graphical model
representation of MCVQ is given in Fig. 1.
Assuming each VQ state specifies the mean as well as the standard deviation of a Gaussian
distribution, and the noise in the data dimensions is conditionally independent, we have
(where θ = {μ_dkj, σ_dkj}):

    P(x | R, S, θ) = Π_d Π_{k,j} N(x_d; μ_dkj, σ_dkj)^(r_dk s_kj)
The resulting model can be thought of as a two-dimensional mixture model, in which J × K
possible states exist for each data dimension (xd ). The selections of states for the different
data dimensions are joined along the J dimension and occur independently along the K
dimension.
3 Learning and Inference
The joint distribution over the observed vector x and the latent variables is
    P(x, R, S | θ) = P(R | θ) P(S | θ) P(x | R, S, θ) = Π_{d,k} a_dk^(r_dk) Π_{k,j} b_kj^(s_kj) Π_{d,k,j} N(x_d; θ)^(r_dk s_kj)
Given an input x, the posterior distribution over the latent variables, P(R, S | x, θ), cannot
tractably be computed, since all the latent variables become dependent.
We apply a variational EM algorithm to learn the parameters θ, and infer hidden variables
given observations. We approximate the posterior distribution using a factored distribution,
where g and m are variational parameters related to r and s respectively:
    Q(R, S | x, θ) = Π_{d,k} g_dk^(r_dk) Π_{k,j} m_kj^(s_kj)
The variational free energy, F(Q, θ) = E_Q[ − log P(x, R, S | θ) + log Q(R, S | x, θ) ], is:

    F = E_Q[ Σ_{d,k} r_dk log(g_dk / a_dk) + Σ_{k,j} s_kj log(m_kj / b_kj) − Σ_{d,k,j} r_dk s_kj log N(x_d; θ) ]
      = Σ_{k,j} m_kj log m_kj + Σ_{d,k} g_dk log g_dk + Σ_{d,k,j} g_dk m_kj ε_dkj

where ε_dkj = log σ_dkj + (x_d − μ_dkj)² / (2σ²_dkj), and we have assumed uniform priors for the selection
variables. The negative of the free energy ?F is a lower bound on the log likelihood
of generating the observations. The variational EM algorithm improves this bound by
iteratively improving ?F with respect to Q (E-step) and to ? (M-step).
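With uniform priors, F reduces to the two entropy-like terms plus the expected energy Σ g_dk m_kj e_dkj. A numerical sketch of evaluating it (our own variable names and sizes, not published code):

```python
import numpy as np

def free_energy(x, g, m, mu, sigma):
    """Variational free energy for one observation under uniform priors
    (additive constants dropped).  g: (D,K), m: (K,J), mu/sigma: (D,K,J)."""
    # e_dkj = log sigma_dkj + (x_d - mu_dkj)^2 / (2 sigma_dkj^2)
    e = np.log(sigma) + (x[:, None, None] - mu) ** 2 / (2 * sigma ** 2)
    entropy_terms = np.sum(m * np.log(m)) + np.sum(g * np.log(g))
    expected_energy = np.einsum('dk,kj,dkj->', g, m, e)
    return entropy_terms + expected_energy

rng = np.random.default_rng(0)
D, K, J = 3, 2, 2
x = rng.normal(size=D)
g = np.full((D, K), 1.0 / K)        # uniform gating posterior
m = np.full((K, J), 1.0 / J)        # uniform state posterior
mu = rng.normal(size=(D, K, J))
sigma = np.ones((D, K, J))
print(free_energy(x, g, m, mu, sigma))
```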
Let C be the set of training cases, and Q^c be the approximation to the posterior distribution
over latent variables given the training case (observation) c ∈ C. We further constrain this
variational approach, forcing the {g_dk} to be consistent across all observations x^c. Hence
these parameters, relating to the gating variables that govern the selection of a factor for a
given observation dimension, are not dependent on the observation. This approach encourages the model to learn representations that conform to this constraint. That is, if there are
several posterior distributions consistent with an observed data vector, it favours distributions over {r_d} that are consistent with those of other observed data vectors. Under this
formulation, only the {m^c_kj} parameters are updated during the E-step for each observation c:

m^c_kj = exp(−Σ_d g_dk e^c_dkj) / Σ_{j′=1}^{J} exp(−Σ_d g_dk e^c_dkj′)
The M-step updates the parameters, µ and σ, from each hidden state kj to each input
dimension d, and the gating variables {g_dk}:

g_dk = exp(−(1/C) Σ_{c,j} m^c_kj e^c_dkj) / Σ_{k′=1}^{K} exp(−(1/C) Σ_{c,j} m^c_k′j e^c_dk′j)

µ_dkj = (Σ_c m^c_kj x^c_d) / (Σ_c m^c_kj),    σ²_dkj = (Σ_c m^c_kj (x^c_d − µ_dkj)²) / (Σ_c m^c_kj)
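Putting the E- and M-steps together, one sweep of the variational EM algorithm might look like the following sketch. The initialization, update ordering, and the invtemp parameter (standing in for the annealed 1/C factor discussed in the next paragraph) are our own choices, not the authors' implementation:

```python
import numpy as np

def em_step(X, g, mu, sigma, invtemp=1.0):
    """One variational EM sweep for MCVQ (g shared across cases).
    X: (C,D) data; g: (D,K) gating; mu, sigma: (D,K,J)."""
    # e[c,d,k,j] = log sigma_dkj + (x_d^c - mu_dkj)^2 / (2 sigma_dkj^2)
    e = np.log(sigma) + (X[:, :, None, None] - mu) ** 2 / (2 * sigma ** 2)
    # E-step: softmax over states j, per case and VQ.
    logits_m = -np.einsum('dk,cdkj->ckj', g, e)
    m = np.exp(logits_m - logits_m.max(axis=2, keepdims=True))
    m /= m.sum(axis=2, keepdims=True)
    # M-step for the gating variables: softmax over VQs k, per dimension.
    logits_g = -invtemp * np.einsum('ckj,cdkj->dk', m, e)
    g = np.exp(logits_g - logits_g.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)
    # M-step for the Gaussians: responsibility-weighted moments.
    w = m.sum(axis=0)                                         # (K,J)
    mu = np.einsum('ckj,cd->dkj', m, X) / w
    var = np.einsum('ckj,cdkj->dkj', m, (X[:, :, None, None] - mu) ** 2) / w
    sigma = np.sqrt(np.maximum(var, 1e-6))
    return g, m, mu, sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # C=100 cases, D=8 dimensions
g = np.full((8, 2), 0.5)                 # K=2 VQs
mu = rng.normal(size=(8, 2, 3))          # J=3 states per VQ
sigma = np.ones((8, 2, 3))
for _ in range(10):
    g, m, mu, sigma = em_step(X, g, mu, sigma)
xhat = np.einsum('ckj,dk,dkj->cd', m, g, mu)   # posterior-mean reconstruction
print(np.mean(np.abs(xhat - X)))
```

The last two lines compute the posterior-mean reconstruction used later in the paper to score test data.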
A slightly different model formulation restricts the selections of VQs, {r_dk}, to be the same
for each training case. Variational EM updates for this model are identical to those above,
except that the 1/C terms in the updates for g_dk disappear. In practice, we obtain good results
by replacing this 1/C term with an inverse temperature parameter that is annealed during
learning. This can be thought of as gradually moving from a generative model in which the
r_dk's can vary across examples, to one in which they are the same for each example.
The inferred values of the variational parameters specify a posterior distribution over the
VQ states, which in turn implies a mixture of Gaussians for each input dimension. Below
we use the mean of this mixture, x̂^c_d = Σ_{k,j} m^c_kj g_dk µ_dkj, to measure the model's
reconstruction error on case c.
4 Related models
MCVQ falls into the expanding class of unsupervised algorithms known as factorial methods, in which the aim of the learning algorithm is to discover multiple independent causes,
or factors, that can well characterize the observed data. Its direct ancestor is Cooperative
Vector Quantization [1, 2, 3], which models each data vector as a linear combination of
VQ selections. Another part-seeking algorithm, non-negative matrix factorization (NMF)
[4], utilizes a non-negative linear combination of non-negative basis functions. MCVQ entails another round of competition, selecting amongst the VQ states rather than linearly combining them as CVQ and NMF do, which leads to a division of input dimensions into separate
causes. The contrast between these approaches mirrors the development of the competitive mixture-of-experts algorithm which grew out of the inability of a cooperative, linear
combination of experts to decompose inputs into separable experts.
MCVQ also resembles a wide range of generative models developed to address image
segmentation [5, 6, 7]. These are generally complex, hierarchical models designed to focus
on a different aspect of this problem than that of MCVQ: to dynamically decide which
pixels belong to which objects. The chief obstacle faced by these models is the unknown
pose (primarily limited to position) of an object in an image, and they employ learned
object models to find the single object that best explains each pixel. MCVQ adopts a more
constrained solution w.r.t. part locations, assuming that these are consistent across images,
and instead focuses on the assembling of input dimensions into parts, and the variety of
instantiations of each part. The constraints built into MCVQ limit its generality, but also
lead to rapid learning and inference, and enable it to scale up to high-dimensional data.
Finally, MCVQ also closely relates to sparse matrix decomposition techniques, such as the
aspect model [8], a latent variable model which associates an unobserved class variable, the
aspect z, with each observation. Observations consist of co-occurrence statistics, such as
counts of how often a specific word occurs in a document. The latent Dirichlet allocation
model [9] can be seen as a proper generative version of the aspect model: each document/input vector is not represented as a set of labels for a particular vector in the training
set, and there is a natural way to examine the probability of some unseen vector. MCVQ
shares the ability of these models to associate multiple aspects with a given document, yet
it achieves this by sampling from multiple aspects in parallel, rather than repeated sampling of an aspect within a document. It also imposes the additional selection of an aspect
for each input dimension, which leads to a soft decomposition of these dimensions based
on their choice of aspect. Below we present some initial experiments examining whether
MCVQ can match the successful application of the aspect model to information retrieval
and collaborative filtering problems, after evaluating it on image data.
5 Experimental Results
5.1 Parts-based Image Decomposition: Shapes and Faces
The first dataset used to test our model consisted of 11 × 11 gray-scale images, as pictured
in Fig. 2a. Each image in the set contains three shapes: a box, a triangle, and a cross.
The horizontal position of each shape is fixed, but the vertical position is allowed to vary,
uniformly and independently of the positions of the other shapes. A model containing 3
VQs, 5 states each, was trained on a set of 100 shape images. In this experiment, and all
experiments reported herein, annealing proceeded linearly from an integer less than C to
1. The learned representation, pictured in Fig. 2b, clearly shows the specialization of each
VQ to one of the shapes.
The training set was selected so that none of the examples depict cases in which all three
shapes are located near the top of the image. Despite this handicap, MCVQ is able to learn
the full range of shape positions, and can accurately reconstruct such an image (Fig. 2c).
In contrast, standard unsupervised methods such as Vector Quantization (Fig. 3a) and Principal Component Analysis (Fig. 3b) produce holistic representations of the data, in which
each basis vector tries to account for variation observed across the entire image. Nonnegative matrix factorization does produce a parts-based representation (Fig. 3c), but captures less of the data?s structure. Unlike MCVQ, NMF does not group related parts, and its
generative model does not limit the combination of parts to only produce valid images.
As an empirical comparison, we tested the reconstruction error of each of the aforementioned methods on an independent test set of 629 images. Since each method has one or
more free parameters (e.g. the number of principal components) we chose to relate models with
similar description lengths¹. Using a description length of about 5.9 × 10^5 bits, and pixel
¹We define description length to be the number of bits required to represent the model, plus the
Figure 2: a) A sample of 24 training images from the Shapes dataset. b) A typical representation learned by MCVQ with 3 VQs and 5 states per VQ. c) Reconstruction of a test
image: original (left) and reconstruction (right).
Figure 3: Other methods trained on shape images: a) VQ, b) PCA, and c) NMF. d) Reconstruction of a test image by the three methods (cf. Fig. 2c).
values ranging from -1 to 1, the average r.m.s. reconstruction error was 0.21 for MCVQ (3
VQs), 0.22 for PCA, 0.35 for NMF, and 0.49 for VQ. Note that this metric may be useful
in determining the number of VQs, e.g., MCVQ with 6 VQs had an error of 0.6.
As a more interesting visual application, we trained our model on a database of face images
(www.ai.mit.edu/cbcl/projects). The dataset consists of 19 × 19 gray-scale images, each
containing a single frontal or near-frontal face. A model of 6 VQs with 12 states each was
trained on 2000 images, requiring 15 iterations of EM to converge. As with shape images,
the model learned a parts-based representation of the faces.
The reconstruction of two test images, along with the specific parts used to generate each,
is illustrated in Fig. 4. It is interesting to note that the pixels comprising a single part need
not be physically adjacent (e.g. the eyes) as long as their appearances are correlated. We
again compared the reconstruction error of MCVQ with VQ, PCA, and NMF. The training
and testing sets contained 1800 and 629 images respectively. Using a description length of
1.5 × 10^6 bits, and pixel values ranging from -1 to 1, the average r.m.s. reconstruction error
(footnote 1, continued) number of bits to encode all the test examples using the model. This metric balances the large model cost and small encoding cost of VQ/MCVQ with the small model cost and large encoding cost of PCA/NMF.
Figure 4: The reconstruction of two test images from the Faces dataset. Beside each reconstruction are the parts (the most active state in each of six VQs) used to generate it. Each
part j of VQ k is represented by its gated prediction (g_dk · m_kj) for each image pixel.
was 0.12 for PCA, 0.20 for NMF, 0.23 for MCVQ (both 3 and 6 VQs), and 0.28 for VQ.
5.2 Collaborative Filtering
The application of MCVQ to image data assumes that the images are normalized, i.e., that
the head is in a similar pose in each image. Normalization can be difficult to achieve in
some image contexts; however, in many other types of applications, the input representation
is more stable. For example, many information retrieval applications employ bag-of-words
representations, in which a given word always occupies the same input element.
We test MCVQ on a collaborative filtering task, utilizing the EachMovie dataset, where the
input vectors are ratings by users of movies, and a given element always corresponds to the
same movie. The original dataset contains ratings, on a scale from 1 to 6, of a set of 1649
movies, by 74,424 users. In order to reduce the sparseness of the dataset, since many users
rated only a few movies, we only included users who rated at least 75 movies and movies
rated by at least 126 users, leaving a total of 1003 movies and 5831 users. The remaining
dataset was still very sparse, as the maximum user rated 928 movies, and the maximum
movie was rated by 5401 users. We split the data randomly into 4831 users for a training
set, and 1000 users in a test set. We ran MCVQ with 8 VQs and 6 states per VQ on this
dataset. An example of the results, after 18 iterations of EM, is shown in Fig. 5.
Note that in the MCVQ graphical model (Fig. 1), all the observation dimensions are leaves,
so an input variable whose value is not specified in a particular observation vector will not
play a role in inference or learning. This makes inference and learning with sparse data
rapid and efficient.
We compare the performance of MCVQ on this dataset to the aspect model. We implemented a version of the aspect model, with 50 aspects and truncated Gaussians for ratings,
and used "tempered EM" (with smoothing) to fit the parameters [10]. For both models, we
train the model on the 4831 users in the training set, and then, for each test user, we let the
model observe some fixed number of ratings and hold out the rest. We evaluate the models
by measuring the absolute difference between their predictions for a held-out rating and the
user's true rating, averaged over all held-out ratings for all test users (Fig. 6).
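The evaluation metric is simply the mean absolute deviation between predicted and held-out ratings; a toy worked example with invented numbers:

```python
import numpy as np

# Sketch of the evaluation protocol: mean absolute difference between
# predicted and true held-out ratings.  All values are illustrative.
true_ratings = np.array([5, 3, 4, 2])
predicted    = np.array([4.5, 3.5, 4.0, 3.0])
mae = np.mean(np.abs(predicted - true_ratings))
print(mae)  # 0.5
```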
[Figure 5 content: for each of two test users, lists of movies grouped by VQ state, each with the state's predicted rating and the user's true rating in parentheses, e.g. "The Fugitive 5.8 (6)", "Terminator 2 5.7 (5)", "A Goofy Movie 0.8 (1)", "Best of Wallace & Gromit 5.6 (-)".]
Figure 5: The MCVQ representation of two test users in the EachMovie dataset. The 3
most conspicuously high-rated (bold) and low-rated movies by the most active states of 4
of the 8 VQs are shown, where conspicuousness is the deviation from the mean rating for a
given movie. Each state's predictions, µ_dkj, can be compared to the test user's true ratings
(in parentheses); the model's prediction is a convex combination of state predictions. Note
the intuitive decomposition of movies into separate VQs, and that different states within a
VQ may predict very different rating patterns for the same movies.
Figure 6: The average absolute deviation of predicted and true values of held-out ratings is compared for MCVQ and the aspect model (x-axis: number of observed ratings, 200 to 600; y-axis: mean test prediction error, 0.8 to 2.4). Note that the number of users per x-bin decreases with increasing x, as a user must rate at least x+1 movies to be included.
5.3 Text Classification
MCVQ can also be used for information retrieval from text documents, by employing the
bag-of-words representation. We present preliminary results on the NIPS corpus (available at www.cs.toronto.edu/~roweis/data.html), which consists of the full text of the NIPS
conference proceedings, volumes 1 to 12. The data was pre-processed to remove common
words (e.g. the), and those appearing in fewer than five documents, resulting in a vocabulary of 14,265 words. For each of the 1740 papers in the corpus, we generated a vector
containing the number of occurrences of each word in the vocabulary. These vectors were
normalized so that each contained the same number of words. A model of 8 VQs, 8 states
each, was trained on the data, converging after 15 iterations of EM. A sample of the results
is shown in Fig. 7.
When trained on text data, the values of {gdk } provide a segmentation of the vocabulary
into subsets of words with correlated frequencies. Within a particular subset, the words
can be positively correlated, indicating that they tend to appear in the same documents, or
negatively correlated, indicating that they seldom appear together.
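Reading off such a segmentation is a one-liner: assign each word d to the VQ k maximizing g_dk. A toy sketch with an invented vocabulary and gating matrix:

```python
import numpy as np

# Sketch: segmenting a vocabulary by the gating variables.  Each word d
# is assigned to the VQ k with the largest g_dk.  Vocabulary and g are
# made up for illustration.
vocab = ["kernel", "margin", "svm", "axon", "membrane", "synapse"]
g = np.array([[0.9, 0.1], [0.8, 0.2], [0.95, 0.05],
              [0.2, 0.8], [0.1, 0.9], [0.3, 0.7]])   # (D=6, K=2)

assignment = g.argmax(axis=1)
groups = {k: [w for w, a in zip(vocab, assignment) if a == k]
          for k in range(g.shape[1])}
print(groups)  # {0: ['kernel', 'margin', 'svm'], 1: ['axon', 'membrane', 'synapse']}
```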
6 Conclusion
We have presented a novel method for learning factored representations of data; these representations can be learned efficiently and employed across a wide variety of problem domains. MCVQ
combines the cooperative nature of some methods, such as CVQ, NMF, and LSA, that
[Figure 7 content: for two example documents, "Predictive Sequence Learning in Recurrent Neocortical Circuits" (R. P. N. Rao & T. J. Sejnowski) and "The Relevance Vector Machine" (Michael E. Tipping), the word lists of the states selected from 4 of the 8 VQs, e.g. afferent/excitatory/membrane/depression and svm/margin/kernel/classification.]
Figure 7: The representation of two documents by an MCVQ model with 8 VQs and 8
states per VQ. For each document we show the states selected for it from 4 VQs. The bold
(plain) words for each state are those most conspicuous by their above (below) average
predicted frequency.
use multiple causes to generate input, with competitive aspects of clustering methods. In
addition, it gains combinatorial power by splitting the input into subsets, and can readily
handle sparse, high-dimensional data. One direction of further research involves extending
the applications described above, including applying MCVQ to other dimensions of the
NIPS corpus such as authors to find groupings of authors based on word-use frequency. An
important theoretical direction is to incorporate Bayesian learning for selecting the number
and size of each VQ.
References
[1] R.S. Zemel. A Minimum Description Length Framework for Unsupervised Learning. PhD
thesis, Dept. of Computer Science, University of Toronto, Toronto, Canada, 1993.
[2] G. Hinton and R.S. Zemel. Autoencoders, minimum description length, and Helmholtz free
energy. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information
Processing Systems 6. Morgan Kaufmann Publishers, San Mateo, CA, 1994.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In G. Tesauro, D.S. Touretzky,
and T.K. Leen, editors, Advances in Neural Information Processing Systems 7. MIT Press,
Cambridge, MA, 1995.
[4] D.D. Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization.
Nature, 401:788?791, October 1999.
[5] C. Williams and N. Adams. DTs: Dynamic trees. In M.J. Kearns, S.A. Solla, and D.A. Cohn,
editors, Advances in Neural Information Processing Systems 11. MIT Press, Cambridge, MA,
1999.
[6] G.E. Hinton, Z. Ghahramani, and Y.W. Teh. Learning to parse images. In S.A. Solla, T.K. Leen,
and K.R. Muller, editors, Advances in Neural Information Processing Systems 12. MIT Press,
Cambridge, MA, 2000.
[7] N. Jojic and B.J. Frey. Learning flexible sprites in video layers. In CVPR, 2001.
[8] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI'99, Stockholm, 1999.
[9] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. In T.K. Leen, T. Dietterich,
and V. Tresp, editors, Advances in Neural Information Processing Systems 13. MIT Press, Cambridge, MA, 2001.
[10] T. Hofmann. Learning what people (don't) want. In European Conference on Machine Learning, 2001.
Topographic Map Formation by Silicon
Growth Cones
Brian Taba and Kwabena Boahen
Department of Bioengineering
University of Pennsylvania
Philadelphia, PA 19104
{blaba, kwabena}@neuroengineering.upenn.edu
Abstract
We describe a self-configuring neuromorphic chip that uses a
model of activity-dependent axon remodeling to automatically wire
topographic maps based solely on input correlations. Axons are
guided by growth cones, which are modeled in analog VLSI for the
first time. Growth cones migrate up neurotropin gradients, which
are represented by charge diffusing in transistor channels. Virtual
axons move by rerouting address-events. We refined an initially
gross topographic projection by simulating retinal wave input.
1 Neuromorphic Systems
Neuromorphic engineers are attempting to match the computational efficiency of
biological systems by morphing neurocircuitry into silicon circuits [1]. One of the
most detailed implementations to date is the silicon retina described in [2]. This
chip comprises thirteen different cell types, each of which must be individually and
painstakingly wired. While this circuit-level approach has been very successful in
sensory systems, it is less helpful when modeling largely unelucidated and
exceedingly plastic higher processing centers in cortex.
Instead of an explicit blueprint for every cortical area, what is needed is a
developmental rule that can wire complex circuits from minimal specifications. One
candidate is the famous "cells that fire together wire together" rule, which
strengthens excitatory connections between coactive presynaptic and postsynaptic
cells. We implemented a self-rewiring scheme of this type in silicon, taking our cue
from axon remodeling during development.
2 Growth Cones
During development, the brain wires axons into a myriad of topographic projections
between regions. Axonal projections initially organize independent of neural
activity, establishing a coarse spatial order based on gradients of substrate-bound
molecules laid down by local gene expression. These gross topographic projections
are refined and maintained by subsequent neuronal spike activity, and can reroute
Figure 1: A. Postsynaptic activity is transmitted to the next layer (up arrows) and
releases neurotropin into the extracellular medium (down arrows). B. Presynaptic
activity excites postsynaptic dendrites (up arrows) and triggers neurotropin uptake
by active growth cones (down arrows). Each growth cone samples the neurotropin
concentration at several spatial locations, measuring the gradient across the axon
terminal. Growth cones move toward higher neurotropin concentrations. C. Axons
that fire at the same time migrate to the same place.
themselves if their signal source changes. In such cases, axons abandon obsolete
territory and invade more promising targets [3].
An axon grows by adding membrane and microtubule segments to its distal tip, an
amoeboid body called a growth cone. Growth cones extend and retract fingers of
cytoplasm called filopodia, which are sensitive to local levels of guidance chemicals
in the surrounding medium. Candidate guidance chemicals include BDNF and NO,
whose release can be triggered by action potentials in the target neuron [4].
Our learning rule is based on an activity-derived diffusive chemical that guides
growth cone migration. In our model, this neurotropin is released by spiking
neurons and diffuses in the extracellular medium until scavenged by glia or bound
by growth cones (Figure 1A). An active growth cone compares amounts of
neurotropin bound to each of its filopodia in order to measure the local gradient
(Figure 1B). The growth cone then moves up the gradient, dragging the axon behind
it. Since neurotropin is released by postsynaptic activity and axon migration is
driven by presynaptic activity, this rule translates temporal coincidence into spatial
coincidence (Figure 1C).
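A toy one-dimensional simulation of this release-diffuse-climb loop is sketched below. The lattice size, diffusion rate, and leak are invented for illustration and do not correspond to the chip's circuit constants:

```python
import numpy as np

# Toy 1-D sketch of the guidance rule: postsynaptic spikes release
# neurotropin, it diffuses and decays, and an active growth cone steps
# toward the filopodium sensing the higher concentration.

def diffuse(n, rate=0.2, leak=0.05):
    """One diffusion/decay step on a ring lattice of neurotropin."""
    spread = rate * (np.roll(n, 1) + np.roll(n, -1) - 2 * n)
    return (1 - leak) * (n + spread)

n = np.zeros(32)              # neurotropin concentration on the lattice
pos = 5                       # growth cone position
target = 20                   # coactive postsynaptic neuron
for _ in range(200):
    n[target] += 1.0          # release triggered by a postsynaptic spike
    n = diffuse(n)
    # Filopodia sample the two neighbors; move up the measured gradient.
    if n[(pos + 1) % 32] > n[(pos - 1) % 32]:
        pos += 1
    elif n[(pos + 1) % 32] < n[(pos - 1) % 32]:
        pos -= 1
print(pos)  # 20: the cone has migrated to the releasing neuron
```

Because release and diffusion are symmetric about the source, the cone climbs the gradient until it sits on the releasing neuron, where its two filopodial samples tie and migration stops.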
For topographic map formation, this migration rule requires temporal correlations in
the presynaptic plane to reflect neighborhood relations. We supply such correlations
by simulating retinal waves, spontaneous bursts of action potentials that sweep
across the ganglion cell layer in the developing mammalian retina. Retinal waves
start at random locations and spread over a limited domain before fading away,
eventually tiling the entire retinal plane [5]. Axons participating in the same retinal
Figure 2: A. Chip block diagram. Axon terminal (AT) and neuron (N) circuits are
arrayed hexagonally, surrounded by a continuous charge-diffusing lattice. An active
axon terminal (AT_x,y) excites the three adjacent neurons and its growth cone
samples neurotropin from four adjacent lattice nodes. The growth cone sends the
measured gradient direction off-chip (VGC_x,y). An active postsynaptic neuron
(N_x,y) releases neurotropin into the six surrounding lattice nodes and sends its spike
off-chip. B. System block diagram. Presynaptic neurons send spikes to the lookup
table (LUT), which routes them to axon terminal coordinates (AT) on-chip. Chip
output filters through a microcontroller (µC) that translates gradient measurements
(VGC) into LUT updates (ΔAT). Postsynaptic activity (N) may be returned to the
LUT as recurrent excitation and also passed on to the next stage of the system.
wave migrate to the same postsynaptic neighborhood, since neurotropin
concentration is maximized when every cell that fires at the same time releases
neurotropin at the same place.
To prevent all of the axons from collapsing onto a single postsynaptic target, we
enforce a strictly constant synaptic density. We have a fixed number of synaptic
sites, each of which can be occupied by one and only one presynaptic afferent. An
axon terminal moves from one synaptic site to another by swapping places with the
axon already occupying the desired location. Learning occurs only in the point-to-point wiring diagram; synaptic weights are identical and unchanging.
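The swap rule keeps synaptic density exactly constant because moving a terminal is a permutation, never a copy or deletion. A minimal sketch (the data structure is our own, not the chip's LUT format):

```python
# Sketch of the constant-density rewiring rule: every synaptic site is
# occupied by exactly one axon terminal, so a terminal moves by swapping
# places with the occupant of the site it wants.  Grid size and axon ids
# are illustrative.

def swap_terminals(occupancy, src, dst):
    """Move the axon at site `src` to `dst` by exchanging occupants;
    the one-to-one site/axon mapping is preserved exactly."""
    occupancy[src], occupancy[dst] = occupancy[dst], occupancy[src]

occupancy = {(x, y): 10 * x + y for x in range(4) for y in range(4)}
before = sorted(occupancy.values())
swap_terminals(occupancy, (0, 0), (3, 3))
after = sorted(occupancy.values())
print(before == after)  # True: synaptic density is unchanged
```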
3
System Architecture
We have fabricated and tested a first-generation neurotropin chip, Neurotrope 1, that
implements retrograde transmission of a diffusive factor from postsynaptic neurons
to presynaptic afferents (Figure 2A). The 11.5 mm² chip was fabricated through MOSIS using the TSMC 0.35 μm process, and includes a 40 x 20 array of growth
cones interleaved with a 20 x 20 array of neurons. The chip receives and transmits
Figure 3: Neurotropin circuit diagram. Postsynaptic activity gates neurotropin
release (left box) and presynaptic activity gates neurotropin uptake (right box).
spike coordinates encoded as address-events, permitting ready interface with other
spike-based chips that obey this standard [6]. Virtual wiring [7] is realized with a
look-up table (LUT) stored in a separate content-addressable memory (CAM) that is
controlled by a Ubicom SX52 microcontroller (Figure 2B).
The core of the chip consists of an array of axon terminals that target a second array
of neurons, all surrounded by a monolithic pFET channel laid out as a hexagonal
lattice, representing a two-dimensional extracellular medium. An activated axon
terminal generates postsynaptic potentials in all the fixed-radius dendritic arbors
that span its location, as modeled by a diffusor network [8]. Once the membrane
potential crosses a threshold, the neuron fires, transmitting its coordinates off-chip
and simultaneously releasing neurotropin, represented as charge spreading within
the lattice. Neurotropin diffuses spatially until removed by either an activity-independent leak current or an active axon terminal.
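The release/diffuse/decay behavior of the charge lattice can be sketched in one dimension; a ring of nodes stands in for the 2-D hexagonal lattice, and the rate and leak values are arbitrary illustrative numbers, not the chip's:

```python
def diffuse(grid, rate=0.1, leak=0.01):
    """One step of charge spreading on a toy 1-D ring of lattice nodes.

    Each node exchanges a fraction `rate` of charge with each neighbor
    and loses a fraction `leak` per step (the activity-independent
    decay). Diffusion alone conserves total charge; the leak removes it.
    """
    n = len(grid)
    out = []
    for i in range(n):
        neigh = grid[(i - 1) % n] + grid[(i + 1) % n]
        out.append((1 - leak) * (grid[i] + rate * (neigh - 2 * grid[i])))
    return out

g = [0.0] * 8
g[3] = 1.0                      # neurotropin released at one node
for _ in range(5):
    g = diffuse(g)

assert g[2] > 0 and g[4] > 0    # charge has spread to the neighbors
assert sum(g) < 1.0             # the leak removes charge over time
assert max(g) == g[3]           # the peak remains at the release site
```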
An axon terminal senses the local extracellular neurotropin gradient by draining
charge from its own node on the hexagonal lattice and from the three immediately
adjacent nodes. Charge from the four locations is integrated on independent
capacitors, which race to cross threshold first. The winner of this latency
competition transmits a set of coordinates that uniquely identify the location and
direction of the measured gradient. We use the neuron circuit described in [9] to
integrate neurotropin as well as dendritic potentials.
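The gradient measurement just described, four integrators racing to a threshold with the winner reporting its identity, can be sketched behaviorally. Units and rates here are arbitrary, and the analog circuit details (shared resets, acknowledge handshakes) are deliberately omitted:

```python
def latency_competition(rates, threshold=1.0, dt=0.01):
    """First-to-threshold race among charge integrators.

    rates[k] is the neurotropin sample current into capacitor k
    (index 0 = own site, 1-3 = the three adjacent nodes). The winning
    index identifies the direction of the steepest measured gradient;
    in the chip the winner then resets itself and its rivals.
    """
    v = [0.0] * len(rates)
    while True:
        for k, r in enumerate(rates):
            v[k] += r * dt
            if v[k] >= threshold:
                return k  # winner of the latency competition

# The largest sample current crosses threshold first:
assert latency_competition([0.2, 0.5, 0.9, 0.4]) == 2
```

Because latency encodes magnitude, a noisier but larger gradient still tends to win the race, which is the behavior the chip exploits.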
Coordinates transmitted off-chip thus fall into two categories: neuron spikes that are
routed through the LUT, and gradient directions that are used to update entries in
the LUT. An axon migrates simply by looking up the entry in the table
corresponding to the site it wants to occupy and swapping that address with that of
its current location. Subsequent spikes are routed to the new coordinates. Thus,
although the physical axon terminal circuits are immobilized in silicon, the virtual
axons are free to move within the postsynaptic plane.
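A minimal sketch of this virtual-wiring scheme, with a hypothetical 2x2 table of terminal coordinates: spikes are routed through the LUT, and migration is nothing more than an address swap:

```python
def route(lut, presyn_id):
    """Route a presynaptic spike to its axon-terminal coordinates."""
    return lut[presyn_id]

def migrate(lut, presyn_id, target_xy):
    """Virtual migration: swap addresses with whichever terminal holds
    the desired site, so the physical circuits never move."""
    other = next(p for p, xy in lut.items() if xy == target_xy)
    lut[presyn_id], lut[other] = lut[other], lut[presyn_id]

# Hypothetical 2x2 example (not the chip's 480-site table):
lut = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
assert route(lut, 0) == (0, 0)
migrate(lut, 0, (1, 1))          # axon 0 requests site (1,1), held by axon 3
assert route(lut, 0) == (1, 1)   # subsequent spikes go to the new site
assert route(lut, 3) == (0, 0)   # the displaced axon now occupies (0,0)
```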
3.1
Neurotropin circuit
Neurotropin in the extracellular medium is represented by charge in the hexagonal
charge-diffusing lattice M1 (Figure 3). VCDL sets the maximum amount of charge M1 can hold. The total charge in M1 is determined by circuits that implement
Figure 4: Latency competition circuit diagram. A growth cone integrates
neurotropin samples from its own location (right box) and the three neighboring
locations (left three boxes). The first location to accumulate a threshold of charge
resets its three competitors and signals its identity off-chip.
activity-dependent neurotropin release and uptake. In addition, M11 and M12
provide a path for activity-independent release and uptake.
Postsynaptic activity triggers neurotropin release, as implemented by the circuit in
the left box of Figure 3. Spikes from any of the three neighboring postsynaptic
neurons pull Cspost to ground, opening M7 and discharging Cfpost through M4 and M5. As Cfpost falls, M6 opens, establishing a transient path from Vdd to M1 that injects charge into the hexagonal lattice. Upon termination of the postsynaptic spike, Cspost and Cfpost are recharged by decay currents through M2 and M3. Vppost and Vfpostout are chosen such that Cspost relaxes faster than Cfpost, permitting Cfpost to integrate several postsynaptic spikes and facilitate charge injection if spikes arrive in a burst rather than singly. Vfpostin determines the contribution of an individual spike to the facilitation capacitor Cfpost.
Presynaptic activity triggers neurotropin uptake, as implemented by the circuit in
the right box of Figure 3. Charge is removed from the hexagonal lattice by a
facilitation circuit similar to that used for postsynaptic release. A presynaptic spike
targeted to the axon terminal pulls Cspre to ground through M24. Cspre, in turn, drains charge from Cfpre through M21 and M22. Cfpre removes charge from the hexagonal
lattice through M14, up to a limit set by M13, which prevents the hexagonal lattice
from being completely drained in order to avoid charge trapping. Current from M14
is divided between five possible sinks. Depending on presynaptic activation, up to
four axon terminals may sample a fraction of this current through M15-M18; the remainder is shunted to ground through M19 in order to prevent a single presynaptic event from exerting undue influence on gradient measurements. The current sampled by the axon terminal at its own site is gated by ~sample0, which is pulled low by a presynaptic spike through M26 and subsequently recovers through M25. Identical circuits in the other axon terminals generate signals ~sample1, ~sample2, and ~sample3. Sample currents I0, I1, I2, and I3 are routed to latency competition
circuits in the four adjacent axon terminals.
Figure 5: Retinal stimulus and cortical attractor. A. Randomly centered patches of
active retinal cells (left) excite cortical targets (right). B. Density plot of a single
mobile growth cone initialized in a static topographic projection. Histograms bin
column (σ = 3.27) and row (σ = 3.79) coordinates observed (n = 800).
3.2
Latency competition circuit
Each axon terminal measures the local neurotropin gradient by sampling a fraction
of the neurotropin present at its own site, location 0, and the three immediately
adjacent nodes on the hexagonal lattice, locations 1-3. Charge drained from the
hexagonal lattice at these four sites is integrated on a separate capacitor for each
location. The first capacitor to reach the threshold voltage wins the race, resetting
itself and all of its competitors and signaling its victory off-chip.
In the circuit that samples neurotropin from location 1 (left box of Figure 4), charge
pulses I1 arrive through diode M1 and accumulate on capacitor C1 in an integrate-and-fire circuit described in [9]. Upon crossing threshold this circuit transmits a swap request ~so1, resets its three competitors by using M6 to pull the shared reset line GRST high, and disables M4 to prevent GRST from using M3 to reset C1. The swap request ~so1 remains low until acknowledged by si1, which discharges C1 through M2. During the time that ~so1 is low, the other three capacitors are shunted
to ground by GRST, preventing late arrivals from corrupting the declared gradient
measurement before it has been transmitted off-chip. C] being reset releases GRST
to relax to ground through M24 with a decay time determined by Vgrs t ?
C1 is also reset if the neighboring axon terminal initiates a swap. GRST01 is pulled low if either the axon terminal at location 1 decides to move to location 0 or the axon terminal at location 0 decides to move to location 1. The accumulated neurotropin samples at both locations become obsolete after the exchange, and are therefore discarded when GRST is pulled high through M5. Identical circuits sample neurotropin from locations 2 and 3 (center two boxes of Figure 4).
neurotropin from locations 2 and 3 (center two boxes of Figure 4).
If C0 (right box of Figure 4) wins the latency competition, the axon terminal decides that its current location is optimal and therefore no action is required. In this case, no off-chip communication occurs and C0 immediately resets itself and its three rivals. Thus, the location 0 circuit is identical to those of locations 1-3 except that the inverted spike is fed directly back to the reset transistor M20 instead of to a communication circuit. Also, there is no GRSTi0 transistor since there is no swap
partner.
4
Results
We drove the chip with a sequence of randomly centered patches of presynaptic
activity meant to simulate retinal waves. Each patch consisted of 19 adjacent
presynaptic cells: a randomly selected presynaptic cell and its nearest, next-nearest,
Figure 6: Topographic map evolution. A. Initial maps. Axon terminals in the
postsynaptic plane (right) are dyed according to the presynaptic coordinates of their
cell body (left). Top row: Coarse initial map. Bottom row: Perfect initial map. B.
Postsynaptic plane after 12000 patch presentations. C. Map error in units of average
postsynaptic distance between axon terminals of presynaptic neighbors. Top line:
refinement of coarse initial map; bottom line: relaxation of perfect initial map.
and third-nearest presynaptic neighbors on a hexagonal grid (Figure 5A). Every
patch participant generated a burst of 8192 spikes, which were routed to the
appropriate axon terminal circuit according to the connectivity map stored in the
CAM. About 100 patches were presented per minute.
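The 19-cell patch described above (a random cell plus its nearest, next-nearest, and third-nearest hexagonal neighbors) can be generated compactly in axial hex coordinates, where those three neighbor shells are exactly the cells within hex distance 2. The grid size here is an illustrative toy, not the chip's retina:

```python
import random

def hex_patch(center, radius=2):
    """All axial hex coordinates within hex distance `radius` of `center`.

    With radius 2 this yields 19 cells: the center plus its nearest,
    next-nearest, and third-nearest neighbors on a hexagonal grid,
    matching the simulated retinal-wave patch size in the text.
    """
    cq, cr = center
    cells = []
    for dq in range(-radius, radius + 1):
        for dr in range(max(-radius, -dq - radius),
                        min(radius, -dq + radius) + 1):
            cells.append((cq + dq, cr + dr))
    return cells

random.seed(1)
center = (random.randrange(20), random.randrange(24))
patch = hex_patch(center)
assert len(patch) == 19
assert center in patch
```

Each generated patch would then drive a burst of spikes routed through the connectivity map, as in the experiment above.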
To establish an upper performance bound, we initialized the system with a perfectly
topographic projection and generated bursts from the same retinal patch, holding all
growth cones static except for the one projected from the center of the patch, which
was free to move over the entire cortical plane. Over 800 min, the single mobile
growth cone wandered within the cortical area of the patch (Figure 5B), suggesting
that the patch radius limits maximum sustainable topography even in the ideal case.
To test this limit empirically, we generated an initial connectivity map by starting
with a perfectly topographic projection and executing a sequence of (N/2)² swaps
between a randomly chosen axon terminal and one of its randomly chosen
postsynaptic neighbors, where N is the number of axon terminals used. We opted for
a fanout of 1 and full synaptic site occupancy, so 480 presynaptic cells projected
axons to 480 synaptic sites. (One side of the neuron array exhibited enhanced
excitability, apparently due to noise on the power rails, so the 320 synaptic sites on
that side were abandoned.) The perturbed connectivity map preserved a loose global
bias, representing the formation of a coarse topographic projection from activity-independent cues. This new initial map was then allowed to evolve according to the
swap requests generated by the chip. After approximately 12000 patches, a refined
topographic projection reemerged (Figure 6A,B).
To investigate the dynamics of topographic refinement, we defined the error for a
single presynaptic cell to be the average of the postsynaptic distances between the
axon terminals projected by the cell body and its three immediate presynaptic
neighbors. A cell in a perfectly topographic projection would therefore have unit
error. The error drops quickly at the beginning of the evolution as local clumps of
correlated axon terminals crystallize. Further refinement requires the disassembly of
locally topographic crystals that happened to nucleate in a globally inconvenient
location. During this later phase, the error decreases slowly toward an asymptote.
To evaluate this limit we seeded the system with a perfect projection and let it relax
to a sustainable degree of topography, which we found to have an error of about 10
units (Figure 6C).
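The error metric defined above can be computed directly from the wiring: each presynaptic cell's error is the mean postsynaptic distance between its own axon terminal and those of its immediate presynaptic neighbors. The sketch below uses a 1-D toy map with two neighbors per cell instead of the paper's three, purely for brevity:

```python
import math

def map_error(positions, neighbors):
    """Topographic map error, following the definition in the text.

    positions[c] is the postsynaptic (x, y) of cell c's axon terminal;
    neighbors[c] lists c's immediate presynaptic neighbors. A cell's
    error is the mean postsynaptic distance to its neighbors' terminals,
    so a perfectly topographic unit-spacing map has error 1.
    """
    errs = []
    for c, nbrs in neighbors.items():
        d = [math.dist(positions[c], positions[n]) for n in nbrs]
        errs.append(sum(d) / len(d))
    return sum(errs) / len(errs)

# Hypothetical 1-D toy map with unit spacing:
pos = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
nbrs = {1: [0, 2], 2: [1, 3]}
assert map_error(pos, nbrs) == 1.0

# Swapping two terminals raises the error above the topographic floor:
pos[1], pos[2] = pos[2], pos[1]
assert map_error(pos, nbrs) > 1.0
```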
5
Discussion
Our results demonstrate the feasibility of a spike-based neuromorphic learning
system based on principles of developmental plasticity. This neurotropin chip lends
itself readily to more ambitious multichip systems incorporating silicon retinae that
could be used to automatically wire ocular dominance columns and orientation-selectivity maps when driven by spatiotemporal correlations among neurons of
different origin (e.g. left eye/right eye) or type (ON/OFF).
A related model of chemical-driven developmental plasticity posits an activity-dependent competition for a local sustenance factor, or neurotrophin. Axon weights
saturate at neurotrophin-rich locations and vanish at neurotrophin-starved locations,
pruning a dense initial arbor until only the final circuit remains [10]. By contrast, in
our chemotaxis model, a handful of growth cone-guided wires rearrange themselves
by moving through locations at which they had no initial presence. These two
mechanisms could plausibly complement each other: noisy gradient measurements
establish an initial axonal arbor that can then be pruned to eliminate outliers and
refine local topography. We can use a similar approach to improve our silicon maps.
Acknowledgments
We would like to thank K. Hynna and K. Zaghloul for assistance with fabrication
and testing. This project was funded in part by the David and Lucille Packard
Foundation and the NSF/BITS program (EIA0130822). B.T. received support from
the Dolores Zohrab Liebmann Foundation.
References
[1] C. Mead (1990) Neuromorphic electronic systems. Proc IEEE, 78(10):1629-1636.
[2] K.A. Zaghloul (2002) A silicon implementation of a novel model for retinal processing. PhD thesis, University of Pennsylvania.
[3] M. Sur and C.A. Leamey (2001) Development and plasticity of cortical areas and networks. Nat Rev Neurosci, 2:251-262.
[4] E.J. Huang and L.F. Reichardt (2001) Neurotrophins: roles in neuronal development and function. Annu Rev Neurosci, 24:677-736.
[5] M.B. Feller, D.A. Butts, H.L. Aaron, D.S. Rokhsar, and C.J. Shatz (1997) Dynamic processes shape spatiotemporal properties of retinal waves. Neuron, 19:293-306.
[6] K.A. Boahen (2000) Point-to-point connectivity between neuromorphic chips using address-events. IEEE Transactions on Circuits and Systems II, 47:416-434.
[7] J.G. Elias (1993) Artificial dendritic trees. Neural Comp, 5:648-663.
[8] K.A. Boahen and A.G. Andreou (1991) A contrast-sensitive silicon retina with reciprocal synapses. Advances in Neural Information Processing Systems 4, J.E. Moody and R.P. Lippmann, eds., pp 764-772, Morgan Kaufmann, San Mateo, CA.
[9] E. Culurciello, R. Etienne-Cummings, and K. Boahen (2001) Arbitrated address event representation digital image sensor. IEEE International Solid State Circuits Conference, pp 92-93.
[10] T. Elliott and N.R. Shadbolt (1999) A neurotrophic model of the development of the retinogeniculocortical pathway induced by spontaneous retinal waves. J Neurosci, 19:7951-7970.
Fast Transformation-Invariant Factor Analysis
Anitha Kannan
Nebojsa Jojic
Brendan Frey
University of Toronto, Toronto, Canada
{anitha, frey}@psi.utoronto.ca
Microsoft Research, Redmond, WA, USA
[email protected]
Abstract
Dimensionality reduction techniques such as principal component analysis and factor analysis are used to discover a linear mapping between high
dimensional data samples and points in a lower dimensional subspace.
In [6], Jojic and Frey introduced mixture of transformation-invariant
component analyzers (MTCA) that can account for global transformations such as translations and rotations, perform clustering and learn local appearance deformations by dimensionality reduction. However, due
to enormous computational requirements of the EM algorithm for learning the model, O(N²) where N is the dimensionality of a data sample, MTCA was not practical for most applications. In this paper, we demonstrate how fast Fourier transforms can reduce the computation to the order of N log N. With this speedup, we show the effectiveness of MTCA
in various applications - tracking, video textures, clustering video sequences, object recognition, and object detection in images.
1 Introduction
Dimensionality reduction techniques such as principal component analysis [7] and factor
analysis [1] linearly map high dimensional data samples onto points in a lower dimensional
subspace. In factor analysis, this mapping is defined by subspace origin and the subspace
bases stored in the columns of the factor loading matrix, Λ. A mixture of factor analyzers learns to place the data into several learned subspaces. In computer vision, this approach
has been widely used in face modeling for learning facial expressions (e.g. [2] , [12] ).
When the variability in the data is due, in part, to small transformations such as translations,
scales and rotations, factor analyzer learns a linearized transformation manifold which is
often sufficient ( [4], [11]). However, for large transformations present in the data, linear
approximation is insufficient. For instance, a factor analyzer trained on a video sequence
of a person walking tries to capture a linearized model of large translations (fig. 2a.) as
opposed to learning local deformations such as motion of legs and hands (fig. 2c.).
In [6], it was shown that a discrete hidden transformation variable T enables clustering
and learning subspaces within clusters, invariant to global transformations. However, experiments were done on images of very low resolution due to enormous computational cost
of the EM algorithm used for learning the model. It is known that the fast Fourier transform (FFT)
Figure 1: Mixture of transformed component analyzers (MTCA). (a) The generative model with
cluster index c, subspace coordinates y, latent image z = μ_c + Λ_c y + noise, transformation T, and generated final image x = Tz + noise; (b) An example of the generative process, where subspace coordinates y and image positions T_x and T_y are inferred from a captured video sequence.
is very useful in dealing with transformations in images ( [3], [13]). The main purpose of
this work is to show that under very mild assumptions, we can have an effective implementation of MTCA that reduces the complexity from O(L |T| N²) to O(L |T| N log N), where L is the number of factors, N is the size of the input, and T is the set of all possible transformations.
This means that for 256x256 images, the current implementation will be 4000 times faster.
We present experimental results showing the effectiveness of MTCA in various applications
- tracking, video textures, clustering video sequences, object recognition and detection.
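The quoted speedup for 256x256 images follows from the ratio of the two per-transformation costs; a quick arithmetic check (assuming the logarithm is base 2):

```python
import math

# Per-transformation cost drops from O(N^2) to O(N log2 N), so the
# expected speedup factor is roughly N / log2(N). For a 256x256 image:
N = 256 * 256
speedup = N / math.log2(N)
assert N == 65536
assert speedup == 4096.0           # log2(65536) = 16
assert round(speedup, -3) == 4000  # the "~4000 times faster" figure
```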
2 Fast inference and learning in MTCA
Mixture of transformation-invariant component analyzers (MTCA) is used for
transformation-invariant clustering and learning a subspace representation within each
cluster. The set of transformations, T, to which the model is invariant is specified a priori. Fig. 1a shows the generative model for MTCA. The vector y is an L-dimensional Gaussian N(0, I) random variable. The cluster index c is a C-valued discrete random variable with probability distribution π_c. The N-dimensional (N = n×n) latent image z has mean μ_c + Λ_c y and diagonal covariance Φ_c; the N×L matrix Λ_c is the factor loading matrix for class c. An observation x is obtained by applying a transformation T (with distribution ρ_T) to the latent image and adding independent Gaussian noise with covariance Ψ. Fig. 1b illustrates this generative process for a one-class MTCA. The subspace coordinates y are used to generate a latent image z (without noise), and the horizontal and vertical image positions T_x and T_y are used to shift the latent image to obtain x. In fact, the y, T_x and T_y shown in the figure are actually inferred from the captured video sequence (see sec. 3).
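As a toy illustration of this generative process (subspace coordinates, then a latent image, then a shift), here is a sketch using a six-pixel 1-D "image" with circular shifts standing in for the paper's 2-D wrap-around translations. All numbers are invented, and the class-conditional image noise is dropped for brevity:

```python
import random

def sample_mtca(mu, Lam, pi, shifts, psi=0.001):
    """Draw one observation from a toy 1-D MTCA-style generative model.

    mu[c]  : latent-image mean for class c (length-N list)
    Lam[c] : factor loadings for class c, a list of length-N columns
    pi     : class probabilities; shifts: allowed circular shifts
    The per-pixel latent noise (diagonal covariance) is omitted here.
    """
    c = random.choices(range(len(pi)), weights=pi)[0]
    y = [random.gauss(0.0, 1.0) for _ in Lam[c]]            # y ~ N(0, I)
    z = [m + sum(col[j] * yk for col, yk in zip(Lam[c], y)) # z = mu_c + Lam_c y
         for j, m in enumerate(mu[c])]
    T = random.choice(shifts)
    n = len(z)
    x = [z[(j - T) % n] + random.gauss(0.0, psi) for j in range(n)]
    return c, y, T, x

random.seed(0)
mu = [[0, 0, 1, 0, 0, 0]]                  # one class, a "blob" at pixel 2
Lam = [[[0, 0.1, 0.2, 0.1, 0, 0]]]         # one factor deforming the blob
c, y, T, x = sample_mtca(mu, Lam, pi=[1.0], shifts=[0, 1, 2, 3])
assert len(x) == 6 and T in (0, 1, 2, 3)
```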
The joint distribution over all variables is [6],
p(c, T, y, z, x) = π_c ρ_T N(y; 0, I) N(z; μ_c + Λ_c y, Φ_c) N(x; Tz, Ψ).
Figure 2: Means and components in Λ learned using (a) FA, (b) FA applied on data normalized
using a correlation tracker, and (c) transformed component analysis (TCA) applied directly on data.
Computing the posterior p(c, T | x) requires evaluating the joint,

p(x, T, c) = π_c ρ_T N(x; Tμ_c, T(Φ_c + Λ_c Λ_c')T' + Ψ).    (1)
A/ B /IG .! ,.!H 5 K2 3 !
The likelihood of / : is
(2)
!
(
! (
! 3
The parameters of the model are learned from i.i.d training examples by maximizing their
likelihood (
: /
) using an exact EM algorithm. The only inputs to the EM are the
training examples, the number
of factors, , the number of clusters, , and the set of all
possible transformations, . Starting at a random initialization, EM algorithm for MTCA
iterates between E
step, where
it probabilistically fills in for hidden variables by finding the
exact posterior : ;=' =<> /A and M step in which it updates the parameters.
Performing inference on transformations and class,
The likelihood of the data (eqn. 2) requires summing over all possible transformations and
is very expensive. In fact, each of inference and update equations in [6] has a complexity
of
. In this section, we show how these equations can be derived and evaluated in
log . We focus on
Fourier domain at a considerably lower computational cost of
inferring the means of
and
as examples for efficient computation.
Similar mathematical manipulations will result in the inferences provided in the appendix.
: @< +/A
: ' <>/A
We assume that data is represented in a coordinate system in which transformations are discrete shifts with wrap-around. For translations, this is a 2D rectangular grid, while for rotations and scales it is shifts in a radial grid (c.f. [3], [13]). We also assume that the post-transformation noise is isotropic, Ψ = ψI, so that the covariance matrices COV(y | c, T, x) and COV(z | c, T, x) become independent of T. In fact, for isotropic Ψ, it is possible to preset ψ (in our experiments we set it to .001). By presetting the sensor noise ψ to a small value, if the actual value in the data is larger, it can be accounted for in Φ.
First, we describe the notation that simplifies expressions for a transformation that corresponds to a shift in input coordinates. Define j to be an integer vector in the coordinate system in which the input is measured. For a 2D n×n image, x(j) is the element of x at pixel j, where j runs over {0, ..., n−1} × {0, ..., n−1}. Vectors in the input coordinate system, such as μ, are defined this way. For diagonal matrices such as Φ, Φ(j) defines the element corresponding to the pixel at coordinate j. This notation enables a transformation corresponding to a shift to be represented as a vector T in the same system, so that a shift of x by T is represented as x(j + T), with coordinates taken modulo n (wrap-around).
Figure 3: Transformation invariant clustering with and without a subspace model: (a) Parameters
of a three-cluster TMG [6], and a three-cluster MTCA; (b) Frames from the video sequence, the corresponding TMG mean, and the object appearance in the corresponding subspace of MTCA; (c) An illustration of the role of the components for the first class: one factor tends to model lighting variation and the other tends to model small out-of-plane rotations.
In the appendix, we show that all expensive operations in inference and learning involve computing a correlation or a convolution. These operations cost only O(N log N), i.e. for all shifts at once, in the frequency domain, while they cost O(N²) in the pixel domain. For notational ease, we represent column l and row j of a matrix Λ by Λ(:, l) and Λ(j, :) respectively. Also, diag(A) extracts the diagonal elements of a matrix A, and a ∘ b defines an element-wise product between a and b.
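The central trick, evaluating a correlation for every shift at once in the frequency domain, can be illustrated with a small self-contained sketch; a textbook radix-2 FFT stands in for an optimized library routine, and a 1-D signal stands in for the 2-D images:

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT (input length must be a power of two)."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def ifft(A):
    n = len(A)
    conj = fft([x.conjugate() for x in A])
    return [x.conjugate() / n for x in conj]

def corr_all_shifts(a, b):
    """Circular correlation c[T] = sum_j a[j] * b[j + T] for EVERY shift T,
    computed in O(N log N) via the correlation theorem instead of O(N^2)."""
    A, B = fft(a), fft(b)
    return [x.real
            for x in ifft([ak.conjugate() * bk for ak, bk in zip(A, B)])]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 2.0]
fast = corr_all_shifts(a, b)
slow = [sum(a[j] * b[(j + T) % 4] for j in range(4)) for T in range(4)]
assert all(abs(f - s) < 1e-9 for f, s in zip(fast, slow))
```

The same identity applies per row/column pair for the paper's 2-D shifts, which is what yields the O(N log N) totals quoted above.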
In principal component analysis (PCA), where there is no noise, the data is projected onto the subspace through a projection matrix. Similarly, in MTCA, we can derive the projection matrix that maps a transformation-normalized observation into the subspace; it accounts for both the inverse of the noise variances in the input space and the inverse of the noise variances in the projected subspace. The mean of the subspace coordinates for a given x, c, and T is obtained by subtracting the mean of the latent image μ_c from the transformation-normalized observation and applying the projection matrix.
For each factor l, this reduces to a weighted sum over pixels of the form Σ_j Λ_c(j, l) x(j + T). As the summation over j for every shift T is a correlation, it can be efficiently computed for all T at the same time in the frequency domain in O(N log N) time.
The inference on the latent image z is given by its expected value, E[z | x] = Σ_c Σ_T p(c, T | x) E[z | c, T, x], where COV(z | c, T, x) is independent of x and T. The first term, dictated by the model, can be easily computed. The second term involves Σ_T p(T | x) T⁻¹x, a convolution of x with the probability map p(T | x) defined for all T; a particular element in the sum
Figure 4: Comparison of FA applied on data normalized for translations using correlation tracker
and TCA. (a) Frames from the sequence. (b) Shift-normalized frames, using a correlation-based tracker and obtained through the factor analysis model. (c) The corresponding inferred quantities for the TCA model.
Figure 5: Simulated walk sequence synthesized after training an AR model on the subspace and image motion parameters. The sequence is enlarged for better viewing of the translations. The first row contains a few frames from the sequence simulated directly from the model. The second row contains a few frames from the video texture generated by picking the frames in the original sequence for which the recent subspace trajectory was similar to the one generated by the AR model.
We can efficiently compute this sum for all shifts at once in the frequency domain. Note that multiplication with the large matrices above can be done efficiently by factorizing them and applying a sequence of matrix-vector multiplications from right to left.
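The factorization trick in the last sentence is just matrix-multiplication associativity: applying each factor to the vector in turn avoids ever forming the full matrix product. A small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
B = rng.standard_normal((200, 200))
v = rng.standard_normal(200)

slow = (A @ B) @ v   # O(n^3): forms the 200x200 product matrix first
fast = A @ (B @ v)   # O(n^2): two matrix-vector products, same result
assert np.allclose(slow, fast)
```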
3 Experimental Results
Clustering face poses. In Fig. 3b the first column shows examples from a video sequence of one person with different facial poses walking across a cluttered background. We trained a transformation-invariant mixture of Gaussians (TMG) [6] with 3 clusters; the learned means are shown in Fig. 3b. TMG captures the three generic poses in the data. However, due to the presence of lighting variations and small out-of-plane rotations in addition to big pose changes, it is difficult for TMG to learn these variations without many more classes.
We trained a MTCA model with 3 classes and 2 factors, initializing the parameters to those
learned by TMG. Fig. 3a compares TMG means and components to those learned using
MTCA. The MTCA model learns to capture small variations around the cluster means.
For example, for the first cluster, the two subspace coordinates tend to model out-of-plane rotations and illumination changes (Fig. 3c). In Fig. 3b, we also compare the reconstructions produced by TMG and MTCA for various training examples, illustrating the better tracking and appearance modelling of MTCA.
Figure 6: Clustering faces extracted from a personal database prepared using a face detector. (a) Training examples. (b) Means, variances and components for the two classes learned using MTCA. (c) One column contains several photos in which the detector [8] failed to find the face; the remaining columns contain the central 100x100 portions of the corresponding inferred images.
Modeling a walking person. Fig. 4a shows three 165x285 frames from a video sequence of a person walking. For effective summarization, we need to learn a compact representation for the dynamically and periodically changing hand and leg movements.
A regular PCA or FA will learn a representation that focuses more on learning linearized shifts, and less on the more interesting motion of the hands and legs (Fig. 2a). The traditional approach is to track the object using, for example, a correlation tracker, and then learn the subspace model on the normalized images. The parameters learned in this fashion are shown in Fig. 2b. Without previously learning a good model, the tracker fails to provide the perfect tracking necessary for precise subspace modelling of limb motion, and thus the inferred subspace projection is blurred (Fig. 2b).
As TCA performs tracking and learns the appearance model at the same time, not only does it avoid the tracker initialization problem that plagues the "tracking first" approaches, but it also provides perfectly aligned images and infers a much cleaner projection.
The TCA model can be used to create video textures based on frame reshuffling, similar to [10]. However, instead of shuffling frames based directly on pixel similarity, we use the subspace position and image position generated from an AR process [9], and for each time t find the best frame u in the original video for which the recent window of subspace and position coordinates is most similar to the generated one. Then, the generated transformation is applied on the normalized image. The result is shown in fig. 5b and contains slightly sharper images than the ones simulated directly from the generative model, fig. 5a. We let the simulated walk last longer than the original sequence by letting MTCA live on twice as wide frames.
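The frame-picking step can be sketched as a nearest-neighbor search over recent subspace windows. This is a simplified stand-in: the function name and the plain Euclidean window distance are assumptions, and the image-position coordinates are folded into the same trajectory array.

```python
import numpy as np

def pick_frames(y_orig, y_sim, window=3):
    """For each simulated time t, pick the original frame u whose recent
    trajectory window best matches the simulated one (frame reshuffling).
    y_orig, y_sim: arrays of shape (T, d) of per-frame coordinates."""
    T_orig, _ = y_orig.shape
    picks = []
    for t in range(window, len(y_sim)):
        win_sim = y_sim[t - window:t].ravel()
        best_u, best_cost = None, np.inf
        for u in range(window, T_orig):
            cost = np.sum((y_orig[u - window:u].ravel() - win_sim) ** 2)
            if cost < best_cost:
                best_u, best_cost = u, cost
        picks.append(best_u)
    return picks
```

Matching on a short window of recent coordinates, rather than a single frame, is what keeps the reshuffled sequence temporally coherent.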
Clustering and face recognition. We used a standard face detector [8] to obtain 85 32x32 images of faces of 2 persons, from a personal photo database of a mother and her daughter. In fig. 6a we present examples from the training set.
We learned a MTCA model with 2 classes and 4 factors. To model global lighting variation, we preset one of the factors to be uniform at .01 (see fig. 6b). This handles the linearized version of the ambient lighting condition. We also preset another factor to be smoothly varying
in brightness (see fig. 6b) to capture side illumination. The other two components are learned, and they model slight appearance deformations such as facial expressions. The model learned to cluster the faces in the training set with high accuracy.
An interesting application is to use the learned representation of the faces to detect and recognize faces in the original photos. For instance, the face detector did not find faces in many photographs (e.g., fig. 6c), which we were able to detect using the learned model (fig. 6c). We increased the resolution of the model parameters to match the resolution of the photos (640x480), padding around the original parameters with a uniform mean, zero factors and high variance. Then, we performed inference, inferring the most likely class c, the most likely transformation for that class, and the subspace coordinates. We also incorporated 3 rotations and 4 scales as possible transformations, in addition to all possible shifts. In fig. 6c, we present three examples which were not in the training set and on which the face detector we used failed. In all three cases MTCA detected and recognized the face correctly as belonging to class 2.
4 Conclusion
Mixture of transformation-invariant component analyzers is a technique for modeling visual data that finds the major clusters and major transformations in the data and learns subspace models of appearance. In this paper, we have described how a fast implementation of learning in this model is possible through efficient use of identities from matrix algebra and the use of fast Fourier transforms.
Appendix: EM updates. Before performing inference in the E-step, we pre-compute, for all classes, the quantities that are independent of the training examples. For each training example, the transformation-normalized input is evaluated and saved.

Computing posteriors over the transformation and the class c requires evaluating the class-conditional likelihood (eqn. 1). To compute this distribution, we require the determinant of the covariance matrix and the Mahalanobis distance between the input and the transformed latent image. The determinant simplifies to a product involving the subspace and sensor noise variances, and the Mahalanobis distance expands into terms that are either independent of the shift or are correlations of the input with class-specific filters. The summation over pixels therefore takes only O(N) time per shift once the correlations are available, and the correlations themselves are computed for all shifts in O(N log N) time.

Defining the usual EM sufficient statistics (the posterior-weighted first and second moments of the latent variables), in the M-step the means, factor loadings and noise variances are updated according to the standard factor analysis re-estimation equations, with all expectations additionally averaged over the posterior distribution of transformations and classes.
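A minimal sketch of such a transformation-invariant E-step for 1-D signals, assuming isotropic noise of variance sigma2 so that the Mahalanobis distance for every shift reduces to a correlation (a deliberate simplification of the full factored covariance used in the paper):

```python
import numpy as np

def responsibilities(x, means, sigma2):
    """Posterior over (class, shift) for a 1-D transformation-invariant
    mixture with isotropic noise. With shift(mu, s)[i] = mu[(i - s) % n],
    ||x - shift(mu, s)||^2 = ||x||^2 + ||mu||^2 - 2 * corr(x, mu)[s],
    so the distances for all shifts come from one FFT-based correlation."""
    X = np.fft.fft(x)
    logps = []
    for mu in means:
        corr = np.real(np.fft.ifft(X * np.conj(np.fft.fft(mu))))
        d2 = np.dot(x, x) + np.dot(mu, mu) - 2.0 * corr
        logps.append(-d2 / (2.0 * sigma2))
    logps = np.array(logps)          # shape: (num_classes, num_shifts)
    logps -= logps.max()             # stabilize before exponentiating
    p = np.exp(logps)
    return p / p.sum()
```

With uniform priors over classes and shifts, normalizing the exponentiated log-likelihoods over the whole (class, shift) grid gives the joint posterior used to weight the M-step statistics.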
References
[1] Everitt, B.S. An Introduction to Latent Variable Models. Chapman and Hall, New York, NY, 1984.
[2] Frey, B.J., Colmenarez, A. & Huang, T.S. Mixtures of local linear subspaces for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society Press: Los Alamitos, CA, 1998.
[3] Frey, B.J. & Jojic, N. Fast, large-scale transformation-invariant clustering. In Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press, 2002.
[4] Ghahramani, Z. & Hinton, G. The EM Algorithm for Mixtures of Factor Analyzers. University of Toronto Technical Report CRG-TR-96-1, 1996.
[5] Hinton, G., Dayan, P. & Revow, M. Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks, 1997.
[6] Jojic, N. & Frey, B.J. Topographic transformation as a discrete latent variable. In Advances in Neural Information Processing Systems 13. Cambridge, MA: MIT Press, 1999.
[7] Jolliffe, I.T. Principal Component Analysis. Springer-Verlag, New York, NY, 1986.
[8] Li, S.Z., Zhu, L., Zhang, Z.Q. & Zhang, H.J. Learning to Detect Multi-View Faces in Real-Time. In Proceedings of the 2nd International Conference on Development and Learning, June 2002.
[9] Neumaier, A. & Schneider, T. Estimation of parameters and eigenmodes of multivariate autoregressive models. ACM Transactions on Mathematical Software, 2001.
[10] Schödl, A., Szeliski, R., Salesin, D. & Essa, I. Video textures. In Proceedings of SIGGRAPH 2000.
[11] Simard, P., LeCun, Y. & Denker, J. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems, 1993.
[12] Turk, M. & Pentland, A. Face recognition using eigenfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Maui, Hawaii, 1991.
[13] Wolberg, G. & Zokai, S. Robust image registration using log-polar transform. In Proceedings of the IEEE International Conference on Image Processing, Canada, 2000.
Value Functions: A Trajectory-Based Approach
Christopher G. Atkeson
Robotics Institute and HCII
Carnegie Mellon University
Pittsburgh, PA 15213, USA
[email protected]
Jun Morimoto
ATR Human Information Science Laboratories, Dept. 3
Keihanna Science City
Kyoto 619-0288, Japan
[email protected]
Abstract
A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support
rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which
policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as
the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping
and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent
value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor
noise.
1 Introduction
The widespread application of reinforcement learning is hindered by excessive cost in terms
of one or more of representational resources, computation time, or amount of training data.
The goal of our research program is to minimize these costs. We reduce the amount of training data needed by learning models, and using a DYNA-like approach to do mental practice
in addition to actually attempting a task [1, 2]. This paper addresses concerns about computation time and representational resources. We reduce the computation time required by
using more powerful updates that update first and second derivatives of value functions
and first derivatives of policies, in addition to updating value function and policy values at
particular points [3, 4, 5]. We reduce the representational resources needed by representing
value functions and policies along carefully chosen trajectories. This non-parametric representation is well suited to the task of representing and updating value functions, providing
additional representational power as needed and avoiding interference.
This paper explores how the approach can be extended to periodic tasks such as hopping
and walking. Previous work has explored how to apply an early version of this approach
to tasks with an explicit goal state [3, 6] and how to simultaneously learn a model and
also affiliated with the ATR Human Information Science Laboratories, Dept. 3
use this approach to compute a policy and value function [6]. Handling periodic tasks
required accommodating discount factors and discontinuities in the task dynamics, and
using function approximation to represent value functions at discontinuities.
2 What is the approach?
Represent value functions and policies along trajectories. Our first key idea for creating
a more global policy is to coordinate many trajectories, similar to using the method of
characteristics to solve a partial differential equation. A more global value function is
created by combining value functions for the trajectories. As long as the value functions are
consistent between trajectories, and cover the appropriate space, the global value function
created will be correct. This representation supports accurate updating since any updates
must occur along densely represented optimized trajectories, and an adaptive resolution
representation that allocates resources to where optimal trajectories tend to go.
Segment trajectories at discontinuities. A second key idea is to segment the trajectories
at discontinuities of the system dynamics, to reduce the amount of discontinuity in the value
function within each segment, so our extrapolation operations are correct more often. We
assume smooth dynamics and criteria, so that first and second derivatives exist. Unfortunately, in periodic tasks such as hopping or walking the dynamics changes discontinuously
as feet touch and leave the ground. The locations in state space at which this happens
can be localized to lower dimensional surfaces that separate regions of smooth dynamics.
For periodic tasks we apply our approach along trajectory segments which end whenever
a dynamics (or criterion) discontinuity is reached. We also search for value function discontinuities not collocated with dynamics or criterion discontinuities. We can use all the
trajectory segments that start at the discontinuity and continue through the next region to
provide estimates of the value function at the other side of the discontinuity.
Use function approximation to represent value function at discontinuities. We use
locally weighted regression (LWR) to construct value functions at discontinuities [7].
Update first and second derivatives of the value function as well as first derivatives of
the policy (control gains for a linear controller) along the trajectory. We can think of
this as updating the first few terms of local Taylor series models of the global value and
policy functions. This non-parametric representation is well suited to the task of representing and updating value functions, providing additional representational power as needed
and avoiding interference.
We will derive the update rules. Because we are interested in periodic tasks, we must introduce a discount factor into Bellman's equation, so value functions remain finite. Consider a system with dynamics x_{k+1} = f(x_k, u_k) and a one step cost function L(x_k, u_k), where x is the state of the system and u is a vector of actions or controls. The subscript k serves as a time index, but will be dropped in the equations that follow in cases where all time indices are the same or are equal to k.
A goal of reinforcement learning and optimal control is to find a policy that minimizes the total cost, which is the sum of the costs for each time step. One approach to doing this is to construct an optimal value function, V(x). The value of this value function at a state x is the sum of all future costs, given that the system started in state x and followed the optimal policy (chose optimal actions at each time step as a function of the state). A local planner or controller can choose globally optimal actions if it knew the future cost of each action. This cost is simply the sum of the cost of taking the action right now and the discounted future cost of the state that the action leads to, which is given by the value function. Thus, the optimal action is given by: u(x) = argmin_u [ L(x, u) + γ V(f(x, u)) ], where γ is the discount factor.
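For a finite, deterministic MDP this greedy rule and the associated value function can be computed by straightforward value iteration. The sketch below is a generic illustration of the Bellman recursion, not the trajectory-based method of this paper; the array encoding of the MDP is an assumption.

```python
import numpy as np

def value_iteration(cost, nxt, gamma=0.9, iters=500):
    """Iterate V(x) = min_u [ cost(x, u) + gamma * V(nxt(x, u)) ] on a
    finite deterministic MDP, then extract the greedy policy.
    cost[x, u]: one-step cost; nxt[x, u]: successor state index."""
    n_states, _ = cost.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = cost + gamma * V[nxt]          # Q[x, u]
        V = Q.min(axis=1)
    policy = (cost + gamma * V[nxt]).argmin(axis=1)
    return V, policy
```

The greedy policy extraction in the last line is exactly the argmin rule above, applied once the value function has converged.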
Figure 1: Example trajectories where the value function and policy are explicitly represented for a regulator task at goal state G (left), a task with a point goal state G (middle),
and a periodic task (right).
Suppose at a point (x, u) along the trajectory we have 1) a local second order Taylor series approximation of the optimal value function,

V(x + dx) ≈ V0 + Vx dx + (1/2) dxᵀ Vxx dx,

2) a local second order Taylor series approximation of the dynamics, which can be learned using local models of the plant (A and B correspond to the usual A and B of the linear plant model used in linear quadratic regulator (LQR) design),

f(x + dx, u + du) ≈ f(x, u) + A dx + B du + second order terms,

and 3) a local second order Taylor series approximation of the one step cost, which is often known analytically for human specified criteria (Q and R correspond to the usual Q and R of LQR design):

L(x + dx, u + du) ≈ L(x, u) + Lx dx + Lu du + (1/2) dxᵀ Q dx + (1/2) duᵀ R du.
Given a trajectory, one can integrate the value function and its first and second spatial derivatives backwards in time to compute an improved value function and policy. Writing primed quantities for time k+1 and using the local models (A, B for the linearized dynamics, Q, R for the quadratic cost terms), the backward sweep takes the following form (in discrete time):

Zx = Lx + γ Aᵀ Vx′                                                    (1)
Zu = Lu + γ Bᵀ Vx′                                                    (2)
Zxx = Q + γ Aᵀ Vxx′ A,   Zuu = R + γ Bᵀ Vxx′ B,   Zux = γ Bᵀ Vxx′ A   (3)
du = −Zuu⁻¹ Zu,   K = −Zuu⁻¹ Zux                                      (4)
Vx = Zx + Zuxᵀ du,   Vxx = Zxx + Zuxᵀ K                               (5)

where du is the correction to the nominal action and K is the local feedback gain.
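In the linear-quadratic special case with time-invariant local models, the backward sweep reduces to the familiar Riccati-style recursion for the value Hessian and the feedback gain. The sketch below is under those assumptions (fixed A, B, Q, R and no linear cost terms), so it illustrates only the quadratic core of the updates.

```python
import numpy as np

def lqr_backward(A, B, Q, R, gamma=1.0, steps=300):
    """Backward sweep for the discounted time-invariant LQR problem:
    iterate the Riccati-style recursion for Vxx and the gain K (u = K x)."""
    n = A.shape[0]
    Vxx = np.zeros((n, n))
    K = None
    for _ in range(steps):
        Qxx = Q + gamma * A.T @ Vxx @ A
        Quu = R + gamma * B.T @ Vxx @ B
        Qux = gamma * B.T @ Vxx @ A
        K = -np.linalg.solve(Quu, Qux)   # feedback gain
        Vxx = Qxx + Qux.T @ K            # = Qxx - Qux^T Quu^-1 Qux
    return K, Vxx
```

For a stabilizable plant with positive definite cost, the iteration converges and the resulting gain stabilizes the closed loop, which is the quadratic analogue of the trajectory updates converging to a locally optimal policy.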
After the backward sweep, forward integration can be used to update the trajectory itself:

u_k(new) = u_k + du_k + K_k (x_k(new) − x_k),   x_{k+1}(new) = f(x_k(new), u_k(new)).
Figure 1 shows our approach applied to several types of problems. On the left we see that a task that requires steady state control about a goal point (a regulator task) can be solved with a single trivial trajectory that starts and ends at the goal and provides a value function and a constant linear policy u = K(x − x_goal) in the vicinity of the goal.
Figure 2: The optimal hopper controller with a range of penalties on control usage (axes as in Figure 1: vertical velocity vs. height).
The middle figure of Figure 1 shows the trajectories used to compute the value function for
a swing up problem [3]. In this problem the goal requires regulation about the state where
the pendulum is inverted and in an unstable equilibrium. However, the nonlinearities of the
problem limit the region of applicability of a linear policy, and non-trivial trajectories have
to be created to cover a larger region. In this case the region where the value function is less
than a target value is filled with trajectories. The neighboring trajectories have consistent
value functions and thus the globally optimal value function and policy is found in the
explored region [3].
The right figure of Figure 1 shows the trajectories used to compute the value function for a
periodic problem, control of vertical hopping in a hopping robot. In this problem, there is
no goal state, but a desired hopping height is specified. This problem has been extensively
studied in the robotics literature [8] from the point of view of how to manually design a
nonlinear controller with a large stability region. We note that optimal control provides
a methodology to design nonlinear controllers with large stability regions and also good
performance in terms of explicitly specified criteria. We describe later how to also make
these controller designs more robust.
In this figure the vertical axis corresponds to the height of the hopper, and the horizontal
axis is vertical velocity. The robot moves around the origin in a counterclockwise direction.
In the top two quadrants the robot is in the air, and in the bottom two quadrants the robot
is on the ground. Thus, the horizontal axis is a discontinuity of the robot dynamics, and
trajectory segments end and often begin at the discontinuity. We see that while the robot is
in the air it cannot change how much energy it has (how high it goes or how fast it is going
when it hits the ground), as the trajectories end with the same pattern they began with.
When the robot is on the ground it thrusts with its leg to "focus" the trajectories so that the set of touchdown positions is mapped to a smaller set of takeoff positions. This funneling effect is characteristic of controllers for periodic tasks, and how fast the funnel becomes narrow is controlled by the size of the penalty on control usage (Figure 2).
2.1 How are trajectory start points chosen?
In our approach trajectories are refined towards optimality given their fixed starting points.
However, an initial trajectory must first be created. For regulator tasks, the trajectory is
trivial and simply starts and ends at the known goal point. For tasks with a point goal,
trajectories can be extended backwards away from the goal [3]. For periodic tasks, crude
trajectories must be created using some other approach before this approach can refine
them.
We have used several methods to provide initial trajectories. Manually designed controllers
sometimes work. In learning from demonstration a teacher provides initial trajectories [6].
In policy optimization (aka "policy search") a parameterized policy is optimized [9].
Once a set of initial task trajectories are available, the following four methods are used to
generate trajectories in new parts of state space. We use all of these methods simultaneously, and locally optimize each of the trajectories produced. The best trajectory of the set
is then stored and the other trajectories are discarded. 1) Use the global policy generated
by policy optimization, if available. 2) Use the local policy from the nearest point with the
same type of dynamics. 3) Use the local value function estimate (and derivatives) from the
nearest point with the same type of dynamics. and 4) Use the policy from the nearest trajectory, where the nearest trajectory is selected at the beginning of the forward sweep and
kept the same throughout the sweep. Note that methods 2 and 3 can change which stored
trajectories they take points from on each time step, while method 4 uses a policy from a
single neighboring trajectory.
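Method 2 above (take the local policy from the nearest stored point) can be sketched as follows, assuming each stored trajectory point carries its state, nominal action, and feedback gain; the tuple layout and the plain Euclidean state distance are illustrative assumptions.

```python
import numpy as np

def nearest_trajectory_action(x, trajectories):
    """Find the nearest stored trajectory point and apply its locally
    linear policy u = u_k + K_k (x - x_k).
    Each trajectory is a list of (x_k, u_k, K_k) tuples."""
    best, best_d = None, np.inf
    for traj in trajectories:
        for (xk, uk, Kk) in traj:
            d = np.sum((x - xk) ** 2)
            if d < best_d:
                best_d, best = d, (xk, uk, Kk)
    xk, uk, Kk = best
    return uk + Kk @ (x - xk)
```

Method 4 differs only in selecting one whole trajectory at the start of the forward sweep and then following its gains throughout, rather than re-selecting the nearest point at every step.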
3 Control of a walking robot
As another example we will describe the search for a policy for walking of a simple planar
biped robot that walks along a bar. The simulated robot has two legs and a torque motor
between the legs. Instead of revolute or telescoping knees, the robot can grab the bar with
its foot as its leg swings past it. This is a model of a robot that walks along the trusses of
a large structure such as a bridge, much as a monkey brachiates with its arms. This simple
model has also been used in studies of robot passive dynamic walking [10].
This arrangement means the robot has a five dimensional state space: left leg angle, right leg angle, left leg angular velocity, right leg angular velocity, and stance foot location. A simple policy is used to determine when to grab the bar (at the end of a step when the swing foot passes the bar going downwards). The variable to be controlled is the torque τ at the hip.
The criterion we used is quite complex. We are a long way from specifying an abstract
or vague criterion such as ?cover a fixed distance with minimum fuel or battery usage?
or ?maximize the amount of your genes in future gene pools? and successfully finding an
optimal or reasonable policy. At this stage we need to include several ?shaping? terms in the
criterion, that reward keeping the hips at the right altitude with minimal vertical velocity,
keeping the leg amplitude within reason, maintaining a symmetric gait, and maintaining
the desired hip forward velocity:
The one step cost is a weighted sum of quadratic penalty terms,

L = w1 (h − 1)² + w2 ḣ² + w3 a² + w4 p² + w5 (ẋ_hip − ẋ_d)² + w6 τ²   (6)

where the wᵢ are weighting factors, h and ḣ are the hip height and vertical velocity, ẋ_hip and ẋ_d are the actual and desired hip forward velocities, and τ is the hip torque. The leg length is 1 meter (hence the 1 in the height term). The amplitude penalty a provides a measure of how far the left or right leg has gone past its limits in the forward or backward direction. The symmetry penalty p is the product of the leg angles if the legs are both forward or both rearward, and zero otherwise. The integration and control time steps are 1 millisecond each. The dynamics of this walker are simulated using a commercial package, SDFAST.
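A hypothetical implementation of such a shaped one-step cost, with illustrative weights, limit thresholds, desired speed, and a reduced state (hip height and vertical velocity, leg angles, hip forward velocity); none of the specific constants here are the paper's.

```python
def step_cost(state, tau, weights):
    """Shaped one-step cost in the spirit of eqn. (6). All constants
    (limits at 0.5 rad, desired speed 0.4 m/s, leg length 1 m target
    height) are illustrative assumptions, not the paper's values."""
    h, h_dot, th_l, th_r, xdot_hip = state
    w = weights
    # amplitude penalty: how far either leg is past its angle limit
    leg_limit = max(abs(th_l) - 0.5, 0.0) + max(abs(th_r) - 0.5, 0.0)
    # symmetry penalty: product of leg angles when both forward/rearward
    asym = th_l * th_r if th_l * th_r > 0 else 0.0
    return (w['height'] * (h - 1.0) ** 2
            + w['hdot'] * h_dot ** 2
            + w['limit'] * leg_limit ** 2
            + w['sym'] * asym ** 2
            + w['speed'] * (xdot_hip - 0.4) ** 2
            + w['torque'] * tau ** 2)
```

Such shaping terms vanish on a nominal symmetric gait at the desired height and speed, so they penalize only deviations from the intended walking pattern.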
Initial trajectories were generated by optimizing the coefficients of a linear policy. When the left leg was in stance, the hip torque was a fixed linear combination of state features,

τ = k1 φ + k2 φ̇ + ... + k9,   (7)

where φ is the angle between the legs and the kᵢ are the optimized coefficients. When the right leg was in stance the same policy was used with the appropriate signs negated.
3.1 Results
The trajectory-based approach was able to find a cheaper and more robust policy than
the parametric policy-optimization approach. This is not surprising given the flexible and
expandable representational capacity of an adaptive non-parametric representation, but it
does provide some indication that our update algorithms can usefully harness the additional
representation power.
Cost: For example, after training the parametric policy, we measured the undiscounted cost
over 1 second (roughly one step of each leg) starting in a state along the lowest cost cyclic
trajectory. The cost for the optimized parametric policy was 4316. The corresponding cost
for the trajectory-based approach starting from the same state was 3502.
Robustness: We did a simple assessment of robustness by adding offsets to the same starting state until the optimized linear policy failed. The offsets were in terms of the stance leg angle and the angle between the legs, and the corresponding angular velocities, and we measured the maximum offsets the linearized optimized parametric policy could handle. We did a similar test for the trajectory approach. In each direction the maximum offset the trajectory-based approach was able to handle was equal to or greater than that of the parametric policy-based approach, extending the range most in two of the directions. This
is not surprising, since the trajectory-based controller uses the parametric policy as one
of the ways to initially generate candidate trajectories for optimization. In cases where
the trajectory-based approach is not able to generate an appropriate trajectory, the system
will generate a series of trajectories with start points moving from regions it knows how
to handle towards the desired start point. Thus, we have not yet discovered situations that
are physically possible to recover that the trajectory-based approach cannot handle if it is
allowed as much computation time as it needs.
Interference: To demonstrate interference in the parametric policy approach, we optimized
its performance from a distribution of starting states. These states were the original state,
and states with positive offsets. The new cost for the original starting position was 14,747,
compared to 4316 before retraining. The trajectory approach has the same cost as before,
3502.
4 Robustness to modeling error and imperfect sensing
So far we have addressed robustness in terms of the range of initial states that can be
handled. Another form of robustness is robustness to modeling error (changes in masses,
friction, and other model parameters) and imperfect sensing, so that the controller does not
know exactly what state the robot is in. Since simulations are used to optimize policies, it
is relatively easy to include simulations with different model parameters and sensor noise
in the training and optimize for a robust parametric controller in policy shaping. How does
the trajectory-based approach achieve comparable robustness?
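As a minimal illustration of the policy-shaping idea just described, the sketch below optimizes a single parametric (here, linear) controller against an ensemble of simulations with perturbed model parameters and sensor noise. The plant, noise levels, cost, and gain grid are all invented for illustration and are not the robot model used in this paper.

```python
import random

def simulate_cost(k, mass, noise, steps=50):
    """Roll out x' = x + u/mass with u = -k * (measured x); return summed quadratic cost."""
    random.seed(0)  # same noise sequence for every candidate gain
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -k * (x + random.gauss(0.0, noise))  # imperfect sensing
        x = x + u / mass                         # modeling error enters through the mass
        cost += x * x + 0.01 * u * u
    return cost

def robust_cost(k, models):
    # Policy shaping: average the cost over an ensemble of perturbed models.
    return sum(simulate_cost(k, mass, noise) for mass, noise in models) / len(models)

models = [(1.0, 0.0), (1.3, 0.05), (0.8, 0.05)]  # (mass, sensor-noise) samples
# Crude line search over the feedback gain k.
best_k = min((0.1 * i for i in range(1, 20)), key=lambda k: robust_cost(k, models))
```

The same idea scales to the walking controller: replace the scalar plant with the full simulation and the gain grid with the parametric policy optimizer.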
We have developed two approaches: a probabilistic approach which maintains distributional
information about unknown states and parameters, and a game-based or minimax approach.
The probabilistic approach supports actions by the controller to actively minimize uncertainty as well as achieve goals, which is known as dual control. The game-based approach
does not reduce uncertainty with experience, and is somewhat paranoid, assuming the
world is populated by evil spirits which choose the worst possible disturbance at each time
step for the controller. This results in robust, but often overly conservative policies.
In the probabilistic case, the state is augmented with any unknown parameters such as
masses of parts or friction coefficients, and the covariance of all the original elements of
the state as well as the added parameters. An extended Kalman filter is constructed as the
new dynamics equation, predicting the new estimates of the means and covariances given
the control signals to the system. The one step cost function is restated in terms of the
augmented state. The value function is now a function of the augmented state, including
covariances of the original state vector elements. These covariances interact with the curvature of the value function, causing additional cost in areas of the value function that have
high curvature or second derivatives. Thus the system is rewarded when it moves to areas
of the value function that are planar, and uncertainty has no effect on the expected cost. The
system is also rewarded when it learns, which reduces the covariances of the estimates, so
the system may choose actions that move away from a goal but reduce uncertainty. This
probabilistic approach does dramatically increase the dimensionality of the state vector and
thus the value function, but in the context of only a quadratic cost on dimensionality this is
not as fatal as it would seem.
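The interaction between state covariance and value-function curvature can be made concrete with a second-order expansion: for x ~ N(m, S), E[V(x)] is approximately V(m) + 0.5 tr(V_xx S), so planar regions of the value function add no expected cost while highly curved regions do. A small sketch (the quadratic value functions and covariance are invented for illustration):

```python
def trace_product(A, B):
    """tr(A @ B) for small square matrices given as nested lists."""
    n = len(A)
    return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n))

def expected_cost(V_at_mean, hessian_at_mean, cov):
    # Second-order approximation: E[V(x)] ~ V(m) + 0.5 * tr(V_xx(m) S)
    return V_at_mean + 0.5 * trace_product(hessian_at_mean, cov)

S = [[0.1, 0.0], [0.0, 0.1]]                              # state covariance
flat = expected_cost(3.0, [[0.0, 0.0], [0.0, 0.0]], S)    # planar region: no extra cost
curved = expected_cost(0.0, [[2.0, 0.0], [0.0, 2.0]], S)  # curved region: + tr(S) penalty
```

The controller is thus rewarded for steering the uncertain state toward flat regions of the value function, exactly as described above.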
A less expensive approach is to use a game-based uncertainty model with minimax optimization. In this case, we assume an opponent can pick a disturbance to maximally
increase our cost. This is closely related to robust nonlinear controller design techniques
based on the idea of H-infinity control [11, 12] and risk sensitive control [13, 14]. We augment
the dynamics equation with a disturbance term: x_{k+1} = f(x_k, u_k, w_k), where w is a
vector of disturbance inputs. To limit the size of the disturbances, we include the disturbance magnitude in a modified one step cost function with a negative sign:
L(x, u, w) = q(x, u) − γ ||w||². The opponent who controls the disturbance wants to
increase our cost, so this new term gives an incentive to the opponent to choose the worst
direction for the disturbance, and a disturbance magnitude that gives the highest ratio of
increased cost to disturbance size. Initially, γ is set to globally approximate the
uncertainty of the model. Ultimately, γ should vary with the local confidence in the
model. Highly practiced movements or portions of movements should have high γ, and
new movements should have lower γ. The optimal action is now given by Isaacs' equation: u = argmin_u max_w [ L(x, u, w) + V(f(x, u, w)) ]. How we solve Isaacs'
equation and an application of this method are described in the companion paper [15].
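A brute-force rendering of Isaacs' equation for a one-dimensional toy plant: the controller minimizes over a grid of actions while the opponent maximizes over a grid of disturbances, with the disturbance magnitude entering the one-step cost with a negative sign. The dynamics, cost, value-function estimate, and weight gamma are invented for illustration; real implementations solve the min-max analytically, as in the companion paper.

```python
def isaacs_action(x, V, gamma, u_grid, w_grid):
    """argmin over u of max over w of [ L(x,u,w) + V(f(x,u,w)) ],
    with one-step cost L = x^2 + u^2 - gamma * w^2 (disturbance enters negatively)."""
    def f(state, u, w):
        return 0.9 * state + u + w  # toy linear dynamics with additive disturbance

    def worst_case(u):
        return max(x * x + u * u - gamma * w * w + V(f(x, u, w)) for w in w_grid)

    return min(u_grid, key=worst_case)

V = lambda s: 2.0 * s * s                      # a quadratic value-function estimate
u_grid = [i / 10.0 for i in range(-20, 21)]
w_grid = [i / 10.0 for i in range(-5, 6)]
u_star = isaacs_action(1.0, V, gamma=10.0, u_grid=u_grid, w_grid=w_grid)
```

Note that gamma must be large enough that the inner maximization stays bounded; here the disturbance penalty dominates the value-function curvature.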
5 How to cover a volume of state space
In tasks with a goal or point attractor, [3] showed that certain key trajectories can be grown
backwards from the goal in order to approximate the value function. In the case of a sparse
use of trajectories to cover a space, the cost of the approach is dominated by the O(n^2) costs
of updating second derivative matrices, and thus the cost of the trajectory-based approach
increases quadratically as the dimensionality increases.
However, for periodic tasks the approach of growing trajectories backwards from the goal
cannot be used, as there is no goal point or set. In this case the trajectories that form the
optimal cycle can be used as key trajectories, with each point along them supplying a local
linear policy and local quadratic value function. These key trajectories can be computed
using any optimization method, and then the corresponding policy and value function estimates along the trajectory computed using the update rules given here.
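Using key trajectories as a sparse representation amounts to a lookup: find the nearest stored trajectory point and apply its local linear policy. A minimal one-dimensional sketch (the stored points and gains are invented for illustration):

```python
def library_policy(x, library):
    """library: list of (x_i, u_i, K_i). Apply the local linear policy of the nearest point."""
    x_i, u_i, K_i = min(library, key=lambda entry: abs(entry[0] - x))
    return u_i + K_i * (x - x_i)

# Three points along a one-dimensional key trajectory, each with a local gain.
library = [(0.0, 0.5, -1.0), (1.0, 0.2, -0.8), (2.0, 0.0, -0.5)]
u = library_policy(1.1, library)  # nearest stored point is x_i = 1.0
```

In higher dimensions the scalar gains become matrices from the local quadratic value-function estimates, and the nearest-point query is done over the full state.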
It is important to point out that optimal trajectories need only be placed densely enough to
separate regions which have different local optima. The trajectories used in the representation usually follow local valleys of the value function. Also, we have found that natural
behavior often lies entirely on a low-dimensional manifold embedded in a high dimensional
space. Using these trajectories and creating new trajectories as task demands require, we
expect to be able to handle a range of natural tasks.
6 Contributions
In order to accommodate periodic tasks, this paper has discussed how to incorporate discount factors into the trajectory-based approach, how to handle discontinuities in the dynamics (and equivalently, criteria and constraints), and how to find key trajectories for a
sparse trajectory-based approach. The trajectory-based approach requires less design skill
from humans since it doesn't need a "good" policy parameterization, and produces cheaper
and more robust policies which do not suffer from interference.
References
[1] Richard S. Sutton. Integrated architectures for learning, planning and reacting based on approximating dynamic programming. In Proceedings 7th International Conference on Machine
Learning, 1990.
[2] C. Atkeson and J. Santamaria. A comparison of direct and model-based reinforcement learning.
In International Conference on Robotics and Automation, 1997.
[3] Christopher G. Atkeson. Using local trajectory optimizers to speed up global optimization
in dynamic programming. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector, editors,
Advances in Neural Information Processing Systems, volume 6, pages 663?670. Morgan Kaufmann Publishers, Inc., 1994.
[4] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control. Academic
Press, New York, NY, 1970.
[5] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New York,
NY, 1970.
[6] Christopher G. Atkeson and Stefan Schaal. Robot learning from demonstration. In Proc. 14th
International Conference on Machine Learning, pages 12?20. Morgan Kaufmann, 1997.
[7] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence
Review, 11:11?73, 1997.
[8] W. Schwind and D. Koditschek. Control of forward velocity for a simplified planar hopping
robot. In International Conference on Robotics and Automation, volume 1, pages 691?6, 1995.
[9] J. Andrew Bagnell and Jeff Schneider. Autonomous helicopter control using reinforcement
learning policy search methods. In International Conference on Robotics and Automation,
2001.
[10] M. Garcia, A. Chatterjee, and A. Ruina. Efficiency, speed, and scaling of two-dimensional
passive-dynamic walking. Dynamics and Stability of Systems, 15(2):75?99, 2000.
[11] K. Zhou, J. C. Doyle, and K. Glover. Robust Optimal Control. PRENTICE HALL, New Jersey,
1996.
[12] J. Morimoto and K. Doya. Robust Reinforcement Learning. In Todd K. Leen, Thomas G.
Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13,
pages 1061?1067. MIT Press, Cambridge, MA, 2001.
[13] R. Neuneier and O. Mihatsch. Risk Sensitive Reinforcement Learning. In M. S. Kearns, S. A.
Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages
1031?1037. MIT Press, Cambridge, MA, USA, 1998.
[14] S. P. Coraluppi and S. I. Marcus. Risk-Sensitive and Minmax Control of Discrete-Time FiniteState Markov Decision Processes. Automatica, 35:301?309, 1999.
[15] J. Morimoto and C. Atkeson. Minimax differential dynamic programming: An application to
robust biped walking. In Advances in Neural Information Processing Systems 15. MIT Press,
Cambridge, MA, 2002.
Maximum Likelihood and the Information
Bottleneck
Noam Slonim Yair Weiss
School of Computer Science & Engineering,
Hebrew University, Jerusalem 91904, Israel
{noamm,yweiss}@cs.huji.ac.il
Abstract
The information bottleneck (IB) method is an information-theoretic formulation
for clustering problems. Given a joint distribution p(x, y), this method constructs
a new variable T that defines partitions over the values of X that are informative
about Y.
approach to clustering problems. In this paper, we ask: how are the two methods
related ? We define a simple mapping between the IB problem and the ML problem for the multinomial mixture model. We show that under this mapping the
problems are strongly related. In fact, for uniform input distribution over or
for large sample size, the problems are mathematically equivalent. Specifically,
in these cases, every fixed point of the IB-functional defines a fixed point of the
(log) likelihood and vice versa. Moreover, the values of the functionals at the
fixed points are equal under simple transformations. As a result, in these cases,
every algorithm that solves one of the problems, induces a solution for the other.
1 Introduction
Unsupervised clustering is a central paradigm in data analysis. Given a set of objects X,
one would like to find a partition which optimizes some score function. Tishby
et al. [1] proposed a principled information-theoretic approach to this problem. In this
approach, given the joint distribution p(x, y), one looks for a compact representation of X,
which preserves as much information as possible about Y (see [2] for a detailed discussion).
The mutual information, I(X;Y), between the random variables X and Y is given by [3]

I(X;Y) = Σ_{x,y} p(x) p(y|x) log [ p(y|x) / p(y) ].

In [1] it is argued that both the compactness of the representation and the preserved relevant
information are naturally measured by mutual information, hence the above principle can be
formulated as a trade-off between these quantities. Specifically, Tishby et al. [1] suggested
to introduce a compressed representation T of X, by defining q(t|x). The compactness of the
representation is then determined by I(T;X), while the quality of the clusters, T, is measured
by the fraction of information they capture about Y, I(T;Y) / I(X;Y). The IB problem can be
stated as finding a (stochastic) mapping q(t|x) such that the IB-functional
L_IB = I(T;X) − β I(T;Y) is minimized, where β is a positive Lagrange multiplier that
determines the trade-off between compression and precision. It was shown in [1] that this
problem has an exact optimal (formal) solution without any assumption about the origin of the
joint distribution p(x, y).
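These information quantities are straightforward to compute from a table p(x, y) and a candidate mapping q(t|x). The sketch below evaluates the IB-functional I(T;X) − β I(T;Y); the toy joint distribution and the deterministic two-cluster mapping are invented for illustration.

```python
from math import log

def ib_functional(p_xy, q_tx, beta):
    """Return I(T;X) - beta * I(T;Y) for a joint table p(x,y) and a mapping q(t|x)."""
    nx, ny, nt = len(p_xy), len(p_xy[0]), len(q_tx[0])
    p_x = [sum(row) for row in p_xy]
    p_y = [sum(p_xy[x][y] for x in range(nx)) for y in range(ny)]
    # IB-step: q(t) and the joint q(t, y) induced by the Markov relation T - X - Y.
    q_t = [sum(p_x[x] * q_tx[x][t] for x in range(nx)) for t in range(nt)]
    q_ty = [[sum(p_xy[x][y] * q_tx[x][t] for x in range(nx)) for y in range(ny)]
            for t in range(nt)]
    i_tx = sum(p_x[x] * q_tx[x][t] * log(q_tx[x][t] / q_t[t])
               for x in range(nx) for t in range(nt) if q_tx[x][t] > 0)
    i_ty = sum(q_ty[t][y] * log(q_ty[t][y] / (q_t[t] * p_y[y]))
               for t in range(nt) for y in range(ny) if q_ty[t][y] > 0)
    return i_tx - beta * i_ty

p_xy = [[0.3, 0.1], [0.1, 0.5]]   # toy joint distribution, 2 x-values and 2 y-values
q_tx = [[1.0, 0.0], [0.0, 1.0]]   # deterministic mapping: each x gets its own cluster
value = ib_functional(p_xy, q_tx, beta=1.0)
```

With this identity mapping, I(T;X) equals H(X) and I(T;Y) equals I(X;Y), so the functional measures exactly the compression/precision trade-off of the trivial clustering.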
The standard statistical approach to clustering is mixture modeling. We assume the
measurements for each x come from one of |T| possible statistical sources, each with its own
parameters (e.g., the means and covariances in Gaussian mixtures). Clustering corresponds
to first finding the maximum likelihood estimates of these parameters and then using them
to calculate the posterior probability that the measurements at x were generated by each
source. These posterior probabilities define a "soft" clustering of X values.

While both approaches try to solve the same problem, the viewpoints are quite different. In
the information-theoretic approach no assumption is made regarding how the data was
generated, but we assume that the joint distribution p(x, y) is known exactly. In the
maximum-likelihood approach we assume a specific generative model for the data and assume we
have samples n(x, y), not the true probability.

In spite of these conceptual differences we show that under a proper choice of the generative
model, these two problems are strongly related. Specifically, we use the multinomial
mixture model (a.k.a. the one-sided [4] or the asymmetric clustering model [5]), and provide
a simple "mapping" between the concepts of one problem and those of the other. Using
this mapping we show that in general, searching for a solution of one problem induces a
search in the solution space of the other. Furthermore, for uniform input distribution p(x)
or for large sample sizes, we show that the problems are mathematically equivalent. Hence,
in these cases, any algorithm which solves one problem induces a solution for the other.
2 Short review of the IB method
In the IB framework, one is given as input a joint distribution p(x, y). Given this
distribution, a compressed representation T of X is introduced through the stochastic mapping
q(t|x). The goal is to find q(t|x) such that the IB-functional, L_IB = I(T;X) − β I(T;Y), is
minimized for a given value of β.

The joint distribution over X, T and Y is defined through the IB Markovian independence
relation, T ↔ X ↔ Y. Specifically, every choice of q(t|x) defines a specific joint
probability q(x, y, t) = p(x, y) q(t|x). Therefore, the distributions q(t) and q(y|t) that are
involved in calculating the IB-functional are given by

q(t) = Σ_x p(x) q(t|x),   q(y|t) = (1 / q(t)) Σ_x p(x, y) q(t|x).   (1)

In principle every choice of q(t|x) is possible, but as shown in [1], if q(t) and q(y|t) are
given, the choice that minimizes L_IB is defined through

q(t|x) = (q(t) / Z(x, β)) exp( −β D_KL[ p(y|x) || q(y|t) ] ),   (2)

where Z(x, β) is the normalization (partition) function and
D_KL[p || q] = Σ_y p(y) log (p(y)/q(y)) is the
Kullback-Leibler divergence. Iterating over this equation and the IB-step defined in Eq. (1)
defines an iterative algorithm that is guaranteed to converge to a (local) fixed point of L_IB [1].
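A compact sketch of the resulting iterative algorithm (iIB), alternating the step of Eq. (1) with the update of Eq. (2); the toy joint distribution, number of clusters, and β are invented for illustration.

```python
from math import exp, log
import random

def iib(p_xy, n_t, beta, iters=200, seed=0):
    """Iterative IB: returns q(t|x) at a (local) fixed point of I(T;X) - beta * I(T;Y)."""
    rng = random.Random(seed)
    nx, ny = len(p_xy), len(p_xy[0])
    p_x = [sum(row) for row in p_xy]
    p_y_given_x = [[p_xy[x][y] / p_x[x] for y in range(ny)] for x in range(nx)]
    # Random normalized initialization of q(t|x).
    q_tx = []
    for x in range(nx):
        w = [rng.random() + 1e-3 for _ in range(n_t)]
        s = sum(w)
        q_tx.append([wi / s for wi in w])
    for _ in range(iters):
        # IB-step (Eq. 1): q(t) and q(y|t).
        q_t = [sum(p_x[x] * q_tx[x][t] for x in range(nx)) for t in range(n_t)]
        q_y_given_t = [[sum(p_xy[x][y] * q_tx[x][t] for x in range(nx)) / q_t[t]
                        for y in range(ny)] for t in range(n_t)]
        # Update (Eq. 2): q(t|x) proportional to q(t) * exp(-beta * KL[p(y|x) || q(y|t)]).
        for x in range(nx):
            logits = []
            for t in range(n_t):
                kl = sum(p * log(p / q_y_given_t[t][y])
                         for y, p in enumerate(p_y_given_x[x]) if p > 0)
                logits.append(log(q_t[t]) - beta * kl)
            m = max(logits)
            w = [exp(l - m) for l in logits]
            s = sum(w)
            q_tx[x] = [wi / s for wi in w]
    return q_tx

p_xy = [[0.20, 0.05], [0.15, 0.10], [0.05, 0.20], [0.05, 0.20]]
q = iib(p_xy, n_t=2, beta=20.0)
```

Since the update for each x depends only on p(y|x), objects with identical conditionals (here the last two rows) always end up with identical cluster assignments.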
3 Short review of ML for mixture models
In a multinomial mixture model, we assume that y takes on discrete values and is sampled
from a multinomial distribution θ(y | t(x)), where t(x) is x's label. In the one-sided
clustering model [4] [5] we further assume that there can be multiple observations
corresponding to a single x, but they are all sampled from the same multinomial distribution.
This model can be described through the following generative process:

- For each x choose a unique label t(x) by sampling from π(t).
- For i = 1, ..., N:
  - choose x_i by sampling from p(x).
  - choose y_i by sampling from θ(y | t(x_i)) and increase n(x_i, y_i) by one.

Let t = {t(x_1), ..., t(x_|X|)} denote the random vector that defines the (typically hidden)
labels, or topics, for all x. The complete likelihood is given by:

L(n, t : π, θ) = Π_x π(t(x)) Π_i θ(y_i | t(x_i))   (3)
              = Π_x π(t(x)) Π_y θ(y | t(x))^{n(x,y)},   (4)

where n(x, y) is a count matrix. The (true) likelihood is defined through summing over
all the possible choices of t,

L(n : π, θ) = Σ_t L(n, t : π, θ).   (5)

Given n(x, y), the goal of ML estimation is to find an assignment for the parameters
π(t), θ(y|t) and p(x) such that the likelihood is (at least locally) maximized. Since it is
easy to show that the ML estimate for p(x) is just the empirical count n(x)/N (where
n(x) = Σ_y n(x, y)), we further focus only on estimating π and θ.
A standard algorithm for this purpose is the EM algorithm [6]. Informally, in the E-step
we replace the missing value of t(x) by its distribution p(t(x) | n), which we denote by
q_EM(t|x). In the M-step we use that distribution to reestimate π and θ. Using a standard
derivation it is easy to verify that in our context the E-step is defined through

q_EM(t|x) = (1 / Z(x)) π(t) Π_y θ(y|t)^{n(x,y)}   (6)
          = (1 / Z(x)) π(t) exp( n(x) Σ_y p̂(y|x) log θ(y|t) )   (7)
          = (1 / Z'(x)) π(t) exp( −n(x) D_KL[ p̂(y|x) || θ(y|t) ] ),   (8)

where Z(x) and Z'(x) are normalization factors and p̂(y|x) = n(x, y) / n(x). The M-step is
simply given by

π(t) = (1 / |X|) Σ_x q_EM(t|x),   θ(y|t) = (1 / Z_t) Σ_x n(x, y) q_EM(t|x).   (9)

Iterating over these EM steps is guaranteed to converge to a local fixed point of the likelihood. Moreover, every fixed point of the likelihood defines a fixed point of this algorithm.
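The E- and M-steps translate directly into code. The sketch below runs this EM on an invented toy count matrix n(x, y) with two clusters and a deterministic, slightly asymmetric initialization (all numbers are for illustration only). Note the counts n(x, y) appearing as exponents in the E-step.

```python
from math import exp, log

def em_one_sided(n_xy, iters=50):
    """EM for the one-sided multinomial mixture with two clusters: returns (pi, theta, q)."""
    nx, ny = len(n_xy), len(n_xy[0])
    pi = [0.5, 0.5]
    theta = [[0.6, 0.4], [0.4, 0.6]]  # deterministic asymmetric initialization
    q = [[0.0, 0.0] for _ in range(nx)]
    for _ in range(iters):
        # E-step: q(t|x) proportional to pi(t) * prod_y theta(y|t)^n(x,y)
        for x in range(nx):
            logits = [log(pi[t]) + sum(n_xy[x][y] * log(theta[t][y]) for y in range(ny))
                      for t in range(2)]
            m = max(logits)
            w = [exp(l - m) for l in logits]
            s = sum(w)
            q[x] = [wi / s for wi in w]
        # M-step: pi(t) = (1/|X|) sum_x q(t|x); theta(y|t) proportional to sum_x n(x,y) q(t|x)
        pi = [sum(q[x][t] for x in range(nx)) / nx for t in range(2)]
        for t in range(2):
            col = [sum(n_xy[x][y] * q[x][t] for x in range(nx)) for y in range(ny)]
            s = sum(col) + ny * 1e-12
            theta[t] = [(c + 1e-12) / s for c in col]
    return pi, theta, q

n_xy = [[8, 2], [7, 3], [1, 9], [2, 8]]  # four "documents" over a two-word vocabulary
pi, theta, q = em_one_sided(n_xy)
```

On these well-separated counts the posteriors become nearly hard, assigning the first two rows to one topic and the last two to the other.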
An alternative derivation [7] is to define the free energy functional:

F(n, q : π, θ) = −(1/N) [ Σ_{x,t} q(t|x) log π(t) + Σ_{x,y,t} n(x, y) q(t|x) log θ(y|t) ]
               + (1/N) Σ_{x,t} q(t|x) log q(t|x).   (10)-(11)

The E-step then involves minimizing F with respect to q, while the M-step minimizes it
with respect to π and θ. Since this functional is bounded (under mild conditions), the EM
algorithm will converge to a local fixed point of F which corresponds to a fixed point of
the likelihood. At these fixed points, F becomes identical to −(1/N) log L(n : π, θ).

4 The ML ↔ IB mapping
IB mapping
As already mentioned, the IB problem and the ML problem stem from different motivations and involve different ?settings?. Hence, it is not entirely clear what is the purpose of
?mapping? between these problems. Here, we define this mapping to achieve two goals.
The first is theoretically motivated: using the mapping we show some mathematical equivalence between both problems. The second is practically motivated, where we show that
algorithms designed for one problem are (in some cases) suitable for solving the other.
A natural mapping would be to identify each distribution with its corresponding one. However, this direct mapping is problematic. Assume that we are mapping from ML to IB. If
we directly map G + HI (HI )87 HI to GHF7 % GHI G87 HI , respectively, obviously there is
no guarantee that the IB Markovian independence relation will hold once we complete the
mapping. Specifically, using this relation to extract GHI through Eq.(1) will in general result with a different prior over then by simply defining GHI ' (HI . However, we notice
that once we defined GHF7 % and , the other distributions could be extracted by performing the IB-step defined in Eq.(1). Moreover, as already shown in [1], performing this
step can only improve (decrease) the corresponding IB-functional. A similar phenomenon
is present once we map from IB to ML. Although in principle there are no ?consistency?
problems by mapping directly, we know that once we defined G + HI and (IR , we can extract and ) by a simple -step. This step, by definition, will only improve the likelihood,
which is our goal in this setting. The only remaining issue is to define a corresponding component in the ML setting for the trade-off parameter P . As we will show in the next section,
the natural choice for this purpose is the sample size, 'M)L+.0 2 (R .
A
/
&
Therefore, to summarize, we define the /
! ( mapping by
P
G + HI
GHF7 %
(IR
(12)
where is a positive (scaling) constant and the mapping is completed by performing an
IB-step or an / -step according to the mapping direction. Notice that under this mapping,
every search in the solution space of the IB problem induces a search in the solution space
of the ML problem, and vice versa (see Figure 2).
Observation 4.1 When X is uniformly distributed (i.e., p(x) or n(x) are constant), the
ML ↔ IB mapping is equivalent to a direct mapping of each distribution to its corresponding one.

This observation is a direct result of the fact that if X is uniformly distributed, then the
IB-step defined in Eq. (1) and the M-step defined in Eq. (9) are mathematically equivalent.

Observation 4.2 When X is uniformly distributed, the EM algorithm is equivalent to the
IB iterative optimization algorithm under the ML ↔ IB mapping with β = N / |X|.

Again, this observation is a direct result of the equivalence of the IB-step and the M-step
for a uniform prior over X. Additionally, we notice that in this case n(x) = N / |X| = β,
hence Eq. (6) and Eq. (2) are also equivalent. It is important to emphasize, though, that this
equivalence holds only for the specific choice β = n(x). While clearly the IB iterative
algorithm (and problem) are meaningful for any value of β, there is no such freedom (for
good or worse) in the ML setting, and the exponential factor in EM must be n(x).
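The role of n(x) as the exponential factor can be checked numerically: for a single x, one EM E-step (Eq. 6) and one IB update (Eq. 2) with q(t) = π(t) and β = n(x) yield the same q(t|x), because the two exponents differ only by a t-independent term that cancels in the normalization. A sketch with invented numbers:

```python
from math import exp, log

def normalize(w):
    s = sum(w)
    return [wi / s for wi in w]

def em_e_step(n_row, pi, theta):
    # Eq. (6): q(t|x) proportional to pi(t) * prod_y theta(y|t)^n(x,y)
    logits = [log(pi[t]) + sum(n * log(theta[t][y]) for y, n in enumerate(n_row))
              for t in range(len(pi))]
    m = max(logits)
    return normalize([exp(l - m) for l in logits])

def ib_update(n_row, pi, theta):
    # Eq. (2) with q(t) = pi(t) and beta = n(x):
    # q(t|x) proportional to q(t) * exp(-beta * KL[p_hat(y|x) || theta(y|t)])
    n_x = sum(n_row)
    p_hat = [n / n_x for n in n_row]
    logits = []
    for t in range(len(pi)):
        kl = sum(p * log(p / theta[t][y]) for y, p in enumerate(p_hat) if p > 0)
        logits.append(log(pi[t]) - n_x * kl)
    m = max(logits)
    return normalize([exp(l - m) for l in logits])

pi = [0.3, 0.7]
theta = [[0.8, 0.2], [0.25, 0.75]]
n_row = [6, 4]  # counts n(x, y) for one x, so beta = n(x) = 10
q_em = em_e_step(n_row, pi, theta)
q_ib = ib_update(n_row, pi, theta)
```

The two posteriors agree to numerical precision, which is exactly the content of the equivalence between Eq. (6) and Eq. (2).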
5 Comparing ML and IB
Claim 5.1 When X is uniformly distributed and β = N / |X|, all the fixed points of the
likelihood L are mapped to all the fixed points of the IB-functional L_IB with β = n(x).
Moreover, at the fixed points, −(1/N) log L = (1/β) L_IB + c, with c constant.

Corollary 5.2 When X is uniformly distributed, every algorithm which finds a fixed point
of L induces a fixed point of L_IB with β = n(x), and vice versa. When the algorithm finds
several fixed points, the solution that maximizes L is mapped to the one that minimizes L_IB.

Proof: We prove the direction from ML to IB; the opposite direction is similar. We assume
that we are given observations n(x, y) where n(x) is constant, and π, θ that define a fixed
point of the likelihood L. As a result, this is also a fixed point of the EM algorithm (where
q_EM(t|x) is defined through an E-step). Using observation 4.2 it follows that this fixed point
is mapped to a fixed point of L_IB with β = n(x), as required.

Since at the fixed point −(1/N) log L = F, it is enough to show the relationship between F and
L_IB. Rewriting F from Eq. (10) we get

F = (1/N) Σ_{x,t} q_EM(t|x) log [ q_EM(t|x) / π(t) ]
  − (1/N) Σ_{x,y,t} n(x, y) q_EM(t|x) log θ(y|t).   (13)

Using the ML ↔ IB mapping and observation 4.1 we get

F = (1/N) Σ_{x,t} q(t|x) log [ q(t|x) / q(t) ]
  − (1/N) Σ_{x,y,t} n(x, y) q(t|x) log q(y|t).   (14)

Using p(x) = 1 / |X| and n(x, y) = N p(x, y), together with the IB Markovian independence
relation, we find that

β F = I(T;X) − β Σ_{y,t} q(y, t) log q(y|t) = I(T;X) + β H(Y|T),   (15)

where β = N / |X|. Reducing a (constant) β H(Y) from both sides gives:

β (F − H(Y)) = I(T;X) − β I(T;Y) = L_IB,   (16)

as required. We emphasize again that this equivalence is for a specific value of β = n(x).

Corollary 5.3 When X is uniformly distributed and β = N / |X|, every algorithm decreases
F iff it decreases L_IB with β = n(x).

This corollary is a direct result of the above proof, which showed the equivalence of the
free energy of the model and the IB-functional (up to linear transformations).

The previous claims dealt with the special case of a uniform prior over X. The following
claims provide similar results for the general case, when N (or β) are large enough.

Claim 5.4 For N → ∞ (or β → ∞), all the fixed points of L are mapped to all the fixed
points of L_IB, and vice versa. Moreover, at the fixed points, −(1/N) log L → (1/β) L_IB + c.

Corollary 5.5 When N → ∞ every algorithm which finds a fixed point of L induces a
fixed point of L_IB with β → ∞, and vice versa. When the algorithm finds several different
fixed points, the solution that maximizes L is mapped to the solution that minimizes L_IB.

A similar result was recently obtained independently in [8] for the special case of "hard" clustering. It is also important to keep in mind that in many clustering applications, a uniform prior over
X is "forced" during the pre-processing to avoid non-desirable bias. In particular this was done in several
previous applications of the IB method (see [2] for details).
Figure 1: Progress of L_IB and F for different β and N values, while running iIB and EM.
Proof: Again, we prove only the direction from ML to IB as the opposite direction is
similar. We are given n(x, y) where N = Σ_{x,y} n(x, y) → ∞, and π, θ that define a fixed
point of L. Using the E-step in Eq. (6) we extract q_EM(t|x), ending up with a fixed point of the
EM algorithm. We notice that N → ∞ implies n(x) → ∞. Therefore, the
mapping q_EM(t|x) becomes deterministic:

q_EM(t|x) = 1 if t = argmin_{t'} D_KL[ p̂(y|x) || θ(y|t') ], and 0 otherwise.   (17)

Performing the ML ↔ IB mapping (including the IB-step), it is easy to verify that we get
q(y|t) = θ(y|t) (but q(t) ≠ π(t) if the prior over X is not uniform). After completing the
mapping we try to update q(t|x) through Eq. (2). Since now β → ∞, it follows that q(t|x)
will remain deterministic. Specifically,

q(t|x) = 1 if t = argmin_{t'} D_KL[ p̂(y|x) || q(y|t') ], and 0 otherwise,   (18)

which is equal to its previous value. Therefore, we are at a fixed point of the IB iterative
algorithm, and thereby at a fixed point of the IB-functional L_IB, as required.

To show that −(1/N) log L → (1/β) L_IB + c, we notice again that at the fixed point

−(1/N) log L = F.   (19)

Using the ML ↔ IB mapping and similar algebra as above, we find that

F → (1/β) L_IB + c.   (20)

Corollary 5.6 When N → ∞ every algorithm decreases F iff it decreases L_IB with β → ∞.

How large must N (or β) be? We address this question through numeric simulations. Yet,
roughly speaking, we notice that the value of N for which the above claims (approximately)
hold is related to the "amount of uniformity" in n(x). Specifically, a crucial step in the
above proof assumed that each n(x) is large enough such that q_EM(t|x) becomes deterministic.
Clearly, when n(x) is less uniform, achieving this situation requires larger N values.
6 Simulations
We performed several different simulations using different IB and ML algorithms. Due
to the lack of space, only one example is reported below. In this example we used the
ML ? I B
IB ?r e a l ? w o r l d
T ?X ?Y
m a p p in g
M L ?i d e a l ? w o r l d
X ?T ?Y
+
+
+
Iterative IB
+
+
E M
+
IB ~ min DKL(q x ,y ,t ) | | Q (x ,y ,t ) )
M L ~ min DKL(p ^(x ,y ) | | L (n(x ,y ) : ?,?) )
Figure 2: In general, ML (for mixture models) and IB operate in different solution spaces.
Nonetheless, a sequence of probabilities that is obtained through some optimization routine
(e.g., EM) in the "ML space" can be mapped to a sequence of probabilities in the "IB space", and vice versa. The main result of this paper is that under some conditions these
two sequences are completely equivalent.
subset of the 20-Newsgroups corpus [9], consisting of documents randomly chosen from different discussion groups. Denoting the documents by X and the words by Y, after pre-processing [10] we obtain the empirical co-occurrence counts n(x, y).
Since our main goal was to check the differences between IB and ML for different values of n (or β), we further produced another dataset. In this data we randomly chose only a small fraction of the word occurrences for every document x, ending up with a much smaller sample size. For both datasets we clustered the documents into clusters, using both EM and the iterative IB (iIB) algorithm. For each algorithm we used the corresponding mapping to calculate both functionals during the process (e.g., for iIB, after each iteration we mapped from the IB solution to the ML solution, including the normalization step, and calculated the likelihood). We repeated this procedure for several different initializations, for each dataset.
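The iterative IB algorithm used here alternates the standard self-consistent updates of [1]. A minimal sketch on a toy co-occurrence table (the table and all variable names are illustrative stand-ins, not the 20-Newsgroups experiment):

```python
import numpy as np

def iterative_ib(p_xy, k, beta, iters=100, seed=0):
    """Standard self-consistent iterative IB updates (Tishby et al. [1]):
    q(t|x) is proportional to q(t) * exp(-beta * KL(p(y|x) || q(y|t)))."""
    rng = np.random.default_rng(seed)
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                               # p(x)
    p_y_x = p_xy / p_xy.sum(axis=1, keepdims=True)       # p(y|x)
    q_t_x = rng.random((p_xy.shape[0], k))               # random soft init
    q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    for _ in range(iters):
        q_t = p_x @ q_t_x                                # q(t)
        q_y_t = (q_t_x * p_x[:, None]).T @ p_y_x / (q_t[:, None] + eps)
        # KL(p(y|x) || q(y|t)) for every pair (x, t)
        kl = (p_y_x[:, None, :]
              * np.log((p_y_x[:, None, :] + eps) / (q_y_t[None, :, :] + eps))
              ).sum(axis=2)
        q_t_x = q_t[None, :] * np.exp(-beta * kl)
        q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    return q_t_x, q_t, q_y_t

# toy co-occurrence counts: 4 "documents" x 4 "words", two obvious groups
counts = np.array([[5., 5., 0., 0.],
                   [4., 6., 0., 0.],
                   [0., 0., 6., 4.],
                   [0., 0., 5., 5.]])
q_t_x, q_t, q_y_t = iterative_ib(counts / counts.sum(), k=2, beta=20.0)
print(np.round(q_t_x, 2))
```

For a large trade-off parameter beta the memberships q(t|x) harden, which is the regime in which the correspondence to EM assignments is claimed above.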
In these runs we found that usually both algorithms improved both functionals monotonically. Comparing the functionals during the process, we see that for the smaller sample size the differences are indeed more evident (Figure 1). Comparing the final values of the functionals (after a number of iterations that typically yielded convergence), we see that in a minority of runs iIB converged to a smaller value of the ML objective than EM, and likewise in some runs EM converged to a smaller value of the IB-functional. Thus, occasionally, iIB finds a better ML solution or EM finds a better IB solution. This phenomenon was much more common for the large sample size case.
7 Discussion
While we have shown that the ML and IB approaches are equivalent under certain conditions, it is important to keep in mind the different assumptions both approaches make regarding the joint distribution. The mixture model (1) assumes that X is independent of Y given T, and (2) assumes that p(y|x) is one of a small number (|T|) of possible conditional distributions. For this reason, the marginal probability over X and Y induced by the model is usually different from the empirical one. Indeed, an alternative view of ML estimation is as minimizing the KL divergence between the empirical distribution and the model family.
On the other hand, in the IB framework, q is defined through the IB Markovian independence relation T-X-Y. Therefore, the solution space is the family of distributions for which this relation holds and whose marginal distribution over X and Y is consistent with the input. Interestingly, it is possible to give an alternative formulation of the IB problem which also involves KL minimization [11]. In this formulation the IB problem is related to minimizing the KL divergence between q(x, y, t) and the family of distributions for which the mixture model assumption X-T-Y holds.
In this sense, we may say that while solving the IB problem, one tries to minimize the KL with respect to the "ideal" world, in which T separates X from Y. On the other hand, while solving the ML problem, one assumes an "ideal" world, and tries to minimize the KL with respect to the given marginal distribution. Our theoretical analysis shows that under the ML<->IB mapping, these two procedures are in some cases equivalent (see Figure 2).
Once we are able to map between ML and IB, it should be interesting to try and adopt additional concepts from one approach to the other. In the following we provide two such examples. In the IB framework, for large enough β, the quality of a given solution is measured through the captured relevant information [1]. This measure provides a theoretical upper bound, which can be used for purposes of model selection and more. Using the mapping, we can now adopt this measure for the ML estimation problem (for large enough n). In EM, the exponential factor n(x) in general depends on x. However, its analogous component in the IB framework, β, obviously does not. Nonetheless, in principle it is possible to reformulate the IB problem while defining β = β(x) (without changing the form of the optimal solution). We leave this issue for future research.
We have shown that for the multinomial mixture model, ML and IB are equivalent in some cases. It is worth noting that in principle, by choosing a different generative model, one may find further equivalences. Additionally, the IB method was recently extended to the multivariate case, where a new family of IB-like variational problems was presented and solved [11]. A natural question is to look for further generative models that can be mapped to these multivariate IB problems, and we are working in this direction.
Acknowledgments
Insightful discussions with Nir Friedman, Naftali Tishby and Gal Elidan are greatly appreciated.
References
[1] N. Tishby, F. Pereira, and W. Bialek. The Information Bottleneck method. In Proc. 37th Allerton Conference on Communication, Control, and Computing, 1999.
[2] N. Slonim. The Information Bottleneck: theory and applications. Ph.D. thesis, The Hebrew University, 2002.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991.
[4] T. Hofmann, J. Puzicha, and M. I. Jordan. Learning from dyadic data. In Proc. of NIPS-11, 1998.
[5] J. Puzicha, T. Hofmann, and J. M. Buhmann. Histogram clustering for unsupervised segmentation and image retrieval. In
Pattern Recognition Letters 20(9), 899-909, 1999.
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from incomplete data via the EM algorithm. Journal
of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[7] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I.
Jordan (editor), Learning in Graphical Models, pp. 355-368, 1998.
[8] L. Hermes, T. Zöller, and J. M. Buhmann. Parametric distributional clustering for image segmentation. In Proc. of European Conference on Computer Vision (ECCV), 2002.
[9] K. Lang. Learning to filter netnews. In Proc. of the 12th Int. Conf. on Machine Learning, 1995.
[10] N. Slonim, N. Friedman, and N. Tishby. Unsupervised document classification using sequential information maximization.
In Proc. of SIGIR-25, 2002.
[11] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate Information Bottleneck. In Proc. of UAI-17, 2001.
The KL with respect to a family of distributions is defined as the minimum over all the members in the family. Therefore, here, both arguments of the KL are changing during the process, and the distributions involved in the minimization are over all three random variables.
Going Metric: Denoising Pairwise Data
Volker Roth
Informatik III, University of Bonn
Roemerstr. 164, 53117 Bonn, Germany
roth@cs.uni-bonn.de

Julian Laub
Fraunhofer FIRST.IDA
Kekulestr. 7, 12489 Berlin, Germany
jlaub@first.fhg.de

Joachim M. Buhmann
Informatik III, University of Bonn
Roemerstr. 164, 53117 Bonn, Germany
jb@cs.uni-bonn.de

Klaus-Robert Müller
Fraunhofer FIRST.IDA, 12489 Berlin, Germany,
University of Potsdam, 14482 Potsdam, Germany
klaus@first.fhg.de
Abstract
Pairwise data in empirical sciences typically violate metricity, either due to noise or due to fallible estimates, and therefore are
hard to analyze by conventional machine learning technology. In
this paper we therefore study ways to work around this problem.
First, we present an alternative embedding to multi-dimensional
scaling (MDS) that allows us to apply a variety of classical machine learning and signal processing algorithms. The class of pairwise grouping algorithms which share the shift-invariance property
is statistically invariant under this embedding procedure, leading
to identical assignments of objects to clusters. Based on this new
vectorial representation, denoising methods are applied in a second step. Both steps provide a theoretically well controlled setup
to translate from pairwise data to the respective denoised metric representation. We demonstrate the practical usefulness of our
theoretical reasoning by discovering structure in protein sequence
data bases, visibly improving performance upon existing automatic
methods.
1 Introduction
Unsupervised grouping or clustering aims at extracting hidden structure from data
(see e.g. [5]). However, for several major applications, e.g. bioinformatics or imaging, the data is solely available as scores of pairwise comparisons. Pairwise data is
in no natural way related to the common viewpoint of objects lying in some "well
behaved" space like a vector space. Particularly, pairwise data may violate the triangular inequality. Two cases should be distinguished: (i) The triangle inequality
might not be satisfied as a result of noisy measurements (for instance using string
alignment algorithms in DNA analysis). (ii) The violation might be an intrinsic
feature of the data. This case, for instance, applies to datasets based upon some
human judgment, e.g. "X likes Y, Y likes Z does not imply X likes Z".
Such violations preclude the use of well established machine learning methods, which
typically have been formulated for metric data only. This paper proposes an algorithm to metricize and subsequently denoise pairwise data. It uses the so-called constant shift embedding (cf. [14]) for metrization, then constructs a positive semidefinite matrix which can subsequently be used for denoising and clustering purposes.
Regarding data-mining or clustering purposes, the most outstanding difference to
classical MDS is the following: for the class of pairwise clustering cost functions
sharing the shift-invariance property1, the metrization step is loss-free in the sense
that the optimal assignments of objects to clusters remain unchanged.
The next section introduces techniques for metrization, denoising and clustering
pairwise data. This is followed by a section illustrating our methods for real world
data such as bacterial GyrB amino acid sequences and sequences from the ProDom database, and a brief discussion.
2 Proximity-based clustering and denoising
One of the most popular methods for grouping vectorial data is k-means clustering
(see e.g. [1][5]). It derives a set of k prototype vectors which quantize the data set
with minimal quantization error.
Partitioning proximity data is considered a much harder problem, since the inherent
structure of n samples is hidden in n^2 pairwise relations. The pairwise proximities
can violate the requirements of a distance measure, i.e. they may be non-symmetric
and negative, and the triangular inequality does not necessarily hold. Thus, a lossfree embedding into a vector space is not possible, so that grouping problems of this
kind cannot be directly transformed into a vectorial representation by means of classical embedding strategies such as multi-dimensional scaling (MDS [4]). Moreover
clustering the MDS embedded data-vectors in general yields partitionings different from those obtained by directly solving the pairwise problem, since embedding
constraints might be in conflict with the clustering goal.
Let us start from a pairwise clustering loss function (see [12]) that combines the
properties of additivity, scale- and shift invariance, and statistical robustness
H^pc = Σ_{ν=1}^{k} [ Σ_{i=1}^{n} Σ_{j=1}^{n} M_iν M_jν D_ij ] / [ 2 Σ_{l=1}^{n} M_lν ],    (1)

where the data are characterized by the matrix of pairwise dissimilarities D_ij. The assignments of objects to clusters are encoded in the binary stochastic matrix M ∈ {0, 1}^{n×k} with Σ_{ν=1}^{k} M_iν = 1. For such cost functions it can be shown [14]
that there always exists a set of vectorial data representations-the constant shift
embeddings-such that the grouping problem can be equivalently restated in terms
of Euclidian distances between these vectors. In order to handle non-symmetric
dissimilarities, it should be noticed that H^pc is also invariant under symmetrizing transformations: D_ij <- (1/2)(D_ij + D_ji). In the following we will thus restrict
ourselves to the case of symmetric dissimilarity matrices.
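The shift-invariance behind this restriction can be checked numerically. A minimal sketch (the function and toy data are ours): a constant off-diagonal shift D_ij <- D_ij + D0 changes the cost only by a constant, D0(n - k)/2, that is independent of the particular assignment, so the optimal partition is unaffected.

```python
import numpy as np

def pairwise_cost(D, labels, k):
    """Pairwise clustering cost H^pc: per cluster, the within-cluster
    dissimilarity sum divided by twice the cluster size (cf. Eq. (1))."""
    cost = 0.0
    for v in range(k):
        idx = np.flatnonzero(labels == v)
        if idx.size:
            cost += D[np.ix_(idx, idx)].sum() / (2.0 * idx.size)
    return cost

rng = np.random.default_rng(0)
A = rng.random((6, 6))
D = A + A.T                               # symmetric dissimilarities
np.fill_diagonal(D, 0.0)                  # zero self-dissimilarities
labels = np.array([0, 0, 1, 1, 2, 2])

# constant additive off-diagonal shift (self-dissimilarities stay zero)
D0 = 5.0
D_shifted = D + D0 * (np.ones((6, 6)) - np.eye(6))

c1 = pairwise_cost(D, labels, 3)
c2 = pairwise_cost(D_shifted, labels, 3)
# each cluster of size n_v contributes D0*(n_v-1)/2 extra,
# i.e. D0*(n-k)/2 = 5*(6-3)/2 in total, for any assignment
print(round(c2 - c1, 10))  # → 7.5
```

Because the cost difference is the same constant for every assignment matrix M, minimizing H^pc on the shifted data yields the same clusters as on the original data.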
Theorem 2.1. [14] Given an arbitrary (possibly non-metric) (n × n) dissimilarity matrix D with zero self-dissimilarities, there exists a transformed matrix D̃ such that
(i) the matrix D̃ can be interpreted as a matrix of squared Euclidian distances
1The term shift-invariance means that the optimal assignments of objects to clusters are not influenced by constant additive shifts of the pairwise dissimilarities (excluding the self-dissimilarities, which are assumed to be zero).
between a set of vectors {x_i}_{i=1}^n; D̃ is derived from D by both symmetrizing and applying the constant shift embedding trick;
(ii) the original pairwise clustering problem is equivalent to a k-means problem in
this vector space, in the sense that the optimal assignments of objects to clusters
{M_iν} are identical in both problems.
A re-formulation of pairwise clustering as a k-means problem is clearly advantageous: (i) the availability of prototype vectors defines a generic rule for using the
learned partitioning in a predictive sense, (ii) we can apply standard noise- and
dimensionality-reduction methods in order to both stabilize the estimation procedure and to speed up the grouping itself.
Constant shift embedding Let D = (D_ij) ∈ R^{n×n} be the matrix of pairwise squared dissimilarities between n objects. For a generic noisy dataset, √D_ij ≤ √D_ik + √D_kj does not hold for all triples, so that √D is non-metric. Since the square root is monotonically increasing, there exists a D0 such that √(D_ij + D0) ≤ √(D_ik + D0) + √(D_kj + D0) for all i, j, k = 1, 2, ..., n. Let

D̃ = D + D0 (ee^T - I_n)    (2)

where e = (1, 1, ..., 1)^T is an n-dimensional column vector and I_n the identity matrix. This corresponds to a constant additive shift D̃_ij = D_ij + D0 for all i ≠ j. We look for the minimal constant shift D0 such that D̃ satisfies the triangle inequality.
In order to make the main result clear, we first need to introduce the notion of a
centralized matrix. Let P be an arbitrary matrix and let Q = I_n - (1/n)ee^T. Q is the projection matrix on the orthogonal complement of e. Define the centralized P by:

P^c = QPQ.    (3)
Let D be fixed and let us decompose D as follows:
D_ij = S_ii + S_jj - 2S_ij.    (4)

This decomposition is motivated by the fact that if D is a squared Euclidian distance between the vectorial data x_i, then D_ij = ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i^T x_j. It follows from equation (4) that a constant off-diagonal shift on D corresponds to a constant shift on the diagonal of S. S is not fixed by the choice of D, since we may always change its diagonal elements, yet recover the same D. That is, any matrix of the form (S_ij + (1/2)ΔS_i + (1/2)ΔS_j) gives the same distance D as S, for arbitrary ΔS_i's. By simple algebra it can be shown that S^c = -(1/2)D^c, i.e. S^c is unique. Furthermore, D derives from a squared Euclidian distance if and only if S^c is positive semi-definite [14]. Let S̃^c = S^c - λ_n(S^c) I_n, where λ_n(·) is the minimal eigenvalue of its argument. Then S̃^c is positive semi-definite [14]. These are the main ingredients for proving the following:

Theorem 2.2 (Minimal D0). [14] D0 = -2λ_n(S^c) is the minimal constant such that D̃ = D + D0 (ee^T - I_n) derives from a squared Euclidian distance.
All proofs can be found in [14]. We have thus shown that applying large enough additive shifts to the off-diagonal elements of D results in a matrix S̃^c that is positive semi-definite, and can thus be interpreted as a Gram matrix. This means that in some (n - 1)-dimensional Euclidian space there exists a vector representation of the objects, summarized in the "design" matrix X (the rows of X are the feature vectors), such that S̃^c = XX^T.
For the pairwise clustering cost function the optimal assignments of objects to
clusters are invariant under the constant-shift embedding procedure, according to
theorem 2.1. Hence, the grouping problem can be re-formulated as optimizing the
classical k-means criterion in the embedding space.
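A minimal numeric sketch of this construction (notation follows the text; the helper name is ours): compute the centralized S^c = -(1/2)QDQ, read off the minimal eigenvalue, and form the positive semi-definite Gram matrix of Theorem 2.2.

```python
import numpy as np

def constant_shift_embedding(D):
    """Return the minimal shift D0 and the PSD Gram matrix
    S̃^c = S^c - λ_n(S^c) I from Theorem 2.2."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n        # projection orthogonal to e
    Sc = -0.5 * Q @ D @ Q                      # centralized S^c = -(1/2) D^c
    lam_min = np.linalg.eigvalsh(Sc)[0]        # minimal eigenvalue λ_n
    D0 = -2.0 * lam_min
    Sc_tilde = Sc - lam_min * np.eye(n)        # PSD Gram matrix
    return D0, Sc_tilde

rng = np.random.default_rng(1)
A = rng.random((5, 5))
D = A + A.T                                    # generic non-metric input
np.fill_diagonal(D, 0.0)

D0, G = constant_shift_embedding(D)
# G is a valid Gram matrix: all eigenvalues >= 0 (up to round-off)
print(np.linalg.eigvalsh(G).min() >= -1e-10)  # → True
```

The distances reproduced from G, namely G_ii + G_jj - 2G_ij, equal the shifted dissimilarities D_ij + D0 for i ≠ j, which is exactly the statement of Theorem 2.2.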
In many applications, however, it is advantageous not to cluster in the full space
but to insert some dimension reduction step, that serves the purpose of increasing
efficiency and noise reduction. While it is unclear how to denoise for the original
pairwise object representations while respecting additivity, scale- and shift invariance, and statistical robustness properties of the clustering criterion, we can easily
apply kernel PCA [16] to S̃^c after the constant-shift embedding.
Denoising of pairwise data by Constant Shift Embedding For denoising we construct D̃, which derives from "real" points in a vector space, i.e. S̃^c is positive semi-definite. In a first step, we briefly describe how these real points can be recovered by loss-free kernel PCA [16]:
(i) Calculate the centralized kernel matrix S^c = -(1/2) QDQ.
(ii) Decompose S^c = V Λ V^T, where V = (v_1, ..., v_n) contains the eigenvectors v_i and Λ = diag(λ_1, ..., λ_n) the eigenvalues λ_1 ≥ ... ≥ λ_p > λ_{p+1} = 0 ≥ λ_{p+2} ≥ ... ≥ λ_n.
(iii) Calculate the n × (n - 2) mapping matrix X*_{n-2} = V*_{n-2} (Λ*_{n-2})^{1/2}, where V*_{n-2} = (v_1, ..., v_p, v_{p+2}, ..., v_{n-1}) and Λ*_{n-2} = diag(λ_1 - λ_n, ..., λ_p - λ_n, λ_{p+2} - λ_n, ..., λ_{n-1} - λ_n) (these are the constantly shifted eigenvalues).
The rows of X*_{n-2} contain the vectors x_i (i = 1, 2, ..., n) in (n - 2)-dimensional space, whose mutual distances are given by D̃. When focusing on noise reduction, however, we are rather interested in some approximative reconstructions of the "real" vectors. In the PCA framework, one usually discards the directions which correspond to small eigenvalues as noise (cf. [9]). We can thus obtain a representation in a space of reduced dimension (with the well-defined error of PCA reconstruction) when choosing t < n - 2 in step (iii) of the above algorithm:

X*_t = V*_t (Λ*_t)^{1/2},

where V*_t consists of the first t column vectors of V*_{n-2} and Λ*_t is the top t × t submatrix of Λ*_{n-2}. The vectors in R^t then differ the least from the vectors in R^{n-2} in the sense of a quadratic error.
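Steps (i)-(iii) together with the truncation to t dimensions can be sketched as follows (a minimal illustration with names of our own choosing; for t ≤ p the t leading shifted components coincide with the columns of X*_t above):

```python
import numpy as np

def denoised_embedding(D, t):
    """Embed objects given squared pairwise dissimilarities D (symmetric,
    zero diagonal), keeping only the t leading shifted components."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    Sc = -0.5 * Q @ D @ Q                      # (i) centralized kernel matrix
    lam, V = np.linalg.eigh(Sc)                # (ii) eigendecomposition
    lam, V = lam[::-1], V[:, ::-1]             # sort eigenvalues descending
    shifted = lam - lam[-1]                    # constantly shifted spectrum
    # (iii) keep the t leading shifted components as denoised coordinates
    X_t = V[:, :t] * np.sqrt(np.maximum(shifted[:t], 0.0))
    return X_t

rng = np.random.default_rng(2)
A = rng.random((8, 8))
D = A + A.T
np.fill_diagonal(D, 0.0)
X = denoised_embedding(D, t=3)
print(X.shape)  # → (8, 3)
```

With t = n - 1 (no truncation) the embedded points reproduce the shifted distances D̃ exactly; smaller t trades this exactness for noise suppression, with the usual PCA reconstruction error.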
The advantages of this method in comparison to directly applying classical scaling
via MDS are: (i) t can be larger than the number p of positive eigenvalues, (ii)
the embedded vectors are the best least squares error approximation to the optimal
vectors which preserve the grouping structure.
It should be noticed, however, that given the exactly reconstructed vectors in R^{n-2}
found by loss-free kernel PCA, we could have also applied any other standard methods for dimensionality reduction or visualization, such as projection pursuit [6], local
linear embedding (LLE) [15], Isomap [17] or Self-organizing maps [8].
3 Application on protein sequences
3.1 Bacterial GyrB amino acid sequences
We first illustrate our de noising technique on the gyrase subunit B. The dataset consists of 84 amino acid sequences from five genera in Actinobacteria: 1: Corynebacterium, 2: Mycobacterium, 3: Gordonia, 4: Nocardia and 5: Rhodococcus. A detailed description can be found in [7]. This dataset was used in [18] for illustration
of marginalized kernels. The authors hinted at the possibility of computing the distance matrix by using BLAST scores [2], noting, however, that these scores could
not be converted into positive semidefinite kernels.
In our experiment, the sequences have been aligned by the Smith-Waterman algorithm [11] which yields pairwise alignment scores. Using constant shift embedding
a positive semidefinite kernel is obtained, leaving the cluster assignment unchanged
for shift invariant cost functions.
The important step is the denoising. Several projections to lower dimensions have
been tested and t = 5 turned out to be a good choice, eliminating the bulk of noise
while retaining the essential cluster structure.
Figure 1 shows the striking improvement of the distance matrix after denoising.
On the left hand side the ideal distance matrix is depicted, consisting solely of 0's (black) and 1's (white), reflecting the true cluster membership. In the middle and on the right the original and the denoised distance matrix are shown, respectively. Denoising visibly accentuates the cluster structure in the pairwise data. Since we
Figure 1: Distance matrix: On the left the ideal distance matrix reflects the true cluster structure. In the middle and on the right: distance matrix before and after denoising.
have the true labels at our disposal, we can quantitatively assess the improvement achieved by denoising. We performed the usual k-means clustering, followed by majority voting to match the cluster labeling. For the denoised data we obtained 3 misclassifications (3.61%), whereas we got 17 (20.48%) for the original data. This simple experiment corroborates the usefulness of our embedding and denoising strategy for pairwise data.
In order to fulfill the spirit of the theory of constant-shift embedding, the cost function of the data-mining algorithm subsequent to the embedding needs to be
shift invariant. We may by the same token go a step further and apply algorithms
for which this condition does not hold. In doing so, however, we give up the mathematical traceability of the error.
To illustrate that denoised pairwise data can act as standalone quality data independent of the framework of algorithms based on shift-invariant cost functions (and in order to compare to the results obtained in [18]), a linear SVM is trained on 25% of the total data to mutually classify the genera pairs 3-4, 3-5 and 4-5. Genera 1 and 2 separate without error and have therefore been omitted. Model selection over the regularization parameter C has been performed by choosing the optimal value out of 10 equally spaced values from [10^-4, 10^2]. The results have been averaged over a 1000-fold sampling (cf. Table 1). The best values are printed in bold.
For the classification of genera 3-5 and 4-5 we obtain a substantial improvement by denoising. Interestingly, this is not the case for genera 3-4, which may be due to the elimination of discriminative features by the denoising procedure. The error is still significantly smaller than the error obtained by MCK2 and FK, which is in agreement with the superiority of a structure-preserving embedding of Smith-Waterman scores even when left undenoised: FK and MCK are kernels derived from a generative model, whereas the alignment scores are obtained from a matching algorithm specifically tuned for protein sequences, reflecting much better the underlying structure of protein data.

Genera   FK     MCK2   Undenoised   Denoised
3-4      10.4   8.48   5.06         5.43
3-5      10.9   5.71   5.72         3.83
4-5      23.1   11.6   7.55         3.17

Table 1: Comparison of mean test error for the supervised classification of genera by a linear SVM, with a training sample of 25% of the total sample. The results for MCK2 (Marginalized Count Kernel) and FK (Fisher Kernel) are obtained by kernel Fisher discriminant analysis, which compares favorably to the SVM in several benchmarks [18].
3.2 Clustering of ProDom sequences
The analysis described in this section aims at finding a partition of domain sequences
from the ProDom database [3], that is meaningful w.r.t. structural similarity. In order to measure the quality of the grouping solution, we use the computed solution in a predictive way to assign group labels to SCOP sequences, which have been labeled by experts according to their structure [10]. The predicted labels are then
compared with the "true" SCOP labels.
For demonstration purposes, we select the following subset of sequences from
prodom2001.2.srs: among all sequences we choose those which are highly similar to at least one sequence contained in the first four folds of the SCOP database.2
Between these sequences, we compute pairwise (length-corrected and standardized) Smith-Waterman alignment scores, summarized in the matrix (S_ij). These similarities are transformed into dissimilarities by setting D_ij := S_ii + S_jj - 2S_ij. The centralized score matrix S^c = -(1/2)D^c possesses some highly negative eigenvalues, indicating that metric properties are violated. Applying the constant-shift embedding method, a valid Mercer kernel is derived, with an eigenvalue spectrum that shows only a few dominating components over a broad "noise" spectrum (see Figure 2). Extracting the first 16 leading principal components3 leads to a vector representation of the sequences as points in R^16. These points are then clustered by minimizing the k-means cost function within a deterministic annealing framework. The model order was selected by applying a resampling-based stability analysis, which has been demonstrated to be a suitable model order selection criterion for unsupervised grouping problems in [13].
In order to measure the quality of the grouping solution, all 1158 SCOP sequences
from the first four folds are embedded into the 16-dimensional space. The predicted
group structure on this test set is then compared with the true SCOP fold-labels.
Figure 3 shows both the predicted group membership of these sequences and their
true SCOP fold-label in the form of a bar diagram: the sequences are ordered by
increasing group label (the lower horizontal bar), and compared with the true fold
classification (upper bar) . In order to quantify the results, the inferred clusters are
2"Highly similar" here means that the highest alignment score exceeds a predefined
threshold. The result is a subset of roughly 2700 ProDom domain sequences.
3Subsampling techniques or deflation can be used to reduce computational load for
large-scale problems. We only used a subset of 800 randomly chosen proteins for estimating
the 16 leading eigenvectors.
Figure 2: (Partial) eigenvalue spectrum of the shifted score matrix. The
data are projected onto the first leading
16 eigenvectors, whereas the remaining
principal components are considered to
be dominated by noise.
re-labeled ("re-colored") according to the maximum number of correctly identifiable fold-labels. This procedure allows us to correctly identify the fold label of roughly 94% of the SCOP sequences.
[Figure: bar diagram for the 1158 SCOP sequences from folds 1-4; upper bar: SCOP fold label; lower bar: prediction, re-labeled by majority voting into clusters 1-3; errors marked.]
Figure 3: Visualization of cluster membership of the SCOP sequences contained in
folds 1-4.
Despite this surprisingly high percentage, it is necessary to analyze the biological relevance of the inferred grouping solution more deeply. In order to check to what extent the above "over-all" result is influenced by artefacts due to highly related (or even almost identical) SCOP sequences, we repeated the analysis based on the subset of 128 SCOP sequences with less than 50% sequence identity (PDB-50). Predicting the group membership of these 128 sequences and using the same re-labeling approach, we can correctly identify 86% of the fold-labels.
demonstrates that we have not only found trivial groups of almost identical proteins,
but that we have indeed extracted relevant structural information.
4 Discussion and Conclusion
This paper provides two main contributions that are highly useful when analyzing
pairwise data. First, we employ the concept of constant shift embedding to provide
a metric representation of the data. For a certain class of grouping principles sharing
a shift-invariance property, this embedding is distortion-less in the sense that it does
not influence the optimal assignments of objects to groups. Given the metricized
data we can now use common signal (pre-)processing and denoising techniques that are typically defined only for vectorial data.
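The centering-and-shift construction can be sketched in a few lines of numpy; this is our illustration of the general idea under the squared-Euclidean interpretation of the dissimilarities, and the details of the published algorithm [14] may differ:

```python
import numpy as np

def constant_shift_embedding(D):
    """Embed objects with pairwise dissimilarities D (symmetric, zero
    diagonal) into Euclidean space so that squared distances reproduce
    D plus a constant additive shift on the off-diagonal entries."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n      # centering projection
    Sc = -0.5 * Q @ D @ Q                    # centralized score matrix
    w, V = np.linalg.eigh(Sc)
    shift = min(w.min(), 0.0)                # most negative eigenvalue
    X = V * np.sqrt(w - shift)               # rows are embedded vectors
    return X, -2.0 * shift                   # additive off-diagonal constant

# A toy dissimilarity matrix that violates the squared-Euclidean condition.
D = np.array([[0.0, 9.0, 1.0],
              [9.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
X, c = constant_shift_embedding(D)
# Squared Euclidean distances in the embedding equal D + c off the diagonal.
D_emb = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
```

Denoising in this representation then amounts to keeping only the leading principal components of the embedded vectors before clustering.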
As we investigate the clustering of protein sequences from databases like GyrB and
ProDom, we are given non-metric pairwise proximity information that is strongly
deteriorated by the shortcomings of the available alignment procedures. Thus, it
is important to apply denoising techniques to the data as a second step before
running the actual clustering procedure. We find that the combination of these two
processing steps is successful in unraveling protein structure, greatly improving over
existing methods (as exemplified for GyrB and ProDom).
Future research will be dedicated to further evaluation of the proposed algorithm.
We will also explore the perspectives it opens in any field handling pairwise data.
Acknowledgments The GyrB amino acid sequences were offered by courtesy of the Identification and Classification of Bacteria (ICB) databank team [19]. The authors are partially supported by DFG grants #MU 987/1-1 and #BU 914/4-1.
References
[1] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264-323, 1999.
[2] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. Basic local alignment search tool. J. Mol. Biol., 215:403-410, 1990.
[3] F. Corpet, F. Servant, J. Gouzy, and D. Kahn. ProDom and ProDom-CG: tools for protein domain analysis and whole genome comparisons. Nucleic Acids Res., 28:267-269, 2000.
[4] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman & Hall, London, 2001.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001.
[6] P. J. Huber. Projection pursuit. The Annals of Statistics, pages 435-475, 1985.
[7] H. Kasai, A. Bairoch, K. Watanabe, K. Isono, and S. Harayama. Construction of the gyrB database for the identification and classification of bacteria. Genome Informatics, pages 13-21, 1998.
[8] T. Kohonen. Self-Organizing Maps. Springer-Verlag, Berlin, 1995.
[9] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 536-542. MIT Press, 1999.
[10] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. SCOP: a structural classification of proteins database for the investigation of sequences and structures. J. Mol. Biol., 247:536-540, 1995.
[11] W. R. Pearson and D. J. Lipman. Improved tools for biological sequence analysis. Proc. Natl. Acad. Sci., 85:2444-2448, 1988.
[12] J. Puzicha, T. Hofmann, and J. Buhmann. A theory of proximity based clustering: Structure detection by optimization. Pattern Recognition, 33(4):617-634, 1999.
[13] V. Roth, M. Braun, T. Lange, and J. Buhmann. A resampling approach to cluster validation. In Computational Statistics (COMPSTAT'02), 2002. To appear.
[14] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann. Optimal cluster preserving embedding of non-metric proximity data. Technical Report IAI-TR-2002-5, University of Bonn, 2002.
[15] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[16] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[17] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[18] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Proc. ISMB, to appear, 2002. http://www.cbrc.jp/ tsuda/.
[19] K. Watanabe, J. Nelson, S. Harayama, and H. Kasai. ICB database: the gyrB database for identification and classification of bacteria. Nucleic Acids Res., 29:344-345, 2001.
Information Diffusion Kernels
John Lafferty
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213 USA
[email protected]
Guy Lebanon
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213 USA
[email protected]
Abstract
A new family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. Based on the heat
equation on the Riemannian manifold defined by the Fisher information metric, information diffusion kernels generalize the Gaussian kernel
of Euclidean space, and provide a natural way of combining generative
statistical modeling with non-parametric discriminative learning. As a
special case, the kernels give a new approach to applying kernel-based
learning algorithms to discrete data. Bounds on covering numbers for
the new kernels are proved using spectral theory in differential geometry,
and experimental results are presented for text classification.
1 Introduction
The use of kernels is of increasing importance in machine learning. When "kernelized,"
simple learning algorithms can become sophisticated tools for tackling nonlinear data analysis problems. Research in this area continues to progress rapidly, with most of the activity
focused on the underlying learning algorithms rather than on the kernels themselves.
Kernel methods have largely been a tool for data represented as points in Euclidean space,
with the collection of kernels employed limited to a few simple families such as polynomial
or Gaussian RBF kernels. However, recent work by Kondor and Lafferty [7], motivated
by the need for kernel methods that can be applied to discrete data such as graphs, has
proposed the use of diffusion kernels based on the tools of spectral graph theory. One
limitation of this approach is the difficulty of analyzing the associated learning algorithms
in the discrete setting. For example, there is no obvious way to bound covering numbers
and generalization error for this class of diffusion kernels, since the natural function spaces
are over discrete sets.
In this paper, we propose a related construction of kernels based on the heat equation. The
key idea in our approach is to begin with a statistical model of the data being analyzed, and
to consider the heat equation on the Riemannian manifold defined by the Fisher information
metric of the model. The result is a family of kernels that naturally generalizes the familiar
Gaussian kernel for Euclidean space, and that includes new kernels for discrete data by
beginning with statistical families such as the multinomial. Since the kernels are intimately
based on the geometry of the Fisher information metric and the heat or diffusion equation
on the associated Riemannian manifold, we refer to them as information diffusion kernels.
Unlike the diffusion kernels of [7], the kernels we investigate here are over continuous parameter spaces even in the case where the underlying data is discrete. As a consequence,
some of the machinery that has been developed for analyzing the generalization performance of kernel machines can be applied in our setting. In particular, the spectral approach
of Guo et al. [3] is applicable to information diffusion kernels, and in applying this approach it is possible to draw on the considerable body of research in differential geometry
that studies the eigenvalues of the geometric Laplacian.
In the following section we review the relevant concepts that are required from information
geometry and classical differential geometry, define the family of information diffusion
kernels, and present two concrete examples, where the underlying statistical models are
the multinomial and spherical normal families. Section 3 derives bounds on the covering
numbers for support vector machines using the new kernels, adopting the approach of [3].
Section 4 describes experiments on text classification, and Section 5 discusses the results
of the paper.
2 Information Geometry and Diffusion Kernels
Let $\mathcal{F} = \{p(\cdot\,|\,\theta)\}$, $\theta \in \Theta \subset \mathbb{R}^n$, be an $n$-dimensional statistical model on a set $\mathcal{X}$. For each $x \in \mathcal{X}$ assume the mapping $\theta \mapsto p(x\,|\,\theta)$ is $C^\infty$ at each point in the interior of $\Theta$. Let $\partial_i = \partial/\partial\theta_i$ and $\ell_\theta(x) = \log p(x\,|\,\theta)$. The Fisher information matrix $[g_{ij}(\theta)]$ of $\mathcal{F}$ at $\theta \in \Theta$ is given by

$$g_{ij}(\theta) = E_\theta\left[\partial_i \ell_\theta\, \partial_j \ell_\theta\right] = \int_{\mathcal{X}} p(x\,|\,\theta)\, \partial_i \log p(x\,|\,\theta)\, \partial_j \log p(x\,|\,\theta)\, dx \qquad (1)$$

or equivalently as

$$g_{ij}(\theta) = 4 \int_{\mathcal{X}} \partial_i \sqrt{p(x\,|\,\theta)}\; \partial_j \sqrt{p(x\,|\,\theta)}\, dx. \qquad (2)$$

In coordinates $\theta_i$, $g_{ij}(\theta)$ defines a Riemannian metric on $\Theta$, giving $\mathcal{F}$ the structure of an $n$-dimensional Riemannian manifold. One of the motivating properties of the Fisher information metric is that, unlike the Euclidean distance, it is invariant under reparameterization. For detailed treatments of information geometry we refer to [1, 6].

For many statistical models there is a natural way to associate to each data point $x$ a parameter vector $\theta(x)$ in the statistical model. For example, in the case of text, under the multinomial model a document is naturally associated with the relative frequencies of the word counts. This amounts to the mapping which sends a document to its maximum likelihood model $\hat\theta(x)$. Given such a mapping, we propose to apply a kernel on parameter space, $K(\theta(x), \theta(x'))$.

More generally, we may associate a data point with a posterior distribution $p(\theta\,|\,x)$ under a suitable prior. In the case of text, this is one way of "smoothing" the maximum likelihood model, using, for example, a Dirichlet prior. Given a kernel on parameter space, we then average over the posteriors to obtain a kernel on data:

$$K(x, x') = \int_\Theta \int_\Theta K(\theta, \theta')\, p(\theta\,|\,x)\, p(\theta'\,|\,x')\, d\theta\, d\theta'. \qquad (3)$$

It remains to define the kernel on parameter space. There is a fundamental choice: the kernel associated with heat diffusion on the parameter manifold under the Fisher information metric.

For a manifold $M$ with metric $g_{ij}$, the Laplacian $\Delta : C^\infty(M) \rightarrow C^\infty(M)$ is given in local coordinates by

$$\Delta f = \frac{1}{\sqrt{\det g}} \sum_{ij} \partial_i \left( \sqrt{\det g}\; g^{ij}\, \partial_j f \right) \qquad (4)$$

where $[g^{ij}] = [g_{ij}]^{-1}$, generalizing the classical operator $\mathrm{div}\,\nabla$. When $M$ is compact the Laplacian has discrete eigenvalues $0 = \mu_0 < \mu_1 \leq \mu_2 \leq \cdots$ with corresponding eigenfunctions $\phi_i$ satisfying $\Delta \phi_i = -\mu_i \phi_i$. When the manifold has a boundary, appropriate boundary conditions must be imposed in order that $\Delta$ is self-adjoint. Dirichlet boundary conditions set $\phi_i|_{\partial M} = 0$ and Neumann boundary conditions require $\partial \phi_i / \partial \nu\,|_{\partial M} = 0$, where $\nu$ is the outer normal direction. The following theorem summarizes the basic properties for the kernel of the heat equation $\partial f / \partial t = \Delta f$ on $M$.

Theorem 1. Let $M$ be a geodesically complete Riemannian manifold. Then the heat kernel $K_t(x, y)$ exists and satisfies (1) $K_t(x, y) = K_t(y, x)$, (2) $\lim_{t \to 0} K_t(x, y) = \delta_x(y)$, (3) $\left(\Delta - \frac{\partial}{\partial t}\right) K_t(x, y) = 0$, (4) $K_t(x, y) = \int_M K_{t-s}(x, z)\, K_s(z, y)\, dz$ for any $0 < s \leq t$, and (5) $K_t(x, y) > 0$.

We refer to [9] for a proof. Properties 2 and 3 imply that $K_t(x, \cdot)$ solves the heat equation in $x$, starting from a point heat source at $y$. Integrating property 3 against a function $f$ shows that $f_t = K_t f$ solves the heat equation with initial condition $f_0 = f$. Property 4 implies $K_t = K_{t/2} K_{t/2}$; together with symmetry this shows that $K_t$ is a positive operator, and thus $K_t$ is positive definite. Together, these properties show that $K_t$ defines a Mercer kernel.

Note that when using such a kernel for classification, the discriminant function $y_t(x) = \sum_i \alpha_i y_i K_t(x, x_i)$ can be interpreted as the solution to the heat equation with initial temperature $y_0(x_i) = \alpha_i y_i$ on the labeled data points $x_i$, and $y_0(x) = 0$ on unlabeled points.
The following two basic examples illustrate the geometry of the Fisher information metric
and its associated diffusion kernel: the multinomial corresponds to a Riemannian manifold of constant positive curvature, and the spherical normal family to a space of constant
negative curvature.
2.1 The Multinomial
The multinomial is an important example of how information diffusion kernels can be applied naturally to discrete data. For the multinomial family, $\theta = (\theta_1, \ldots, \theta_{n+1})$ is an element of the $n$-simplex, $\sum_i \theta_i = 1$, $\theta_i \geq 0$. The transformation $\theta_i \mapsto 2\sqrt{\theta_i}$ maps the $n$-simplex to the $n$-sphere of radius 2.

The representation of the Fisher information metric given in equation (2) suggests the geometry underlying the multinomial. In particular, the information metric corresponds to the inner product of tangent vectors to the sphere, and information geometry for the multinomial is the geometry of the positive orthant of the sphere. The geodesic distance between two points $\theta, \theta'$ is given by

$$d(\theta, \theta') = 2 \arccos\left( \sum_i \sqrt{\theta_i\, \theta'_i} \right). \qquad (5)$$

This metric places greater emphasis on points near the boundary, which is expected to be important for text problems, which have sparse statistics. In general for the heat kernel on a Riemannian manifold, there is an asymptotic expansion in terms of the parametrices; see for example [9]. This expands the kernel as

$$K_t(x, y) \approx (4\pi t)^{-n/2} \exp\left( -\frac{d^2(x, y)}{4t} \right) \sum_{k=0}^{m} \psi_k(x, y)\, t^k. \qquad (6)$$
Using the first order approximation and the explicit distance for the geodesic distance gives
Figure 1: Example decision boundaries using support vector machines with information diffusion kernels for trinomial geometry on the 2-simplex (top right) and spherical normal geometry (bottom right), compared with the standard Gaussian kernel (left).
a simple formula for the approximate information diffusion kernel for the multinomial as

$$K_t(\theta, \theta') \approx (4\pi t)^{-n/2} \exp\left( -\frac{1}{t} \arccos^2\left( \sum_i \sqrt{\theta_i\, \theta'_i} \right) \right). \qquad (7)$$

In Figure 1 this kernel is compared with the standard Euclidean space Gaussian kernel for the case of the trinomial model, $n = 2$.
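The geodesic distance (5) and the first-order kernel (7) are simple to evaluate in code. The following sketch is our illustration, not code from the paper; the probability vectors are hypothetical, and the global normalization constant $(4\pi t)^{-n/2}$ is dropped since it is shared across all pairs:

```python
import math

def geodesic_distance(p, q):
    """Geodesic (Fisher) distance on the multinomial simplex, eq. (5)."""
    s = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    s = min(1.0, max(-1.0, s))          # clamp floating-point noise
    return 2.0 * math.acos(s)

def diffusion_kernel(p, q, t):
    """First-order approximate heat kernel on the simplex, eq. (7),
    without the (4*pi*t)^(-n/2) normalization constant."""
    d = geodesic_distance(p, q)
    return math.exp(-d * d / (4.0 * t))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
k = diffusion_kernel(p, q, t=1.0)
```

Note that the kernel equals 1 at coincident points and decays with the spherical angle between the square-root representations of the two distributions.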
2.2 Spherical Normal
Now consider the statistical family given by $\theta = (\mu, \sigma)$ with $p(\cdot\,|\,\theta) = \mathcal{N}(\mu, \sigma^2 I)$, where $\mu$ is the mean and $\sigma$ is the scale of the variance. A calculation shows that the Fisher information metric gives $\mathcal{F}$ the structure of the upper half plane in hyperbolic space $\mathbb{H}^n$.

The heat kernel on hyperbolic space $\mathbb{H}^n$ has a closed form [2]. For $n = 2$ it is given by

$$K_t(x, x') = \frac{\sqrt{2}}{(4\pi t)^{3/2}}\, e^{-t/4} \int_d^\infty \frac{s\, e^{-s^2/4t}}{\sqrt{\cosh s - \cosh d}}\, ds \qquad (8)$$

and for $n = 3$ the kernel is given by

$$K_t(x, x') = (4\pi t)^{-3/2}\, e^{-t}\, \frac{d}{\sinh d}\, e^{-d^2/4t} \qquad (9)$$

where $d = d(x, x')$ is the geodesic distance between the two points in $\mathbb{H}^n$. For $n = 1$ the kernel is identical to the Gaussian kernel on $\mathbb{R}$.

If only the mean $\mu$ is unspecified, then the associated kernel is the standard Gaussian RBF kernel. In Figure 1 the kernel for hyperbolic space is compared with the Euclidean space Gaussian kernel for the case of a 1-dimensional normal model with unknown mean and variance, corresponding to $n = 2$. Note that the curved decision boundary for the diffusion kernel makes intuitive sense, since as the variance decreases the mean is known with increasing certainty.
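The closed form (9) for three-dimensional hyperbolic space can be evaluated directly once the geodesic distance is known; the following small sketch (ours, not from the paper) takes the distance $d$ as an input rather than computing it from the model parameters:

```python
import math

def heat_kernel_h3(d, t):
    """Closed-form heat kernel on 3-dimensional hyperbolic space, eq. (9)."""
    if d == 0.0:
        ratio = 1.0                      # limit of d/sinh(d) as d -> 0
    else:
        ratio = d / math.sinh(d)
    return ((4.0 * math.pi * t) ** -1.5
            * math.exp(-t)
            * ratio
            * math.exp(-d * d / (4.0 * t)))
```

Compared with the flat Gaussian kernel, the extra factors $e^{-t}$ and $d/\sinh d$ encode the exponential volume growth of hyperbolic space, so the kernel decays faster in $d$ than its Euclidean counterpart.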
3 Spectral Bounds on Covering Numbers
In this section we prove bounds on the entropy and covering numbers for support vector
machines that use information diffusion kernels; these bounds in turn yield bounds on the
expected risk of the learning algorithms. We adopt the approach of Guo et al. [3], and make
use of bounds on the spectrum of the Laplacian on a Riemannian manifold, rather than
on VC dimension techniques. Our calculations give an indication of how the underlying
geometry influences the entropy numbers, which are inverse to the covering numbers.
We begin by recalling the main result of [3], modifying their notation slightly to conform with ours. Let $M \subset \mathbb{R}^d$ be a compact subset of $d$-dimensional Euclidean space, and suppose that $K : M \times M \rightarrow \mathbb{R}$ is a Mercer kernel. Denote by $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ the eigenvalues of $K$, i.e., of the mapping $f \mapsto \int_M K(\cdot, y)\, f(y)\, dy$, and let $\psi_j$ denote the corresponding eigenfunctions. We assume that $C_K = \sup_j \|\psi_j\|_\infty < \infty$.

Given $m$ points $x_1, \ldots, x_m \in M$, the SVM hypothesis class for $\mathbf{x} = (x_1, \ldots, x_m)$ with weight vector bounded by $R$ is defined as the collection of functions

$$\mathcal{F}_R(\mathbf{x}) = \left\{ f : f(x) = \langle w, \Phi(x) \rangle,\; \|w\| \leq R \right\} \qquad (10)$$

where $\Phi(\cdot)$ denotes the mapping from $M$ to feature space defined by the Mercer kernel, and $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the corresponding Hilbert space inner product and norm. It is of interest to obtain uniform bounds on the covering numbers $\mathcal{N}(\varepsilon, \mathcal{F}_R(\mathbf{x}))$, defined as the size of the smallest $\varepsilon$-cover of $\mathcal{F}_R(\mathbf{x})$ in the metric induced by the norm $\|f\|_{\infty, \mathbf{x}} = \max_i |f(x_i)|$. The following is the main result of Guo et al. [3].

Theorem 2. Given an integer $n \in \mathbb{N}$, let $j_n^*$ denote the smallest integer $j$ for which

$$\lambda_{j+1} < \left( \frac{\lambda_1 \cdots \lambda_j}{n^2} \right)^{1/j}$$

and define

$$\varepsilon_n^* = 6\, C_K \sqrt{ j_n^* \left( \frac{\lambda_1 \cdots \lambda_{j_n^*}}{n^2} \right)^{1/j_n^*} + \sum_{i > j_n^*} \lambda_i }.$$

Then $\sup_{\mathbf{x} \in M^m} \mathcal{N}(\varepsilon_n^*, \mathcal{F}_1(\mathbf{x})) \leq n$.

To apply this result, we will obtain bounds on the indices $j_n^*$ using spectral theory in Riemannian geometry. The following bounds on the eigenvalues of the Laplacian are due to Li and Yau [8].

Theorem 3. Let $M$ be a compact Riemannian manifold of dimension $d$ with non-negative Ricci curvature, and assume that the boundary of $M$ is convex. Let $0 < \mu_1 \leq \mu_2 \leq \cdots$ denote the eigenvalues of the Laplacian with Dirichlet boundary conditions. Then

$$c_1(d) \left( \frac{j}{V} \right)^{2/d} \;\leq\; \mu_j \;\leq\; c_2(d) \left( \frac{j+1}{V} \right)^{2/d} \qquad (11)$$

where $V$ is the volume of $M$ and $c_1$ and $c_2$ are constants depending only on the dimension.

Note that the manifold of the multinomial model satisfies the conditions of this theorem. Using these results we can establish the following bounds on covering numbers for information diffusion kernels. We assume Dirichlet boundary conditions; a similar result can be proven for Neumann boundary conditions. We include the constant $V = \mathrm{vol}(M)$ and the diffusion coefficient $t$ in order to indicate how the bounds depend on the geometry.

Theorem 4. Let $M$ be a compact Riemannian manifold, with volume $V$, satisfying the conditions of Theorem 3. Then the covering numbers for the Dirichlet heat kernel $K_t$ on $M$ satisfy

$$\log \mathcal{N}(\varepsilon, \mathcal{F}) = O\!\left( \frac{V}{t^{d/2}} \left( \log \frac{1}{\varepsilon} \right)^{\frac{d+2}{2}} \right). \qquad (12)$$

Proof (sketch). By the lower bound in Theorem 3, the eigenvalues $\lambda_j = e^{-t \mu_j}$ of the heat kernel satisfy $\log \lambda_j \leq -t\, c_1 (j/V)^{2/d}$, and by the upper bound they satisfy $\log \lambda_j \geq -t\, c_2 ((j+1)/V)^{2/d}$. A calculation then shows that the defining condition of Theorem 2, $\lambda_{j+1} < (\lambda_1 \cdots \lambda_j / n^2)^{1/j}$, holds whenever

$$j \;\geq\; C \left( \frac{V}{t^{d/2}} \right)^{\frac{d}{d+2}} (\log n)^{\frac{d}{d+2}}$$

for a new constant $C$. Plugging this bound on $j_n^*$ into the expression for $\varepsilon_n^*$ in Theorem 2, and using $\sum_{i > j} \lambda_i = O(\lambda_j)$ for this eigenvalue decay, we have after some algebra that

$$\log \frac{1}{\varepsilon_n^*} = \Omega\!\left( \left( \frac{t^{d/2}}{V} \right)^{\frac{2}{d+2}} (\log n)^{\frac{2}{d+2}} \right).$$

Inverting the above equation in $\log n$ gives equation (12).

We note that Theorem 4 of [3] can be used to show that this bound does not, in fact, depend on $m$ and $\mathbf{x}$. Thus, for fixed $t$ the covering numbers scale as $\log \mathcal{N}(\varepsilon) = O\big( (\log \frac{1}{\varepsilon})^{\frac{d+2}{2}} \big)$, and for fixed $\varepsilon$ they scale as $\log \mathcal{N}(\varepsilon) = O(t^{-d/2})$ in the diffusion time $t$.
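The index $j_n^*$ appearing in Theorem 2 can be computed numerically for any given eigenvalue sequence. The sketch below is ours, not from the paper; the decay profile $\lambda_j = e^{-t\, j^{2/d}}$ is an assumed Li-Yau-style model with the constants set to 1, and the computation is done in log-space for stability:

```python
import math

def smallest_j(eigvals, n):
    """Smallest j with eigvals[j] < (eigvals[0]*...*eigvals[j-1] / n^2)^(1/j).
    eigvals must be non-increasing and positive; j is 1-based as in Theorem 2."""
    log_prod = 0.0
    for j, lam in enumerate(eigvals[:-1], start=1):
        log_prod += math.log(lam)                       # sum of log lambda_1..lambda_j
        threshold = (log_prod - 2.0 * math.log(n)) / j  # log of the geometric-mean bound
        if math.log(eigvals[j]) < threshold:            # eigvals[j] is lambda_{j+1}
            return j
    return len(eigvals)

# Heat-kernel-style eigenvalue decay on a d-dimensional manifold.
d, t = 2, 0.1
lams = [math.exp(-t * (j + 1) ** (2.0 / d)) for j in range(200)]
j_star = smallest_j(lams, n=1000)
```

Because the eigenvalues decay exponentially in $j^{2/d}$, the index grows only polylogarithmically in $n$, which is what drives the polylogarithmic covering-number bound of Theorem 4.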
4 Experiments
We compared the information diffusion kernel to linear and Gaussian kernels in the context of text classification using the WebKB dataset. The WebKB collection contains some
4000 university web pages that belong to five categories: course, faculty, student, project
and staff. A "bag of words" representation was used for all three kernels, using only the
word frequencies. For simplicity, all hypertext information was ignored. The information
diffusion kernel is based on the multinomial model, which is the correct model under the
Figure 2: Experimental results on the WebKB corpus, using SVMs for linear (dot-dashed)
and Gaussian (dotted) kernels, compared with the information diffusion kernel for the
multinomial (solid). Results for two classification tasks are shown, faculty vs. course (left)
and faculty vs. student (right). The curves shown are the error rates averaged over 20-fold
cross validation.
(incorrect) assumption that the word occurrences are independent. The maximum likelihood mapping $x \mapsto \hat\theta(x)$ was used to map a document to a multinomial model, simply normalizing the counts to sum to one.
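The maximum likelihood mapping used here is simply count normalization over a fixed vocabulary; a minimal sketch follows (ours, with a hypothetical toy vocabulary):

```python
from collections import Counter

def doc_to_multinomial(tokens, vocabulary):
    """Map a bag of words to the MLE multinomial parameters over a
    fixed vocabulary (word frequencies, normalized to sum to one)."""
    counts = Counter(t for t in tokens if t in vocabulary)
    total = sum(counts.values())
    return [counts[w] / total for w in vocabulary]

vocab = ["course", "faculty", "student", "exam"]
theta = doc_to_multinomial("course exam course student".split(), vocab)
```

Documents with sparse counts land on the boundary of the simplex, where some $\theta_i = 0$; the kernel (7) still evaluates there, since $\sqrt{0} = 0$, and this is exactly the boundary region the information metric emphasizes.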
Figure 2 shows test set error rates obtained using support vector machines for linear, Gaussian, and information diffusion kernels for two binary classification tasks: faculty vs. course
and faculty vs. student. The curves shown are the mean error rates over 20-fold cross validation and the error bars represent twice the standard deviation. For the Gaussian and information diffusion kernels we tested several values of the kernels' free parameter ($\sigma$ or $t$). The plots in Figure 2 use the best parameter value in the tested range.
Our results are consistent with previous experiments on this dataset [5], which have observed that the linear and Gaussian kernels result in very similar performance. However
the information diffusion kernel significantly outperforms both of them, almost always obtaining lower error rate than the average error rate of the other kernels. For the faculty
vs. course task, the error rate is halved. This result is striking because the kernels use identical representations of the documents, vectors of word counts (in contrast to, for example,
string kernels). We attribute this improvement to the fact that the information metric places
more emphasis on points near the boundary of the simplex.
5 Discussion
Kernel-based methods generally are "model free," and do not make distributional assumptions about the data that the learning algorithm is applied to. Yet statistical models offer many advantages, and thus it is attractive to explore methods that combine data models and purely discriminative methods for classification and regression. Our approach brings a new perspective to combining parametric statistical modeling with non-parametric discriminative learning. In this aspect it is related to the methods proposed by Jaakkola and Haussler [4]. However, the kernels we investigate here differ significantly from the Fisher kernel proposed in [4]. In particular, the latter is based on the Fisher score $\nabla_\theta \log p(x\,|\,\theta)$ evaluated at a single point $\hat\theta$ in parameter space, and in the case of an exponential family model it is given by a covariance $K(x, x') = \sum_i \left( x_i - E_{\hat\theta}[x_i] \right) \left( x'_i - E_{\hat\theta}[x'_i] \right)$. In contrast, information diffusion kernels are based on the full geometry of the statistical family, and yet are also invariant under reparameterization of the family.
Bounds on the covering numbers for information diffusion kernels were derived for the case of positive curvature, which apply to the special case of the multinomial. We note that the resulting bounds are essentially the same as those that would be obtained for the Gaussian kernel on the flat $d$-dimensional torus, which is the standard way of "compactifying" Euclidean space to get a Laplacian having only discrete spectrum; the results of [3] are formulated for the case $d = 1$, corresponding to the circle $S^1$. Similar bounds for general manifolds with curvature bounded below by a negative constant should also be attainable.
While information diffusion kernels are very general, they may be difficult to compute in particular cases; explicit formulas such as equations (8) and (9) for hyperbolic space are rare. To approximate an information diffusion kernel it may be attractive to use the parametrices and the geodesic distance $d(\theta, \theta')$ between points, as we have done for the multinomial. In cases where the distance itself is difficult to compute exactly, a compromise may be to approximate the distance between nearby points in terms of the Kullback-Leibler divergence, using the relation $d^2(\theta, \theta') \approx 2\, \mathrm{KL}(\theta \,\|\, \theta')$.

The primary "degree of freedom" in the use of information diffusion kernels lies in the specification of the mapping of data to model parameters, $x \mapsto \theta(x)$. For the multinomial, we have used the maximum likelihood mapping $x \mapsto \hat\theta(x)$, which is simple and well motivated. As indicated in Section 2, there are other possibilities. This remains an interesting area to explore, particularly for latent variable models.
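The Kullback-Leibler approximation to the squared geodesic distance mentioned above is easy to verify numerically for nearby multinomials; the following check (ours, with arbitrary toy distributions) compares the two quantities:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def geodesic(p, q):
    """Fisher geodesic distance on the multinomial simplex, eq. (5)."""
    s = min(1.0, sum(math.sqrt(a * b) for a, b in zip(p, q)))
    return 2.0 * math.acos(s)

# Two nearby points on the 2-simplex.
p = [0.40, 0.35, 0.25]
q = [0.41, 0.34, 0.25]
ratio = geodesic(p, q) ** 2 / (2.0 * kl(p, q))
```

For these nearby points the ratio is very close to 1, consistent with the second-order expansion of the KL divergence in the Fisher metric; the approximation degrades as the points move apart.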
Acknowledgements
This work was supported in part by NSF grant CCR-0122581.
References
[1] S. Amari and H. Nagaoka. Methods of Information Geometry, volume 191 of Translations of Mathematical Monographs. American Mathematical Society, 2000.
[2] A. Grigor'yan and M. Noguchi. The heat kernel on hyperbolic space. Bulletin of the London Mathematical Society, 30:643-650, 1998.
[3] Y. Guo, P. L. Bartlett, J. Shawe-Taylor, and R. C. Williamson. Covering numbers for
support vector machines. IEEE Trans. Information Theory, 48(1), January 2002.
[4] T. S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, volume 11, 1998.
[5] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext
categorisation. In Proceedings of the International Conference on Machine Learning
(ICML), 2001.
[6] R. E. Kass and P. W. Vos. Geometrical Foundations of Asymptotic Inference. Wiley
Series in Probability and Statistics. John Wiley & Sons, 1997.
[7] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input
spaces. In Proceedings of the International Conference on Machine Learning (ICML),
2002.
[8] P. Li and S.-T. Yau. Estimates of eigenvalues of a compact Riemannian manifold. In
Geometry of the Laplace Operator, volume 36 of Proceedings of Symposia in Pure
Mathematics, pages 205-239, 1980.
[9] R. Schoen and S.-T. Yau. Lectures on Differential Geometry, volume 1 of Conference
Proceedings and Lecture Notes in Geometry and Topology. International Press, 1994.
Learning to Detect Natural Image Boundaries
Using Brightness and Texture
David R. Martin Charless C. Fowlkes Jitendra Malik
Computer Science Division, EECS, U.C. Berkeley, Berkeley, CA 94720
{dmartin,fowlkes,malik}@cs.berkeley.edu
Abstract
The goal of this work is to accurately detect and localize boundaries in
natural scenes using local image measurements. We formulate features
that respond to characteristic changes in brightness and texture associated
with natural boundaries. In order to combine the information from these
features in an optimal way, a classifier is trained using human labeled
images as ground truth. We present precision-recall curves showing that
the resulting detector outperforms existing approaches.
1 Introduction
Consider the image patches in Figure 1. Though they lack global context, it is clear which
contain boundaries and which do not. The goal of this paper is to use features extracted
from the image patch to estimate the posterior probability of a boundary passing through
the center point. Such a local boundary model is integral to higher-level segmentation algorithms, whether based on grouping pixels into regions [21, 8] or grouping edge fragments
into contours [22, 16].
The traditional approach to this problem is to look for discontinuities in image brightness.
For example, the widely employed Canny detector [2] models boundaries as brightness step
edges. The image patches show that this is an inadequate model for boundaries in natural
images, due to the ubiquitous phenomenon of texture. The Canny detector will fire wildly
inside textured regions where high-contrast contours are present but no boundary exists. In
addition, it is unable to detect the boundary between textured regions when there is only a
subtle change in average image brightness.
These significant problems have led researchers to develop boundary detectors that explicitly model texture. While these work well on synthetic Brodatz mosaics, they have
problems in the vicinity of brightness edges. Texture descriptors over local windows that
straddle a boundary have different statistics from windows contained in either of the neighboring regions. This results in thin halo-like regions being detected around contours.
Clearly, boundaries in natural images are marked by changes in both texture and brightness.
Evidence from psychophysics [18] suggests that humans make combined use of these two
cues to improve detection and localization of boundaries. There has been limited work in
computational vision on addressing the difficult problem of cue combination. For example,
the authors of [8] associate a measure of texturedness with each point in an image in order
to suppress contour processing in textured regions and vice versa. However, their solution
is full of ad-hoc design decisions and hand chosen parameters.
The main contribution of this paper is to provide a more principled approach to cue combination by framing the task as a supervised learning problem. A large dataset of natural
images that have been manually segmented by multiple human subjects [10] provides the
ground truth label for each pixel as being on- or off-boundary. The task is then to model
the probability of a pixel being on-boundary conditioned on some set of locally measured
image features. This sort of quantitative approach to learning and evaluating boundary
detectors is similar to the work of Konishi et al. [7] using the Sowerby dataset of English countryside scenes. Our work is distinguished by an explicit treatment of texture and
brightness, enabling superior performance on a more diverse collection of natural images.
The outline of the paper is as follows. In Section 2 we describe the oriented energy and
texture gradient features used as input to our algorithm. Section 3 discusses the classifiers
we use to combine the local features. Section 4 presents our evaluation methodology along
with a quantitative comparison of our method to existing boundary detection methods. We
conclude in Section 5.
2 Image Features
2.1 Oriented Energy
In natural images, brightness edges are more than simple steps. Phenomena such as specularities, mutual illumination, and shading result in composite intensity profiles consisting
of steps, peaks, and roofs. The oriented energy (OE) approach [12] can be used to detect
and localize these composite edges [14]. OE is defined as:

OE_{θ,σ} = (I * f^e_{θ,σ})^2 + (I * f^o_{θ,σ})^2

where I is the image, and f^e_{θ,σ} and f^o_{θ,σ} are a quadrature pair of even- and odd-symmetric
filters at orientation θ and scale σ. Our even-symmetric filter is a Gaussian second derivative, and the corresponding odd-symmetric filter is its Hilbert transform. OE_{θ,σ} has maximum response for contours at orientation θ. We compute OE at 3 half-octave scales starting at
a fixed fraction of the image diagonal. The filters are elongated by a ratio of 3:1 along the putative boundary
direction.
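As a concrete illustration, the oriented-energy computation can be sketched in 1-D along the direction orthogonal to a candidate edge. The filter scale and support below are illustrative choices, not the paper's fitted values, and the Hilbert transform is computed with a simple FFT construction:

```python
import numpy as np

def hilbert_transform(f):
    """Discrete Hilbert transform via the FFT (imaginary part of the analytic signal)."""
    n = len(f)
    spectrum = np.fft.fft(f)
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(spectrum * h))

def quadrature_pair(sigma, radius):
    """Even filter: Gaussian second derivative; odd filter: its Hilbert transform."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    even = (x**2 / sigma**4 - 1.0 / sigma**2) * g
    even -= even.mean()                  # remove any residual DC response
    odd = hilbert_transform(even)        # quadrature partner of the even filter
    return even, odd

def oriented_energy(signal, sigma=2.0, radius=12):
    """OE = (I * f_e)^2 + (I * f_o)^2 along one spatial direction."""
    f_even, f_odd = quadrature_pair(sigma, radius)
    e = np.convolve(signal, f_even, mode="same")
    o = np.convolve(signal, f_odd, mode="same")
    return e**2 + o**2
```

The energy responds with a single peak at a step edge regardless of the edge's phase, which is the property that makes the quadrature formulation preferable to either raw filter output alone.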
2.2 Texture Gradient
We would like an oriented operator that measures the degree to which texture varies at
a location (x, y) in direction θ. A natural way to operationalize this is to consider a disc
of radius r centered on (x, y), divided in two along a diameter at orientation θ. We
can then compare the texture in the two half discs with some texture dissimilarity measure.
Oriented texture processing along these lines has been pursued by [19].
What texture dissimilarity measure should one use? There is an emerging consensus that
for texture analysis, an image should first be convolved with a bank of filters tuned to
various orientations and spatial frequencies [4, 9]. After filtering, a texture descriptor is
then constructed using the empirical distribution of filter responses in the neighborhood of
a pixel. This approach has been shown to be very powerful both for texture synthesis [5] as
well as texture discrimination [15].
Puzicha et al. [15] evaluate a wide range of texture descriptors in this framework. We
choose the approach developed in [8]. Convolution with a filter bank containing both even
and odd filters at multiple orientations as well as a radially symmetric center-surround
filter associates a vector of filter responses to every pixel. These vectors are clustered
using k-means and each pixel is assigned to one of the cluster centers, or textons. Texture
dissimilarities can then be computed by comparing the histograms of textons in the two
disc halves. Let L_i and R_i count how many pixels of texton type i occur in each half disc.
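The texton assignment step can be sketched as follows; a fixed set of centroids stands in for the k-means output, and the filter bank itself is omitted:

```python
import numpy as np

def assign_textons(responses, centers):
    """Map each pixel's filter-response vector to its nearest texton centroid.

    responses: (H, W, F) array of filter bank outputs per pixel.
    centers:   (K, F) array of k-means centroids (textons).
    Returns an (H, W) integer label map.
    """
    dists = np.linalg.norm(responses[:, :, None, :] - centers[None, None, :, :], axis=-1)
    return dists.argmin(axis=-1)
```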
Figure 1: Local image features, for boundary and non-boundary image patches. In each row, the first
panel shows the image patch. The following panels show feature profiles along the line marked in
each patch. The features are raw image intensity, raw oriented energy, localized oriented energy, raw
texture gradient, and localized texture gradient. The vertical line in each profile marks the patch
center. The challenge is to combine these features in order to detect and localize boundaries.
We define the texture gradient (TG) to be the chi-square distance between these two histograms:

TG(x, y, r, θ) = χ²(L, R) = (1/2) Σ_i (L_i - R_i)^2 / (L_i + R_i)

where L_i and R_i are the texton counts in the two disc halves. The texture gradient is computed
at each pixel (x, y) over 12 orientations and 3 half-octave scales starting at a fixed fraction
of the image diagonal.
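The half-disc comparison can be sketched directly from a texton label map. The split convention below (positive side of the diameter) and the histogram normalization are illustrative choices:

```python
import numpy as np

def texture_gradient(labels, cy, cx, radius, theta, n_textons):
    """Chi-square distance between texton histograms of two half discs."""
    h, w = labels.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - cy, xx - cx
    disc = dx**2 + dy**2 <= radius**2
    side = (np.cos(theta) * dx + np.sin(theta) * dy) > 0   # split along a diameter
    left = np.bincount(labels[disc & side], minlength=n_textons).astype(float)
    right = np.bincount(labels[disc & ~side], minlength=n_textons).astype(float)
    left /= max(left.sum(), 1.0)                           # normalize to histograms
    right /= max(right.sum(), 1.0)
    denom = left + right
    denom[denom == 0] = 1.0                                # avoid 0/0 for unused textons
    return 0.5 * float(np.sum((left - right) ** 2 / denom))
```

A disc straddling a texture boundary yields a large value; a disc inside a homogeneous texture yields a value near zero.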
2.3 Localization
The underlying function we are trying to learn is tightly peaked around the location of
image boundaries marked by humans. In contrast, Figure 1 shows that the features we have
discussed so far don?t have this structure. By nature of the fact that they pool information
over some support, they produce smooth, spatially extended outputs. The texture gradient
is particularly prone to this effect, since the texture in a window straddling the boundary is
distinctly different than the textures on either side of the boundary. This often results in a
wide plateau or even double peaks in the texture gradient.
Since each pixel is classified independently, these spatially extended features are particularly problematic as both on-boundary pixels and nearby off-boundary pixels will have
large OE and TG. In order to make this spatial structure available to the classifier we transform the raw OE and TG signals in order to emphasize local maxima. Given a feature f(x)
defined over the spatial coordinate x orthogonal to the edge orientation, consider the derived
feature f~(x) = f(x)/d(x), where d(x) = |f'(x)|/f''(x) is the first-order approximation
of the distance to the nearest maximum of f(x). We use the stabilized version

f~(x) = f(x) f''(x) / (|f'(x)| + ε f(x))    (1)

with ε chosen to optimize the performance of the feature. By incorporating the
localization term, f~(x) will have narrower peaks than the raw f(x).

To robustly estimate the directional derivatives and localize the peaks, we fit a cylindrical
parabola over a circular window of radius r centered at each pixel. The coefficients of
the quadratic fit at² + bt + c provide the signal derivatives directly, so the transform
above becomes 2ac / (|b| + εc), where a and c require half-wave rectification.¹
This transformation
is applied to the oriented energy and texture gradient signals at each
orientation and scale separately. In order to set ε and r, we optimized the performance
of each feature independently with respect to the training data.²
Columns 4 and 6 in Figure 1 show the results of applying this transformation which clearly
has the effect of reducing noise and tightly localizing the boundaries. Our final feature set
consists of these localized OE and TG signals, each at three scales. This yields a 6-element
feature vector at 12 orientations at each pixel.
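The peak-sharpening transform can be sketched as follows. The window radius and ε are illustrative values, and the sliding parabola fit f ≈ at² + bt + c supplies the derivative estimates; this is a sketch of the idea, not the paper's fitted configuration:

```python
import numpy as np

def localize(f, radius=3, eps=0.1):
    """Sharpen the peaks of a smooth, non-negative 1-D feature signal.

    Fits a parabola a*t^2 + b*t + c in a sliding window; the response
    c * (-2a) / (|b| + eps*c) is large only near local maxima (a < 0, b ~ 0).
    """
    n = len(f)
    out = np.zeros(n)
    t = np.arange(-radius, radius + 1, dtype=float)
    for i in range(radius, n - radius):
        a, b, c = np.polyfit(t, f[i - radius:i + radius + 1], 2)
        curvature = max(-2.0 * a, 0.0)   # half-wave rectify: keep maxima only
        height = max(c, 0.0)
        out[i] = height * curvature / (abs(b) + eps * height + 1e-12)
    return out
```

Away from a peak the gradient term |b| in the denominator suppresses the response, so a broad plateau in the input collapses to a narrow ridge in the output.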
3 Cue Combination Using Classifiers
We would like to combine the cues given by the local feature vector in order to estimate
the posterior probability of a boundary at each image location (x, y) and orientation θ. Previous work
on learning boundary models includes [11, 7]. We consider several parametric and non-parametric models, covering a range of complexity and computational cost. Even the simplest models
are able to capture the complementary information in the 6 features. The more powerful
classifiers have the potential to capture non-linear cue "gating" effects. For example, one
may wish to ignore brightness edges inside high-contrast textures where OE is high and
TG is low. These are the classifiers we use:
Density Estimation Adaptive bins are provided by vector quantization using k-means.
Each centroid provides the density estimate of its Voronoi cell as the fraction of on-boundary samples in the cell. We use k=128 and average the estimates from 10 runs.
Classification Trees The domain is partitioned hierarchically. Top-down axis-parallel
splits are made so as to maximize the information gain. A 5% bound on the error of
the density estimate is enforced by splitting cells only when both classes have 400 points
present.
Logistic Regression This is the simplest of our classifiers, and the one perhaps most easily
replicated by neurons in the visual cortex. Initialization is random, and convergence is fast
and reliable by maximizing the likelihood. We also consider two variants: quadratic combinations of features, and boosting using the confidence-rated generalization of AdaBoost
by Schapire and Singer [20]. No more than 10 rounds of boosting are required for this
problem.
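A minimal logistic-regression cue combiner of the kind described here can be sketched with plain gradient ascent on the log-likelihood; the training values in the usage are synthetic, not boundary data:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, iters=2000):
    """Fit P(boundary | features) = sigmoid(w . x + b) by gradient ascent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(X)       # gradient of the log-likelihood
    return w

def boundary_posterior(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

In the paper's setting X would hold the 6-element localized feature vectors and y the on-/off-boundary labels derived from the human segmentations.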
Hierarchical Mixtures of Experts The HME model of Jordan and Jacobs [6] is a mixture
model where both the components and mixing coefficients are fit by logistic functions. We
¹ Windowed parabolic fitting is known as 2nd-order Savitzky-Golay filtering. We also considered
Gaussian derivative filters {G, G', G''} to estimate {f(x), f'(x), f''(x)}, with nearly identical results.
² The fitted values are ε = {0.1, 0.075, 0.013} and r = {2.1, 2.5, 3.1} for OE, and ε = {.057, .016, .005}
and r = {6.66, 9.31, 11.72} for TG. r is measured in pixels.
[Figure 2 plot data. Left panel, "Raw Features": precision vs. recall curves with F-measures all F=.65, oe0 F=.59, oe1 F=.60, oe2 F=.61, tg0 F=.64, tg1 F=.64, tg2 F=.61. Right panel, "Localized Features": all F=.67, oe0 F=.60, oe1 F=.62, oe2 F=.63, tg0 F=.65, tg1 F=.65, tg2 F=.63.]
Figure 2: Performance of raw (left) and localized features (right). The precision and recall axes are
described in Section 4. Curves towards the top (lower noise) and right (higher accuracy) are more
desirable. Each curve is scored by the F-measure, the value of which is shown in the legend. In all
the precision-recall graphs in this paper, the maximum F-measure occurs at a recall of approximately
75%. The left plot shows the performance of the raw OE and TG features using the logistic regression
classifier. The right plot shows the performance of the features after applying the localization process
of Equation 1. It is clear that the localization function greatly improves the quality of the individual
features, especially the texture gradient. The top curve in each graph shows the performance of the
features in combination. While tuning each feature's (ε, r) parameters individually is suboptimal,
overall performance still improves.
consider small binary trees up to a depth of 3 (8 experts). The model is initialized in a
greedy, top-down manner and fit with EM.
Support Vector Machines We use the SVM package libsvm [3] to do soft-margin classification using Gaussian kernels. The optimal values of the two tuned parameters were both 0.2.
The ground truth boundary data is based on the dataset of [10] which provides 5-6 human
segmentations for each of 1000 natural images from the Corel image database. We used
200 images for training and algorithm development. The 100 test images were used only
to generate the final results for this paper. The authors of [10] show that the segmentations
of a single image by the different subjects are highly consistent, so we consider all human-marked boundaries valid. We declare an image location (x, y, θ) to be on-boundary if it is
within 2 pixels and 30 degrees of any human-marked boundary. The remainder
are labeled off-boundary.
(100M samples for our 240x160-pixel images), and poor separability. The maximum feasible amount of data, uniformly sampled, is given to each classifier. This varies from 50M
samples for density estimation to 20K samples for the SVM. Note that a high degree of
class overlap in any local feature space is inevitable because the human subjects make use
of both global constraints and high-level information to resolve locally ambiguous boundaries.
4 Results
The output of each classifier is a set of oriented images, which provide the probability of
a boundary at each image location (x, y, θ) based on local information. For several of the
[Figure 3 plot data. Panel (a), "Feature Combinations": precision vs. recall with all F=.67, oe2+tg1 F=.67, tg* F=.66, oe* F=.63. Panel (b), "Classifiers": Density Estimation F=.68, Classification Tree F=.68, Logistic Regression F=.67, Quadratic LR F=.68, Boosted LR F=.68, Hier. Mix. of Experts F=.68, Support Vector Machine F=.66.]
Figure 3: Precision-recall curves for (a) different feature combinations, and (b) different classifiers.
The left panel shows the performance of different combinations of the localized features using the
logistic regression classifier: the 3 OE features (oe*), the 3 TG features (tg*), the best performing
single OE and TG features (oe2+tg1), and all 6 features together. There is clearly independent information in each feature, but most of the information is captured by the combination of one OE and
one TG feature. The right panel shows the performance of different classifiers using all 6 features.
All the classifiers achieve similar performance, except for the SVM which suffers from the poor separation of the data. Classification trees performs the best by a slim margin. Based on performance,
simplicity, and low computation cost, we favor the logistic regression and its variants.
classifiers we consider, the image provides actual posterior probabilities, which is particularly appropriate for the local measurement model in higher-level vision applications.
For the purpose of evaluation, we take the maximum over orientations.
In order to evaluate the boundary model against the human ground truth, we use the
precision-recall framework, a standard evaluation technique in the information retrieval
community [17]. It is closely related to the ROC curves used for by [1] to evaluate boundary models. The precision-recall curve captures the trade-off between accuracy and noise
as the detector threshold is varied. Precision is the fraction of detections which are true positives, while recall is the fraction of positives that are detected. These are computed using
a distance tolerance of 2 pixels to allow for small localization errors in both the machine
and human boundary maps.
The precision-recall curve is particularly meaningful in the context of boundary detection
when we consider applications that make use of boundary maps, such as stereo or object
recognition. It is reasonable to characterize higher level processing in terms of how much
true signal is required to succeed, and how much noise can be tolerated. Recall provides
the former and precision the latter. A particular application will define a relative cost
between these quantities, which focuses attention at a specific point on the precision-recall
curve. The F-measure, defined as F = PR / (αR + (1 - α)P), captures this trade-off. The
location of the maximum F-measure along the curve provides the optimal threshold given
α, which we set to 0.5 in our experiments.
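With precision and recall computed from true/false positive counts, the F-measure follows directly; at α = 0.5 it reduces to the usual harmonic mean of precision and recall:

```python
def precision_recall_f(tp, fp, fn, alpha=0.5):
    """Precision, recall, and the weighted-harmonic-mean F-measure."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = precision * recall / (alpha * recall + (1.0 - alpha) * precision)
    return precision, recall, f
```

Sweeping the detector threshold traces out the precision-recall curve; reporting the maximum F summarizes the whole curve with a single number.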
Figure 2 shows the performance of the raw and localized features. This provides a clear
quantitative justification for the localization process described in Section 2.3. Figure 3a
shows the performance of various linear combinations of the localized features. The combination of multiple scales improves performance, but the largest gain comes from using
OE and TG together.
[Figure 4 plot data. Panel (a), "Detector Comparison": precision vs. recall with Human F=.75, Us F=.67, Nitzberg F=.65, Canny F=.57. Panel (b), "F-Measure vs. Tolerance": F-measure as a function of the distance tolerance (1-3 pixels) for Human, Us, Nitzberg, and Canny.]
Figure 4: The left panel shows precision-recall curves for a variety of boundary detection schemes,
along with the precision and recall of the human segmentations when compared with each other. The
right panel shows the F-measure of each detector as the distance tolerance for measuring precision
and recall varies. We take the Canny detector as the baseline due to its widespread use. Our detector
outperforms the learning-based Nitzberg detector proposed by Konishi et al. [7], but there is still a
significant gap with respect to human performance.
The results presented so far use the logistic regression classifier. Figure 3b shows the performance of the 7 different classifiers on the complete feature set. The most obvious trend
is that they all perform similarly. The simple non-parametric models (the classification
tree and density estimation) perform the best, as they are most able to make use of the
large quantity of training data to provide unbiased estimates of the posterior. The plain
logistic regression model performs extremely well, with the variants of logistic regression
(quadratic, boosted, and HME) performing only slightly better. The SVM is a disappointment because of its lower performance, high computational cost, and fragility. These
problems result from the non-separability of the data, which requires 20% of the training
examples to be used as support vectors. Balancing considerations of performance, model
complexity, and computational cost, we favor the logistic model and its variants.³
Figure 4 shows the performance of our detector compared to two other approaches. Because of its widespread use, MATLAB?s implementation of the classic Canny [2] detector
forms the baseline. We also consider the Nitzberg detector [13, 7], since it is based on a
similar supervised learning approach, and Konishi et al. [7] show that it outperforms previous methods. To make the comparisons fair, the parameters of both Canny and Nitzberg
were optimized using the training data. For Canny, this amounts to choosing the optimal
scale. The Nitzberg detector generates a feature vector containing eigenvalues of the 2nd
moment matrix; we train a classifier on these 2 features using logistic regression.
Figure 4 also shows the performance of the human data as an upper-bound for the algorithms. The human precision-recall points are computed for each segmentation by comparing it to the other segmentations of the same image. The approach of this paper is a
clear improvement over the state of the art in boundary detection, but it will take the addition of high-level and global information to close the gap between the machine and human
performance.
³ The fitted coefficients for the logistic are {.088, -.029, .019} for OE and {.31, .26, .27} for TG,
with an offset of -2.79. The features have been separately normalized to have unit variance.
5 Conclusion
We have defined a novel set of brightness and texture cues appropriate for constructing a
local boundary model. By using a very large dataset of human-labeled boundaries in natural
images, we have formulated the task of cue combination for local boundary detection as
a supervised learning problem. This approach models the true posterior probability of a
boundary at every image location and orientation, which is particularly useful for higherlevel algorithms. Based on a quantitative evaluation on 100 natural images, our detector
outperforms existing methods.
References
[1] K. Bowyer, C. Kranenburg, and S. Dougherty. Edge detector evaluation using empirical ROC
curves. Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 1999.
[2] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Analysis and
Machine Intelligence, 8:679-698, 1986.
[3] C. Chang and C. Lin. LIBSVM: a library for support vector machines, 2001. Software available
at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[4] I. Fogel and D. Sagi. Gabor filters as texture discriminator. Bio. Cybernetics, 61:103-13, 1989.
[5] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of
SIGGRAPH '95, pages 229-238, 1995.
[6] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural
Computation, 6:181-214, 1994.
[7] S. Konishi, A. L. Yuille, J. Coughlan, and S. C. Zhu. Fundamental bounds on edge detection:
an information theoretic evaluation of different edge cues. Proc. IEEE Conf. Comput. Vision
and Pattern Recognition, pages 573-579, 1999.
[8] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. Int'l. Journal of Computer Vision, 43(1):7-27, June 2001.
[9] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. J.
Optical Society of America, 7(2):923-32, May 1990.
[10] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images
and its application to evaluating segmentation algorithms and measuring ecological statistics.
In Proc. 8th Int'l. Conf. Computer Vision, volume 2, pages 416-423, July 2001.
[11] M. Meila and J. Shi. Learning segmentation by random walks. In NIPS, 2001.
[12] M. C. Morrone and D. C. Burr. Feature detection in human vision: a phase dependent energy
model. Proc. R. Soc. Lond. B, 235:221-45, 1988.
[13] M. Nitzberg, D. Mumford, and T. Shiota. Filtering, Segmentation and Depth. Springer-Verlag,
1993.
[14] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In
Proc. Int. Conf. Computer Vision, pages 52-7, Osaka, Japan, Dec 1990.
[15] J. Puzicha, T. Hofmann, and J. Buhmann. Non-parametric similarity measures for unsupervised
texture segmentation and image retrieval. In Computer Vision and Pattern Recognition, 1997.
[16] X. Ren and J. Malik. A probabilistic multi-scale model for contour completion based on image
statistics. Proc. 7th Europ. Conf. Comput. Vision, 2002.
[17] C. Van Rijsbergen. Information Retrieval, 2nd ed. Dept. of Comp. Sci., Univ. of Glasgow,
1979.
[18] J. Rivest and P. Cavanagh. Localizing contours defined by more than one attribute. Vision
Research, 36(1):53-66, 1996.
[19] Y. Rubner and C. Tomasi. Coalescing texture descriptors. ARPA Image Understanding Workshop, 1996.
[20] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions.
Machine Learning, 37(3):297-336, 1999.
[21] Z. Tu, S. Zhu, and H. Shum. Image segmentation by data driven Markov chain Monte Carlo. In
Proc. 8th Int'l. Conf. Computer Vision, volume 2, pages 131-138, July 2001.
[22] L. R. Williams and D. W. Jacobs. Stochastic completion fields: a neural model of illusory contour
shape and salience. In Proc. 5th Int. Conf. Computer Vision, pages 408-15, June 1995.
far:2 emphasize:1 ignore:1 global:3 conclude:1 belongie:1 morrone:1 don:1 learn:1 nature:1 fragility:1 ca:1 constructing:1 domain:1 main:1 hierarchically:1 noise:4 scored:1 profile:3 fair:1 complementary:1 quadrature:1 roc:2 precision:19 explicit:1 wish:1 heeger:1 comput:3 down:2 operationalize:1 specific:1 showing:1 gating:1 offset:1 svm:4 evidence:1 grouping:2 exists:1 incorporating:1 quantization:1 workshop:1 texture:38 dissimilarity:3 illumination:1 conditioned:1 margin:2 gap:2 specularities:1 visual:1 contained:1 chang:1 springer:1 truth:4 extracted:1 succeed:1 goal:2 marked:5 narrower:1 formulated:1 towards:1 feasible:1 change:3 except:1 reducing:1 uniformly:1 meaningful:1 preattentive:1 puzicha:2 mark:1 support:5 latter:1 dept:1 phenomenon:2 |
How to Combine Color and Shape
Information for 3D Object Recognition:
Kernels do the Trick
B. Caputo
Smith-Kettlewell Eye Research Institute,
2318 Fillmore Street,
94115 San Francisco, California, USA
[email protected]
Gy. Dorko
Department of Computer Science,
Chair for Pattern Recognition,
University of Erlangen-Nuremberg,
[email protected]
Abstract
This paper presents a kernel method that allows combining color
and shape information for appearance-based object recognition. It
doesn't require defining a new common representation, but uses the
power of kernels to combine different representations together in an
effective manner. These results are achieved using results of statistical mechanics of spin glasses combined with Markov random fields
via kernel functions. Experiments show an increase in recognition
rate up to 5.92% with respect to conventional strategies.
1
Introduction
Consider the two cars in Figure 1. They look very similar, but this wouldn't be
the case if we looked at color pictures: as the left car is yellow and the right
car is red, we would realize at a first glance that they are different. This simple
example shows that color and shape information are both important cues for object
recognition. In spite of this, just a few systems employ both. This is because most
of the representations proposed in the literature aren't suitable for both types of information
[5, 11, 13, 2]. Some authors tackled this problem building up new representations,
containing both color and shape information; these approaches show very good performance [7, 12, 6]. However, this strategy has two important drawbacks:
- both types of information must be used always.
Although there are many cases where it is convenient to have both, a huge literature shows that color only, or shape only representations work very well for many
applications [9, 13, 11, 2]. A new, common representation doesn't always permit to
use just color or just shape information alone, depending on the task considered;
- the dimension of the feature vector.
If the new representation brings as much information as separate representations
do, then we must expect it to have a higher dimensionality than each separate
Figure 1: An example of objects similar with respect to shape but not with respect
to color (the left car is yellow while the right car is red).
representation alone, with all the risks of a curse of dimensionality effect. If the
dimension of the new representation vector is kept under control, we can expect
that the representation contains less information than the single ones, with a possible
decrease in effectiveness.
Our goal in this paper is to present a system that uses both types of information
while keeping them distinct, allowing the flexibility to use the information sometimes combined, sometimes separated, depending on the application considered. We
achieve this goal by focusing on how two given shape and color representations can be combined as they are, rather than by defining a new representation.
We obtain this using Spin Glass-Markov Random Fields (SG-MRF), a new kernel
method that integrates results of statistical physics of spin glasses with Gibbs probability distributions via nonlinear kernel mapping. SG-MRFs have been used for
robust appearance-based object recognition with very good results, using a kernelized Hopfield energy [3]. Here we extend SG-MRF to a new SG-like energy function,
inspired by the ultrametric properties of the SG phase space. The structure of this
energy provides a natural framework for combining shape and color representations
together, without defining a new common representation (such as a concatenated
one, see for instance [7]). This approach presents two main advantages:
- it permits us to use existing and well tested representations both for shape
and color information;
- it permits us to use this knowledge in a flexible manner, depending on the
task considered.
To the best of our knowledge, there are no previous similar approaches to this
problem. Experimental results show the effectiveness of the new proposed kernel
method. The paper is organized as follows: section 2 defines the probabilistic
framework for object recognition, section 3 reviews SG-MRF and section 4 presents
the new energy function and how it can be used for combining together color and
shape information. Section 5 presents experiments that show the effectiveness of our
approach, compared to other conventional strategies (NNe, χ² and SVM [10, 14]).
The paper concludes with a summary discussion.
2
Probabilistic Appearance-based Object Recognition
Probabilistic appearance-based object recognition methods consider images as random feature vectors. Let x = [x_ij], i = 1, ..., N, j = 1, ..., M, be an M x N image. We will consider each image as a random feature vector x ∈ R^{MN}. Assume we have k different classes Ω_1, Ω_2, ..., Ω_k of objects, and that for each object is given a set of n_j data samples, d_j = {x_1^j, x_2^j, ..., x_{n_j}^j}, j = 1, ..., k. We will assign each object to a pattern class Ω_1, Ω_2, ..., Ω_k. How the object class Ω_j is represented, given a set of data samples d_j (relative to that object class), varies for different appearance-based approaches: it can consider shape information only, color information only, or both. This is equivalent to considering a set of features {h_1^j, h_2^j, ..., h_{n_j}^j}, j = 1, ..., k, where each feature vector h_i^j is computed from the image x_i^j: h_i^j = T(x_i^j), h_i^j ∈ G ≡ R^m. Assuming that the data samples d_j are
a sufficient statistic for the pattern class Ω_j, the goal will be to estimate the probability distribution P_{Ω_j}(h) that has generated them. Then, given a test image x
and its associated feature vector h, the decision will be made using a Maximum A
Posteriori (MAP) classifier:
j* = argmax_j P_{Ω_j}(h) = argmax_j P(Ω_j|h) = argmax_j P(h|Ω_j) P(Ω_j),   (1)

using Bayes' rule. The P(h|Ω_j) are the Likelihood Functions (LFs) and the P(Ω_j) are the
prior probabilities of the classes. In the rest of the paper we will assume that the
prior P(Ω_j) is the same for all object classes; thus the Bayes classifier (1) simplifies
to

j* = argmax_j P(h|Ω_j).   (2)
A possible strategy for modeling P(h|Ω_j) is to use Gibbs distributions within a
Markov Random Field (MRF) framework. The MRF joint probability distribution
is given by

P(h|Ω_j) = (1/Z) exp(−E(h|Ω_j)),   Z = Σ_{h} exp(−E(h|Ω_j)).   (3)

The normalizing constant Z is called the partition function, and E(h|Ω_j) is the
energy function. Using MRF modeling for appearance-based object recognition, eq
(2) will become

j* = argmax_j exp(−E(h|Ω_j)) = argmin_j E(h|Ω_j).   (4)
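The practical consequence of eqs. (3)-(4) is that classification never touches the partition function: with equal priors, maximizing exp(−E(h|Ω_j)) is the same as minimizing the energy, because Z cancels. A minimal sketch (the energy values below are hypothetical):

```python
import math

def map_classify(energies):
    """MAP classification under Gibbs likelihoods with equal priors.

    P(h | Omega_j) is proportional to exp(-E(h | Omega_j)), so
    argmax_j P(h | Omega_j) = argmin_j E(h | Omega_j): the partition
    function Z cancels and never needs to be computed.
    """
    return min(range(len(energies)), key=lambda j: energies[j])

# Hypothetical energies of one feature vector under three object classes.
energies = [4.2, 1.3, 7.0]
posteriors = [math.exp(-e) for e in energies]  # unnormalized P(h | Omega_j)
assert posteriors.index(max(posteriors)) == map_classify(energies)
print(map_classify(energies))  # 1: lowest energy = highest likelihood
```

Picking the smallest energy and picking the largest unnormalized likelihood give the same class, which is why the energy alone suffices for recognition.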
Only a few MRF approaches have been proposed for high level vision problems
such as object recognition [8], due to the modeling problem for MRF on irregular
sites (for a detailed discussion about this point, we refer the reader to [3]). Spin
Glass-Markov Random Fields overcome this limitation and can be effectively used
for robust appearance-based object recognition [3]. The next sections review SG-MRF
and introduce a new energy function that allows us to combine shape-only and color-only
representations in a common probabilistic framework.
3
Spin Glass-Markov Random Fields
Consider k object classes Ω_1, Ω_2, ..., Ω_k, and for each object a set of n_j data samples, d_j = {x_1^j, ..., x_{n_j}^j}, j = 1, ..., k. We will suppose to extract, from each data
sample d_j, a set of features {h_1^j, ..., h_{n_j}^j}. For instance, h_i^j can be a color histogram
computed from x_i^j. The SG-MRF probability distribution is given by

P(h|Ω_j) = (1/Z) exp(−E_SGMRF(h|Ω_j)),   (5)
Figure 2: Hierarchical structure induced by the ultrametric energy function.
where E_SGMRF(h|Ω_j) is a kernelized spin glass energy function. The most general
SG energy is given by [1]

E = − Σ_{(i,j)} J_ij s_i s_j,   i, j = 1, ..., N,   (6)

where the s_i are random variables taking values in [−1, +1], s = (s_1, ..., s_N) is a
configuration and J = [J_ij], (i, j) = 1, ..., N, is the connection matrix. When J_ij
is given by the Hopfield prescription

J_ij = (1/N) Σ_{μ=1}^{p} ξ_i^(μ) ξ_j^(μ),   (7)

with {ξ^(μ)}_{μ=1}^{p} given configurations of the system (prototypes) having the following
properties: (a) ξ^(μ) ⊥ ξ^(ν), ∀ μ ≠ ν; (b) p = aN, a ≤ 0.14, N → ∞, then it can be
demonstrated that E_SGMRF becomes [3]
E_SGMRF(h|Ω_j) = − Σ_{μ=1}^{p_j} [K(h, h^(μ_j))]^2,   (8)

where the function K(h, h^(μ_j)) is a Generalized Gaussian kernel [14]:

K(x, y) = exp{−ρ d_{a,b}(x, y)},   (9)

and {h^(μ_j)}_{μ=1}^{p_j}, j ∈ [1, k], are the prototypes selected (according to a chosen ansatz, [3])
from the training data. The number of prototypes per class must be finite, and
they must satisfy the condition K(h^(i), h^(l)) = 0, for all i, l = 1, ..., p_j, i ≠ l, and
j = 1, ..., k. Note that SG-MRFs are defined on features rather than on raw pixel
data. The sites are fully connected, which results in learning the neighborhood system
from the training data instead of choosing it heuristically. A key characteristic of the
model is that in SG-MRF the functional form of the energy is given by construction.
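The two energies above can be sketched concretely. The prototype configurations, the toy histograms, and the exact distance family d_{a,b} below are illustrative assumptions (d_{a,b}(x, y) = Σ_i |x_i^a − y_i^a|^b is one common choice for histogram features; the paper does not spell it out here):

```python
import math

# --- Hopfield prescription (7): connections from orthogonal prototypes ---
N = 4
xi = [[1, 1, -1, -1],   # two illustrative prototype configurations,
      [1, -1, 1, -1]]   # orthogonal as required by property (a)
J = [[sum(p[i] * p[j] for p in xi) / N for j in range(N)] for i in range(N)]

def spin_energy(s):
    """Full double-sum version of eq. (6); the diagonal terms only
    shift it by a constant relative to the sum over distinct pairs."""
    return -0.5 * sum(J[i][j] * s[i] * s[j]
                      for i in range(N) for j in range(N))

# Each stored prototype sits at a low-energy configuration:
print(spin_energy(xi[0]), spin_energy(xi[1]))  # -2.0 -2.0

# --- Kernelized energy (8) with the generalized Gaussian kernel (9) ---
def gen_gauss_kernel(x, y, rho=1.0, a=1.0, b=2.0):
    """K(x, y) = exp(-rho * d_ab(x, y)); a = 1, b = 2 recovers the
    ordinary Gaussian kernel (our assumed distance family)."""
    return math.exp(-rho * sum(abs(u ** a - v ** a) ** b
                               for u, v in zip(x, y)))

def sgmrf_energy(h, prototypes, **kw):
    """Eq. (8): E_SGMRF(h | Omega_j) = -sum_mu K(h, h^(mu_j))^2."""
    return -sum(gen_gauss_kernel(h, p, **kw) ** 2 for p in prototypes)

h = [0.5, 0.3, 0.2]                              # toy normalized histogram
prototypes = [[0.5, 0.3, 0.2], [0.1, 0.1, 0.8]]  # toy class prototypes
print(sgmrf_energy(h, prototypes))  # about -1.33, dominated by the match
```

The kernelized energy behaves like the spin-glass one: a feature vector close to one of the class prototypes drives the energy down, which is what makes the argmin-energy classifier of eq. (4) work.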
4
Ultrametric Spin Glass-Markov Random Fields
Consider the energy function (6) with the following connection matrix:

J_ij = (1/N) Σ_{μ=1}^{p} ξ_i^(μ) ξ_j^(μ) (1 + Σ_{ν=1}^{q_μ} η_i^(μν) η_j^(μν))
     = (1/N) Σ_{μ=1}^{p} ξ_i^(μ) ξ_j^(μ) + (1/N) Σ_{μ=1}^{p} Σ_{ν=1}^{q_μ} ξ_i^(μν) ξ_j^(μν),   (10)

with ξ_i^(μν) = ξ_i^(μ) η_i^(μν). This energy induces a hierarchical organization of the stored
prototypes ([1], see Figure 2). The set of prototypes {ξ^(μ)}_{μ=1}^{p} are stored at the first
level of the hierarchy and are usually called the ancestors. Each of them will have
q_μ descendants {ξ^(μν)}_{ν=1}^{q_μ}. The parameter η^(μν) measures the similarity between
ancestors and descendants. The first term in eq (10), right, is the Hopfield energy
(6)-(7); the second is a new term that allows us to store as prototypes patterns
correlated with the {ξ^(μ)}_{μ=1}^{p}; this is the case if we want to store, as separate sets
of prototypes, shape-only and color-only representations computed from the same
view. This energy will have p + Σ_{μ=1}^{p} q_μ minima, of which p absolute (ancestor level)
and Σ_{μ=1}^{p} q_μ local (descendant level). For a complete discussion of the properties
of this energy, we refer the reader to [1, 4].
Here we are interested in using this energy in the SG-MRF framework shown in
Section 3. To this purpose, we show that the energy (6), with the connection
matrix (10), can be written as a function of scalar products between configurations
[4]:

E = − (1/2) Σ_{i,j} [ (1/N) Σ_{μ=1}^{p} ξ_i^(μ) ξ_j^(μ) (1 + Σ_{ν=1}^{q_μ} η_i^(μν) η_j^(μν)) ] s_i s_j
  = − (1/2N) [ Σ_{μ=1}^{p} (ξ^(μ) · s)^2 + Σ_{μ=1}^{p} Σ_{ν=1}^{q_μ} (ξ^(μν) · s)^2 ].   (11)

The ultrametric energy (11) can be kernelized as done for the Hopfield energy and
thus can be used in an MRF framework. We call the resulting new MRF model
Ultrametric Spin Glass-Markov Random Fields (USG-MRF).
Now, consider the probabilistic appearance-based framework described in section 2.
Given a set of data samples d_j for each object class Ω_j, j = 1, ..., k, we will extract
two kinds of feature vectors, {h_{s,i}^j}_{i=1}^{n_j} containing shape information and {h_{c,i}^j}_{i=1}^{n_j}
containing color information. USG-MRF provides a straightforward manner to use
the Bayes classifier (2) using both these representations separately. We will consider
the color features {h_{c,i}^j}_{i=1}^{n_j} at the ancestor level and the shape features {h_{s,i}^j}_{i=1}^{n_j}
at the descendant level. The USG-MRF energy function will be

E_USGMRF = − Σ_{μ=1}^{p_j} [K_c(h_c^(μ), h_c)]^2 − Σ_{μ=1}^{p_j} Σ_{ν=1}^{q_μ} [K_s(h_s^(μν), h_s)]^2,   (12)

where {h_c^(μ)}_{μ=1}^{p_j} will be the set of prototypes relative to the ancestor level, and
{h_s^(μν)}_{ν=1}^{q_μ}, μ = 1, ..., p_j, the set of prototypes at the descendant level. These
prototypes are selected from the training data as described in section 3 for SG-MRF. K_c is the generalized Gaussian kernel at the ancestor level, and K_s is the
generalized Gaussian kernel at the descendant level. We stress that the kernel must
be the same at each level of the hierarchy, but can be different between levels (that
is to say, between ancestor and descendant). The Bayes classifier based on USG-MRF
will be

j* = argmin_j E_USGMRF(h_c, h_s|Ω_j).   (13)

Note that the parametric form of the kernels is known (eq (9)); thus, when (U)SG-MRF
is used in a Bayes classifier for classification purposes, it permits learning the kernel
to be used from the training data, with a leave-one-out strategy.
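The two-level classifier of eqs. (12)-(13) can be sketched on toy features. The kernel (a plain Gaussian, the a = 1, b = 2 member of the family in eq. (9)), the prototype sets and every numeric value below are illustrative assumptions, not the paper's actual model:

```python
import math

def k_gauss(x, y, rho):
    """Gaussian kernel: the a = 1, b = 2 member of the generalized
    Gaussian family of eq. (9)."""
    return math.exp(-rho * sum((u - v) ** 2 for u, v in zip(x, y)))

def usgmrf_energy(h_color, h_shape, color_protos, shape_protos,
                  rho_c=1.0, rho_s=1.0):
    """Two-level energy of eq. (12): color prototypes at the ancestor
    level, shape prototypes at the descendant level, each level with
    its own kernel parameters."""
    e = -sum(k_gauss(h_color, p, rho_c) ** 2 for p in color_protos)
    e -= sum(k_gauss(h_shape, p, rho_s) ** 2
             for descendants in shape_protos for p in descendants)
    return e

def classify(h_color, h_shape, model):
    """Bayes classifier of eq. (13): pick the class of minimum energy."""
    energies = [usgmrf_energy(h_color, h_shape, cp, sp) for cp, sp in model]
    return energies.index(min(energies))

# Toy two-class model; each class has one color ancestor with one
# shape descendant (all feature values hypothetical).
model = [([[0.9, 0.1]], [[[0.2, 0.8]]]),
         ([[0.1, 0.9]], [[[0.8, 0.2]]])]
print(classify([0.85, 0.15], [0.25, 0.75], model))  # 0
print(classify([0.15, 0.85], [0.75, 0.25], model))  # 1
```

Because the color and shape terms are separate sums with separate kernels, each cue can pull the decision independently, which is the flexibility argued for in the text.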
5
Experiments
In order to show the effectiveness of USG-MRF for appearance-based object recognition, we perform several sets of experiments. All of them were run on the COIL
database [9]; it consists of 7200 color images of 100 objects (72 views per object);
each image is of 128 x 128 pixels. The images were obtained by placing the objects
on a turntable and taking a view every 5°. In all the experiments we performed,
the training set consisted of 12 views per object (one every 30°). The remaining
views constituted the test set.
Among the many representations proposed in literature, we chose a shape only
and color only representation, and we ran experiments using these representations
separated, concatenated together in a common feature vector and combined together
in the USG-MRF. The purpose of these experiments is to prove the effectiveness
of the USG-MRF model rather than select the optimal combination for the shape
and color representations. Thus, we limited the experiments to one shape-only and
one color-only representation; but USG-MRF can be applied to any other kind of
shape and/or color representation (see for instance [4]).
As color only representation, we chose two dimensional rg Color Histogram (CH),
with resolution of bin axis equal to 8 [13]. The CH was normalized to 1. As shape
only representation, we chose Multidimensional receptive Field Histograms (MFH)
[11], with two local characteristics based on Gaussian derivatives along x and y
directions, with σ = 1.0 and resolution of bin axis equal to 8. The histograms were
normalized to 1. These two representations were used for performing the following
sets of experiments:
- Shape experiments: we ran the experiments using the shape features only.
Classification was performed using SG-MRF with the kernelized Hopfield energy
(6)-(7). The kernel parameters (a, b, ρ) were learned using a leave-one-out strategy.
The results were benchmarked with those obtained with the χ² and ∩ similarity measures, which proved to be very effective for this representation, and with SVM with
Gaussian kernel, ρ ∈ [0.001, 10] (here we report only the best results obtained).
- Color experiments: we ran the experiments using the color features only. Classification and benchmarking were performed as in the shape experiment.
- Color-Shape experiments: we ran the experiments using the color and shape
features concatenated together to form a unique feature vector. Again, classification
and benchmarking were performed as in the shape experiment.
- Ultrametric experiment: we ran a single experiment using the shape and color
representation disjoint in the USG-MRF framework. The kernel parameters relative
to each level (a_s, b_s, ρ_s and a_c, b_c, ρ_c) are learned with the leave-one-out technique.
Results obtained with this approach cannot be directly benchmarked with other
similarity measures. Anyway, it is possible to compare the obtained results with
those of the previous experiments.
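The color representation and the two baseline similarity measures used above can be sketched as follows. The definitions are the standard ones; details such as the handling of zero-intensity pixels are our implementation choices, not specified in the text:

```python
def rg_histogram(pixels, bins=8):
    """2-D rg chromaticity histogram, normalized to 1 (as in [13]).

    r = R/(R+G+B), g = G/(R+G+B); discarding intensity gives some
    robustness to illumination changes. Skipping zero-intensity
    pixels is our choice, not stated in the text.
    """
    hist = [[0] * bins for _ in range(bins)]
    count = 0
    for (R, G, B) in pixels:
        s = R + G + B
        if s == 0:
            continue
        r_bin = min(int(R / s * bins), bins - 1)
        g_bin = min(int(G / s * bins), bins - 1)
        hist[r_bin][g_bin] += 1
        count += 1
    return [v / count for row in hist for v in row]

def chi2_distance(p, q, eps=1e-10):
    """Chi-squared histogram distance (smaller = more similar)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def intersection(p, q):
    """Histogram intersection (larger = more similar; 1.0 for
    identical normalized histograms)."""
    return sum(min(a, b) for a, b in zip(p, q))

# Two tiny synthetic pixel sets: mostly-red vs. mostly-green.
red = rg_histogram([(200, 30, 25), (210, 20, 30)])
green = rg_histogram([(30, 200, 25), (20, 210, 30)])
print(intersection(red, red), intersection(red, green))  # 1.0 0.0
```

The red and green pixel sets land in disjoint chromaticity bins, so their intersection is zero while each histogram matches itself perfectly; χ² behaves inversely (zero for identical histograms).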
Table 1 reports the error rates obtained for the 4 sets of experiments.
             Color (%)   Shape (%)   Color-Shape (%)   Ultrametric (%)
χ²             23.47        9.47         19.17               -
∩              25.68       24.94         21.72               -
SVM            19.78       25.3          18.38               -
SG-MRF         20.10        6.28          8.43              3.55

Table 1: Classification results; we report for each set of experiments the obtained
error rates.
Results presented in Table 1 show that for all series of experiments, for all representations, SG-MRF always gave the best recognition result. Moreover, the overall
best recognition result is obtained with USG-MRF. USG-MRF has an increase of
performance of +2.73% with respect to the best SG-MRF result, and of +5.92% with
respect to χ² (the best result obtained with a non-SG-MRF technique). Table 2 shows
some examples of objects misclassified by SG-MRF and correctly classified by USG-MRF. We see that USG-MRF classifies correctly in cases where shape only or color
only gives the right answer (but not both, and not in the concatenated representation; Table 2, left and middle column), and also in cases where color only and shape
only don't classify correctly (Table 2, right column). These examples show clearly
that the better performance of USG-MRF is due to its hierarchical structure that
permits to use different kernels on different features, thus to weight their relevance
in a flexible manner with respect to the considered application.
We remark once again that all the kernel parameters (thus ultimately the kernel
itself) are learned from the training data; to the best of our knowledge (U)SG-MRF
is the first kernel method for vision applications that doesn't select heuristically the
kernel to be used.
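The leave-one-out strategy itself is simple to sketch: score each candidate kernel parameter by classifying every training sample with that sample held out of the prototype set, and keep the candidate with the fewest errors. The toy setting below (every training sample used as a prototype, plain Gaussian kernel) is our illustrative assumption:

```python
import math

def sg_energy(h, prototypes, rho):
    """SG-MRF energy (8) with a Gaussian kernel (a = 1, b = 2)."""
    return -sum(math.exp(-rho * sum((a - b) ** 2
                                    for a, b in zip(h, p))) ** 2
                for p in prototypes)

def loo_select_rho(features, labels, candidates):
    """Leave-one-out selection of the kernel parameter rho: classify
    every training sample with itself removed from the prototype set;
    keep the candidate with the fewest errors. A generic sketch of the
    strategy described in the text."""
    best_rho, best_err = None, None
    for rho in candidates:
        err = 0
        for i, (h, y) in enumerate(zip(features, labels)):
            protos = {}
            for j, (hp, yp) in enumerate(zip(features, labels)):
                if j != i:
                    protos.setdefault(yp, []).append(hp)
            classes = sorted(protos)
            energies = [sg_energy(h, protos[c], rho) for c in classes]
            if classes[energies.index(min(energies))] != y:
                err += 1
        if best_err is None or err < best_err:
            best_rho, best_err = rho, err
    return best_rho

# Toy data: two well-separated classes in feature space.
feats = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
labels = [0, 0, 1, 1]
print(loo_select_rho(feats, labels, [0.01, 1.0, 100.0]))  # 1.0
```

A too-small rho makes the kernel nearly flat, so the class with more prototypes always wins and every held-out sample of the smaller prototype set is misclassified; the procedure rejects it in favor of a kernel width that separates the classes.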
              left example   middle example   right example
USG-MRF         1st match       1st match        1st match
SG-MRF_s        2nd match       2nd match        3rd match
SG-MRF_c        1st match       2nd match        7th match
SG-MRF_sc       1st match       3rd match        5th match

Table 2: Classification results for sample objects; USG-MRF always classifies correctly even when shape only (SG-MRF_s), color only (SG-MRF_c) and the common
representation (SG-MRF_sc) fail (right column).
6
Summary
In this paper we presented a kernel method that permits us to combine color and
shape information for appearance-based object recognition. It does not require us
to define a new common representation, but use the power of kernels to combine
different representations together in an effective manner. This result is achieved
using results of statistical mechanics of Spin Glasses combined with Markov Random
Fields via kernel functions. Experiments confirm the effectiveness of the proposed
approach. Future work will explore the possibility to use different representations
for color and shape and to use this method for tackling other challenging problems
in object recognition, such as recognition of objects in heterogeneous background
and under different lighting conditions.
Acknowledgments
This work has been supported by the "Graduate Research Center of the University
of Erlangen-Nuremberg for 3D Image Analysis and Synthesis" , and by the Foundation BLANCEFLOR Boncompagni-Ludovisi.
References
[1] D. J. Amit , "Modeling Brain Function", Cambridge University Press, 1989.
[2] S. Belongie, J. Malik, J. Puzicha, "Matching Shapes" , ICCV01 , 454-461.
[3] B. Caputo, S. Bouattour, H . Niemann, "A new kernel method for robust appearancebased object recognition: Spin Glass-Markov random fields", submitted to PR, available at http : //www.ski .org/ALYuillelabf.
[4] B. Caputo, Gy. Dorko, H. Niemann, "An ultrametric approach to object recognition",
submitted to VMV02, available at http://www.ski.org/ALYuillelab/.
[5] A. Leonardis, H. Bischof, "Robust recognition using eigenimages" , CVIU,78:99-118 ,
2000.
[6] J. Matas, R. Marik, J. Kittler, "On representation and matching of multi-coloured
objects", Proc ICCV95, 726-732, 1995.
[7] B. W. Mel, "SEEMORE: combining color, shape and texture histogramming in a
neurally-inspired approach to visual object recognition", NC, 9:777-804, 1997.
[8] J. W. Modestino, J. Zhang, "A Markov random field model-based approach to image
interpretation", PAMI, 14(6):606-615, 1992.
[9] S. A. Nene, S. K. Nayar, H. Murase, "Columbia Object Image Library (COIL-100)",
TR CUCS-006-96, Dept. Comp. Sc., Columbia University, 1996.
[10] Pontil, M., Verri, A. "Support Vector Machines for 3D Object Recognition", PAMI,
20(6):637-646, 1998.
[11] B. Schiele, J. L. Crowley, "Recognition without correspondence using multidimensional receptive field histograms", IJCV, 36(1):31-52, 2000.
[12] D . Slater, G. Healey, "Combining color and geometric information for the illumination
invariant recognition of 3-D objects" , Proc ICCV95, 563-568, 1995.
[13] M. Swain, D. Ballard, "Color indexing", IJCV, 7(1):11-32, 1991.
[14] B. Scholkopf, A. J. Smola, Learning with kernels, 2002, the MIT Press, Cambridge,
MA.
| 2218 |@word h:5 middle:1 nd:3 heuristically:2 tr:1 configuration:3 contains:1 series:1 o2:1 existing:1 si:2 tackling:1 must:5 written:1 realize:1 partition:1 j1:1 shape:36 alone:2 cue:1 selected:2 smith:1 compo:1 provides:2 org:3 zhang:1 along:1 become:1 kettlewell:1 scholkopf:1 descendant:10 consists:1 prove:1 ijcv:2 combine:6 introduce:1 manner:5 mechanic:2 multi:1 brain:1 inspired:2 curse:1 becomes:1 classifies:2 moreover:1 kind:2 benchmarked:2 nj:1 every:2 multidimensional:2 classifier:5 control:1 local:2 pami:2 chose:3 k:2 challenging:1 limited:1 graduate:1 unique:1 acknowledgment:1 lf:1 pontil:1 convenient:1 matching:2 djl:1 spite:1 cannot:1 risk:1 www:2 conventional:2 equivalent:1 map:1 demonstrated:1 center:1 straightforward:1 attention:1 resolution:2 rule:1 crowley:1 anyway:1 ultrametric:8 construction:1 suppose:1 hierarchy:2 us:1 associate:1 recognition:25 slater:1 database:1 kittler:1 connected:1 decrease:1 ran:6 schiele:1 pda:1 ultimately:1 po:1 joint:1 hopfield:5 eigenimages:1 represented:1 separated:2 distinct:1 effective:3 sc:1 neighborhood:1 choosing:1 say:1 statistic:1 itself:1 advantage:1 jij:3 product:1 combining:4 flexibility:1 achieve:1 p:1 argmaxp:3 leave:3 object:37 depending:3 ij:2 eq:3 murase:1 direction:1 thick:1 drawback:1 bin:2 require:2 assign:1 considered:4 exp:1 mapping:1 bj:1 fh:3 purpose:3 proc:2 integrates:1 mit:1 clearly:1 always:4 gaussian:5 rather:3 usg:15 sisj:1 pip:1 likelihood:1 glass:10 posteriori:1 mrfs:3 kernelized:4 kc:2 ancestor:7 misclassified:1 interested:1 pixel:2 overall:1 classification:6 flexible:2 among:1 histogramming:1 field:12 equal:2 once:1 having:1 placing:1 look:2 future:1 report:3 few:2 employ:1 phase:1 organization:1 huge:1 possibility:1 pc:1 instance:3 column:3 modeling:4 classify:1 swain:1 stored:2 answer:1 varies:1 combined:5 st:5 probabilistic:5 physic:1 ansatz:1 together:8 synthesis:1 jo:1 again:2 containing:3 derivative:1 de:1 gy:2 healey:1 satisfy:1 performed:4 view:5 red:2 bayes:5 xlx:1 spin:10 
characteristic:2 yellow:2 raw:1 informatik:1 lighting:1 classified:1 submitted:2 nene:1 iccv95:2 energy:21 erlangen:3 proved:1 color:41 car:5 dimensionality:2 knowledge:3 organized:1 focusing:1 ok:2 higher:1 verri:1 done:1 just:3 smola:1 nonlinear:1 glance:1 defines:1 brings:1 aj:1 usa:1 effect:1 building:1 consisted:1 normalized:2 mel:1 generalized:3 stress:1 complete:1 image:11 common:7 rmn:1 functional:1 jl:14 extend:1 he:5 interpretation:1 refer:2 cambridge:2 gibbs:2 rd:2 dj:6 similarity:3 store:2 minimum:1 ii:1 neurally:1 match:12 prescription:1 fjj:1 mrf:40 hioj:7 vision:2 heterogeneous:1 histogram:5 kernel:26 sometimes:2 achieved:2 irregular:1 background:1 want:1 separately:1 ore:1 rest:1 induced:1 effectiveness:6 seem:1 call:1 gave:1 simplifies:1 prototype:10 remark:1 detailed:1 turntable:1 induces:1 http:2 sl:1 xij:1 qjl:2 disjoint:1 per:2 correctly:4 key:1 pj:3 ht:2 kept:1 reader:2 decision:1 hi:1 tackled:1 correspondence:1 x2:4 chair:1 performing:1 department:1 according:1 combination:1 b:1 hl:1 hio:1 invariant:1 pr:1 indexing:1 lexp:1 fail:1 end:1 available:1 permit:6 hierarchical:3 remaining:1 concatenated:4 amit:1 malik:1 matas:1 strategy:6 parametric:1 receptive:2 separate:3 street:1 assuming:1 nc:1 ski:3 perform:1 allowing:1 markov:10 finite:1 defining:1 connection:3 bischof:1 cucs:1 california:1 learned:3 leonardis:1 usually:1 pattern:4 oj:5 power:2 suitable:1 natural:1 eye:1 library:1 picture:1 axis:2 concludes:1 extract:2 columbia:2 review:2 literature:3 sg:27 prior:2 coloured:1 geometric:1 relative:3 fully:1 expect:2 limitation:1 foundation:1 sufficient:1 pi:2 nne:1 summary:2 supported:1 keeping:1 institute:1 taking:2 absolute:1 overcome:1 dimension:2 doesn:3 author:1 made:1 san:1 wouldn:1 sj:1 uni:1 confirm:1 belongie:1 francisco:1 don:1 table:7 learn:1 ballard:1 robust:4 correlated:1 caputo:4 main:1 constituted:1 fillmore:1 site:2 benchmarking:2 xl:1 svm:3 normalizing:1 dl:1 effectively:1 texture:1 illumination:1 cviu:1 aren:1 rg:1 
appearance:10 explore:1 visual:1 scalar:1 ch:2 ma:1 coil:2 goal:3 jlv:12 called:2 experimental:1 select:2 puzicha:1 support:1 relevance:1 dept:1 tested:1 nayar:1 |
Timing and Partial Observability in the
Dopamine System
Nathaniel D. Daw^{1,3}, Aaron C. Courville^{2,3}, and David S. Touretzky^{1,3}
^1 Computer Science Department, ^2 Robotics Institute, ^3 Center for the Neural Basis of Cognition
Carnegie Mellon University, Pittsburgh, PA 15213
{daw,aaronc,dst}@cs.cmu.edu
Abstract
According to a series of influential models, dopamine (DA) neurons signal reward prediction error using a temporal-difference (TD) algorithm.
We address a problem not convincingly solved in these accounts: how to
maintain a representation of cues that predict delayed consequences. Our
new model uses a TD rule grounded in partially observable semi-Markov
processes, a formalism that captures two largely neglected features of DA
experiments: hidden state and temporal variability. Previous models predicted rewards using a tapped delay line representation of sensory inputs;
we replace this with a more active process of inference about the underlying state of the world. The DA system can then learn to map these
inferred states to reward predictions using TD. The new model can explain previously vexing data on the responses of DA neurons in the face
of temporal variability. By combining statistical model-based learning
with a physiologically grounded TD theory, it also brings into contact
with physiology some insights about behavior that had previously been
confined to more abstract psychological models.
1 Introduction
A series of models [1, 2, 3, 4, 5] based on temporal-difference (TD) learning [6] has explained most responses of primate dopamine (DA) neurons during conditioning [7] as an
error signal for predicting reward, and has also identified the DA system as a substrate for
conditioning behavior [8]. We address a troublesome issue from these models: how to
maintain a representation of cues that predict delayed consequences. For this, we use a
formalism that extends the Markov processes in which previous models were grounded.
Even in the laboratory, the world is often poorly described as Markov in immediate sensory observations. In trace conditioning, for instance, nothing observable spans the delay
between a transient stimulus and the reward it predicts. For DA models, this raises problems of coping with hidden state and of tracking temporal intervals. Most previous models
address these issues using a tapped delay line representation of the world?s state. This
augments the representation of current sensory observations with remembered past observations, dividing temporal intervals into a series of states to mark the passage of time. But
linear combinations of tapped delay lines do not properly model variability in the intervals
between events. Also, the augmented representation may poorly match the contingency
structure of the experimental situation: for instance, depending on the amount of history
retained, it may be insufficient to span delays, or it may contain old, irrelevant data.
We propose a model that better reflects experimental situations by using a formalism that
explicitly incorporates hidden state and temporal variability: a partially observable semi-Markov process. The proposal envisions the interaction between a cortical perceptual system that infers the world's hidden state using an internal world model, and a dopaminergic
TD system that learns reward predictions for these inferred states. This model improves on
its predecessors' descriptions of neuronal firing in situations involving temporal variability,
and suggests additional connections with animal behavior.
2 DA models and temporal variability
[Figure 1 appears here: panels (a)-(f) showing the state spaces and modeled DA (TD error) traces for the Markov tapped-delay-line and semi-Markov TD models; see caption.]
Figure 1: S: stimulus; R: reward. (a,b) State spaces for the Markov tapped delay line (a)
and our semi-Markov (b) TD models of a trace conditioning experiment. (c,d) Modeled
DA activity (TD error) when an expected reward is delivered early (top), on time (middle)
or late (bottom). The tapped delay line model (c) produces spurious negative error after
an early reward, while, in accord with experiments, our semi-Markov model (d) does not.
Shaded stripes under (d) and (f) track the model's belief distribution over the world's hidden
state (given a one-timestep backward pass), with the ISI in white, the ITI in black, and gray
for uncertainty between the two. (e,f) Modeled DA activity when reward timing varies
uniformly over a range. The tapped delay line model (e) incorrectly predicts identical
excitation to rewards delivered at all times, while, in accord with experiment, our model (f)
predicts a response that declines with delay.
Several models [1, 2, 3, 4, 5] identify the firing of DA neurons with the reward prediction
error signal δ_t of a TD algorithm [6]. In the models, DA neurons are excited by positive error in reward prediction (caused by unexpected rewards or reward-predicting stimuli) and
inhibited by negative prediction error (caused by the omission of expected reward). If a
reward arrives as expected, the models predict no change in firing rate. These characteristics have been demonstrated in recordings of primate DA neurons [7]. In idealized form
(neglecting some instrumental contingencies), these experiments and the others that we
consider here are all variations on trace conditioning, in which a phasic stimulus such as a
flash of light signals that reward will be delivered after a delay.
TD systems map a representation of the state of the world to a prediction of future reward,
but previous DA modeling exploited few experimental constraints on the form of this representation. Houk et al. [1] computed values using only immediately observable stimuli
and allowed learning about rewards to accrue to previously observed stimuli using eligibility traces. But in trace conditioning, DA neurons show a timed pause in their background
firing when an expected reward fails to arrive [7]. Because the Houk et al. [1] model does
not learn temporal relationships, it cannot produce well timed inhibition. Montague et al.
[2] and Schultz et al. [3] addressed these data using a tapped delay line representation of
stimulus history [8]: at time t, each stimulus is represented by a vector whose nth element
codes whether the stimulus was observed at time t − n. This representation allows the
models to learn the temporal relationship between stimulus and reward, and to correctly
predict phasic inhibition timelocked to omitted rewards.
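The delay-line scheme can be made concrete in a few lines. This sketch (function name, tap count, and time indices are all illustrative, not taken from the models themselves) shows how a single stimulus marches along the taps as time passes:

```python
import numpy as np

# Hypothetical tapped-delay-line state vector: element n codes whether the
# stimulus was observed at time t - n (names and sizes are illustrative).
def tapped_delay_line(stimulus_onsets, n_taps, t):
    x = np.zeros(n_taps)
    for n in range(n_taps):
        if t - n in stimulus_onsets:
            x[n] = 1.0
    return x

# A stimulus shown at time 2, viewed from time 5, lights up tap n = 3;
# the active tap then advances one slot per timestep.
x = tapped_delay_line({2}, n_taps=6, t=5)
```

Each tap learns its own reward prediction, which is what lets the timed inhibition at the omitted-reward time emerge.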
These models, however, mispredict the behavior of DA neurons when the interval between
stimulus and reward varies. In one experiment [9], animals were trained to expect a constant stimulus-reward interval, which was later varied. When a reward is delivered earlier
than expected, the tapped delay line models correctly predict that it should trigger positive
error (dopaminergic excitation), but also incorrectly predict a further burst of negative error
(inhibition, not seen experimentally) when the reward fails to arrive at the time it was originally expected (Figure 1c, top). In part, this occurs because the models do not represent
the reward as an observation, so its arrival can have no effect on later predictions. More
fundamentally, this is a problem with how the models partition events into a state space.
Figure 1a illustrates how the tapped delay lines mark time in the interval between stimulus
and reward using a series of states, each of which learns its own reward prediction. After
the stimulus occurs, the model?s representation marches through each state in succession.
But this device fails to capture a distribution over the interval between two events. If
the second event has occurred, the interval is complete and the system should not expect
reward again, but the tapped delay line continues to advance. This may be correctable,
though awkwardly, by representing the reward with its own delay line, which can then
learn to suppress further reward expectation after a reward occurs [10]. However, to our
knowledge it is experimentally unclear whether the suppression of this response requires
repeated experience with the situation, as this account predicts. Also, whether this works
depends on how information from multiple cues is combined into an aggregate reward
prediction (i.e. on the function approximator used: it is easy to verify that a standard linear
combination of the delay lines does not suffice).
The models have a similar problem with a related experiment [11] (Figure 1e) where the
stimulus-reward interval varied uniformly over a range of delays throughout training. In
this case, all substates within the interval see reward with the same (low) probability, so
each produces identical positive error when reward occurs there. In animal experiments,
however, stronger dopaminergic activity is seen for earlier rewards [11].
3 A new model
Both of these experiments demonstrate that current TD models of DA do not adequately
treat variability in event timing. We address them with a TD model grounded in a formalism that incorporates temporal variability, a partially observable [12] semi-Markov [13]
process. Such a process is described by three functions, O, Q, and D, operating over two
sets: the hidden states S and observations O. Q associates each state with a probability
distribution over possible successors. If the process is in state s ∈ S, then the next state is s' with probability Q_{ss'}. These discrete state transitions can occur irregularly in continuous time (which we approximate to arbitrarily fine discretization). The dwell time τ spent in s before making a transition is distributed with probability D_{sτ}; we define the indicator φ_t as one if the state transitioned between t and t + 1 and zero otherwise. On entering s, the process emits some observation o ∈ O with probability O_{so}. Some observations are
distinguished as rewarding; we separately write the reward magnitude of an observation as
r. Note that the processes we consider in this paper do not contain decisions.
In this formalism, a trace conditioning experiment can be treated as alternation between
two states (Figure 1b). The states correspond to the intervals between stimulus and reward
(interstimulus interval: ISI) and between reward and stimulus (intertrial interval: ITI). A
stimulus is the likely observation when entering the ISI and a reward when entering the ITI.
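For concreteness, such a process can be sampled with a toy generator. This is a minimal sketch assuming the two-state trace-conditioning structure of Figure 1b; the state names, dwell-time ranges, and deterministic successor probabilities are illustrative choices, not the paper's simulation parameters:

```python
import random

# Toy version of the two-state trace-conditioning world (after Figure 1b);
# all names, dwell ranges, and probabilities here are illustrative.
Q = {"ISI": {"ITI": 1.0}, "ITI": {"ISI": 1.0}}   # successor probabilities Q_ss'
D = {"ISI": (2, 4), "ITI": (5, 8)}               # dwell time drawn uniformly from range
O = {"ISI": "stimulus", "ITI": "reward"}         # observation emitted on entering a state

def sample_episode(n_transitions, seed=0):
    """Sample (entry_time, state, observation) triples from the process."""
    rng = random.Random(seed)
    state, t, trace = "ITI", 0, []
    for _ in range(n_transitions):
        successors, probs = zip(*Q[state].items())
        state = rng.choices(successors, probs)[0]   # draw s' ~ Q_ss'
        trace.append((t, state, O[state]))          # emit observation on entry
        t += rng.randint(*D[state])                 # dwell tau ~ D_s
    return trace

trace = sample_episode(n_transitions=4)
```

The sampled trace alternates stimulus and reward observations separated by variable dwell times, which is the temporal variability the model is built to handle.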
We will index variables both by the time t and by a discrete index n which counts state transitions; e.g. the nth state, s_n, is entered at time t = Σ_{k=1}^{n−1} τ_k and can thus also be written as s_t. If φ_t = 0 (if the state did not transition between t and t + 1) then s_{t+1} = s_t, o_{t+1} is null and r_{t+1} = 0 (i.e., nonempty observations and rewards occur only on transitions).
State transitions may be unsignaled: o_{t+1} may be null even if φ_t = 1. An unsignaled
transition into the ITI state occurs in our model when reward is omitted, a common experimental manipulation [7]. This example demonstrates the relationship between temporal
variability and partial observability: if reward timing can vary, nothing in the observable
state reveals whether a late reward is still coming or has been omitted completely.
TD algorithms [6] approximate a function mapping each state to its value, defined as the
expectation (with respect to variability in reward magnitude, state succession, and dwell
times) of summed, discounted future reward, starting from that state. In the semi-Markov
case [13], a state's value is defined as the reward expectation at the moment it is entered;
we do not count rewards received on the transition in. The value of the nth state entered is:
V_{s_n} = E[ γ^{τ_n} r_{n+1} + γ^{τ_n + τ_{n+1}} r_{n+2} + ... ] = E[ γ^{τ_n} ( r_{n+1} + V_{s_{n+1}} ) ]

where γ < 1 is a discounting parameter.
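To make the recursion concrete, here is a small numeric check on the two-state chain. The dwell times, reward, and discount factor are assumptions chosen for the example, with deterministic dwell so the expectation drops out:

```python
# Numeric check of the semi-Markov value recursion on the two-state chain,
# assuming deterministic dwell times tau_ISI = 2 and tau_ITI = 5, a reward
# r = 1 on entering the ITI, and gamma = 0.9 (all values illustrative).
gamma, tau_isi, tau_iti, r = 0.9, 2, 5, 1.0

# Closed form: V_ISI = gamma^tau_ISI * (r + V_ITI) and
#              V_ITI = gamma^tau_ITI * (0 + V_ISI) together give
#              V_ISI = gamma^tau_ISI * r / (1 - gamma^(tau_ISI + tau_ITI)).
v_closed = gamma**tau_isi * r / (1 - gamma**(tau_isi + tau_iti))

# The same value reached by iterating the recursion to its fixed point.
v_isi = v_iti = 0.0
for _ in range(500):
    v_isi, v_iti = gamma**tau_isi * (r + v_iti), gamma**tau_iti * v_isi
```

The iterated values converge to the closed form, confirming that discounting is applied per unit of elapsed time rather than per transition.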
We address partial observability by using model-based inference to determine a distribution
over the hidden states, which then serves as a basis over which a modified TD algorithm
can learn values. The approach is similar to the Q-learning algorithm of Chrisman [14].
In our setting, however, values can in principle be learned exactly, since without decisions,
they are linear in the space of hidden states.
For state inference, we assume that the brain's sensory processing systems use an internal model of the semi-Markov process, that is, the functions O, Q, and D. Here we take
the model as given, though we have treated parts of the problem of learning such models
elsewhere [15]. A key assumption about this internal model is that its distributions over
intervals, rewards and observations contain asymptotic uncertainty, that is, they are not
arbitrarily sharp. When learning internal models, such uncertainty can result from an assumption that parameters of the world are constantly changing [16]. Thus, in the inference
model for the trace conditioning experiment, the ISI duration is modeled with a probability distribution with some nonzero variance rather than an impulse function. The model
likewise assigns a small probability to anomalous transitions and observations (e.g. unrewarded transitions into the ITI state). This uncertainty is present only in the internal model:
most anomalous events never occur in our simulations.
Given the model and a series of observations o1 . . . ot , we can determine the likelihood
that each hidden state is active using a standard forward-backward algorithm for hidden
semi-Markov models [17]. The important quantity is the probability, for each state, that
the system left that state at time t. With a one-timestep backward pass (to match the one-timestep value backups in the TD rule), this is:

β_{s,t} = P(s_t = s, φ_t = 1 | o_1 . . . o_{t+1})

By Bayes' theorem, β_{s,t} ∝ P(o_{t+1} | s_t = s, φ_t = 1) · P(s_t = s, φ_t = 1 | o_1 . . . o_t). The first term can be computed by integrating over s_{t+1} in the model:

P(o_{t+1} | s_t = s, φ_t = 1) = Σ_{s'∈S} Q_{ss'} · O_{s'o_{t+1}};

the second requires integrating over possible state sequences and dwell times:

P(s_t = s, φ_t = 1 | o_1 . . . o_t) = Σ_{τ=1}^{d_lastO} D_{sτ} · O_{s,o_{t−τ+1}} · P(s_{t−τ+1} = s, φ_{t−τ} = 1 | o_1 . . . o_{t−τ})

where d_lastO is the number of timesteps since the last non-null observation and P(s_{t−τ+1} = s, φ_{t−τ} = 1 | o_1 . . . o_{t−τ}), the chance that the process entered s at t − τ + 1, equals Σ_{s'∈S} Q_{s's} · P(s_{t−τ} = s', φ_{t−τ} = 1 | o_1 . . . o_{t−τ}), allowing recursive computation.
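The dwell-time sum in this recursion is essentially a convolution of the entry-time distribution with D. A minimal sketch, with the emission and successor terms folded into an assumed entry-time distribution (so this is only the dwell part, not the full forward-backward pass):

```python
import numpy as np

# Sketch of the dwell-time sum above: the probability of leaving state s
# at time t is the dwell distribution D_s convolved with the distribution
# of entry times. `entry` is assumed given here.
def leave_prob(entry, dwell, t):
    """entry[u] = P(process entered s at time u); dwell[tau] = D_s(tau).
    Returns sum over tau of dwell[tau] * entry[t - tau + 1]."""
    total = 0.0
    for tau in range(1, len(dwell)):
        u = t - tau + 1
        if 0 <= u < len(entry):
            total += dwell[tau] * entry[u]
    return total

# Example: entry at time 1 is certain, and the dwell time is 2 or 3 with
# equal probability, so the state is left at t = 2 or t = 3.
entry = np.zeros(10)
entry[1] = 1.0
dwell = np.array([0.0, 0.0, 0.5, 0.5])   # index = tau
```

Because entry times are themselves inferred recursively, uncertainty about when a state was entered spreads directly into uncertainty about when it is left.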
β is used for TD learning because it represents the probability of a transition, which is the event that triggers a value update in fully observable semi-Markov TD. Due to partial observability, we may not be certain when transitions have occurred or from which states, so we perform TD updates to every state at every timestep, weighted by β. We denote our estimate of the value of state s as V̂_s, to distinguish it from the true value V_s. The update to V̂_s at time t is proportional to the TD error:

δ_{s,t} = β_{s,t} ( E[γ^τ] · (r_{t+1} + E[V̂_{s'}]) − V̂_s )

where E[γ^τ] = Σ_k γ^k P(τ_t = k | s_t = s, φ_t = 1, o_1 . . . o_{t+1}) is the expected discounting (since dwell time may be uncertain) and E[V̂_{s'}] = Σ_{s'∈S} V̂_{s'} · P(s_{t+1} = s' | s_t = s, φ_t = 1, o_{t+1}) is the expected subsequent value. Both expectations are conditioned on the process having left state s at time t, and computed using the internal world model.
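A minimal sketch of one such uncertainty-weighted update; all numbers are illustrative, and the variable names stand in for the quantities defined above:

```python
# One uncertainty-weighted TD update, with illustrative numbers. beta is
# the inferred probability that state s was just left, E_disc stands for
# E[gamma^tau], and E_next for the expected next-state value E[V_s'].
def td_error(beta, E_disc, r_next, E_next, V_s):
    """delta_{s,t} = beta * (E[gamma^tau] * (r_{t+1} + E[V_s']) - V_s)."""
    return beta * (E_disc * (r_next + E_next) - V_s)

V = {"ISI": 0.0, "ITI": 0.0}
lr = 0.1   # learning rate (assumed)

# Suppose we believe with probability 0.8 that the ISI just ended with a
# reward of 1, with expected discounting 0.81 and next-state value 0.
delta = td_error(beta=0.8, E_disc=0.81, r_next=1.0, E_next=0.0, V_s=V["ISI"])
V["ISI"] += lr * delta
```

When β is sharply peaked on one state this reduces to an ordinary semi-Markov TD update for that state; when belief is spread, every candidate state receives a proportionally smaller nudge.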
As in previous models, we associate the error signal δ with DA activity. However, because of uncertainty as to the state of the world, the TD error signal is vector-valued rather than scalar. DA neurons could code this vector in a distributed manner, which might explain experimentally observed response variability between neurons [7]. Alternatively, δ_{s,t} can be approximated with a scalar, which performs well if the inferred state occupancy is sharply peaked. In our figures, we use such an approximation, plotting DA activity as the cumulative TD error over states (implicitly weighted by β): δ_t = Σ_{s∈S} δ_{s,t}. An approximate version of the vector signal could be reconstructed at target areas by multiplying by β_{s,t} / Σ_{s'∈S} β_{s',t}.
Note that with full observability, the (vector) learning rule reduces to standard semi-Markov
TD, and conversely with full unobservability, it nudges states in the direction of a value
iteration backup. In fact, the algorithm is exact in that it has the same fixed point as value
iteration, assuming the inference model matches the contingencies of the world. (Due
to uncertainty it does so only approximately in our simulations.) We sketch the proof.
With each TD update, V̂_s is nudged toward some target value with some step size β_{s,t}; the fixed point is the average of the targets, weighted by their probabilities and their step sizes. Fixing some arbitrary t, the update targets and β are functions of the observations o_1 . . . o_{t+1}, which are generated according to P(o_1 . . . o_{t+1}). The fixed point is:

V̂_s = [ Σ_{o_1...o_{t+1}} P(o_1 . . . o_{t+1}) · β_{s,t} · E[γ^τ] · (r_{t+1} + E[V̂_{s'}]) ] / [ Σ_{o_1...o_{t+1}} P(o_1 . . . o_{t+1}) · β_{s,t} ]

Marginalizing out the observations reduces this to Bellman's equation for V̂_s, which is also,
of course, the fixed-point equation for value iteration.
4 Results
When expected reward is delivered early, the semi-Markov model assumes that this signals
an early transition into the ITI state, and it thus does not expect further reward or produce
spurious negative error (Figure 1d, top). Because of variability in the model's ISI estimate,
an early transition, while improbable, better explains the data than some other path through
the state space. The early reward is worth more than expected, due to reduced discounting,
and is thus accompanied by positive error.
The model can also infer a state transition from the passage of time, absent any observations. In Figure 1d (bottom), when the reward is delivered late, the system infers that the
world has entered the ITI state without reward, producing negative error.
Figure 1f shows our model's behavior when the ISI is uniformly distributed [11]. (The
dwell time distribution D in the inference model was changed to reflect this distribution,
as an animal should learn a different model here.) Earlier-than-average rewards are worth
more than expected (due to discounting) and cause positive prediction error, while later-than-average rewards cause negative error because they are more heavily discounted. This
is broadly consistent with the experimental finding of decreasing response with increasing
delay [11]. Inhibition at longer delays has not so far been observed in this experiment,
though inhibition is in general difficult to detect. If discovered, such inhibition would
support the semi-Markov model.
Because it combines a conditional probability model with TD learning, our approach can
incorporate insights from previous behavioral theories into a physiological model. Our
state inference approach is based on a hidden Markov model (HMM) account we previously advanced to explain animal learning about the temporal relationships of events [15].
The present theory (with the model learning scheme from that paper) would account for
the same data. Our model also accommodates two important theoretical ideas from more
abstract models of animal learning that previous TD models cannot. One is the notion of
uncertainty in some of its internal parameters, which Kakade and Dayan [16] use to explain interval timing and attentional effects in learning. Second, Gallistel has suggested
that animal learning processes are timescale invariant. For example, altering the speed of
events has no effect on the number of trials it takes animals to learn a stimulus-reward association [18]. This is not true of Markov TD models because their transitions are clocked
to a fixed timescale. With tapped delay lines, timescale dilation increases the number of
marker states in Figure 1a and slows learning. But our semi-Markov model is timescale
invariant: learning is induced by state transitions which in turn are triggered by events or
by the passage of time on a scale controlled by the internal model. (The form of temporal
discounting we use is not timescale invariant, but this can be corrected as in [5].)
5 Discussion
We have presented a model of the DA system that improves on previous models' accounts
of data involving temporal variability and partial observability, because, unlike prior models, it is grounded in a formalism that explicitly incorporates these considerations. Like
previous models, ours identifies the DA response with reward prediction error, but it differs in the representational systems driving the predictions. Previous models assumed that
tapped delay lines transcribed raw sensory events; ours envisions that these events inform
a more active process of inference about the underlying state of the world. This is a principled approach to the problem of representing state when events can be separated by delays.
Simpler schemes may capture the neuronal data, which are sparse, but without addressing the underlying computational issues we identify, they are unlikely to generalize. For
instance, Suri and Schultz [4] propose that reward delivery overrides stimulus representations, canceling pending predictions and eliminating the spurious negative error in Figure
1c (top). But this would disrupt the behaviorally demonstrated ability of animals to learn
that a stimulus predicts a series of rewards. Such static representational rules are insufficient since different tasks have different mnemonic requirements. In our account, unlike
more ad-hoc theories, the problem of learning an appropriate representation for a task is
well specified: it is the problem of modeling the task. Though we have not simulated model
learning here (this is an important area for future work), it is possible using online HMM
learning, and we have used this technique in a model of conditioning [15]. Another issue
for the future is extending our theory to encompass action selection. DA models often assume an actor-critic framework [1] in which reward predictions are used to evaluate action
selection policies. Partial observability complicates such an extension here, since policies
must be defined over belief states (distributions over the hidden states S) to accommodate
uncertainty; our use of S as a linear basis for value predictions is thus an oversimplification.
Puzzlingly, the data we consider suggest that animals build internal models but also use
sample-based TD methods to predict values. Given a full world model (which could in
principle be solved directly for V ), it seems unclear why TD learning should be necessary.
But since the world model must be learned incrementally online, it may be infeasible to
continually re-solve it, and parts of the model may be poorly specified. In this case, TD
learning in the inferred state space could maintain a reasonably current and observationally grounded value function. (Our particular formulation, which relies extensively on the
model in the TD rule, may not be ideal from this perspective.)
Suri [19] and Dayan [20] have also proposed TD theories of DA that incorporate world
models to explain behavioral effects, though they do not address the theoretical issues or
dopaminergic data considered here. While those accounts use the world model for directly
anticipating future events, we have proposed another role for it in state inference. Also
unlike our theory, the others cannot explain the experiments discussed in [15] because their
internal models cannot represent simultaneous or backward contingencies. However, they
treat the two major issues we have neglected: world model learning and action planning.
The formal models in question have roughly equivalent explanatory power: a semi-Markov
model can be simulated (to arbitrarily fine temporal discretization) by a Markov model
that subdivides its states by dwell time. There is also an isomorphism between higherorder and partially observable Markov models. Thus it would be possible to devise a state
representation for a Markov model that copes properly with temporal variability. But doing
so by elaborating the tapped delay line architecture would amount to building a clockwork
engine for the inference process we describe, without the benefit of useful abstractions such
as distributions over intervals; a clearer approach would subdivide the states in our model.
Though there exist isomorphisms between the formal models, there are algorithmic differences that may make our proposal experimentally distinguishable from others. The inhibitory responses in Figure 1f reflect the way semi-Markov models account for the costs of
delays; they would not be seen in a Markov model with subdivided states. Such inhibition
is somewhat parameter-dependent, since if inference parameters assign high probability to
unsignaled transitions the decrease in reward value with delay can be mitigated by increasing uncertainty about the hidden state. Nonetheless, should data not uphold our prediction
of inhibitory responses to late rewards, they would suggest a different definition of a state's
value. One choice would be the subdivision of our semi-Markov states by dwell time discussed above, which in the experiment of Figure 1f would decrease TD error toward but
not past zero for longer delays. In this case, later rewards are less surprising because the
conditional probability of reward increases as time passes without reward.
A related prediction suggested by our model is that DA responses not just to rewards but
also to stimuli that signal reward might be modulated by their timing relative to expectation. Responses to reward-predicting stimuli disappear in overtrained animals, presumably
because the stimuli come to be predicted by events in the previous trial [7]. In tapped delay
line models, this is possible only for a constant ITI (since if expectancy is divided between
a number of states, stimulus delivery in any one of them cannot be completely predicted
away). But the response to a stimulus in the semi-Markov model can show behavior exactly
analogous to the reward response in Figure 1f: positive or negative error depending on
the time of delivery relative to expectation. So, even in an experiment involving a randomized ITI, the net stimulus response (averaged over the range of ITIs) could be attenuated.
Such behavior occurred in our simulations; the modeled DA responses to the stimuli in
Figures 1d and 1f are positive because they were taken after shorter-than-average ITIs. It is
difficult to evaluate this observation against available data, since the experiment involving
overtrained monkeys [7] contained minimal ITI variability.
We have suggested that the TD error may be a vector signal, with different neurons signaling errors for different elements of a state distribution. This could be investigated experimentally by recording DA neurons as a situation of ambiguous reward expectancy (e.g. one
reward or three) resolved into a situation of intermediate, determinate reward expectancy
(e.g. two rewards). Neurons carrying an aggregate error should uniformly report no error,
but with a vector signal, different neurons might report both positive and negative error.
Acknowledgments
This work was supported by National Science Foundation grants IIS-9978403 and DGE9987588. Aaron Courville was funded in part by a Canadian NSERC PGS B fellowship.
We thank Sham Kakade and Peter Dayan for helpful discussions.
References
[1] JC Houk, JL Adams, and AG Barto. A model of how the basal ganglia generate and use neural signals that predict reinforcement. In JC Houk, JL Davis, and DG Beiser, editors, Models of Information Processing in the Basal Ganglia, pages 249-270. MIT Press, 1995.
[2] PR Montague, P Dayan, and TJ Sejnowski. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci, 16:1936-1947, 1996.
[3] W Schultz, P Dayan, and PR Montague. A neural substrate of prediction and reward. Science, 275:1593-1599, 1997.
[4] RE Suri and W Schultz. A neural network with dopamine-like reinforcement signal that learns a spatial delayed response task. Neurosci, 91:871-890, 1999.
[5] ND Daw and DS Touretzky. Long-term reward prediction in TD models of the dopamine system. Neural Comp, 14:2567-2583, 2002.
[6] RS Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9-44, 1988.
[7] W Schultz. Predictive reward signal of dopamine neurons. J Neurophys, 80:1-27, 1998.
[8] RS Sutton and AG Barto. Time-derivative models of Pavlovian reinforcement. In M Gabriel and J Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks, pages 497-537. MIT Press, 1990.
[9] JR Hollerman and W Schultz. Dopamine neurons report an error in the temporal prediction of reward during learning. Nature Neurosci, 1:304-309, 1998.
[10] DS Touretzky, ND Daw, and EJ Tira-Thompson. Combining configural and TD learning on a robot. In ICDL 2, pages 47-52. IEEE Computer Society, 2002.
[11] CD Fiorillo and W Schultz. The reward responses of dopamine neurons persist when prediction of reward is probabilistic with respect to time or occurrence. In Soc. Neurosci. Abstracts, volume 27: 827.5, 2001.
[12] LP Kaelbling, ML Littman, and AR Cassandra. Planning and acting in partially observable stochastic domains. Artif Intell, 101:99-134, 1998.
[13] SJ Bradtke and MO Duff. Reinforcement learning methods for continuous-time Markov Decision Problems. In NIPS 7, pages 393-400. MIT Press, 1995.
[14] L Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In AAAI 10, pages 183-188, 1992.
[15] AC Courville and DS Touretzky. Modeling temporal structure in classical conditioning. In NIPS 14, pages 3-10. MIT Press, 2001.
[16] S Kakade and P Dayan. Acquisition in autoshaping. In NIPS 12, pages 24-30. MIT Press, 2000.
[17] Y Guedon and C Cocozza-Thivent. Explicit state occupancy modeling by hidden semi-Markov models: Application of Derin's scheme. Comp Speech and Lang, 4:167-192, 1990.
[18] CR Gallistel and J Gibbon. Time, rate and conditioning. Psych Rev, 107(2):289-344, 2000.
[19] RE Suri. Anticipatory responses of dopamine neurons and cortical neurons reproduced by internal model. Exp Brain Research, 140:234-240, 2001.
[20] P Dayan. Motivated reinforcement learning. In NIPS 14, pages 11-18. MIT Press, 2001.
Bourlard and Morgan
A Continuous Speech Recognition System
Embedding MLP into HMM
Herve Bourlard
Nelson Morgan
Philips Research Laboratory
Av. van Becelaere 2, Box 8
B-1170 Brussels, Belgium
Intl. Comp. Sc. Institute
1947 Center Street, Suite 600
Berkeley, CA 94704, USA
ABSTRACT
We are developing a phoneme-based, speaker-dependent continuous
speech recognition system embedding a Multilayer Perceptron (MLP)
(i.e., a feedforward Artificial Neural Network) into a Hidden Markov
Model (HMM) approach. In [Bourlard & Wellekens], it was shown that
MLPs were approximating Maximum a Posteriori (MAP) probabilities
and could thus be embedded as an emission probability estimator in
HMMs. By using contextual information from a sliding window on the
input frames, we have been able to improve frame or phoneme classification performance over the corresponding performance for simple
Maximum Likelihood (ML) or even MAP probabilities that are estimated without the benefit of context. However, recognition of words in
continuous speech was not so simply improved by the use of an MLP,
and several modifications of the original scheme were necessary for
getting acceptable performance. It is shown here that word recognition
performance for a simple discrete density HMM system appears to be
somewhat better when MLP methods are used to estimate the emission
probabilities.
1 INTRODUCTION
We have performed a number of experiments with a 1000-word vocabulary continuous speech recognition task. Our frame classification results [Bourlard et al., 1989]
are consistent with other research showing the capabilities of MLPs trained with backpropagation-styled learning schemes for the recognition of voiced-unvoiced speech segments [Gevins & Morgan, 1984], isolated phonemes [Watrous & Shastri, 1987; Waibel
et al., 1988; Makino et al., 1983], or of isolated words [Peeling & Moore, 1988]. These
results indicate that "neural network" approaches can, for some problems, perform pattern
classification at least as well as traditional HMM approaches. However, this is not particularly mysterious. When traditional statistical assumptions (distribution, independence
of multiple features, etc.) are not valid, systems which do not rely on these assumptions
can work better (as discussed in [Niles et al., 1989]). Furthermore, networks provide
an easy way to incorporate multiple sources of evidence (multiple features, contextual
windows, etc.) without restrictive assumptions.
However, it is not so easy to improve the recognition of words in continuous speech by
the use of an MLP. For instance, while it has been shown that the outputs of a feedforward
network can be used as emission probabilities in an HMM [Bourlard et al., 1989], the
corresponding word recognition performance can be very poor. This is true even when
the same network demonstrates extremely good performance at the frame or phoneme
levels. We have developed a hybrid MLP-HMM algorithm which (for a preliminary
experiment) appears to exceed performance of the same HMM system using standard
statistical approaches to estimate the emission probabilities. This was only possible after
the original algorithm was modified in ways that did not necessarily maximize the frame
recognition performance for the training set. We will describe these modifications below,
along with experimental results.
2 METHODS
As shown by both theoretical [Bourlard & Wellekens, 1989] and experimental [Bourlard
& Morgan, 1989] results, MLP output values may be considered to be good estimates of
MAP probabilities for pattern classification. Either these, or some other related quantity
(such as the output normalized by the prior probability of the corresponding class) may be
used in a Viterbi search to determine the best time-warped succession of states (speech
sounds) to explain the observed speech measurements. This hybrid approach (MLP
to estimate probabilities, HMM to incorporate them to recognize continuous speech as
a succession of words) has the potential of exploiting the interpolating capabilities of
MLPs while using a Dynamic Time Warping (DTW) procedure to capture the dynamics
of speech.
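The Viterbi search described here can be sketched in a few lines. The following is an illustrative log-domain implementation (the function name, array layout, and toy dimensions in the usage below are our own, not from the paper); the per-frame emission scores would come, e.g., from the logarithm of the MLP outputs after division by the class priors:

```python
import numpy as np

def viterbi(log_emission, log_transition, log_prior):
    """Best state sequence through a lattice of per-frame log emission scores.

    log_emission: (T, S) array, e.g. log of (MLP posterior / class prior).
    log_transition: (S, S) array of log transition scores, indexed [from, to].
    log_prior: (S,) array of log initial-state scores.
    Returns the best path and its total log score.
    """
    T, S = log_emission.shape
    delta = log_prior + log_emission[0]        # best score ending in each state
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_transition
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emission[t]
    path = [int(delta.argmax())]               # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1], float(delta.max())
```

The dynamic-programming recursion is independent of where the emission scores come from, which is what makes the hybrid scheme possible: the MLP only replaces the emission probability estimator.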
However, to achieve good performance at the word level, the following modifications of
this basic scheme were necessary:
• MLP training methods - a new cross-validation [Stone, 1978] training algorithm
was designed in which the stopping criterion was based on performance for an
independent validation set [Morgan & Bourlard, 1990]. In other words, training
was stopped when performance on a second set of data began going down, and not
when training error leveled off. This greatly improved generalization, which could
be further tested on a third independent validation set.
• probability estimation from the MLP outputs - In the original scheme [Bourlard
& Wellekens, 1989], MLP outputs were used as MAP probabilities for the HMM
directly. While this helped frame performance, it hurt word performance. This
may have been due at least partly to a mismatch between the relative frequency
of phonemes in the training sets and test (word recognition) sets. Division by
the prior class probabilities as estimated from the training set removed this effect
of the priors on the DTW. This led to a small decrease in frame classification
performance, but a large (sometimes 10 - 20%) improvement in word recognition
rates (see Table 1 and accompanying description).
• word transition costs for the underlying HMM - word transition penalties had to be
increased for larger contextual windows to avoid a large number of insertions; see
Section 4. This is shown to be equivalent to keeping the same word transition cost
but scaling the log probabilities down by a number which reflected the dependence
of neighboring frames. A reasonable value for this can be determined from recognition on a small number of sentences (e.g., 50), choosing a value which results in
insertions at most equal to the number of deletions.
• segmentation of training data - much as with HMM systems, an iterative procedure
was required to time align the training labels in a manner that was statistically
consistent with the recognition methods used. In our most recent experiments, we
segmented the data using an iterative Viterbi alignment starting from a segmentation
based on average phoneme durations, and terminated at the segmentation which
led to the best performance on an independent test set. For one of our speakers,
we had available a more accurate frame labeling (produced by an automatic but
more complex procedure [Aubert, 1987]) to use as a start point for the iteration,
which led to even better performance.
3 EXPERIMENTAL APPROACH
We have been using a speaker-dependent German database (available from our collaboration with Philips) called SPICOS [Ney & Noll, 1988]. The speech had been sampled at a
rate of 16 kHz, and 30 points of smoothed, "mel-scaled" logarithmic spectra (over bands
from 200 to 6400 Hz) were calculated every 10 ms from a 512-point FFT over a 25-ms
window. For our experiments, the mel spectrum and the energy were vector-quantized
to pointers into a single speaker-dependent table of prototypes.
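The front end just described (16 kHz sampling, 25-ms windows every 10 ms, a 512-point FFT, 30 mel-spaced log-spectral bands between 200 and 6400 Hz) can be sketched as follows. This is an illustrative reconstruction, not the Philips implementation: band edges are simply mel-spaced and each band sums the power spectrum between consecutive edges (no triangular smoothing), and the vector-quantization step is omitted.

```python
import numpy as np

def mel_log_spectra(signal, fs=16000, n_fft=512, win_ms=25, hop_ms=10,
                    n_bands=30, fmin=200.0, fmax=6400.0):
    """Illustrative front end: 30 log-energies in mel-spaced bands per 10-ms frame."""
    win = int(fs * win_ms / 1000)          # 400 samples per analysis window
    hop = int(fs * hop_ms / 1000)          # 160 samples between frames
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = imel(np.linspace(mel(fmin), mel(fmax), n_bands + 1))
    bins = np.floor(edges / fs * n_fft).astype(int)
    window = np.hamming(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + win] * window, n_fft)) ** 2
        bands = [spec[bins[b]:bins[b + 1]].sum() for b in range(n_bands)]
        frames.append(np.log(np.asarray(bands) + 1e-10))
    return np.asarray(frames)              # shape (n_frames, 30)
```

One second of 16 kHz audio yields 98 frames of 30 coefficients each, matching the frame rate and spectral dimensionality stated in the text.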
Two independent sets of vocabularies for training and test are used. The training dataset consists of two sessions of 100 German sentences per speaker. These sentences
are representative of the phoneme distribution in the German language and include 2430
phonemes in each session. The two sessions of 100 sentences are phonetically segmented
on the basis of 50 phonemes, using a fully automated procedure [Aubert, 1987]. The
test set consists of one session of 200 sentences per speaker. The recognition vocabulary
contains 918 words (including the "silence" word) and the overlap between training and
recognition is 51 words. Most of the latter are articles, prepositions and other structural
words. Thus, the training and test are essentially vocabulary-independent. Initial tests
used sentences from a single male speaker. The final algorithms were tested on an
additional male and female speaker.
The acoustic vectors were coded on the basis of 132 prototype vectors by a simple binary
representation with only one bit 'on'. Multiple frames were used as input to provide
context to the network. In the experiments reported here, the context was 9 frames, while
the size of the output layer was kept fixed at 50 units, corresponding to the 50 phonemes
to be recognized. The input field contained 9 x 132 = 1188 units, and the total number of
possible inputs was thus equal to 132^9. There were 26767 training patterns (from the first
training session of 100 sentences) and 26702 independent test patterns (from the second
training session of 100 sentences). Of course, this represented only a very small fraction
of the possible inputs, and generalization was thus potentially difficult. Training was done
by the classical "error-back-propagation" algorithm, starting by minimizing an entropy
criterion, and then the standard least-mean-square error criterion. In each iteration, the
complete training set was presented, and the parameters were updated after each training
pattern. To avoid overtraining of the MLP, improvement on a cross-validation set was
checked after each iteration, and if classification was decreasing, the adaptation parameter
of the gradient procedure was reduced; otherwise it was kept constant. Later on, this
approach was systematized by splitting the data in three parts: one for training, one for
cross-validation, and a third one absolutely independent of the training procedure for the
actual validation. No significant difference was observed between classification rates for
the last two data sets.
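The three-way-split training schedule described here can be sketched generically. The function below is our own illustrative rendering (the halving factor and stopping threshold are assumptions; the paper does not say by how much the adaptation parameter was reduced): `step(lr)` performs one pass over the training set and `cv_score()` returns the current cross-validation classification rate.

```python
def train_with_cross_validation(step, cv_score, lr=0.1, min_lr=1e-4, max_epochs=100):
    """After each full pass, check the cross-validation score; if it decreased,
    reduce the adaptation parameter (learning rate), otherwise keep it constant.
    Stop once the rate has been driven below a small threshold."""
    best = cv_score()
    for _ in range(max_epochs):
        step(lr)
        score = cv_score()
        if score < best:
            lr *= 0.5          # decreasing CV rate -> damp the updates (assumed factor)
            if lr < min_lr:
                break          # no further useful progress
        else:
            best = score
    return best, lr
```

With a well-behaved objective the learning rate is never reduced and the schedule behaves like plain gradient descent; the halving only kicks in when validation performance degrades.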
In [Bourlard et al., 1989] this procedure was shown to yield improved frame classification
performance over simple ML and MAP estimates. However, acceptable word recognition
performance was still difficult to reach.
4 WORD RECOGNITION RESULTS
The output values of the MLP were evaluated for each frame, and (after division by the
prior probability of each phoneme) were used as emission probabilities in a discrete HMM
system. In this system, each phoneme was modeled with a single conditional density,
repeated D/2 times, where D was a prior estimate of the duration of the phoneme. Only
self-loops and sequential transitions were permitted. A Viterbi decoding was then used
for recognition of the first hundred sentences of the test session (on which word entrance
penalties were optimized), and our best results were validated by a further recognition on
the second hundred sentences of the test set. Note that this same simplified HMM was
used for both the ML reference system (estimating probabilities directly from relative
frequencies) and the MLP system, and that the same input features were used for both.
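The phoneme model just described, one tied density repeated D/2 times with only self-loops and sequential transitions, corresponds to a left-to-right chain. A sketch of its transition structure (the state count and self-loop probability below are illustrative parameters, not values from the paper):

```python
import numpy as np

def phoneme_chain(n_states, p_stay=0.5):
    """Left-to-right phoneme model: the same emission density is tied across
    `n_states` = D/2 states, each with a self-loop and a transition to its
    successor; the extra last column is the exit transition. Repeating the
    state enforces a minimum duration of D/2 frames."""
    A = np.zeros((n_states, n_states + 1))
    for s in range(n_states):
        A[s, s] = p_stay             # self-loop
        A[s, s + 1] = 1.0 - p_stay   # advance to next state (or exit)
    return A
```

Because the density is tied, the chain changes only the duration statistics, not the acoustic match, which is why the same topology can be reused for the ML and MLP emission estimators.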
Table 1 shows the recognition rate (100% - error rate, where errors include insertions,
deletions, and substitutions) for the first 100 sentences of the test session. All runs except
the last were done with 20 hidden units in the MLP, as suggested by frame performance.
Note the significant positive effect of division of the MLP outputs, which are trained
to approximate MAP probabilities, by estimates of the prior probabilities for each class
(denoted "MLP/priors" in Table 1).
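The division behind the "MLP/priors" entries follows directly from Bayes' rule: the MLP output estimates P(class | frame), and dividing by the class prior P(class) gives a score proportional to the likelihood P(frame | class). A minimal sketch (function names and the toy numbers in the check are illustrative, not from the paper):

```python
import numpy as np

def class_priors(labels, n_classes):
    """Class priors as relative frequencies of the labels in the training set."""
    counts = np.bincount(labels, minlength=n_classes)
    return counts / counts.sum()

def scaled_likelihoods(posteriors, priors):
    """Turn per-frame MLP outputs p(class | frame) into scores proportional to
    the likelihoods p(frame | class) by dividing out the class priors
    (Bayes' rule, dropping the class-independent p(frame) term)."""
    return np.asarray(posteriors, dtype=float) / np.asarray(priors, dtype=float)
```

The p(frame) term cancels in the Viterbi comparison across states, so the scaled scores can be used directly in place of true likelihoods.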
Table 1: Word Recognition, speaker m003

  method                  size of context   % correct (test)   % correct (validation)
  MLP                            1               27.3
  MLP/priors                     1               49.7
  MLP                            9               40.9
  MLP/priors                     9               51.9
  ML                             1               52.6                 52.2
  MLP/priors (0 hidden)          9               53.3                 52.5
Table 2: Word Recognition using Viterbi segmentation, speaker mOO3
I
method
MLP/priors
(0 hidden)
ML
I context I test I
9
65.3
1
56.9
Word transition probabilities were optimized for both the Maximum Likelihood and MLP
style HMMs. This led to a word exit probability of 10^-8 for the ML and for 1-frame
MLP's, and 10^-14 for an MLP with 9 frames of context. After these adjustments,
performance was essentially the same for the two approaches. Performance on the last
hundred sentences of the test session (shown in the last column of Table 1) validated that
the two systems generalized equivalently despite these tunings.
An initial time alignment of the phonetic transcription with the data (for this speaker)
had previously been calculated using a program incorporating speech-specific knowledge
[Aubert, 1987]. This labeling had been used for the targets of the frame-based training
described above. We then used this alignment as a "bootstrap" segmentation for an
iterative Viterbi procedure, much as is done in conventional HMM systems. As with the
MLP training, the data was divided into a training and cross-validation set, and the best
segmentation (corresponding to the best validation set frame classification rate) was used
for later training. For both cross-validation procedures, we switched to a training set of
150 sentences (two repetitions of 75 sentences) and a cross-validation set of 50 sentences
(two repetitions of 25 each). Finally, since the best performance in Table 1 was achieved
using no hidden layer, we continued our experiments using this simpler network, which
also required only a simple training procedure (entropy error criterion only). Table 2
shows this performance for the full 200 recognition sentences (test + validation sets from
Table 1).
Two of the more puzzling observations in this work were the need to increase word
entrance penalties with the width of the input context, and the difficulty of reflecting good
frame performance at the word level. MLPs can make better frame level discriminations
than simple statistical classifiers, because they can easily incorporate multiple sources of
evidence (multiple frames, multiple features) without simplifying assumptions. However,
when the input features within a contextual window are roughly independent, the Viterbi
algorithm will already incorporate all of the context in choosing the best HMM state
sequence explaining an utterance. If emission probabilities are estimated from the outputs
of an MLP which has a (2c + 1)-frame contextual input, the probability of observing a
feature sequence {f_1, f_2, ..., f_N} (where f_n represents the feature vector at time n) on
a particular HMM state q_k is estimated as:
$$\prod_{i=1}^{N} P(f_{i-c}, \ldots, f_i, \ldots, f_{i+c} \mid q_k),$$
where Bayes' rule has already been used to convert the MLP outputs (which estimate
MAP probabilities) into ML probabilities. If independence is assumed, and if boundary
effects (context extending before frame 1 or after frame N) are ignored (assume
(2c + 1) << N), this becomes:
$$\prod_{i=1}^{N} \prod_{j=-c}^{c} P(f_{i+j} \mid q_k) = \prod_{i=1}^{N} \left[ P(f_i \mid q_k) \right]^{2c+1},$$
where the latter probability is just the classical Maximum Likelihood solution, raised to
the power 2c + 1. Thus, if the features are independent over time, to keep the effect of
transition costs the same as for the simple HMM, the log probabilities must be scaled
down by the size of the contextual window. Note that, in the more realistic case where
dependencies exist between frames, the optimal scaling factor will be less than 2c + 1,
down to a minimum of 1 for the case in which frames are completely dependent (e.g.,
the same within a constant factor); the scaling factor should thus reflect the time correlation
of the input features. Thus, if the features are assumed independent over time, there is
no advantage to be gained by using an MLP to extract contextual information for the
estimation of emission probabilities for an HMM Viterbi decoding. In general, the relation
between the MLP and ML solutions will be more complex, because of interdependence
over time of the input features. However, the above relation may give some insight as
to the difficulty we have met in improving word recognition performance with a single
discrete feature (despite large improvements at the frame level). More positively, our
results show that the probabilities estimated by MLPs can be used at least as effectively
as conventional estimates and that some advantage can be gained by providing more
information for estimating these probabilities.
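The scaling argument is easy to check numerically: if each frame's (2c + 1)-frame context window factorizes into independent per-frame terms, the total log emission score is exactly (2c + 1) times the plain single-frame score, which is why the transition costs must be rescaled. A small toy check (boundary effects are sidestepped by wrapping the sequence, an assumption of this sketch, not of the paper):

```python
import numpy as np

def windowed_log_score(log_frame_probs, c):
    """Total log emission score when every frame contributes its full
    (2c+1)-frame context window; under independence the window score is
    just the sum of the per-frame log probabilities inside it."""
    n = len(log_frame_probs)
    total = 0.0
    for i in range(n):
        for j in range(-c, c + 1):
            total += log_frame_probs[(i + j) % n]  # wrap to ignore boundaries
    return total
```

Each frame is counted exactly 2c + 1 times, so relative to per-frame scoring every log emission term is inflated by that factor while the transition terms are not; matching the two again requires scaling, as argued in the text.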
We have duplicated our recognition tests for two other speakers from the same database.
In this case, we labeled each training set (from the original male plus a male and a
female speaker) using a Viterbi iteration initialized from a time-alignment based on a
simple estimate of average phoneme duration. This reduced all of the recognition scores,
underlining the necessity of a good start point for the Viterbi iteration. However, as can be
seen from the Table 3 results (measured over the full 200 recognition sentences), the MLP-based methods appear to consistently offer at least some measurable improvement over the
simpler estimation technique. In particular, the performance for the two systems differed
significantly (p < 0.001) for two out of three speakers, as well as for a multispeaker
Table 3: Word Recognition for 3 speakers, simple initialization

  speaker   MLE    MLP
  m003      54.4   59.7
  m001      47.4   51.9
  w010      54.2   54.3
comparison over the three speakers (in each case using a normal approximation to a
binomial distribution for the null hypothesis).
5 CONCLUSION
These results show some of the improvement for MLPs over conventional HMMs which
one might expect from the frame level results. MLPs can sometimes make better frame
level discriminations than simple statistical classifiers, because they can easily incorporate
multiple sources of evidence (multiple frames, multiple features), which is difficult to
do in HMMs without major simplifying assumptions. In general, the relation between
the MLP and ML word recognition is more complex. Part of the difficulty with good
recognition may be due to our choice of discrete, vector-quantized features, for which
no metric is defined over the prototype space. Despite these limitations, it now appears
that the probabilities estimated by MLPs may offer improved word recognition through
the incorporation of context in the estimation of emission probabilities. Furthermore, our
new result shows the effectiveness of Viterbi segmentation in labeling training data for an
MLP. This result appears to remove a major handicap of MLP use, i.e., the requirement
for hand-labeled speech, and also offers the possibility to deal with more complex HMMs.
Acknowledgments
Support from the International Computer Science Institute (ICSI) and Philips Research
for this work is gratefully acknowledged. Chuck Wooters of ICSI and UCB provided
much-needed assistance, and Xavier Aubert of Philips put together our SPICOS materials.
References
X. Aubert, (1988), "Supervised Segmentation with Application to Speech Recognition",
in Proc. Eur. Conf. Speech Technology, Edinburgh, pp. 161-164.
H. Bourlard, N. Morgan & C.J. Wellekens, (1989), "Statistical Inference in Multilayer
Perceptrons and Hidden Markov Models with Applications in Continuous Speech Recognition", to appear in Neuro Computing: Algorithms and Applications, NATO ASI
Series.
H. Bourlard & N. Morgan, (1989), "Merging Multilayer Perceptrons and Hidden
Markov Models: Some Experiments in Continuous Speech Recognition", International
Computer Science Institute TR-89-033.
H. Bourlard & C.J. Wellekens, (1989), "Links between Markov models and multilayer
perceptrons", to be published in IEEE Trans. on Pattern Analysis and Machine
Intelligence, 1990.
A. Gevins & N. Morgan, (1984), "Ignorance-Based Systems", Proc. IEEE Intl. Conf.
on Acoustics, Speech, & Signal Processing, Vol. 3, 39A5.1-39A5.4, San Diego.
S. Makino, T. Kawabata & K. Kido, (1983), "Recognition of consonants based on
the Perceptron Model", Proc. IEEE Intl. Conf. on Acoustics, Speech, & Signal
Processing, Vol. 2, pp. 738-741, Boston, Mass.
N. Morgan & H. Bourlard, (1989), "Generalization and Parameter Estimation in Feedforward Nets: Some Experiments", Advances in Neural Information Processing Systems
II, Morgan Kaufmann.
H. Ney & A. Noll, (1988), "Phoneme Modeling Using Continuous Mixture Densities",
Proc. IEEE Intl. Conf. on Acoustics, Speech, & Signal Processing, Vol. 1, pp.
437-440, New York.
L. Niles, H. Silverman, G. Tajchman & M. Bush, (1989), "How Limited Training Data
Can Allow a Neural Network Classifier to Outperform an 'Optimal' Statistical Classifier",
Proc. IEEE Intl. Conf. on Acoustics, Speech, & Signal Processing, Vol. 1, pp. 17-20, Glasgow, Scotland.
S.M. Peeling & R.K. Moore, (1988), "Experiments in Isolated Digit Recognition
Using the Multi-Layer Perceptron", Royal Signals and Radar Establishment, Technical
Report 4073, Malvern, Worcester.
M. Stone, (1978), "Cross-validation: a review", Math. Operationsforsch. Statist., Ser.
Statist., vol. 9, pp. 127-139.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano & K. Lang, (1988), "Phoneme Recognition: Neural Networks vs. Hidden Markov Models", Proc. IEEE Intl. Conf. on
Acoustics, Speech, & Signal Processing, Vol. 1, pp. 107-110, New York.
R. Watrous & L. Shastri, (1987), "Learning phonetic features using connectionist networks:
an experiment in speech recognition", Proceedings of the First Intl. Conference on
Neural Networks, IV-381-388, San Diego, CA.
Stable Fixed Points of Loopy Belief
Propagation Are Minima of the Bethe
Free Energy
Tom Heskes
SNN, University of Nijmegen
Geert Grooteplein 21, 6525 EZ, Nijmegen, The Netherlands
Abstract
We extend recent work on the connection between loopy belief propagation
and the Bethe free energy. Constrained minimization of the Bethe free energy
can be turned into an unconstrained saddle-point problem. Both converging
double-loop algorithms and standard loopy belief propagation can be interpreted as attempts to solve this saddle-point problem. Stability analysis then
leads us to conclude that stable fixed points of loopy belief propagation must
be (local) minima of the Bethe free energy. Perhaps surprisingly, the converse
need not be the case: minima can be unstable fixed points. We illustrate this
with an example and discuss implications.
1 Introduction
Pearl's belief propagation [1] is a popular algorithm for inference in Bayesian networks. It is exact in special cases, e.g., for tree-structured (singly-connected) networks with just Gaussian or just discrete nodes. But also on networks containing
cycles, so-called loopy belief propagation often leads to good performance (approximate marginals close to exact marginals) [2]. The notion that fixed points of loopy
belief propagation correspond to extrema of the so-called Bethe free energy [3] has
been an important step in the theoretical understanding of this success. Empirically
it has further been observed that loopy belief propagation, when it does, converges
to a minimum. The main goal of this article is to understand why.
In Section 2 we will introduce loopy belief propagation in terms of a sum-product
algorithm on factor graphs [4]. The corresponding Bethe free energy is derived in
Section 3 from a variational point of view, indicating that we should be particularly
interested in minima. In Section 4 we show that minimization of the Bethe free
energy under the appropriate constraints is equivalent to an unconstrained saddlepoint problem. The converging double-loop algorithm, described in Section 3, as
well as the standard sum-product algorithm are in fact attempts to solve this saddle-point problem. More specifically, (a damped version of) the sum-product algorithm
has the same local stability properties as a gradient descent-ascent procedure. Stability analysis of this gradient descent-ascent procedure then leads to the conclusion
in the title. With an example we illustrate that the converse need not be the case.
In Section 5 we discuss further implications and relations to other studies.
[Figure 1 drawing: four variable nodes x1, x2, x3, x4 connected to the six pairwise
factor nodes {1,2}, {1,3}, {1,4}, {2,3}, {2,4}, {3,4}.]
Figure 1: A Boltzmann machine. (a) Graphical representation of the probability
distribution $P(x_1, \ldots, x_n) \propto \exp\left[ \sum_{ij} w_{ij} x_i x_j + \sum_i \theta_i x_i \right]$.
(b) Corresponding factor graph with a factor for each pair of nodes, with potentials
$\psi_{ij}(x_i, x_j) = \exp\left[ w_{ij} x_i x_j + \frac{1}{n-1} \theta_i x_i + \frac{1}{n-1} \theta_j x_j \right]$.
2 The sum-product algorithm on factor graphs
We start with a description of (loopy) belief propagation as the sum-product algorithm on factor graphs [4]. We assume that the probability distribution over (disjoint subsets of) variables $x_\beta$ factorizes over 'factors' $\Psi_\alpha(X_\alpha)$:

$$P(x_1, \ldots, x_\beta, \ldots, x_N) = \frac{1}{Z} \prod_\alpha \Psi_\alpha(X_\alpha) , \qquad (1)$$

with $Z$ a proper normalization constant. We will use notation similar to [4]: uppercase $X_\alpha$ for the factors ('local function nodes') and lowercase $x_\beta$ for the variables. $\beta \in \alpha$ means that $x_\beta$ is a neighbor of $X_\alpha$ in the factor graph, i.e., is included in the potential $\Psi_\alpha(X_\alpha)$. An example of the transformation of a Markov network into a factor graph is shown in Figure 1. In a similar manner one can transform Bayesian networks into factor graphs, where each factor contains the child and its parents [4].

On singly-connected structures, Pearl's belief propagation algorithm [1] can be applied to compute the exact marginals ('beliefs')

$$P(X_\alpha) = \sum_{X \setminus \alpha} P(X) \quad \text{and} \quad P(x_\beta) = \sum_{X \setminus \beta} P(X) .$$

If the structure contains cycles, one can still apply (loopy) belief propagation, in an attempt to obtain accurate approximations $P_\alpha(X_\alpha)$ and $P_\beta(x_\beta)$.
Pseudo-code for the sum-product algorithm is given in Algorithm 1. In the factor-graph representation we distinguish messages from factor $\alpha$ to variable $\beta$, $\mu_{\alpha \to \beta}(x_\beta)$, and vice versa, $\mu_{\beta \to \alpha}(x_\beta)$. The beliefs follow by multiplying the potential, a mere 1 for the variables and $\Psi_\alpha(X_\alpha)$ for the factors, with the incoming messages; see (1.3) and (1.2) in Algorithm 1. The update for an outgoing message is the variable belief, either calculated with the definition (1.2) or through the marginalization (1.6), divided by the incoming message; see (1.4) and (1.5).

We interpret the update of the factor-to-variable message $\mu_{\alpha \to \beta}$ in line 8 of Algorithm 1 as the only actual update: beliefs and variable-to-factor messages directly follow from definitions in lines 11 to 15. For later reference we introduce the damped update

$$\log \mu^{\rm new}_{\alpha \to \beta}(x_\beta) = \log \mu_{\alpha \to \beta}(x_\beta) + \epsilon \left[ \log \mu^{\rm full}_{\alpha \to \beta}(x_\beta) - \log \mu_{\alpha \to \beta}(x_\beta) \right] , \qquad (2)$$

where $\mu^{\rm full}$ refers to the result of the full update (1.5) and $\mu$ to the previous message. These and other seemingly arbitrary choices, among which the particular ordering
 1: repeat
 2:   for all variables $\beta$ do
 3:     for all factors $\alpha \ni \beta$ do
 4:       if initial then
 5:         initialize message (1.1)
 6:       else
 7:         marginalize (1.6)
 8:         update message (1.5)
 9:       end if
10:     end for
11:     compute variable belief (1.2)
12:     for all factors $\alpha \ni \beta$ do
13:       compute message (1.4)
14:       compute factor belief (1.3)
15:     end for
16:   end for
17: until convergence

Initial messages:

$$\mu_{\alpha \to \beta}(x_\beta) = 1 \qquad (1.1)$$

Beliefs:

$$P_\beta(x_\beta) = \frac{1}{Z_\beta} \prod_{\alpha \ni \beta} \mu_{\alpha \to \beta}(x_\beta) \qquad (1.2)$$

$$P_\alpha(X_\alpha) = \frac{1}{Z_\alpha} \Psi_\alpha(X_\alpha) \prod_{\beta \in \alpha} \mu_{\beta \to \alpha}(x_\beta) \qquad (1.3)$$

Messages:

$$\mu_{\beta \to \alpha}(x_\beta) = \frac{P_\beta(x_\beta)}{\mu_{\alpha \to \beta}(x_\beta)} \qquad (1.4)$$

$$\mu_{\alpha \to \beta}(x_\beta) = \frac{P_\alpha(x_\beta)}{\mu_{\beta \to \alpha}(x_\beta)} \qquad (1.5)$$

with

$$P_\alpha(x_\beta) \propto \sum_{X_\alpha \setminus \beta} P_\alpha(X_\alpha) \qquad (1.6)$$

Algorithm 1: The sum-product algorithm on factor graphs.
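As a concrete illustration, the updates (1.1)-(1.6), together with the log-domain damping of Eq. (2), can be sketched for binary variables as follows. This is not the authors' implementation: all names are invented, and outgoing messages are recomputed as products over the remaining neighbors rather than by dividing out the incoming message (equivalent at the fixed points).

```python
import itertools

def sum_product(num_vars, factors, n_iter=50, eps=1.0):
    """Loopy BP on a factor graph with binary variables.
    factors: list of (vars, table) with table[x] the factor value for the
    joint state x (a tuple aligned with vars).  eps = 1 is the undamped
    update; eps < 1 damps in the log domain as in Eq. (2)."""
    f2v = {(a, v): [1.0, 1.0]                      # mu_{alpha -> beta}, Eq. (1.1)
           for a, (vs, _) in enumerate(factors) for v in vs}
    v2f = {(v, a): [1.0, 1.0]                      # mu_{beta -> alpha}
           for a, (vs, _) in enumerate(factors) for v in vs}
    nbrs = {v: [a for a, (vs, _) in enumerate(factors) if v in vs]
            for v in range(num_vars)}
    for _ in range(n_iter):
        for a, (vs, table) in enumerate(factors):
            for k, v in enumerate(vs):
                # marginalize the factor belief over all variables but v, Eq. (1.6)
                new = [0.0, 0.0]
                for x in itertools.product((0, 1), repeat=len(vs)):
                    p = table[x]
                    for u, xu in zip(vs, x):
                        if u != v:
                            p *= v2f[(u, a)][xu]
                    new[x[k]] += p
                z = sum(new)
                # damped message update, Eq. (2): mu_new = mu^(1-eps) * mu_full^eps
                f2v[(a, v)] = [m ** (1.0 - eps) * (nw / z) ** eps
                               for m, nw in zip(f2v[(a, v)], new)]
        for v in range(num_vars):
            for a in nbrs[v]:
                # variable-to-factor message, from Eqs. (1.2) and (1.4)
                prod = [1.0, 1.0]
                for b in nbrs[v]:
                    if b != a:
                        prod = [p * m for p, m in zip(prod, f2v[(b, v)])]
                z = sum(prod)
                v2f[(v, a)] = [p / z for p in prod]
    beliefs = []
    for v in range(num_vars):                      # single-variable beliefs, Eq. (1.2)
        b = [1.0, 1.0]
        for a in nbrs[v]:
            b = [p * m for p, m in zip(b, f2v[(a, v)])]
        z = sum(b)
        beliefs.append([p / z for p in b])
    return beliefs
```

On a singly-connected graph, such as a three-variable chain, the returned beliefs coincide with the exact marginals; on graphs with cycles they are the usual loopy approximations.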
of updates, follow naturally from the analysis below. Besides, for the results on local stability we will consider the limit of small step sizes $\epsilon$, where any effects of the ordering disappear. Last but not least, the description in Algorithm 1 is mainly pedagogical and can be made more efficient in several ways.
3 The Bethe free energy
The exact distribution (1) can be written as the result of the variational problem

$$P(X) = \mathop{\rm argmin}_{\hat P} \sum_X \hat P(X) \log\left[ \frac{\hat P(X)}{\prod_\alpha \Psi_\alpha(X_\alpha)} \right] , \qquad (3)$$

where here and in the following normalization and positivity constraints on probabilities are implicitly assumed. Next we confine our search to 'tree-like' probability distributions of the form

$$\hat P(X) \propto \frac{\prod_\alpha P_\alpha(X_\alpha)}{\prod_\beta P_\beta(x_\beta)^{n_\beta - 1}} \quad \text{with} \quad n_\beta \equiv \sum_{\alpha \ni \beta} 1 , \qquad (4)$$

the number of neighboring factors of variable $\beta$. Here $P_\alpha(X_\alpha)$ and $P_\beta(x_\beta)$ are interpreted as (approximate) local marginals that should normalize to 1, but should also be consistent, i.e., obey

$$\forall_\alpha \, \forall_{\beta \in \alpha} \quad P_\alpha(x_\beta) = P_\beta(x_\beta) , \qquad (5)$$

with $P_\alpha(x_\beta)$ as in (1.6). The denominator in (4) prevents double-counting. For singly-connected structures, it can be shown that the exact solution $P(X)$ is of this form, with proportionality constant equal to 1 and where $P_\alpha(X_\alpha) = P(X_\alpha)$ and $P_\beta(x_\beta) = P(x_\beta)$. For structures containing cycles, this need not be the case, but we can still assume it to be true approximately. Plugging (4) into the objective (3) and implementing the above assumptions, we obtain the Bethe free energy

$$F(P) = \sum_\alpha \sum_{X_\alpha} P_\alpha(X_\alpha) \log\frac{P_\alpha(X_\alpha)}{\Psi_\alpha(X_\alpha)} \; - \; \sum_\beta (n_\beta - 1) \sum_{x_\beta} P_\beta(x_\beta) \log P_\beta(x_\beta) . \qquad (6)$$
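A useful numerical check on Eq. (6): for a singly-connected graph, evaluating the Bethe free energy at the exact marginals yields exactly $-\log Z$. The sketch below (invented names; brute-force enumeration is used only to build the exact marginals for the test) verifies this on a small chain:

```python
import itertools
import math

def bethe_free_energy(factors, fac_beliefs, var_beliefs):
    """Evaluate Eq. (6) from factor beliefs P_alpha(X_alpha) and
    variable beliefs P_beta(x_beta)."""
    n = {}                                   # n_beta: number of neighboring factors
    for vs, _ in factors:
        for v in vs:
            n[v] = n.get(v, 0) + 1
    F = 0.0
    for (vs, table), belief in zip(factors, fac_beliefs):
        for x, p in belief.items():
            if p > 0.0:
                F += p * math.log(p / table[x])
    for v, marg in var_beliefs.items():
        for p in marg:
            if p > 0.0:
                F -= (n[v] - 1) * p * math.log(p)
    return F

# On a tree (here a 3-variable chain) the exact marginals give F = -log Z.
factors = [((0, 1), {(0, 0): 1.2, (0, 1): 0.4, (1, 0): 0.4, (1, 1): 1.5}),
           ((1, 2), {(0, 0): 2.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 1.0})]
Z = 0.0
fac_beliefs = [dict.fromkeys(t, 0.0) for _, t in factors]
var_beliefs = {v: [0.0, 0.0] for v in range(3)}
for x in itertools.product((0, 1), repeat=3):
    p = factors[0][1][(x[0], x[1])] * factors[1][1][(x[1], x[2])]
    Z += p
    fac_beliefs[0][(x[0], x[1])] += p
    fac_beliefs[1][(x[1], x[2])] += p
    for v in range(3):
        var_beliefs[v][x[v]] += p
fac_beliefs = [{k: v / Z for k, v in b.items()} for b in fac_beliefs]
var_beliefs = {v: [p / Z for p in m] for v, m in var_beliefs.items()}
assert abs(bethe_free_energy(factors, fac_beliefs, var_beliefs) + math.log(Z)) < 1e-9
```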
 1: for all $\alpha$ and $\beta \in \alpha$ do
 2:   initialize (2.1)
 3: end for
 4: repeat
 5:   for all factors $\alpha$ do
 6:     update potential (2.4)
 7:     update variable belief (2.3)
 8:   end for
 9:   inner loop with (2.2) and (2.3)
10: until convergence

Initial messages and beliefs:

$$\mu_{\beta \to \alpha}(x_\beta) = 1 \quad \text{and} \quad P_\beta(x_\beta) = 1 \qquad (2.1)$$

Beliefs:

$$P_\beta(x_\beta) = \frac{1}{Z_\beta} \left[ \prod_{\alpha \ni \beta} \mu_{\alpha \to \beta}(x_\beta) \right]^{1/n_\beta} \qquad (2.2)$$

$$P_\alpha(X_\alpha) = \frac{1}{Z_\alpha} \tilde\Psi_\alpha(X_\alpha) \prod_{\beta \in \alpha} \mu_{\beta \to \alpha}(x_\beta) \qquad (2.3)$$

Potential update:

$$\log \tilde\Psi_\alpha(X_\alpha) = \log \Psi_\alpha(X_\alpha) + \sum_{\beta \in \alpha} \frac{n_\beta - 1}{n_\beta} \log P^{\rm old}_\beta(x_\beta) \qquad (2.4)$$

Algorithm 2: Double-loop algorithm for minimizing the Bethe free energy. The inner loop is Algorithm 1 with redefinitions of the factor and variable beliefs.
Minus the Bethe free energy is an approximation, but not a bound, of the log-likelihood $\log Z$. A key observation in [3] is that the fixed points of the sum-product algorithm, described in the previous section, correspond to extrema of the Bethe free energy under the constraints (5).

The above derivation suggests that we should be specifically interested in minima of the Bethe free energy, not 'just' stationary points. The resulting constrained minimization problem is well-defined (the Bethe free energy is bounded from below), but not necessarily convex, mainly because of the negative $P_\beta \log P_\beta$ terms. The crucial trick, implicit or explicit in recently suggested procedures, is to bound [5] or clamp [6] the possibly concave part (outer loop: recompute the bound) and solve the remaining convex problem (inner loop: maximization with respect to Lagrange multipliers; see below). Here we propose to use the linear bound

$$- \sum_{x_\beta} P_\beta(x_\beta) \log P_\beta(x_\beta) \; \leq \; - \sum_{x_\beta} P_\beta(x_\beta) \log P^{\rm old}_\beta(x_\beta) , \qquad (7)$$

with $P^{\rm old}_\beta(x_\beta)$ from the result of the previous inner loop. The (convex) bound of the Bethe free energy then boils down to

$$F_{\rm bound}(P) = \sum_\alpha \sum_{X_\alpha} P_\alpha(X_\alpha) \log\frac{P_\alpha(X_\alpha)}{\tilde\Psi_\alpha(X_\alpha)} \; \geq \; F(P) ,$$

if we define $\tilde\Psi_\alpha$ as in (2.4). The outer loop corresponds to a reset of the bound, i.e., at the start of the inner loop we have $F_{\rm bound}(P) = F(P)$. In the inner loop (see the next section for its derivation), we solve the remaining convex constrained minimization problem with the method of Lagrange multipliers. At the end of the inner loop, we then have $F(P^{\rm new}) \leq F_{\rm bound}(P^{\rm new}) \leq F_{\rm bound}(P) = F(P)$.
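The bound (7) is simply the non-negativity of the Kullback-Leibler divergence: the cross-entropy with respect to $P^{\rm old}_\beta$ can only exceed the entropy, with equality when $P^{\rm old}_\beta = P_\beta$. A quick numerical sanity check (arbitrary random distributions, not data from the paper):

```python
import math
import random

def neg_entropy_bound(p, p_old):
    """Right-hand side of Eq. (7): -sum_x p(x) * log p_old(x)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, p_old))

random.seed(1)
for _ in range(200):
    raw_p = [random.random() + 1e-3 for _ in range(4)]
    raw_q = [random.random() + 1e-3 for _ in range(4)]
    p = [r / sum(raw_p) for r in raw_p]
    q = [r / sum(raw_q) for r in raw_q]
    entropy = -sum(pi * math.log(pi) for pi in p)
    assert neg_entropy_bound(p, q) >= entropy - 1e-12       # the bound (7)
    assert abs(neg_entropy_bound(p, p) - entropy) < 1e-12   # tight at p_old = p
```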
4 Saddle-point problem
In this section we will translate the (non-convex) minimization of the Bethe free energy under linear constraints into an equivalent (non-convex/concave) saddle-point problem. We replace the bound (7) with an explicit minimization over auxiliary variables $\lambda$ (see also [7]; an alternative interpretation is a Legendre transform):

$$- \sum_{x_\beta} P_\beta(x_\beta) \log P_\beta(x_\beta) = \min_{\lambda_\beta} \left[ - \sum_{x_\beta} \lambda_\beta(x_\beta) P_\beta(x_\beta) + \log \sum_{x_\beta} e^{\lambda_\beta(x_\beta)} \right] . \qquad (8)$$
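The identity (8) can be checked numerically: the bracketed objective is convex in $\lambda_\beta$, is invariant under adding a constant to $\lambda_\beta$, and attains its minimum, the entropy of $P_\beta$, at $\lambda_\beta = \log P_\beta$. A small sketch with an arbitrary three-state distribution:

```python
import math
import random

def dual_objective(p, lam):
    """Bracketed expression in Eq. (8) for multipliers lam."""
    return (-sum(l * pi for l, pi in zip(lam, p))
            + math.log(sum(math.exp(l) for l in lam)))

p = [0.2, 0.5, 0.3]
entropy = -sum(pi * math.log(pi) for pi in p)
# The minimum is attained at lam = log p, up to an additive constant ...
assert abs(dual_objective(p, [math.log(pi) for pi in p]) - entropy) < 1e-12
shifted = [math.log(pi) + 2.5 for pi in p]       # additive constants are irrelevant
assert abs(dual_objective(p, shifted) - entropy) < 1e-12
random.seed(0)
for _ in range(200):                             # ... and any other lam is worse
    lam = [random.uniform(-4.0, 4.0) for _ in p]
    assert dual_objective(p, lam) >= entropy - 1e-12
```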
Substitution into (6) then yields a constrained minimization problem, where the minimization is w.r.t. $\{P_\alpha, P_\beta, \lambda_\beta\}$ under constraints (5). Using (any other convex combination will work as well, but this symmetric one is most convenient)

$$P_\beta(x_\beta) = \frac{1}{n_\beta} \sum_{\alpha \ni \beta} P_\alpha(x_\beta)$$

we can get rid of all dependencies on $P_\beta$, both in (8) and in the constraints (5), which simplifies the following analysis and derivations considerably. For fixed $\lambda_\beta$, the remaining minimization problem is convex in $P_\alpha$ with linear constraints and can thus be solved with the method of Lagrange multipliers. In terms of these multipliers $\gamma$ and the auxiliary variables $\lambda$, the solution for $P_\alpha$ reads

$$P_\alpha(X_\alpha) = \frac{1}{Z_\alpha(\lambda, \gamma)} \Psi_\alpha(X_\alpha) \exp\left[ - \sum_{\beta \in \alpha} \left( \tilde\gamma_{\alpha\beta}(x_\beta) + \frac{n_\beta - 1}{n_\beta} \lambda_\beta(x_\beta) \right) \right] , \qquad (9)$$

with $Z_\alpha(\lambda, \gamma)$ the proper normalization and

$$\tilde\gamma_{\alpha\beta}(x_\beta) \equiv \gamma_{\alpha\beta}(x_\beta) - \frac{1}{n_\beta} \sum_{\alpha' \ni \beta} \gamma_{\alpha'\beta}(x_\beta) .$$

Substituting this back into the Lagrangian, we end up with an unconstrained saddle-point problem of the type $\min_\lambda \max_\gamma F(\lambda, \gamma)$ with

$$F(\lambda, \gamma) = \sum_\alpha \log Z_\alpha(\lambda, \gamma) \; - \; \sum_\beta (n_\beta - 1) \log\left[ \sum_{x_\beta} e^{\lambda_\beta(x_\beta)} \right] .$$

From the fixed-point equations we derive the updates

$$\gamma^{\rm new}_{\alpha\beta}(x_\beta) = \gamma_{\alpha\beta}(x_\beta) - \log P_\alpha(x_\beta) + \frac{1}{n_\beta} \sum_{\alpha' \ni \beta} \log P_{\alpha'}(x_\beta) , \qquad (10)$$

$$\lambda^{\rm new}_\beta(x_\beta) = \log\left[ \frac{1}{n_\beta} \sum_{\alpha \ni \beta} P_\alpha(x_\beta) \right] , \qquad (11)$$

with $P_\alpha(x_\beta)$ the marginal computed from $P_\alpha(X_\alpha)$ as in (9).
Proof. Introduce a new set of auxiliary variables $\tilde Z_\alpha$ by writing

$$- \log Z_\alpha = \max_{\tilde Z_\alpha} \left\{ - \log \tilde Z_\alpha + 1 - \frac{1}{\tilde Z_\alpha} \sum_{X_\alpha} P_\alpha(X_\alpha) Z_\alpha \right\} .$$

Next consider maximizing $\gamma_{\alpha\beta}(x_\beta)$ for a particular variable $\beta$ and all $\alpha \ni \beta$, while keeping all others as well as all $\tilde Z_\alpha$ fixed (by convention, we update $\tilde Z_\alpha$ to $Z_\alpha$ after each update of the $\gamma$'s). Taking derivatives, we find that the new $\tilde\gamma$ should satisfy

$$\frac{e^{-\tilde\gamma^{\rm new}_{\alpha\beta}(x_\beta)}}{e^{-\tilde\gamma_{\alpha\beta}(x_\beta)}} \, P_\alpha(x_\beta) = \frac{1}{n_\beta} \sum_{\alpha' \ni \beta} \frac{e^{-\tilde\gamma^{\rm new}_{\alpha'\beta}(x_\beta)}}{e^{-\tilde\gamma_{\alpha'\beta}(x_\beta)}} \, P_{\alpha'}(x_\beta) .$$

Any update of the form $\gamma^{\rm new}_{\alpha\beta}(x_\beta) = -\log P_\alpha(x_\beta) + \tilde\gamma_{\alpha\beta}(x_\beta) + c_\beta(x_\beta)$ will do, where choosing $c_\beta(x_\beta)$ such that $\gamma^{\rm new}_{\alpha\beta} = \tilde\gamma^{\rm new}_{\alpha\beta}$ yields (10).
The updates (10) and (11) are properly aligned with the respective gradients and satisfy the saddle-point equations

$$F(\lambda^{\rm new}, \gamma) \leq F(\lambda, \gamma) \leq F(\lambda, \gamma^{\rm new}) . \qquad (12)$$

This saddle-point problem is concave in $\gamma$, but not necessarily convex in $\lambda$. One way to guarantee convergence to a 'correct' saddle point is then to solve the (up to irrelevant linear translations unique) maximization with respect to $\gamma$ in an inner loop, followed by an update of $\lambda$ in the outer loop. This is precisely the double-loop algorithm sketched in the previous section. We obtain the description given in Algorithm 2 if we substitute (up to irrelevant constants)

$$\lambda_\beta(x_\beta) = \log P^{\rm old}_\beta(x_\beta) , \quad \tilde\gamma_{\alpha\beta}(x_\beta) = -\log \mu_{\beta \to \alpha}(x_\beta) , \quad \text{and} \quad \gamma_{\alpha\beta}(x_\beta) = -\log \mu_{\alpha \to \beta}(x_\beta) .$$

Note that in the inner loop of the double-loop algorithm the scheduling does matter. The ordering described in Algorithm 1 (run over variables $\beta$ and update all corresponding messages from and to neighboring factors before moving on to the next variable) satisfies (12) without damping.
An alternative approach is to apply (damped versions of) the updates (10) and (11) in parallel. This can be loosely interpreted as doing gradient descent-ascent. Gradient descent-ascent is a standard procedure for solving saddle-point problems and is guaranteed to converge to the correct solution if the saddle-point problem is indeed convex/concave (see e.g. [8]). Similarly, it is easy to show that gradient descent-ascent applied to a non-convex/concave problem is locally stable at a particular saddle point $\{\lambda^*, \gamma^*\}$ if and only if the objective is locally convex/concave. The statement in the title now follows from two observations.
1. The damped version (2) of the sum-product algorithm has the same local stability properties as a gradient descent-ascent procedure derived from (10) and (11).

Proof. We replace (11) with

$$\lambda^{\rm new}_\beta(x_\beta) = \frac{1}{n_\beta} \sum_{\alpha \ni \beta} \log P_\alpha(x_\beta) . \qquad (13)$$

At a saddle point $P_\alpha(x_\beta) = P_\beta(x_\beta) \; \forall_{\alpha \ni \beta}$, and thus the difference between the logarithmic average (13) and the linear average (11), as well as its derivatives, vanishes. Consequently, (13) has the same local stability properties as (11). Now consider parallel application of a damped version of (10), with step size $\epsilon$, and (13), with step size $n_\beta \epsilon$. We obtain the damped version (2) of the standard sum-product algorithm, in combination with the other definitions in Algorithm 1, when we apply the definitions

$$\log \mu_{\beta \to \alpha}(x_\beta) = -\left[ \tilde\gamma_{\alpha\beta}(x_\beta) + \frac{n_\beta - 1}{n_\beta} \lambda_\beta(x_\beta) \right] \quad \text{and} \quad \log \mu_{\alpha \to \beta}(x_\beta) = \frac{1}{n_\beta} \lambda_\beta(x_\beta) - \gamma_{\alpha\beta}(x_\beta) .$$
2. Local stability of the gradient descent-ascent procedure at $\{\lambda^*, \gamma^*\}$ implies that the corresponding $P_\alpha$ is at a minimum of the Bethe free energy and that all constraints are satisfied. The converse need not be the case.

Proof. Local stability of the gradient descent-ascent procedure, and thus of the sum-product algorithm, depends on the local curvature of $F(\lambda, \gamma)$, defined through the Hessian matrices

$$H_{\lambda\lambda} \equiv \left. \frac{\partial^2 F(\lambda, \gamma)}{\partial\lambda \, \partial\lambda^T} \right|_{\{\lambda^*, \gamma^*\}}$$
Figure 2: Loopy belief propagation on a Boltzmann machine with 4 nodes, weights (upper diagonal) (3, 2, 2; 1, 3; -3), and thresholds (0, 0, 1, 1). Plotted is the Kullback-Leibler divergence between the exact and the approximate single-node marginals. (a) No damping leads to somewhat erratic cyclic behavior. (b) Damping with step size 0.1 yields a smoother cycle, but no convergence. (c) The double-loop algorithm does converge to a stable solution. (d) This solution is unstable under standard loopy belief propagation (here again with step size 0.1).
and $H_{\gamma\gamma}$. Gradient descent-ascent is locally stable iff $H_{\lambda\lambda}$ is positive and $H_{\gamma\gamma}$ negative (semi-)definite. The latter is true by construction. The 'total' curvature, defined through

$$H^*_{\lambda\lambda} \equiv \left. \frac{\partial^2 F^*(\lambda)}{\partial\lambda \, \partial\lambda^T} \right|_{\lambda^*} \quad \text{with} \quad F^*(\lambda) \equiv \max_\gamma F(\lambda, \gamma) ,$$

can be shown to obey

$$H^*_{\lambda\lambda} = H_{\lambda\lambda} - H_{\lambda\gamma} H_{\gamma\gamma}^{-1} H_{\gamma\lambda} .$$

With $H_{\gamma\gamma}$ negative definite, we then conclude that if $H_{\lambda\lambda}$ is positive definite (gradient descent-ascent locally stable), then so is $H^*_{\lambda\lambda}$ (local minimum). The converse, however, need not be the case: $H^*_{\lambda\lambda}$ can be positive definite (minimum) where $H_{\lambda\lambda}$ has one or more negative eigenvalues (gradient descent-ascent unstable). An example of this phenomenon is $F(\lambda, \gamma) = -\lambda^2 - \gamma^2 + 4\lambda\gamma$.
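This counterexample is easy to simulate: gradient descent in $\lambda$ combined with gradient ascent in $\gamma$ spirals away from the saddle point at the origin, even though $F^*(\lambda) = \max_\gamma F(\lambda, \gamma) = 3\lambda^2$ has a strict minimum there. A sketch (step size and iteration count are arbitrary choices):

```python
def grad_F(lam, gam):
    """Gradients of F(lam, gam) = -lam**2 - gam**2 + 4*lam*gam."""
    return -2.0 * lam + 4.0 * gam, -2.0 * gam + 4.0 * lam

lam, gam, step = 0.01, 0.0, 0.05
for _ in range(2000):
    d_lam, d_gam = grad_F(lam, gam)
    # simultaneous descent in the min-variable lam, ascent in the max-variable gam
    lam, gam = lam - step * d_lam, gam + step * d_gam
# the iterates spiral away from the saddle point at the origin ...
assert lam * lam + gam * gam > 1.0
# ... although F*(lam) = max_gam F(lam, gam) = 3*lam**2 has a strict minimum at 0
F_star = lambda l: 3.0 * l * l
assert F_star(0.0) < F_star(0.5) < F_star(1.0)
```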
Non-convergence of loopy belief propagation on a Boltzmann machine is shown in Figure 2. Typically, standard loopy belief propagation converges to a stable solution without damping. In rare cases, damping is required to obtain convergence, and in very rare cases even considerable damping does not help, as in Figure 2. The double-loop algorithm does converge, and the solution obtained is indeed unstable under standard belief propagation, even with damping. The larger the weights, the more often these instabilities seem to occur. This is consistent with the empirical observation that the max-product algorithm ('belief revision') is typically less stable than the sum-product algorithm: max-product on a Boltzmann machine corresponds to (a properly scaled version of) the sum-product algorithm in the limit of infinite weights. The example in Figure 2 is about the smallest that we have found: we have observed these instabilities in many other (larger) instances of Markov networks, as well as directed Bayesian networks, yet not in structures with just a single loop. The latter seems consistent with the notion that not only for trees, but also for networks with a single loop, the Bethe free energy is still convex.
5 Discussion
The above gradient descent-ascent interpretation shows that loopy belief propagation is more than just fixed-point iteration: the updates tend to move in the right uphill-downhill directions, which might explain its success in practical applications. Still, loopy belief propagation can fail to converge, and apparently for two different reasons. The first, rather innocent one is a too large step size, similar to taking a too large 'learning parameter' in gradient-descent learning. Straightforwardly damping the updates, as in (2), is then sufficient to converge to a stable fixed point. Note that this damping is in the logarithmic domain and thus slightly different from the damping linear in the messages as described in [2]. The damping proposed in [7] is restricted to the Lagrange multipliers $\gamma$ and may therefore not share the nice properties of the damping discussed here. Local stability in the limit of small step sizes is independent of the scheduling of messages, but in practice particular schedules can still be favored over others and, for example, be stable with larger step sizes or converge more rapidly. For example, in [9] the message updates follow the structure of a spanning tree, which empirically seems to help a lot.
The other more serious reason for non-convergence is inherent instability of the
fixed point, even in the limit of infinitely small step sizes. In that case, loopy belief
propagation just does not work and one can resort to a more tedious double-loop
algorithm to guarantee convergence to a local minimum. The double-loop algorithm
described here is similar to the CCCP algorithm of [5]. The latter implicitly uses
a less strict bound, which makes it (slightly) less efficient and arguably a little
more complicated. Whether double-loop algorithms are worth the effort is an open
question: in several simulation studies a negative correlation between the quality
of the approximation and the convergence of standard belief propagation has been
found [6, 7, 10], but still without a convincing theoretical explanation.
Acknowledgments
I would like to thank Wim Wiegerinck and Onno Zoeter for many helpful suggestions and interesting discussions, and the Dutch Technology Foundation STW for support.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[2] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: An empirical study. In UAI '99, pages 467-475, 1999.
[3] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13, pages 689-695, 2001.
[4] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[5] A. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14:1691-1722, 2002.
[6] Y. Teh and M. Welling. The unified propagation and scaling algorithm. In NIPS 14, 2002.
[7] T. Minka. The EP energy function and minimization schemes. Technical report, MIT Media Lab, 2001.
[8] S. Seung, T. Richardson, J. Lagarias, and J. Hopfield. Minimax and Hamiltonian dynamics of excitatory-inhibitory networks. In NIPS 10, 1998.
[9] M. Wainwright, T. Jaakkola, and A. Willsky. Tree-based reparameterization for approximate estimation on loopy graphs. In NIPS 14, 2002.
[10] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In UAI-2002, pages 216-223, 2002.
Boosted Dyadic Kernel Discriminants
Baback Moghaddam
Mitsubishi Electric Research Laboratory
201 Broadway
Cambridge MA 02139 USA
[email protected]
Gregory Shakhnarovich
MIT AI Laboratory
200 Technology Square
Cambridge MA 02139 USA
[email protected]
Abstract
We introduce a novel learning algorithm for binary classification
with hyperplane discriminants based on pairs of training points
from opposite classes (dyadic hypercuts). This algorithm is further
extended to nonlinear discriminants using kernel functions satisfying Mercer's conditions. An ensemble of simple dyadic hypercuts is
learned incrementally by means of a confidence-rated version of AdaBoost, which provides a sound strategy for searching through the
finite set of hypercut hypotheses. In experiments with real-world
datasets from the UCI repository, the generalization performance
of the hypercut classifiers was found to be comparable to that of
SVMs and k-NN classifiers. Furthermore, the computational cost
of classification (at run time) was found to be similar to, or better than, that of SVM. Similarly to SVMs, boosted dyadic kernel
discriminants tend to maximize the margin (via AdaBoost). In
contrast to SVMs, however, we offer an on-line and incremental
learning machine for building kernel discriminants whose complexity (number of kernel evaluations) can be directly controlled (traded
off for accuracy).
1 Introduction
This paper introduces a novel algorithm for learning complex binary classifiers by
superposition of simpler hyperplane-type discriminants. In this algorithm, each of
the simple discriminants is based on the projection of a test point onto a vector
joining a dyad, defined as a pair of training data points with opposite labels. The
learning algorithm itself is based on a real-valued variant of AdaBoost [7], and the
hyperplane classifiers use kernels of the type used, e.g., by support vector machines
(SVMs) [9] for mapping linearly non-separable problems to high-dimensional feature
spaces.
When the concept class consists of linear discriminants (hyperplanes), this amounts
to using a hyperplane orthogonal to the vector connecting the points in a dyad.
We shall refer to such a classifier as a hypercut. By applying the same notion of
linear hypercuts to a nonlinearly transformed feature space obtained by Mercer-type kernels [3], we are able to implement nonlinear kernel discriminants similar in
form to SVMs.
In each iteration of AdaBoost, the space of all dyadic hypercuts is searched. It can
be easily shown that this hypothesis space spans the subspace of the data and that it
must include the optimal hyperplane discriminant. This notion is readily extended
to non-linear classifiers obtained by kernel transformations, by noting that in the
feature space, the optimal discriminant resides in the span of the transformed data.
Therefore, for both linear and nonlinear classification, searching the space of dyadic
hypercuts forms an efficient strategy for exploring the space of all hypotheses.
1.1 Related work
The most general framework to consider is the theory of potential functions for pattern classification [1], in which potential fields1 of the form

$$H(x) = \sum_i \alpha_i y_i K(x, x_i) \qquad (1)$$

are thresholded to predict classification labels, $\hat y = \mathrm{sign}(H(x))$. In a probabilistic kernel regression framework recently proposed in [5], the coefficients $\alpha$ that minimize the classification error are obtained by maximizing

$$J(\alpha) = -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j) + \sum_i F(\alpha_i) , \qquad (2)$$
where the potential function $F$ is concave and continuous (corresponding to positive semi-definite kernels). This framework subsumes SVMs, which correspond to the simplest case $F(\alpha) = \alpha$. Generalized linear models [6] can also be shown to be members of this class by considering logistic regression, where $F(\alpha)$ becomes the binary entropy function and $K$ is related to the covariance function of a Gaussian process classifier for the GLM's intermediate variables.
In this paper we propose and design classifiers with dyadic discriminants, which have potential functions of the form

$$H(x) = \sum_t \alpha_t K(x, x^p_t) - \alpha_t K(x, x^n_t) , \qquad (3)$$

where $x^p$ and $x^n$ are positively and negatively labeled data, respectively. The coefficients $\alpha_t$ are determined not by minimizing a convex quadratic function $J(\alpha)$ but rather by selecting an optimal classifier in the $t$-th iteration of AdaBoost. Thus the potential function is constrained to the form of a weighted sum of dyadic hypercuts, or differences of kernel functions. Another way to view this is to think of a pair of opposite-polarity 'basis vectors' sharing the same coefficient $\alpha_t$.
The most closely related potential function technique to ours is that of SVMs [9], where the classification margin (and thus the bound on generalization) is maximized by a simultaneous optimization with respect to all of the training points. However, there are important differences between SVMs and our iterative hypercut algorithm. In each step of the boosting process, we do not maximize the margin of the resulting strong classifier directly, which makes for a much simpler optimization task. Meanwhile, we are assured that with AdaBoost we tend to maximize (although in an asymptotic sense) the margin of the final classifier [7].

The most important difference that distinguishes our method from SVMs (and, by extension, from the general kernel discriminant family described above) is that

1 The physical analogy here is to the linear superposition of electrostatic charges of strength $\alpha_i$, polarity $y_i$ and location $x_i$, with distance defined by the kernel $K$.
the points in our dyads are not typically located near the decision boundary, as is the case with support vectors. As a result, the final set of 'basis vectors' used by the boosted strong classifier can be viewed as a representative subset of the data (i.e. those points needed for classification), whereas with SVMs the support vectors are simply the minimal number of training points needed to build (support) the decision boundary and are almost certainly not 'typical' or high-likelihood members of either class.2

The classification complexity of a kernel-based classifier, i.e. the cost of classifying a test point, depends on the number of kernel function evaluations on which the classifier is based. In the case of SVMs, there is (usually) no direct way of controlling this number (the quadratic programming solution will automatically determine all positive Lagrange multipliers). In our boosted hypercut algorithm, however, the number of dyadic 'basis vectors', and therefore of the required kernel evaluations, is determined by the number of iterations of the boosting algorithm and can therefore be controlled. Note that we are not referring here to the complexity of training classifiers, only to their run-time computational cost.
2 Methodology
Consider a binary classification task where we are given a training set of vectors $T = \{x_1, \ldots, x_M\}$ where $x \in R^N$, with corresponding labels $\{y_1, \ldots, y_M\}$ where $y \in \{-1, +1\}$. Let there be $M_p$ samples with label $+1$ and $M_n$ samples with label $-1$ so that $M = M_p + M_n$. Consider a simple linear hyperplane classifier defined by a discriminant function of the form

$$f(x) = \langle w \cdot x \rangle + b \qquad (4)$$

where $\mathrm{sign}(f(x)) \in \{+1, -1\}$ gives the binary classification.
Under certain assumptions, Gaussianity in particular, the optimal hyperplane, specified by the projection $w^*$ and bias $b^*$, is easily computed using standard statistical techniques based on class means and sample covariances for linear classifiers. However, in the absence of such assumptions, one must resort to searching for the optimal hyperplane. When searching for $w^*$, an efficient strategy is to consider only hyperplanes whose surface normal is parallel to the line joining a dyad $(x_i, x_j)$:

$$w_{ij} = \frac{x_i - x_j}{c} , \quad y_i \neq y_j , \; i < j \qquad (5)$$

where $y_i \neq y_j$ by definition, $i < j$ for uniqueness, and $c$ is a scale factor. The vector $w_{ij}$ is parallel to the line segment connecting the points in a dyad. Setting $c = \|x_i - x_j\|$ makes $w_{ij}$ a unit-norm direction vector.

The hypothesis space to be searched consists of $|\{w_{ij}\}| = M_p M_n$ hypercuts, each having a free bias parameter $b_{ij}$ which is typically determined by minimizing the weighted classification error (as we shall see in the next section). Each hypothesis is then given by the sign of the discriminant as in (4):

$$h_{ij}(x) = \mathrm{sign}(\langle w_{ij} \cdot x \rangle + b_{ij}) \qquad (6)$$

Let $\{h_{ij}\} = \{w_{ij}, b_{ij}\}$ denote the complete set of hypercuts for a given training set. Strictly speaking, this set is uncountable since $b_{ij}$ is continuous and arbitrary. However, since we always select one bias parameter for each hypercut $w_{ij}$, we do in fact end up with only $M_p M_n$ classifiers.
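The exhaustive search over dyadic hypercuts is straightforward to sketch: for every dyad, project the data onto the unit vector of Eq. (5) and choose the bias minimizing the weighted error over candidate thresholds. The code below is an illustrative sketch with invented names, not the authors' implementation; the bias candidates are taken as midpoints between consecutive sorted projections, which is an assumption on my part.

```python
def best_dyadic_hypercut(X, y, D):
    """Search all dyadic hypercuts (5)-(6) for the one with the smallest
    weighted error under the distribution D.  X: list of vectors, y: labels
    in {-1, +1}, D: weights summing to 1."""
    best = None
    pos = [i for i, yi in enumerate(y) if yi == +1]
    neg = [i for i, yi in enumerate(y) if yi == -1]
    for i in pos:
        for j in neg:
            w = [a - b for a, b in zip(X[i], X[j])]
            norm = sum(c * c for c in w) ** 0.5
            w = [c / norm for c in w]                  # unit-norm direction, Eq. (5)
            proj = [sum(wc * xc for wc, xc in zip(w, x)) for x in X]
            # candidate biases: midpoints between consecutive sorted projections
            s = sorted(proj)
            for b in [-(a + c) / 2.0 for a, c in zip(s, s[1:])]:
                # weighted error of sign(<w . x> + b), Eq. (6)
                err = sum(d for d, p, yi in zip(D, proj, y)
                          if (1 if p + b > 0 else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, w, b)
    return best     # (weighted error, direction w_ij, bias b_ij)
```

For separable data some dyad achieves zero weighted error; in general the returned hypercut is the weak hypothesis handed to the boosting loop of the next section.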
2 Although unrelated to our technique, the Relevance Vector Machine [8] is another kernel learning algorithm that tends to produce 'prototypical' basis vectors in the interior as opposed to the boundary of the distributions.
2.1 AdaBoost
The AdaBoost algorithm [4] provides a practical framework for combining a number
of weak classifiers into a strong final classifier by means of linear combination and
thresholding. AdaBoost works by maintaining over the training set an iteratively
evolving distribution (weights) D_t(i) based on the difficulty of classification (i.e.
points which are harder to classify have greater weight). Consequently, a "weak"
hypothesis h(x) : x -> {+1, -1} will have classification error eps_t weighted by
D_t. In our case, in each iteration t, we select from the complete set of M_p M_n
hypercuts {h_ij} the one which minimizes eps_t. The data are then re-weighted based on
their (mis)classification to obtain an updated distribution D_{t+1}.
The final classifier is a linear combination of the selected weak classifiers h_t and has
the form of a weighted "voting" scheme

    H(x) = sign( SUM_{t=1}^{T} alpha_t h_t(x) )                            (7)

where alpha_t = (1/2) ln((1 - eps_t)/eps_t). In [7] a framework was developed where
h_t(x) can be real-valued (as opposed to binary) and is interpreted as a
"confidence-rated prediction." The sign of h_t(x) is the predicted label while the
magnitude |h_t(x)| is the confidence. For such real-valued classifiers we have

    alpha_t = (1/2) ln((1 + r_t)/(1 - r_t))                                (8)

where the "correlation" r_t = SUM_i D_t(i) y_i h_t(x_i) is inversely related to the
error by eps_t = (1 - r_t)/2.
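The boosting loop above can be sketched in a few dozen lines. This is a simplified illustration rather than the authors' implementation: it uses binary linear hypercuts and, instead of searching over the bias b_ij, simply places each cut through the dyad midpoint.

```python
import numpy as np

def best_hypercut(X, y, D, dyads):
    """Pick the hypercut of minimum D-weighted error. For each dyad the bias is
    placed midway between the two dyad points -- a simplification of the bias
    search described in the text."""
    best = None
    for (i, j) in dyads:                      # i indexes a +1 point, j a -1 point
        w = X[i] - X[j]
        w = w / np.linalg.norm(w)
        b = -w @ (X[i] + X[j]) / 2.0          # plane through the dyad midpoint
        err = D[np.sign(X @ w + b) != y].sum()
        if best is None or err < best[0]:
            best = (err, w, b)
    return best

def adaboost_hypercuts(X, y, dyads, T=5):
    """AdaBoost over binary hypercuts: maintain weights D_t, select the weak
    hypothesis minimizing the weighted error eps_t, weight it by
    alpha_t = 0.5 * ln((1 - eps_t) / eps_t), and re-weight the data."""
    D = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(T):
        eps, w, b = best_hypercut(X, y, D, dyads)
        eps = np.clip(eps, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps)
        D = D * np.exp(-alpha * y * np.sign(X @ w + b))  # up-weight mistakes
        D = D / D.sum()
        ensemble.append((alpha, w, b))
    return lambda x: np.sign(sum(a * np.sign(x @ w + b) for a, w, b in ensemble))

# Toy separable data: three +1 points on the right, three -1 points on the left.
X = np.array([[2.0, 0.0], [3.0, 1.0], [2.0, 2.0],
              [-2.0, 0.0], [-3.0, 1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
dyads = [(i, j) for i in range(3) for j in range(3, 6)]   # all M_p * M_n pairs
H = adaboost_hypercuts(X, y, dyads)
```

On this separable toy set the very first round finds a zero-error cut, so the ensemble converges immediately; on harder data the re-weighting step drives later rounds toward the points earlier cuts misclassify.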
2.2 Nonlinear Hypercuts
The logical extension beyond the boosted linear dyadic discriminants described in
the previous section is that of nonlinear discriminants using positive definite kernels
as suggested in [3] for use with SVMs. In the resulting "reproducing kernel Hilbert
spaces", dot products between high-dimensional mappings Phi(x) : X -> F are easily
evaluated using Mercer kernels

    k(x, x') = <Phi(x) . Phi(x')>.                                         (9)
This has the desirable property that any algorithm based on dot products, e.g.
our linear hypercut classifier (6), can first nonlinearly transform its inputs (using
kernels) and implicitly perform dot products in the transformed space. The pre-image
of the linear hyperplane solution back in the input space is thus a nonlinear
hypersurface.

Applying the above kernel property to the hypercut concept (5), we can rewrite it
in nonlinear form by considering the linear hypercut in the transformed space F,
where the projection operator is

    w_ij = Phi(x_i) - Phi(x_j),    y_i != y_j,    i < j                   (10)

(we have absorbed the scale constant c in (5) into w_ij for simplicity in this case)
(see footnote 3).
Due to the implicit nature of the nonlinear mapping, we cannot directly evaluate
w_ij. However, we only need its dot product with the transformed input vectors

[Footnote 3] Since the optimal projection w*_ij must lie in the span of {Phi(x_i)}, we should
restrict the search for an optimal hyperplane accordingly, e.g. by considering pair-wise
hypercuts.
Phi(x). Considering the linear discriminant (4) and substituting the above, we obtain

    f_ij(x) = <(Phi(x_i) - Phi(x_j)) . Phi(x)> + b_ij,                    (11)

which by applying the kernel property (9) is equivalent to

    f_ij(x) = k(x, x_i) - k(x, x_j) + b_ij                                (12)

Note that f_ij now represents a single dyadic term in the potential function
introduced in (3). The binary-valued hypercut classifier is given by a simple
thresholding

    h_ij(x) = sign(f_ij(x)).                                              (13)

A "confidence-rated" classifier with output in the range [-1, +1] can be obtained by
passing f_ij through a bipolar sigmoidal nonlinearity such as a hyperbolic tangent

    h_ij(x) = tanh(beta f_ij(x))                                          (14)

where beta determines the "slope" of the sigmoid. We note that in order to obtain
a continuous-valued hypercut classifier that suitably occupies the range [-1, +1], it
may be necessary to experiment and adjust both constants c and beta.
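A minimal sketch of the confidence-rated nonlinear hypercut of (12) and (14), using a Gaussian RBF kernel; the kernel choice, bandwidth, slope beta = 2, and the two sample points are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rbf(x, z, sigma=1.0):
    """Gaussian RBF Mercer kernel k(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_hypercut(x, xi, xj, b=0.0, beta=2.0, k=rbf):
    """Confidence-rated nonlinear hypercut, Eqs. (12) and (14):
    f_ij(x) = k(x, x_i) - k(x, x_j) + b_ij, squashed by tanh(beta * f)."""
    return np.tanh(beta * (k(x, xi) - k(x, xj) + b))

xi = np.array([1.0, 1.0])     # a training point from the +1 class
xj = np.array([-1.0, -1.0])   # a training point from the -1 class
h_near_pos = kernel_hypercut(xi, xi, xj)   # query at x_i: strongly positive
h_near_neg = kernel_hypercut(xj, xi, xj)   # query at x_j: strongly negative
```

Near x_i the kernel difference is close to +1 and near x_j close to -1, so the tanh output saturates toward the confident ends of [-1, +1].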
The final classifier constructed by AdaBoost, following (7), is given by

    H(x) = sign( SUM_{t=1}^{T} alpha_t tanh( beta ( k(x, x_i^t) - k(x, x_j^t) + b_ij^t ) ) ),   (15)

where we have superscripted the elements of f_ij selected in iteration t of boosting.
Note that besides the monotonic sigmoid and offset transformation, this form is
essentially a (nonlinear) equivalent of the dyadic potential function of (3).
Assume, without loss of generality, that an equal number N/2 of d-dimensional
training points is available from each class, defining O(N^2) hypercuts. The values
of f_ij(x) for each hypercut and each training point (12) can be computed only once,
typically in O(d), and used in every iteration of the algorithm, making the setup
cost for the algorithm O(dN^3). Each iteration requires examination of all f_ij(x_k)
and takes O(N^3). To summarize, the cost of learning a classifier with K dyads is
O((d + K)N^3). It is important to note that both the setup step and the search
for an optimal hypercut in each iteration are naturally parallelizable, leading to a
reduction in time linear in the number of processors.
3 Experiments
Before applying our algorithm to standard benchmarks, we illustrate a simple 2D
example of nonlinear boosted dyadic hypercuts on a "toy" problem. Consider a
classification task on the dataset of 20 points (10 for each class) shown in Figure 1.
The hypercuts algorithm (using Gaussian kernels) was able to separate the classes
using two iterations (two cuts) as shown in Figure 1(a). Note how the dyads of
training points (connected by dashed lines) define the discriminant boundary. For
comparison, we used an SVM with Gaussian kernels on the same dataset, as shown
in Figure 1(b). Although the SVM has a wider margin, the same would be expected
from our algorithm with additional rounds of boosting.
The computational cost of classifying a point can be directly compared in terms of
the number of required kernel evaluations in (2), which dominate the computation
for high-dimensional data and kernels like Gaussians. For SVM, this is the number
of support vectors. For hypercuts, this is the number of distinct training points
Figure 1: A toy problem: classification based on (a) hypercuts (2 dyads), (b) SVM
(4 support vectors).
in the selected dyads. After n rounds of boosting this number is bounded by 2n,
since a point can participate in multiple dyads. For instance, the SVM in Figure 1
requires 4 kernel evaluations, compared to 3 for the boosted hypercuts.
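Counting the distinct training points across the selected dyads gives this run-time cost directly: each distinct point costs one kernel evaluation per test input, and after n rounds the count is bounded by 2n. A small sketch (the dyad index pairs below are made up for illustration):

```python
# Run-time cost of a boosted-hypercuts classifier is set by the number of
# *distinct* training points appearing in the selected dyads; a point may
# participate in several dyads, so after n rounds the count is at most 2n.
def kernel_evals(selected_dyads):
    """selected_dyads: list of (i, j) training-point index pairs chosen by boosting."""
    return len({idx for dyad in selected_dyads for idx in dyad})

dyads = [(0, 5), (1, 5), (0, 6)]   # 3 rounds of boosting, only 4 distinct points
n_ev = kernel_evals(dyads)
```

Here three rounds reuse points 0 and 5, so classification needs only 4 kernel evaluations rather than the worst-case 6.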
3.1 Experiments with real data sets
We evaluated the performance of the dyadic hypercuts algorithm on a number of
real-world data sets from the UCI repository [2], and compared the performance
to that of two established classification methods: SVM with Gaussian RBF kernel
and k-Nearest Neighbor (k-NN). We chose sets large enough for reasonable training/validation/test partitioning, and that represent binary (or easily converted to
binary) classification problems.
Dataset       N    d   k-NN         SVM          #SV       Hypercuts    #k.ev.
Heart         90  13   .196 ±.042   .202 ±.038    62 ±10   .202 ±.030    50 ±12
Ionosphere   120  34   .168 ±.024   .064 ±.018    73 ±7    .083 ±.022    63 ±7
WBC          200   9   .034 ±.011   .032 ±.008    50 ±26   .028 ±.007    30 ±12
WPBC          65  32   .250 ±.024   .243 ±.006    63 ±3    .253 ±.025    41 ±5
WDBC         190  30   .044 ±.015   .035 ±.013    67 ±15   .038 ±.014    47 ±12
Wine          60  13   .053 ±.030   .032 ±.022    40 ±9    .040 ±.026    23 ±4
Spam         150  57   .159 ±.025   .123 ±.016   101 ±8    .116 ±.019    73 ±15
Sonar         70  60   .227 ±.041   .226 ±.037    66 ±3    .202 ±.045    52 ±5
Pima         200   8   .267 ±.024   .244 ±.014   129 ±7    .260 ±.017   110 ±16

Table 1: The results of the experiments described in Section 3.1. N is the size of the
training set, d the dimension, #SV the number of support vectors for the SVM, and #k.ev.
the number of kernel evaluations required by a boosted hypercuts classifier. Means and
standard deviations in 30 trials are reported for each data set. WBC, WPBC, WDBC are
Wisconsin Breast Cancer, Prognosis and Diagnosis data sets, respectively.
In each experiment, the data set was randomly partitioned into training, validation
and test sets of similar sizes. The validation set was used to "tune" the parameters
of each of the classifiers (k for k-NN, sigma for the RBF kernels of SVMs and
hypercuts), by choosing from a suitable range the parameter value with lowest error
on the validation set. Each of the three classifiers was then trained with the chosen
parameter on the training set, and tested on the test set.

For each data set the above experiment was repeated 30 times. The columns
of Table 1, left to right, show the following, with means and standard deviations
over the 30 trials for each dataset: size of the training set, dimension, the test error
Figure 2: An example of the progress of training (dotted line) and test (solid line)
error in a run of the hypercuts algorithm with RBF kernel on the Spam data, as a
function of the number of AdaBoost iterations. The number of kernel evaluations in
the combined classifier (27, 58, 72, and 78) is shown for indicated points in the run.
The dashed line shows the test error of the SVM with RBF kernel (96 support vectors).

Dataset    10%    25%    50%
Heart      .202   .200   .197
Ion.       .178   .113   .094
WBC        .028   .028   .028
WPBC       .302   .269   .266
WDBC       .365   .384   .383
Wine       .064   .051   .043
Spam       .142   .124   .117
Sonar      .248   .233   .214
Pima       .269   .268   .263

Table 2: Test error as a function of the number of kernel evaluations allowed by the
user; the percentage values are relative to the number of SVs in each experiment.
Averaged over 30 trials for each data set.
of k-NN, the test error of SVM, the number of support vectors, the test error of
hypercuts, and the number of kernel evaluations in the final hypercuts classifier.
The size of the hypercuts classifier can be controlled via the number of AdaBoost
iterations, thus affecting the accuracy of the classifier. In our experiments boosting
was stopped after a prolonged plateau in the training error was observed; in some
cases, further continuation of boosting could lead to better results.
3.2 Discussion
The most important conclusion from these empirical results is that for all data sets,
the RBF boosted dyadic hypercuts achieve test performance statistically equivalent
to that of SVMs (see footnote 4), and usually better than that of k-NN classifiers,
while the complexity of the trained classifier is typically lower (in some cases,
which appear in bold in Table 1, the difference in complexity is significant).
In addition, our experiments demonstrate the trade-off between the complexity and
accuracy of the hypercuts. Figure 2 shows an example run of hypercuts algorithm
on Spam data set, with 150 training points. After 24 iterations, the test error of
the final classifier becomes consistently lower than that of SVM trained on the same
training set, which found 96 support vectors. At that point the classifier requires
27 kernel evaluations (about 28% of the number of SVs). The following 115 iterations achieve further improvement of only 1.8% in test error, while increasing the
required number of kernel evaluations to 78. Here the automatic criterion stopped
the AdaBoost after no significant improvement in training error was observed for 25
iterations. But the user can instead specify the desired bound on the complexity of
the classifier. Table 2 shows the behavior of test error as a function of the number
of kernel evaluations by the classifier, averaged over all 30 trials. For some data
sets, e.g. Heart and WBC, the hypercuts classifier with only 10% of the number
of kernel evaluations in an SVM already achieves comparable test error.
[Footnote 4] I.e., the difference of the means is within one standard deviation from both sides.

4 Conclusions
The contribution of this paper is two-fold. First, we proposed a family of simple
discriminants (hypercuts), based on pairs of training points from opposite classes
(dyads), and extended this family using a nonlinear mapping with Mercer-type
kernels. Second, we have designed a greedy selection algorithm based on boosting
with confidence-rated (real-valued) hypercut classifiers with continuous output in
the interval [-1,1].
This is a new kernel-based approach to classification. We have shown that this
approach performs on par with SVMs, without having to solve large QP problems.
In contrast, our algorithm allows the user to trade off the classifier's computational
complexity for its accuracy, and benefits from AdaBoost's exponential error
convergence and the assurance of asymptotic margin maximization.
The generalization performance of our algorithm was evaluated on a number of data
sets from the UCI repository, and demonstrated to be comparable to that of established state-of-the-art algorithms (SVMs, k-NN), often with reduced classification
time and reduced classifier size. We emphasize this performance advantage, since in
practical applications it is often desirable to minimize complexity even at the cost
of increased training time.
We are currently looking into optimal strategies for sampling the hypothesis space
(M_p M_n possible hypercuts) based on the distribution D_t(i), and forming hypercuts
that are not necessarily based on training samples but rather, for example, on cluster
centroids or other points derived from the input distribution. This has the potential
to dramatically reduce the computational cost of learning in the boosted hypercuts
algorithm, thus making it even more attractive for a practitioner.
References

[1] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the
potential function method in pattern recognition learning. Automation and Remote
Control, 25:821-837, 1964.

[2] C. L. Blake and C. J. Merz. UCI repository of machine learning databases.
[http://www.ics.uci.edu/~mlearn/MLRepository.html], 1998.

[3] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin
classifiers. In D. Haussler, editor, Proc. 5th Annual ACM Workshop on Computational
Learning Theory, pages 144-152. ACM Press, 1992.

[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning
and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.

[5] T. Jaakkola and D. Haussler. Probabilistic kernel regression models. In D. Heckerman
and J. Whittaker, editors, Proc. of 7th International Workshop on AI and Statistics.
Morgan Kaufmann, 1999.

[6] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, London,
1983.

[7] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated
predictions. In Proc. of 11th Annual Conf. on Computational Learning Theory,
pages 80-91, 1998.

[8] M. E. Tipping. The Relevance Vector Machine. In Advances in Neural Information
Processing Systems 12, pages 652-658. MIT Press, 2000.

[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
Knowledge-Based Support Vector
Machine Classifiers
Glenn M. Fung, Olvi L. Mangasarian and Jude W. Shavlik
Computer Sciences Department, University of Wisconsin
Madison, WI 53706
gfung, olvi, [email protected]
Abstract
Prior knowledge in the form of multiple polyhedral sets, each belonging to one of two categories, is introduced into a reformulation
of a linear support vector machine classifier. The resulting formulation leads to a linear program that can be solved efficiently. Real
world examples, from DNA sequencing and breast cancer prognosis,
demonstrate the effectiveness of the proposed method. Numerical
results show improvement in test set accuracy after the incorporation of prior knowledge into ordinary, data-based linear support
vector machine classifiers. One experiment also shows that a linear classifier, based solely on prior knowledge, far outperforms the
direct application of prior knowledge rules to classify data.
Keywords: use and refinement of prior knowledge, support vector machines, linear programming
1 Introduction
Support vector machines (SVMs) have played a major role in classification problems
[18, 3, 11]. However, unlike other classification tools such as knowledge-based neural
networks [16, 17, 7], little work [15] has gone into incorporating prior knowledge into
support vector machines. In this work we present a novel approach to incorporating
prior knowledge in the form of polyhedral knowledge sets in the input space of the
given data. These knowledge sets, which can be as simple as cubes, are supposed
to belong to one of two categories into which all the data is divided. Thus, a
single knowledge set can be interpreted as a generalization of a training example,
which typically consists of a single point in input space. In contrast, each of our
knowledge sets consists of a region in the same space. By using a powerful tool from
mathematical programming, theorems of the alternative [9, Chapter 2], we are able
to embed such prior data into a linear program that can be efficiently solved by any
of the publicly available solvers.
We briefly summarize the contents of the paper now. In Section 2 we describe the
linear support vector machine classifier and give a linear program for it. We then
describe how prior knowledge, in the form of polyhedral knowledge sets belonging to
one of two classes can be characterized. In Section 3 we incorporate these polyhedral
sets into our linear programming formulation which results in our knowledge-based
support vector machine (KSVM) formulation (19). This formulation is capable of
generating a linear classifier based on real data and/or prior knowledge. Section
4 gives a brief summary of numerical results that compare various linear and nonlinear classifiers with and without the incorporation of prior knowledge. Section 5
concludes the paper.
We now describe our notation. All vectors will be column vectors unless transposed
to a row vector by a prime '. The scalar (inner) product of two vectors x and y
in the n-dimensional real space R^n will be denoted by x'y. For a vector x in R^n,
the sign function sign(x) is defined as sign(x)_i = 1 if x_i > 0, else sign(x)_i = -1 if
x_i <= 0, for i = 1, ..., n. For x in R^n, ||x||_p denotes the p-norm, p = 1, 2, inf. The
notation A in R^{m x n} will signify a real m x n matrix. For such a matrix, A' will
denote the transpose of A and A_i will denote the i-th row of A. A vector of ones
in a real space of arbitrary dimension will be denoted by e. Thus for e in R^m and
y in R^m the notation e'y will denote the sum of the components of y. A vector
of zeros in a real space of arbitrary dimension will be denoted by 0. The identity
matrix of arbitrary dimension will be denoted by I. A separating plane, with respect
to two given point sets A and B in R^n, is a plane that attempts to separate R^n
into two halfspaces such that each open halfspace contains points mostly of A or
B. A bounding plane to the set A is a plane that places A in one of the two closed
halfspaces that the plane generates. The symbol /\ will denote the logical "and".
The abbreviation "s.t." stands for "such that".
2 Linear Support Vector Machines and Prior Knowledge
We consider the problem, depicted in Figure 1(a), of classifying m points in the
n-dimensional input space R^n, represented by the m x n matrix A, according to
membership of each point A_i in the class A+ or A- as specified by a given m x m
diagonal matrix D with plus ones or minus ones along its diagonal. For this problem,
the linear programming support vector machine [11, 2] with a linear kernel, which
is a variant of the standard support vector machine [18, 3], is given by the following
linear program with parameter nu > 0:

    min_{(w,gamma,y) in R^{n+1+m}}  { nu e'y + ||w||_1  |  D(Aw - e gamma) + y >= e, y >= 0 },   (1)
where ||.||_1 denotes the 1-norm as defined in the Introduction, y is a vector of
slack variables measuring empirical error, and (w, gamma) characterize a separating
plane depicted in Figure 1. That this problem is indeed a linear program can be
easily seen from the equivalent formulation:

    min_{(w,gamma,y,t) in R^{n+1+m+n}}  { nu e'y + e't  |  D(Aw - e gamma) + y >= e, t >= w >= -t, y >= 0 },   (2)

where e is a vector of ones of appropriate dimension. For economy of notation
we shall use the first formulation (1), with the understanding that computational
implementation is via (2). As depicted in Figure 1(a), w is the normal to the
bounding planes:

    x'w = gamma + 1,    x'w = gamma - 1,                                  (3)
that bound the points belonging to the sets A+ and A-, respectively. The constant
gamma determines their location relative to the origin. When the two classes are
strictly linearly separable, that is when the error variable y = 0 in (1) (which is the
case shown in Figure 1(a)), the plane x'w = gamma + 1 bounds all of the class A+
points, while the plane x'w = gamma - 1 bounds all of the class A- points as follows:

    A_i w >= gamma + 1, for D_ii = 1;    A_i w <= gamma - 1, for D_ii = -1.   (4)
Consequently, the plane:

    x'w = gamma,                                                          (5)

midway between the bounding planes (3), is a separating plane that separates points
belonging to A+ from those belonging to A- completely if y = 0, else only
approximately. The 1-norm term ||w||_1 in (1) is inversely proportional to the
distance 2/||w||_1, measured using the inf-norm distance [10], between the two
bounding planes of (3) (see Figure 1(a)); minimizing it thus maximizes this distance,
often called the "margin". Maximizing the margin enhances the generalization
capability of a support vector machine [18, 3]. If the classes are linearly inseparable,
then the two planes bound the two classes with a "soft margin" (i.e. bound
approximately with some error) determined by the nonnegative error variable y,
that is:

    A_i w + y_i >= gamma + 1, for D_ii = 1;    A_i w - y_i <= gamma - 1, for D_ii = -1.   (6)

The 1-norm of the error variable y is minimized parametrically with weight nu in
(1), resulting in an approximate separating plane (5) which classifies as follows:

    x in A+ if sign(x'w - gamma) = 1,    x in A- if sign(x'w - gamma) = -1.   (7)
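The geometry of (3)-(7) can be checked numerically. In this sketch the plane (w, gamma) and the data points are chosen by hand for illustration, rather than obtained by solving the linear program (1):

```python
import numpy as np

w = np.array([1.0, 0.0])   # normal of the separating plane; chosen by hand here,
gamma = 0.0                # not obtained by solving the linear program (1)

A_plus  = np.array([[2.0, 0.0], [3.0, -1.0], [1.5, 2.0]])     # class A+ points
A_minus = np.array([[-2.0, 1.0], [-1.5, 0.0], [-3.0, -2.0]])  # class A- points

def classify(x, w, gamma):
    """Classification rule (7): sign(x'w - gamma)."""
    return np.sign(x @ w - gamma)

# Strict separability (4): A_i w >= gamma + 1 on A+ and A_i w <= gamma - 1 on A-,
# i.e. the slack y vanishes in (1).
separable = bool(np.all(A_plus @ w >= gamma + 1) and
                 np.all(A_minus @ w <= gamma - 1))

# The infinity-norm distance between the bounding planes (3) is 2 / ||w||_1.
margin = 2.0 / np.abs(w).sum()
```

With this hand-picked plane both bounding-plane conditions hold with y = 0, and the margin evaluates to 2/||w||_1 = 2.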
Suppose now that we have prior information of the following type: all points x
lying in the polyhedral set determined by the linear inequalities

    Bx <= b,                                                              (8)

belong to class A+. Such inequalities generalize simple box constraints such as
a <= x <= d. Looking at Figure 1(a) or at the inequalities (4), we conclude that the
following implication must hold:

    Bx <= b  ==>  x'w >= gamma + 1.                                       (9)
That is, the knowledge set {x | Bx <= b} lies on the A+ side of the bounding plane
x'w = gamma + 1. Later, in (19), we will accommodate the case when the implication
(9) cannot be satisfied exactly, by the introduction of slack error variables. For now,
assuming that the implication (9) holds for a given (w, gamma), it follows that (9) is
equivalent to:

    Bx <= b, x'w < gamma + 1, has no solution x.                          (10)

This statement in turn is implied by the following statement:

    B'u + w = 0, b'u + gamma + 1 <= 0, u >= 0, has a solution (u, w).     (11)

To see this simple backward implication (10) <== (11), we suppose the contrary, that
there exists an x satisfying (10), and obtain the contradiction b'u > b'u as follows:

    b'u >= u'Bx = -w'x > -gamma - 1 >= b'u,                               (12)

where the first inequality follows by premultiplying Bx <= b by u' >= 0. In fact, under
the natural assumption that the prior knowledge set {x | Bx <= b} is nonempty,
the forward implication (10) ==> (11) is also true, as a direct consequence of the
nonhomogeneous Farkas theorem of the alternative [9, Theorem 2.4.8]. We state
this equivalence as the following key proposition of our knowledge-based approach.
Proposition 2.1 (Knowledge Set Classification). Let the set {x | Bx <= b} be
nonempty. Then, for a given (w, gamma), the implication (9) is equivalent to the
statement (11). In other words, the set {x | Bx <= b} lies in the halfspace
{x | w'x >= gamma + 1} if and only if there exists u such that B'u + w = 0,
b'u + gamma + 1 <= 0 and u >= 0.
Proof. We establish the equivalence of (9) and (11) by showing the equivalence of
(10) and (11). By the nonhomogeneous Farkas theorem [9, Theorem 2.4.8] we have
that (10) is equivalent to either:

    B'u + w = 0, b'u + gamma + 1 <= 0, u >= 0, having solution (u, w),    (13)

or

    B'u = 0, b'u < 0, u >= 0, having solution u.                          (14)

However, the second alternative (14) contradicts the nonemptiness of the knowledge
set {x | Bx <= b}, because for x in this set and u solving (14) we get the contradiction:

    0 >= u'(Bx - b) = x'B'u - b'u = -b'u > 0.                             (15)

Hence (14) is ruled out and we have that (10) is equivalent to (13), which is (11). []
This proposition will play a key role in incorporating knowledge sets, such as
{x I Bx ::; b}, into one of two categories in a support vector classifier formulation as demonstrated in the next section.
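A small numerical illustration of Proposition 2.1: for a box-shaped knowledge set, a multiplier u >= 0 satisfying (11) certifies that the whole set lies in the halfspace {x | x'w >= gamma + 1}. All of the numbers below are illustrative choices, not data from the paper:

```python
import numpy as np

# Box knowledge set {x | Bx <= b} encoding 1 <= x <= 2 in R^2:
# B stacks I and -I; b stacks the upper bounds and minus the lower bounds.
B = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([2.0, 2.0, -1.0, -1.0])
w = np.array([3.0, 3.0])   # hand-chosen plane normal
gamma = 4.0                # hand-chosen plane location

# A multiplier u >= 0 certifying statement (11).
u = np.array([0.0, 0.0, 3.0, 3.0])
cert_eq   = np.allclose(B.T @ u + w, 0.0)    # B'u + w = 0
cert_ineq = bool(b @ u + gamma + 1 <= 0)     # b'u + gamma + 1 <= 0

# Implication (9): every x in the knowledge set satisfies x'w >= gamma + 1.
rng = np.random.default_rng(0)
xs = rng.uniform(1.0, 2.0, size=(100, 2))    # samples from the box
implication = bool(np.all(xs @ w >= gamma + 1))
```

Here b'u + gamma + 1 = -6 + 5 = -1 <= 0, and indeed the smallest value of x'w over the box is 6 >= gamma + 1 = 5.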
Figure 1: (a) A linear SVM separation for 200 points in R^2 using the linear
programming formulation (1), showing the separating plane x'w = gamma and the
bounding planes x'w = gamma + 1 and x'w = gamma - 1. (b) A linear SVM separation
for the same 200 points in R^2 as those in Figure 1(a), but using the linear
programming formulation (19), which incorporates three knowledge sets:
{x | B^1 x <= b^1} into the halfspace of A+, and {x | C^1 x <= c^1},
{x | C^2 x <= c^2} into the halfspace of A-, as depicted above. Note the substantial
difference between the linear classifiers x'w = gamma of both figures.
3 Knowledge-Based SVM Classification
We describe now how to incorporate prior knowledge in the form of polyhedral sets
into our linear programming SVM classifier formulation (1).
We assume that we are given the following knowledge sets:

    k sets belonging to A+:  {x | B^i x <= b^i},  i = 1, ..., k
    l sets belonging to A-:  {x | C^i x <= c^i},  i = 1, ..., l           (16)
It follows by Proposition 2.1 that, relative to the bounding planes (3):

    There exist u^i, i = 1, ..., k, and v^j, j = 1, ..., l, such that:
    B^i' u^i + w = 0,   b^i' u^i + gamma + 1 <= 0,   u^i >= 0,   i = 1, ..., k
    C^j' v^j - w = 0,   c^j' v^j - gamma + 1 <= 0,   v^j >= 0,   j = 1, ..., l   (17)
We now incorporate the knowledge sets (16) into the SVM linear programming formulation (1), by adding the conditions (17) as constraints to it as follows:
    min_{w, gamma, (y, u^i, v^j) >= 0}   nu e'y + ||w||_1
    s.t.   D(Aw - e gamma) + y >= e
           B^i' u^i + w = 0,          i = 1, ..., k
           b^i' u^i + gamma + 1 <= 0,   i = 1, ..., k
           C^j' v^j - w = 0,          j = 1, ..., l
           c^j' v^j - gamma + 1 <= 0,   j = 1, ..., l                     (18)
This linear programming formulation will ensure that each of the knowledge sets
{x | B^i x <= b^i}, i = 1, ..., k and {x | C^i x <= c^i}, i = 1, ..., l lies on the
appropriate side of the bounding planes (3). However, there is no guarantee that
such bounding planes exist that will precisely separate these two classes of
knowledge sets, just as there is no a priori guarantee that the original points
belonging to the sets A+ and A- are linearly separable. We therefore add error
variables r^i, rho^i, i = 1, ..., k, and s^j, sigma^j, j = 1, ..., l, just like the slack
error variable y of the SVM formulation (1), and attempt to drive these error
variables to zero by modifying our last formulation above as follows:
    min_{w, γ, (y, u^i, r^i, ρ^i, v^j, s^j, σ^j) ≥ 0}   νe'y + μ ( Σ_{i=1..k} (e'r^i + ρ^i) + Σ_{j=1..ℓ} (e's^j + σ^j) ) + ||w||_1
    s.t.   D(Aw − eγ) + y ≥ e
           −r^i ≤ B^i' u^i + w ≤ r^i,    b^i' u^i + γ + 1 ≤ ρ^i,    i = 1, ..., k
           −s^j ≤ C^j' v^j − w ≤ s^j,    c^j' v^j − γ + 1 ≤ σ^j,    j = 1, ..., ℓ        (19)
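Formulation (19) is an ordinary linear program and can be assembled for any off-the-shelf LP solver. The sketch below is our own illustration, not code from the paper: the function name ksvm_fit, the block layout of the LP variables, and the toy data in the test are assumptions. It linearizes the 1-norm ||w||_1 with auxiliary variables a ≥ |w| and solves the result with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def ksvm_fit(A, d, plus_sets=(), minus_sets=(), nu=1.0, mu=1.0):
    """Sketch of the knowledge-based linear program (19).

    A: (m, n) training points; d: labels in {+1, -1};
    plus_sets / minus_sets: lists of (B, b) describing {x : B x <= b}
    required on the positive / negative side of the plane x'w = gamma."""
    m, n = A.shape
    # variable blocks: w (free), gamma (free), a = |w|, y, then per set: u, r, rho
    sizes = [n, 1, n, m]
    for B, _ in list(plus_sets) + list(minus_sets):
        sizes += [B.shape[0], n, 1]
    offs = np.concatenate(([0], np.cumsum(sizes)))
    N = offs[-1]

    cost = np.zeros(N)
    cost[offs[2]:offs[2] + n] = 1.0        # ||w||_1 via a
    cost[offs[3]:offs[3] + m] = nu         # nu e'y
    for blk in range(4, len(sizes), 3):
        cost[offs[blk + 1]:offs[blk + 1] + n] = mu   # mu e'r (or e's)
        cost[offs[blk + 2]] = mu                     # mu rho (or sigma)

    rows, rhs = [], []
    # |w| linearization: w - a <= 0 and -w - a <= 0
    for k in range(n):
        r = np.zeros(N); r[k] = 1.0; r[offs[2] + k] = -1.0
        rows.append(r); rhs.append(0.0)
        r = np.zeros(N); r[k] = -1.0; r[offs[2] + k] = -1.0
        rows.append(r); rhs.append(0.0)
    # D(Aw - e gamma) + y >= e
    for i in range(m):
        r = np.zeros(N)
        r[:n] = -d[i] * A[i]; r[n] = d[i]; r[offs[3] + i] = -1.0
        rows.append(r); rhs.append(-1.0)
    # knowledge-set constraints of (19)
    blk = 4
    for sgn, sets in ((1.0, plus_sets), (-1.0, minus_sets)):
        for B, b in sets:
            ou, orr, orho = offs[blk], offs[blk + 1], offs[blk + 2]
            kB = B.shape[0]
            for j in range(n):               # -r <= B'u + sgn*w <= r
                r = np.zeros(N); r[ou:ou + kB] = B[:, j]; r[j] = sgn
                r[orr + j] = -1.0
                rows.append(r); rhs.append(0.0)
                r = np.zeros(N); r[ou:ou + kB] = -B[:, j]; r[j] = -sgn
                r[orr + j] = -1.0
                rows.append(r); rhs.append(0.0)
            r = np.zeros(N)                  # b'u + sgn*gamma + 1 <= rho
            r[ou:ou + kB] = b; r[n] = sgn; r[orho] = -1.0
            rows.append(r); rhs.append(-1.0)
            blk += 3

    bounds = [(None, None)] * (n + 1) + [(0, None)] * (N - n - 1)
    res = linprog(cost, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[n]
```

On a small two-dimensional problem with one knowledge set per class, in the spirit of the synthetic example of Figure 1, the returned plane classifies points via sign(x'w − γ).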
This is our final knowledge-based linear programming formulation, which incorporates the knowledge sets (16) into the linear classifier with weight μ, while the (empirical) error term e'y is given weight ν. As usual, the values of these two parameters, ν and μ, are chosen by means of a tuning set extracted from the training set. If we set μ = 0 then the linear program (19) degenerates to (1), the linear program associated with an ordinary linear SVM. However, if we set ν = 0, then the linear program (19) generates a linear SVM that is strictly based on knowledge sets, but not on any specific training data. This might be a useful paradigm for situations where training datasets are not easily available, but expert knowledge, such as doctors' experience in diagnosing certain diseases, is readily available. This will be demonstrated on the breast cancer dataset of Section 4.
Note that the 1-norm term ||w||_1 can be replaced by one half the 2-norm squared, (1/2)||w||_2^2, which is the usual margin maximization term for ordinary support vector machine classifiers [18, 3]. However, this changes the linear program (19) to a quadratic program, which typically takes longer to solve.
For standard SVMs, support vectors consist of all data points which are the complement of the data points that can be dropped from the problem without changing
the separating plane (5) [18, 11]. Thus for our knowledge-based linear programming
formulation (19), support vectors correspond to data points (rows of the matrix A)
for which the Lagrange multipliers are nonzero, because solving (19) with these data
points only will give the same answer as solving (19) with the entire matrix A.
The concept of support vectors has to be modified as follows for our knowledge sets. Since each knowledge set in (16) is represented by a matrix B^i or C^j, each row of these matrices can be thought of as characterizing a boundary plane of the knowledge set. In our formulation (19) above, such rows are wiped out if the corresponding components of the variables u^i or v^j are zero at an optimal solution. We call the complement of these components of the knowledge sets (16) support constraints. Deleting constraints (rows of B^i or C^j) for which the corresponding components of u^i or v^j are zero will not alter the solution of the knowledge-based linear program (19). This in fact is corroborated by numerical tests that were carried out. Deletion of non-support constraints can be considered a refinement of prior knowledge [17]. Another type of refinement of prior knowledge may occur when the separating plane x'w = γ intersects one of the knowledge sets. In such a case the plane x'w = γ can be added as an inequality to the knowledge set it intersects. This is illustrated in the following example.
We demonstrate the geometry of incorporating knowledge sets by considering a synthetic example in R2 with m = 200 points, 100 of which are in A+ and the other 100 in A−. Figure 1(a) depicts ordinary linear separation using the linear SVM formulation (1). We now incorporate three knowledge sets into the problem: {x | B^1 x ≤ b^1} belonging to A+, and {x | C^1 x ≤ c^1} and {x | C^2 x ≤ c^2} belonging to A−, and solve our linear program (19) with μ = 100 and ν = 1. We depict the new linear separation in Figure 1(b) and note the substantial change generated in the linear separation by the incorporation of these three knowledge sets. Also note that since the plane x'w = γ intersects the knowledge set {x | B^1 x ≤ b^1}, this knowledge set can be refined to {x | B^1 x ≤ b^1, w'x ≥ γ}.
4 Numerical Testing
Numerical tests, which are described in detail in [6], were carried out on the DNA promoter recognition dataset [17] and the Wisconsin prognostic breast cancer dataset WPBC (ftp://ftp.cs.wisc.edu/math-prog/cpo-dataset/machine-learn/cancer/WPBC/). We briefly summarize these results here.
Our first dataset, the promoter recognition dataset, is from the domain of DNA sequence analysis. A promoter, which is a short DNA sequence that precedes a gene sequence, is to be distinguished from a nonpromoter. Promoters are important in identifying starting locations of genes in long uncharacterized sequences of DNA. The prior knowledge for this dataset, which consists of a set of 14 prior rules, matches none of the examples of the training set. Hence these rules by themselves cannot serve as a classifier. However, they do capture significant information about promoters, and it is known that incorporating them into a classifier results in a more accurate classifier [17]. These 14 prior rules were converted in a straightforward manner [6] into 64 knowledge sets. Following the methodology used in prior work [17], we tested our algorithm on this dataset together with the knowledge sets, using a "leave-one-out" cross-validation methodology in which the entire training set of 106 elements is repeatedly divided into a training set of size 105 and a test set of size 1. The values of ν and μ associated with both KSVM and SVM1 [2] were obtained by a tuning procedure which consisted of varying them on a square grid: {2^-6, 2^-5, ..., 2^6} × {2^-6, 2^-5, ..., 2^6}. After expressing the prior knowledge in the form of polyhedral sets and applying KSVM, we obtained 5 errors out of 106 (5/106). KSVM gave a much better performance than five other methods that do not use prior knowledge: the standard 1-norm support vector machine [2] (9/106), Quinlan's decision tree builder [13] (19/106), the PEBLS nearest-neighbor algorithm [4] with k = 3 (13/106), an empirical method suggested by a biologist based on a collection of "filters" to be used for promoter recognition, known as O'Neill's method [12] (12/106), and neural networks with a single connected layer of hidden units trained using back-propagation [14] (8/106). Except for KSVM and SVM1, all of these results are taken from an earlier report [17]. KSVM was also compared
with [16], where a hybrid learning system maps problem-specific prior knowledge, represented in propositional logic, into neural networks and then refines this reformulated knowledge using back-propagation. This method is known as Knowledge-Based Artificial Neural Networks (KBANN). KBANN was the only approach that performed slightly better than our algorithm, obtaining 4 misclassifications compared to our 5. However, it is important to note that our classifier is a much simpler linear classifier, sign(x'w − γ), while the neural network classifier of KBANN is a considerably more complex nonlinear classifier. Furthermore, we note that KSVM is simpler to implement than KBANN and requires merely a commonly available linear programming solver. In addition, KSVM, which is a linear support vector machine classifier, improves by 44.4% on the error of an ordinary linear 1-norm SVM classifier that does not utilize prior knowledge sets.
The second dataset used in our numerical tests was the Wisconsin breast cancer prognosis dataset WPBC, using a 60-month cutoff for predicting recurrence or nonrecurrence of the disease [2]. The prior knowledge utilized in this experiment consisted of the prognosis rules used by doctors [8], which depend on two features from the dataset: tumor size T (feature 31), that is, the diameter of the excised tumor in centimeters, and lymph node status L (feature 32), the number of metastasized axillary lymph nodes. The rules are:
    (L ≥ 5) ∧ (T ≥ 4)   ⟹   RECUR
and
    (L = 0) ∧ (T ≤ 1.9)   ⟹   NONRECUR
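These two rules translate directly into the polyhedral knowledge-set form {x | Bx ≤ b} of (16). The following sketch (our own illustration; the variable names are not from the paper) encodes each rule's antecedent over the feature vector x = (T, L), rewriting each "≥" as a negated "≤" row and the equality L = 0 as the pair L ≤ 0 and −L ≤ 0.

```python
import numpy as np

# RECUR antecedent {(T, L) : T >= 4 and L >= 5} as {x : B x <= b}
B_recur = np.array([[-1.0, 0.0],
                    [0.0, -1.0]])
b_recur = np.array([-4.0, -5.0])

# NONRECUR antecedent {(T, L) : T <= 1.9 and L = 0}, with L = 0 split
# into the two inequalities L <= 0 and -L <= 0
B_nonrecur = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [0.0, -1.0]])
b_nonrecur = np.array([1.9, 0.0, 0.0])

def in_knowledge_set(B, b, x):
    # membership test for the polyhedral set {x : B x <= b}
    return bool(np.all(B @ x <= b + 1e-12))
```

Matrices built this way can then serve as the (B^i, b^i) and (C^j, c^j) inputs of the linear program (19).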
It is important to note that the rules described above can be applied directly to classify only 32 of the 110 given points of the training dataset, and correctly classify 22 of these 32 points. The remaining 78 points are not classifiable by the above rules. Hence, if the rules are applied as a classifier by themselves, the classification accuracy would be 20%. As such, these rules are not very useful by themselves, and doctors use them in conjunction with other rules [8]. However, using our approach, the rules were converted to linear inequalities and used in our KSVM algorithm without any use of the data, i.e. ν = 0 in the linear program (19). The resulting linear classifier in the 2-dimensional space of L(ymph) and T(umor) achieved 66.4% accuracy. The ten-fold, cross-validated test set correctness achieved by a standard SVM using all the data is 66.2% [2]. This result is remarkable because our knowledge-based formulation can be applied to problems where training data may not be available, whereas expert knowledge may be readily available in the form of knowledge sets. This fact makes this method considerably different from previous hybrid methods like KBANN, where training examples are needed in order to refine prior knowledge. If training data are added to this knowledge-based formulation, no noticeable improvement is obtained.
5 Conclusion & Future Directions
We have proposed an efficient procedure for incorporating prior knowledge in the form of knowledge sets into a linear support vector machine classifier, either in combination with a given dataset or based solely on the knowledge sets. This novel and promising approach to handling prior knowledge is worthy of further study, especially of ways to handle and simplify the combinatorial nature of incorporating prior knowledge into linear inequalities. A class of possible future applications might be problems where training data may not be easily available whereas expert knowledge may be readily available in the form of knowledge sets. This would correspond to solving our knowledge-based linear program (19) with ν = 0. A typical example of this type was breast cancer prognosis [8], where knowledge sets by themselves generated a linear classifier as good as any classifier based on data points. This is a new way of incorporating prior knowledge into powerful support vector machine classifiers. Also, the concept of support constraints, as discussed at the end of Section 3, warrants further study that may lead to a systematic simplification of prior knowledge sets. Other avenues of research include knowledge sets characterized by nonpolyhedral convex sets, as well as nonlinear kernels [18, 11], which are capable of handling more complex classification problems, and the incorporation of prior knowledge into multiple instance learning [1, 5], which might lead to improved classifiers in that field.
Acknowledgments
Research in this UW Data Mining Institute Report 01-09, November 2001, was supported by NSF Grants CCR-9729842, IRI-9502990 and CDA-9623632, by AFOSR
Grant F49620-00-1-0085, by NLM Grant 1 ROI LM07050-01, and by Microsoft.
References
[1] P. Auer. On learning from multi-instance examples: Empirical evaluation of a theoretical approach. Pages 21-29, 1997.
[2] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In J. Shavlik, editor, Machine Learning Proceedings of the Fifteenth International Conference (ICML '98), pages 82-90, San Francisco, California, 1998. Morgan Kaufmann. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-03.ps.
[3] V. Cherkassky and F. Mulier. Learning from Data - Concepts, Theory and Methods. John Wiley & Sons, New York, 1998.
[4] S. Cost and S. Salzberg. A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning, 10:57-78, 1993.
[5] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31-71, 1997.
[6] G. Fung, O. L. Mangasarian, and J. Shavlik. Knowledge-based support vector machine classifiers. Technical Report 01-09, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, November 2001. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-09.ps.
[7] F. Girosi and N. Chan. Prior knowledge and the creation of "virtual" examples for RBF networks. In Neural Networks for Signal Processing, Proceedings of the 1995 IEEE-SP Workshop, pages 201-210, New York, 1995. IEEE Signal Processing Society.
[8] Y.-J. Lee, O. L. Mangasarian, and W. H. Wolberg. Survival-time classification of breast cancer patients. Technical Report 01-03, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, March 2001. Computational Optimization and Applications, to appear. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-03.ps.
[9] O. L. Mangasarian. Nonlinear Programming. SIAM, Philadelphia, PA, 1994.
[10] O. L. Mangasarian. Arbitrary-norm separating plane. Operations Research Letters, 24:15-23, 1999. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/97-07r.ps.
[11] O. L. Mangasarian. Generalized support vector machines. In A. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 135-146, Cambridge, MA, 2000. MIT Press. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-14.ps.
[12] M. C. O'Neill. Escherichia coli promoters: I. Consensus as it relates to spacing class, specificity, repeat substructure, and three-dimensional organization. Journal of Biological Chemistry, 264:5522-5530, 1989.
[13] J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
[14] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, pages 318-362, Cambridge, Massachusetts, 1986. MIT Press.
[15] B. Scholkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems 10, pages 640-646, Cambridge, MA, 1998. MIT Press.
[16] G. G. Towell and J. W. Shavlik. Knowledge-based artificial neural networks. Artificial Intelligence, 70:119-165, 1994.
[17] G. G. Towell, J. W. Shavlik, and M. Noordewier. Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pages 861-866, 1990.
[18] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, second edition, 2000.
Side Information
Gal Chechik and Naftali Tishby
ggal,tishby @cs.huji.ac.il
School of Computer Science and Engineering and
The Interdisciplinary Center for Neural Computation
The Hebrew University of Jerusalem, 91904, Israel
Abstract
The problem of extracting the relevant aspects of data, in face of multiple
conflicting structures, is inherent to modeling of complex data. Extracting structure in one random variable that is relevant for another variable
has been principally addressed recently via the information bottleneck
method [15]. However, such auxiliary variables often contain more information than is actually required due to structures that are irrelevant
for the task. In many other cases it is in fact easier to specify what is
irrelevant than what is, for the task at hand. Identifying the relevant
structures, however, can thus be considerably improved by also minimizing the information about another, irrelevant, variable. In this paper
we give a general formulation of this problem and derive its formal, as
well as algorithmic, solution. Its operation is demonstrated in a synthetic
example and in two real world problems in the context of text categorization and face images. While the original information bottleneck problem
is related to rate distortion theory, with the distortion measure replaced
by the relevant information, extracting relevant features while removing
irrelevant ones is related to rate distortion with side information.
1 Introduction
A fundamental goal of machine learning is to find regular structures in a given empirical
data, and use it to construct predictive or comprehensible models. This general goal, unfortunately, is very ill defined, as many data sets contain alternative, often conflicting, underlying structures. For example, documents may be classified either by subject or by writing
style; spoken words can be labeled by their meaning or by the identity of the speaker; proteins can be classified by their structure or function - all are valid alternatives. Which of
these alternative structures is ?relevant? is often implicit in the problem formulation.
The problem of identifying "the" relevant structures is commonly addressed in supervised learning tasks, by providing a "relevant" label to the data, and selecting features that are
discriminative with respect to this label. An information theoretic generalization of this supervised approach has been proposed in [9, 15] through the information bottleneck method
(IB). In this approach, relevance is introduced through another random variable (as is the
label in supervised learning) and the goal is to compress one (the source) variable, while
maintaining as much information about the auxiliary (relevance) variable. This framework
has proven powerful for numerous applications, such as clustering the objects of sentences
with respect to the verbs [9], documents with respect to their terms [1, 6, 14], genes with
respect to tissues [8, 11], and stimuli with respect to spike patterns [10].
An important condition for this approach to work is that the auxiliary variable indeed corresponds to the task. In many situations, however, such ?pure? variable is not available.
The variable may in fact contain alternative and even conflicting structures. In this paper
we show that this general and common problem can be alleviated by providing "negative information", i.e. information about "unimportant", or irrelevant, aspects of the data that
can interfere with the desired structure during the learning.
As an illustration, consider a simple nonlinear regression problem. Two variables x and y are related through a functional form y = f(x) + ξ, where f is in some known function class and ξ is noise with some distribution that depends on x. When given a sample of (x, y) pairs with the goal of extracting the relevant dependence f(x), the noise ξ, which may contain information on x and thus interfere with extracting f, is an irrelevant variable. Knowing the joint distribution of (x, ξ) can of course improve the regression result.
A more ?real life? example can be found in the analysis of gene expression data. Such
data, as generated by the DNA-chips technology, can be considered as an empirical joint
distribution of gene expression levels and different tissues, where the tissues are taken from
different biological conditions and pathologies. The search for expressed genes that testify
for the existence of a pathology may be obscured by genetic correlations that exist also in
other conditions. Here again a sample of irrelevant expression data, taken for instance from
a healthy population, can enable clustering analysis to focus on the pathological features
only, and ignore spurious structures.
These two examples, and numerous others, are all instantiations of a common problem: in
order to better extract the relevant structures information about the irrelevant components
of the data should be incorporated. Naturally, various solutions have been suggested to this
basic problem in many different contexts (e.g. spectral subtraction, weighted regression
analysis). The current paper presents a general unified information theoretic framework for
such problems, extending the original information bottleneck variational problem to deal
with discriminative tasks of that nature, by observing its analogy with rate distortion theory
with side information.
2 Information Theoretic Formulation
To formalize the problem of extracting relevant structures, consider first three categorical variables X, Y+ and Y− whose co-occurrence distributions are known. Our goal is to uncover structures in P(X, Y+) that do not exist in P(X, Y−). The distribution P(X, Y+) may contain several conflicting underlying structures, some of which may also exist in P(X, Y−). These variables stand, for example, for a set of terms X, a set of documents Y+ whose structure we seek, and an additional set of documents Y−, or a set of genes and two sets of tissues with different biological conditions. In all these examples Y+ and Y− are conditionally independent given X. We thus make the assumption that the joint distribution factorizes as:

    P(X, Y+, Y−) = P(X) P(Y+|X) P(Y−|X).
The relationship between the variables can be expressed by a Venn diagram (Figure 1A), where the area of each circle corresponds to the entropy of a variable (see e.g. [2] p. 20 and [3] p. 50 for a discussion of this type of diagram) and the intersection of two circles corresponds to their mutual information. The mutual information of two random variables is the familiar symmetric functional of their joint distribution,

    I(X;Y) = Σ_{x,y} p(x,y) log [ p(x,y) / ( p(x) p(y) ) ].
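For discrete variables this quantity is a few lines of code; the sketch below (our own helper, not part of the paper) computes I(X;Y) directly from a joint distribution table.

```python
import numpy as np

def mutual_information(pxy):
    # I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ], in nats
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0                      # 0 log 0 = 0 convention
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

Independent variables give I(X;Y) = 0, while a deterministic one-to-one relation gives I(X;Y) = H(X).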
[Figure 1 here: panel A, a Venn diagram over the variables X, Y+ and Y−; panel B, a graphical model over X, Y+, Y− and the representation T.]

Figure 1: A. A Venn diagram illustrating the relations between the entropy and mutual information of the variables X, Y+, Y−. The area of each circle corresponds to the entropy of a variable, while the intersection of two circles corresponds to their mutual information. As Y+ and Y− are independent given X, their mutual information vanishes when X is known; thus all their overlap is included in the circle of X. B. A graphical model representation of IB with side information. Given the three variables X, Y+, Y−, we seek a compact stochastic representation T of X which preserves information about Y+ but removes information about Y−. In this graph Y+ and Y− are indeed conditionally independent given X.
To identify the relevant structures in the joint distribution P(X, Y+), we aim to extract a compact representation of the variable X with minimal loss of mutual information about the relevant variable Y+, and at the same time with maximal loss of information about the irrelevance variable Y−. The goal of information bottleneck with side information (IBSI) is therefore to find a stochastic map of X to a new variable T, p(t|x), in a way that maximizes its mutual information with Y+ and minimizes the mutual information about Y−. In general one can achieve this goal perfectly only asymptotically, and the finite case leads to a suboptimal compression, an example of which is depicted in the blue region in figure 1. These constraints can be cast into a single variational functional,

    L = I(X;T) − β ( I(T;Y+) − γ I(T;Y−) )        (1)

where the Lagrange parameter β determines the tradeoff between compression and information extraction, while the parameter γ determines the tradeoff between preservation of information about the relevant variable Y+ and loss of information about the irrelevant one Y−. In some applications, such as in communication, the value of γ may be determined by the relative cost of transmitting the information about Y− by other means.
&
The information bottleneck variational problem, introduced in [15], is a special case of
our current variational problem with , namely, no side or irrelevant information is
available. In that case only the distributions
, and
are determined.
(' ,
(
( ,
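As a concrete reading of the functional, the sketch below (our own helper code; the names and array shapes are assumptions, not from the paper) evaluates L of Eq. (1) for a given stochastic map p(t|x), computing I(X;T) from the joint p(t,x) and I(T;Y±) from the induced joints p(t,y±).

```python
import numpy as np

def mutual_info(pab):
    # mutual information of a joint distribution table, in nats
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    m = pab > 0
    return float((pab[m] * np.log(pab[m] / (pa @ pb)[m])).sum())

def ibsi_functional(ptx, px, py_pos_x, py_neg_x, beta, gamma):
    # L = I(X;T) - beta * ( I(T;Y+) - gamma * I(T;Y-) ), Eq. (1)
    # ptx: p(t|x), shape (nT, nX); px: p(x); py_*_x: rows hold p(y|x)
    ptj = ptx * px                  # joint p(t, x)
    pty_pos = ptj @ py_pos_x        # joint p(t, y+)
    pty_neg = ptj @ py_neg_x        # joint p(t, y-)
    return mutual_info(ptj) - beta * (mutual_info(pty_pos)
                                      - gamma * mutual_info(pty_neg))
```

A map T that is independent of X yields L = 0, and for β = 0 the functional reduces to the compression term I(X;T) alone.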
3 Solution Characterization
The complete Lagrangian of this constrained optimization problem is given by

    L = I(T;X) − β ( I(T;Y+) − γ I(T;Y−) ) − Σ_{x,t} λ(x) p(t|x)        (2)

where λ(x) are the normalization Lagrange multipliers. Here, the minimization is performed with respect to the stochastic mapping p(t|x), taking into account its probabilistic relations to p(t), p(y+|t) and p(y−|t). Interestingly, performing the minimization over p(t|x), p(t), p(y+|t) and p(y−|t) as independent variables leads to the same solution of self-consistent equations.

Proposition 1 The extrema of L obey the following self-consistent equations

    p(t|x) = [ p(t) / Z(x, β) ] exp( −β [ D_KL[ p(y+|x) || p(y+|t) ] − γ D_KL[ p(y−|x) || p(y−|t) ] ] )
    p(y+|t) = (1/p(t)) Σ_x p(y+|x) p(t|x) p(x)
    p(y−|t) = (1/p(t)) Σ_x p(y−|x) p(t|x) p(x)
    p(t) = Σ_x p(t|x) p(x)        (3)

where Z(x, β) is a normalization factor and D_KL[p || q] = Σ_y p(y) log( p(y) / q(y) ) is the Kullback-Leibler divergence [2].

Proof: Following the Markovian relation p(y|t) = Σ_x p(y|x) p(x|t), we write p(t, y) = Σ_x p(t|x) p(y|x) p(x) and obtain for the second term of Eq. 2

    δ I(T;Y+) / δ p(t|x) = p(x) Σ_{y+} p(y+|x) log [ p(y+|t) / p(y+) ]        (4)

Similar differentiation for the other terms yields

    δL / δ p(t|x) = p(x) ( log [ p(t|x) / p(t) ] + β [ D_KL[ p(y+|x) || p(y+|t) ] − γ D_KL[ p(y−|x) || p(y−|t) ] ] ) − λ'(x)        (5)

where λ'(x) absorbs all terms independent of t. Equating the derivative to zero then yields the first equation of proposition 1.
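The self-consistent equations of Proposition 1 suggest an alternating fixed-point iteration in the spirit of the iterative IB algorithm: update p(t|x) from the current cluster distributions, then recompute p(t), p(y+|t) and p(y−|t). The sketch below is our own illustrative implementation (the function names, the softmax-style normalization, and the fixed iteration count are assumptions, not the paper's algorithm).

```python
import numpy as np

def kl_rows(p, q):
    # pairwise KL divergences: entry (i, j) = D_KL[ p_i || q_j ]
    eps = 1e-12
    return (p[:, None, :] * (np.log(p[:, None, :] + eps)
                             - np.log(q[None, :, :] + eps))).sum(-1)

def ibsi_iterate(px, py_pos_x, py_neg_x, n_clusters, beta=10.0, gamma=0.5,
                 n_iter=200, seed=0):
    # fixed-point iteration of the self-consistent equations (3)
    rng = np.random.default_rng(seed)
    nx = px.size
    ptx = rng.random((n_clusters, nx))
    ptx /= ptx.sum(axis=0)                     # random initial p(t|x)
    for _ in range(n_iter):
        pt = ptx @ px                          # p(t)
        pxt = (ptx * px) / pt[:, None]         # p(x|t)
        py_pos_t = pxt @ py_pos_x              # p(y+|t)
        py_neg_t = pxt @ py_neg_x              # p(y-|t)
        d = (kl_rows(py_pos_x, py_pos_t)
             - gamma * kl_rows(py_neg_x, py_neg_t))
        logits = np.log(pt)[None, :] - beta * d
        ptx = np.exp(logits - logits.max(axis=1, keepdims=True)).T
        ptx /= ptx.sum(axis=0)                 # normalize over t
    return ptx
```

Because the update for a given x depends only on p(y+|x) and p(y−|x), x-values with identical conditionals always receive identical assignments after the first iteration.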
The formal solutions of the above variational problem have an exponential form which is a
natural generalization of the solution of the original IB problem. As in the original IB, when
goes to infinity the Lagrangian reduces to
, and the exponents
become binary cluster membership
collapse to a hard clustering solution, where
probabilities.
For β → ∞ and a fixed level of I(T;X), IBSI thus operates to extract a compact representation that maximizes the mean log-likelihood ratio ⟨ log[ p(y+|t) / p(y−|t) ] ⟩ (for γ = 1), measuring the discriminability between the distributions p(y+|t) and p(y−|t).
Further intuition about the operation of IBSI can be obtained by rewriting the second term in Eq. 2:

I(T;Y+) − γ I(T;Y−) = Σ_t p(t) [ Σ_{y+} p(y+|t) log( p(y+|t)/p(y+) ) − γ Σ_{y−} p(y−|t) log( p(y−|t)/p(y−) ) ].
The above setup can be extended to the case of multiple variables Y1+, …, Yk+ on which multi-information should be preserved and variables Y1−, …, Ym− on which multi-information should be removed, as discussed in [8]. This yields

p(t|x) = [ p(t)/Z(x, β) ] exp( −β [ Σ_i γ_i+ D_KL[ p(y_i+|x) ‖ p(y_i+|t) ] − Σ_j γ_j− D_KL[ p(y_j−|x) ‖ p(y_j−|t) ] ] ),   (6)
which can be solved together with the other self-consistent conditions, similarly to Eq. 4.
4 Relation to Rate Distortion Theory with Side Information
The problem formulated above is related to the theory of rate distortion with side information ([17], [2] p. 439). In rate distortion theory (RDT) a source variable X is stochastically encoded into a variable T, which is decoded at the other side of the channel with some distortion. The achievable code rate, at a given distortion level D, is bounded by the optimal rate, also known as the rate distortion function, R(D). The optimal encoding is determined by the stochastic map p(t|x), where the representation quantization is found by minimizing the average distortion. For the optimal code, R(D) = I(T;X).
This rate can be improved by utilizing side information in the form of another variable, Y, that is known at both ends of the channel. In this case, an improved rate can be achieved by avoiding sending information about X that can be extracted from Y. Indeed, in this case the rate distortion function with this side information has a lower lower-bound, given by R_Y(D) = min [ I(X;T) − I(Y;T) ], where T is the optimal quantization of X in this case, under the distortion constraint (see [17] for details). In the information bottleneck framework the average distortion is replaced by the mutual information about the relevant variable, while the rate-distortion function is turned into a convex curve that characterizes the complexity of the relation between the variables (see [15, 13]).
Similarly, IBSI avoids differentiating instances of X that are informative about Y+ if they also contain information about Y−. The variable Y− is analogous to the side information variable Y, while Y+ is just the relevant variable of the original IB. While the formal analogy between these problems helps in their mathematical formulation, it is important to emphasize that these are very different problems both in motivation and scope. Whereas RDT with side information is a specific communication problem with some given (often arbitrary) distortion function, our problem is a general statistical non-parametric analysis technique that depends solely on the choice of the variables X, Y+ and Y−. Many different pattern recognition and discriminative learning problems can be cast into this general information theoretic framework, far beyond the original setting of RDT with side information.
5 Algorithms
The set of self-consistent equations (Eq. 4) can be solved by iterating the equations, given initial distributions, similar to the algorithm presented for the IB [15, 8], with similar convergence proofs. Unlike the original IB equations, convergence of the algorithm is no longer always guaranteed, simply because the problem is not guaranteed to have feasible solutions for all (β, γ) values. However, there exists a non-empty set of (β, γ) values for which this algorithm is guaranteed to converge.

As in the case of IB, various heuristics can be applied, such as deterministic annealing, in which increasing the parameter β is used to obtain finer clusters; greedy agglomerative hard clustering [13]; or a sequential K-means-like algorithm [12]. The latter provides a good compromise between top-down annealing and agglomerative greedy approaches and achieves excellent performance. This is the algorithm we adopted in this paper, modifying the algorithm detailed in [12] by using the target function L = I(T;X) − β [ I(T;Y+) − γ I(T;Y−) ].
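The fixed-point iteration described above can be sketched in code. This is an illustrative soft-clustering variant of the self-consistent equations, not the sequential algorithm of [12] that the paper actually adopts; all function names and the toy parameter values are our own.

```python
import numpy as np

def kl_rows(p, q, eps=1e-12):
    # KL(p[i] || q[j]) for every pair of rows -> matrix of shape (len(p), len(q))
    p = p + eps
    q = q + eps
    return np.einsum('iy,ijy->ij', p, np.log(p[:, None, :] / q[None, :, :]))

def ibsi(pxy_pos, pxy_neg, n_clusters, beta=10.0, gamma=0.5, n_iter=200, seed=0):
    # Iterate the IBSI self-consistent equations:
    #   p(t|x) ~ p(t) exp(-beta [ KL(p(y+|x)||p(y+|t)) - gamma KL(p(y-|x)||p(y-|t)) ])
    rng = np.random.default_rng(seed)
    px = pxy_pos.sum(axis=1)                                   # marginal p(x)
    py_pos_x = pxy_pos / pxy_pos.sum(axis=1, keepdims=True)    # p(y+|x)
    py_neg_x = pxy_neg / pxy_neg.sum(axis=1, keepdims=True)    # p(y-|x)
    ptx = rng.dirichlet(np.ones(n_clusters), size=len(px))     # random init of p(t|x)
    for _ in range(n_iter):
        pt = np.maximum(px @ ptx, 1e-12)                       # p(t) = sum_x p(x) p(t|x)
        w = (ptx * px[:, None]).T                              # p(t|x) p(x), shape (T, X)
        py_pos_t = w @ py_pos_x / pt[:, None]                  # p(y+|t)
        py_neg_t = w @ py_neg_x / pt[:, None]                  # p(y-|t)
        d = kl_rows(py_pos_x, py_pos_t) - gamma * kl_rows(py_neg_x, py_neg_t)
        logits = np.log(pt)[None, :] - beta * d
        logits -= logits.max(axis=1, keepdims=True)            # numerical stability
        ptx = np.exp(logits)
        ptx /= ptx.sum(axis=1, keepdims=True)
    return ptx
```

With large β the assignments become nearly hard; feasibility for a given (β, γ) pair is not guaranteed, mirroring the convergence caveat above.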
6 Applications
We describe two applications of our method: a simple synthetic example, and a "real world" problem of hierarchical text categorization. We also used IBSI to extract relevant features in face images, but these results will be published elsewhere due to space considerations.
6.1 A synthetic example
To demonstrate the ability of our approach to uncover weak but interesting hidden structures in data, we designed a co-occurrence matrix p(x, y+) that contains two competing sub-structures (see figure 2A). For demonstration purposes, the matrix was created such that the stronger structure can be observed on the left and the weaker structure on the right. Compressing X into two clusters while preserving information on Y+ using IB (γ = 0) yields the clustering of figure 2B, in which the upper half of the x values are all clustered together. This clustering follows from the strong structure on the left of 2A.

We now created a second co-occurrence matrix p(x, y−), to be used for identifying the relevant structure, in which the two halves of X yield similar distributions p(y−|x). Applying sequential IBSI now successfully ignores the strong but irrelevant structure in p(x, y+) and retrieves the weak structure. Importantly, this is done in an unsupervised manner, without explicitly pointing to the strong but irrelevant structure.
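A matrix of this kind can be constructed as follows; the block layout and the weights are illustrative choices of ours, not the exact values used for Figure 2.

```python
import numpy as np

def competing_cooccurrence(n_x=20, n_y=40, strong=6.0, weak=2.0, seed=0):
    # Joint distribution p(x, y) with a strong block structure on the left
    # half of y (upper vs. lower halves of x) and a weaker, conflicting
    # structure on the right half (even vs. odd x).
    rng = np.random.default_rng(seed)
    counts = 1.0 + 0.1 * rng.random((n_x, n_y))        # background noise floor
    hx, hy = n_x // 2, n_y // 2
    counts[:hx, :hy // 2] += strong                    # strong structure, left side
    counts[hx:, hy // 2:hy] += strong
    even, odd = np.arange(0, n_x, 2), np.arange(1, n_x, 2)
    counts[np.ix_(even, np.arange(hy, hy + hy // 2))] += weak   # weak structure, right
    counts[np.ix_(odd, np.arange(hy + hy // 2, n_y))] += weak
    return counts / counts.sum()                       # normalize to a joint pmf
```

Running IB on the full matrix recovers the strong left-side grouping; supplying a second matrix that mirrors only the strong structure as side information lets IBSI recover the weak one.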
This example was designed for demonstration purposes, thus the irrelevant structure is strongly manifested in p(x, y−). The next example shows that our approach is also useful for real data, in which structures are much more covert.
[Figure 2 diagram: four panels showing the joint distributions P(X,Y+) and P(X,Y−) and the resulting clusterings P(X,T).]
Figure 2: Demonstration of IBSI operation. A. A joint distribution p(x, y+) that contains two distinct and conflicting structures. B. Clustering X into two clusters using the information bottleneck method separates upper and lower values of x, according to the stronger structure. C. A joint distribution p(x, y−) that contains a single structure, similar in nature to the stronger structure in p(x, y+). D. Clustering into two clusters using IBSI successfully extracts the weaker structure in p(x, y+).
[Figure 3B plot: categorization accuracy (0.5 to 0.65) against the number of chosen word clusters (0 to 60).]
Figure 3: A. An illustration of the 20 newsgroups hierarchical data we used. B. Categorization accuracy vs. number of word clusters n. IB: dashed line; IBSI: solid line.
6.2 Hierarchical text categorization
Text categorization is a fundamental task in information retrieval. Typically, one has to
group a large set of texts into groups of homogeneous subjects. Recently, Slonim and
colleagues showed that the IB method achieves categorization that predicts manually predefined categories with great accuracy, and largely outperforms competing methods [12].
Clearly, this unsupervised task becomes more difficult when the texts have similar subjects,
because alternative categories are extracted instead of the "correct" one.
This problem can be alleviated by using side information in the form of additional documents from other categories. This is specifically useful in hierarchical document categorization, in which known categories are refined by grouping documents into sub-categories [4, 16]. IBSI can be applied to this problem by operating on the terms-documents co-occurrence matrix while using the other top-level groups for focusing on the relevant structures. To this end, IBSI is used to identify clusters of terms that will later be used to cluster
a group of documents into its subgroups.
While IBSI is targeted at learning structures in an unsupervised manner, we have chosen to
apply it to a labelled dataset of documents in order to be able to measure how its results
agree with manual classification. Labels are not used by our algorithms during learning
and serve only to quantify the performance. We used the 20 Newsgroups database collected by [7] preprocessed as described in [12]. This database consists of 20 equal sized
groups of documents, hierarchically organized into groups according to their content (figure 3A). We aimed to cluster documents that belong to two newsgroups from the supergroup of computer documents and have very similar subjects comp.sys.ibm.pc.hardware
and comp.sys.mac.hardware. As side information we used all documents from the super-group of science (sci.crypt, sci.electronics, sci.med, sci.space).
To demonstrate the power of IBSI we used double clustering to separate documents into two
groups. The goal of the first clustering phase is to use IBSI to identify clusters of terms that
extract the relevant structures of the data. The goal of the second clustering phase is simply
to provide a quantitative measure for the quality of the features extracted in the first phase.
We therefore performed the following procedure. First, the most frequent 2000 words in these documents were clustered into clusters using IBSI. Then, word clusters were sorted by a single-cluster score, and the n clusters with the highest score were chosen. These word clusters were then used for clustering documents. The performance of this process is evaluated by measuring the overlap of the resulting clusters with the manually classified groups. Figure 3 plots document-clustering accuracy as a function of n. IBSI (γ > 0) is compared with the IB method (i.e., γ = 0). Using IBSI successfully improves mean clustering accuracy from about 55 percent to about 63 percent.
7 Discussion and Further Research
We have presented an information theoretic approach for extracting relevant structures from
data, by utilizing additional data known to share irrelevant structures with the relevant data.
Naturally, the choice of side data may considerably influence the solutions obtained with
IBSI, simply because using different irrelevant variables, is equivalent to asking different
questions about the data analysed. In practice, side data can be naturally defined in numerous applications, in particular in exploratory analysis of scientific experiments, e.g. when
searching for features that characterize a disease but not healthy subjects.
While the current work is based on clustering to compress the source, the notion of extracting relevance through side information can be extended to other forms of dimensionality reduction, such as non-linear embedding on low-dimensional manifolds. In particular, side information can be naturally combined with information theoretic modeling approaches such as SDR [5]. Our preliminary results with this approach were found very promising.
Acknowledgements
We thank Amir Globerson, Noam Slonim, Israel Nelken and Nir Friedman for helpful
discussions. G.C. is supported by a grant from the Ministry of Science, Israel.
References
[1] L.D. Baker and A.K. McCallum. Distributional clustering of words for text classification. In Proc. of SIGIR, 1998.
[2] T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley & Sons, NY, 1991.
[3] I. Csiszar and J. Korner. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, New York, 2nd edition, 1997.
[4] S. Dumais and H. Chen. Hierarchical classification of web content. In Proc. of SIGIR, pages 256-263, 2000.
[5] A. Globerson and N. Tishby. Sufficient dimensionality reduction. J. Mach. Learn. Res., 2003.
[6] T. Hofmann. Probabilistic latent semantic indexing. In Proc. of SIGIR, pages 50-57, 1999.
[7] K. Lang. Learning to filter netnews. In Proc. of the 12th Int. Conf. on Machine Learning, 1995.
[8] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate information bottleneck. In Proc. of UAI, pages 152-161, 2001.
[9] F.C. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In Meeting of the Association for Computational Linguistics, pages 183-190, 1993.
[10] E. Schneidman, N. Slonim, N. Tishby, R. de Ruyter van Steveninck, and W. Bialek. Analyzing neural codes using the information bottleneck method. Technical report, The Hebrew University, 2002.
[11] J. Sinkkonen and S. Kaski. Clustering based on conditional distribution in an auxiliary space. Neural Computation, 14:217-239, 2001.
[12] N. Slonim, N. Friedman, and N. Tishby. Unsupervised document classification using sequential information maximization. In Proc. of SIGIR, pages 129-136, 2002.
[13] N. Slonim and N. Tishby. Agglomerative information bottleneck. In Advances in Neural Information Processing Systems (NIPS), 1999.
[14] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In Proc. of SIGIR, pages 208-215, 2000.
[15] N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Allerton Conference on Communication and Computation, 1999.
[16] A. Vinokourov and M. Girolami. A probabilistic framework for the hierarchic organisation and classification of document collections. J. Intell. Inf. Syst., 18(2-3):153-172, 2002.
[17] A. Wyner and J. Ziv. The rate distortion function for source coding with side information at the decoder. IEEE Trans. Information Theory, 22(1):1-10, 1976.
A Probabilistic Approach to Single Channel
Blind Signal Separation
Gil-Jin Jang
Spoken Language Laboratory
KAIST, Daejon 305-701, South Korea
[email protected]
http://speech.kaist.ac.kr/~jangbal
Te-Won Lee
Institute for Neural Computation
University of California, San Diego
La Jolla, CA 92093, U.S.A.
[email protected]
Abstract
We present a new technique for achieving source separation when given
only a single channel recording. The main idea is based on exploiting the
inherent time structure of sound sources by learning a priori sets of basis
filters in time domain that encode the sources in a statistically efficient
manner. We derive a learning algorithm using a maximum likelihood
approach given the observed single channel data and sets of basis filters.
For each time point we infer the source signals and their contribution
factors. This inference is possible due to the prior knowledge of the
basis filters and the associated coefficient densities. A flexible model
for density estimation allows accurate modeling of the observation and
our experimental results exhibit a high level of separation performance
for mixtures of two music signals as well as the separation of two voice
signals.
1 Introduction
Extracting individual sound sources from an additive mixture of different signals has been
attractive to many researchers in computational auditory scene analysis (CASA) [1] and
independent component analysis (ICA) [2]. In order to formulate the problem, we assume
that the observed signal is an addition of independent source signals:

y^t = Σ_{i=1}^{N} λ_i x_i^t,   (1)

where x_i^t is the sampled value of the i-th source signal at time t, and λ_i is the gain of each source,
which is fixed over time. Note that superscripts indicate sample indices of time-varying
signals and subscripts indicate the source identification. The gain constants are affected
by several factors, such as powers, locations, directions and many other characteristics of
the source generators as well as sensitivities of the sensors. It is convenient to assume all
the sources to have zero mean and unit variance. The goal is to recover all x_i^t given only a single sensor input y^t. The problem is too ill-conditioned to be mathematically tractable, since the number of unknowns is N·T + N given only T observations. Several earlier
attempts [3, 4, 5, 6] to this problem have been proposed based on the presumed properties
of the individual sounds in the frequency domain.
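The additive model of Eq. 1 is straightforward to write down; the normalization step reflects the zero-mean, unit-variance assumption stated above, and the toy sources in the test are our own illustration.

```python
import numpy as np

def mix_single_channel(sources, gains):
    # y^t = sum_i lambda_i * x_i^t, with each source normalized to
    # zero mean and unit variance as assumed in the text.
    y = np.zeros_like(np.asarray(sources[0], dtype=float))
    for x, lam in zip(sources, gains):
        x = np.asarray(x, dtype=float)
        x = (x - x.mean()) / x.std()
        y += lam * x
    return y
```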
ICA is a data-driven method which relaxes the strong characteristic frequency structure assumptions. However, ICA algorithms perform best when the number of the observed
[Figure 1 diagram: panels A, B, C; the coefficient distributions in panel C have exponents q = 0.99, 0.52, 0.26, and 0.12.]
Figure 1: Generative models for the observed mixture and original source signals. (A) A single channel observation is generated by a weighted sum of two source signals with different characteristics. (B) Individual source signals are generated by weighted (s_{ik}) linear superpositions of basis functions (a_{ik}). (C) Examples of the actual coefficient distributions. They generally have more sharpened summits and longer tails than a Gaussian distribution, and would be classified as super-Gaussian. The distributions are modeled by generalized Gaussian density functions of the form p(s_{ik}) ∝ exp(−|s_{ik}|^{q_{ik}}), which provide good matches to the non-Gaussian distributions by varying exponents. From left to right, the exponent decreases, and the distribution becomes more super-Gaussian.
signals is greater than or equal to the number of sources [2]. Although some recent overcomplete representations may relax this assumption, the problem of separating sources
from a single channel observation remains difficult. ICA has been shown to be highly effective in other aspects such as encoding speech signals [7] and natural sounds [8]. The
basis functions and the coefficients learned by ICA constitute an efficient representation of
the given time-ordered sequences of a sound source by estimating the maximum likelihood
densities, thus reflecting the statistical structures of the sources.
The method presented in this paper aims at exploiting the ICA basis functions for separating
mixed sources from a single channel observation. Sets of basis functions are learned a
priori from a training data set and these sets are used to separate the unknown test sound
sources. The algorithm recovers the original auditory streams in a number of gradient-ascent adaptation steps maximizing the log-likelihood of the separated signals, calculated using the basis functions and the probability density functions (pdfs) of their coefficients, the output of the ICA basis filters. The objective function not only makes use of the ICA basis functions as a strong prior for the source characteristics, but also of their associated coefficient pdfs, modeled by generalized Gaussian distributions [9]. Experiments show that the separation of the two different sources was quite successful for simulated mixtures of rock and jazz music, and of male and female speech signals.
2 Generative Models for Mixture and Source Signals
The algorithm first involves the learning of the time-domain basis functions of the sound
sources that we are interested in separating from a given training database. This corresponds to the prior information necessary to successfully separate the signals. We assume
two different types of generative models in the observed single channel mixture as well as
in the original sources. The first one is depicted in Figure 1-A. As described in Equation
1, at every t ∈ [1, T] the observed instance y^t is assumed to be a weighted sum of different sources. In our approach only the case N = 2 is regarded. This corresponds to the situation defined in Section 1 in that two different signals are mixed and observed in a single sensor.
For the individual source signals, we adopt a decomposition-based approach as another
generative model. This approach was employed formerly in analyzing sound sources [7, 8]
by expressing a fixed-length segment drawn from a time-varying signal as a linear superposition of a number of elementary patterns, called basis functions, with scalar multiples
(Figure 1-B). Continuous samples of length L, with L ≪ T, are chopped out of a source, from t to t + L − 1, and the subsequent segment is denoted as an L-dimensional column vector in a boldface letter, x_i^t = [x_i^t, x_i^{t+1}, …, x_i^{t+L−1}]ᵀ, attaching the lead-off sample index as the superscript and representing the transpose operator with ᵀ. The constructed column vector is then expressed as a linear combination of the basis functions such that

x_i^t = Σ_{k=1}^{M} a_{ik} s_{ik}^t = A_i s_i^t,   (2)

where M is the number of basis functions, a_{ik} is the k-th basis function of source i in the form of an L-dimensional column vector, s_{ik}^t its coefficient (weight), and s_i^t = [s_{i1}^t, s_{i2}^t, …, s_{iM}^t]ᵀ. The r.h.s. is the matrix-vector notation. The second subscript k, following the source index i in s_{ik}^t, represents the component number of the coefficient vector s_i^t. We assume that M = L and that A_i has full rank, so that the transforms between x_i^t and s_i^t are reversible in both directions. The inverse of the basis matrix, W_i = A_i^{−1}, refers to the ICA filters that generate the coefficient vector: s_i^t = W_i x_i^t. The purpose of this decomposition is to model the multivariate distribution of x_i^t in a statistically efficient manner. The ICA learning algorithm is equivalent to searching for the linear transformation W_i that makes the components as statistically independent as possible, as well as maximizing the marginal densities of the transformed coordinates for the given training data [10]:

Â_i = arg max_{A_i} Π_t p(x_i^t | A_i) = arg max_{A_i} Π_t Π_k p(s_{ik}^t),   (3)
where p(·) denotes the probability of the value of its argument. Independence between the components and over time samples factorizes the joint probability of the coefficients into the product of marginal ones. What matters is therefore how well matched the model distribution is to the true underlying distribution of s_{ik}^t. The coefficient histogram
of real data reveals that the distribution has a highly sharpened point at the peak with a
long tail (Figure 1-C). Therefore we use a generalized Gaussian prior [9] that provides an
accurate estimate for symmetric non-Gaussian distributions by fitting the exponent in the
set of parameters θ_{ik} = { q_{ik}, σ_{ik} }; in its simplest form,

p(s_{ik} | θ_{ik}) ∝ exp( −| s_{ik}/σ_{ik} |^{q_{ik}} ),   (4)

where q_{ik} is the exponent and σ_{ik} the scale. Note that p(·) here is a realized pdf of the variable and should be distinguished from the model density. With the generalized Gaussian ICA learning algorithm [9], the basis functions A_i and their individual parameter sets θ_i are obtained beforehand and used as prior information for the following source separation algorithm.
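The simple form of Eq. 4 can be written with an explicit normalizing constant, 2σΓ(1 + 1/q), which follows from integrating exp(−|u|^q) over the real line; the function name below is ours.

```python
import math

def gen_gaussian_logpdf(s, q, sigma=1.0):
    # log p(s) for p(s) = exp(-|s/sigma|^q) / Z with Z = 2 * sigma * Gamma(1 + 1/q).
    # Small q gives a sharp peak and heavy tails (super-Gaussian); q = 2 is
    # Gaussian-shaped and q = 1 Laplacian-shaped.
    z = 2.0 * sigma * math.gamma(1.0 + 1.0 / q)
    return -abs(s / sigma) ** q - math.log(z)
```

During separation only the shape of this density matters, through its derivative (the score function) used in the gradient rules below.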
3 Separation Algorithm
The method is motivated by the pdf approximation property of ICA transformation (Equation 3). The probability of the source signals is computed by the generalized Gaussian
parameters in the transformed domain, and the method performs maximum a posteriori
(MAP) estimation in a number of adaptation steps on the source signals to maximize the
data likelihood. Scaling factors of the generative model are learned as well.
3.1 MAP estimation of Source Signals
We have demonstrated that the learned basis filters maximize the likelihood of the given
data. Suppose we know what kind of sound sources have been mixed and we were given
the set of basis filters from a training set. Could we infer the learning data? The answer is
generally "no" when no other information is given. In our problem of single channel separation, half of the solution is already given by the constraint y^t = λ₁x₁^t + λ₂x₂^t, where x_i^t constitutes the basis learning data (Figure 1-B). Essentially, the goal of the source inferring algorithm presented in this paper is to complement the remaining half with the statistical information given by the set of coefficient density parameters θ_{ik}. If model
parameters are given, we can perform maximum a posteriori (MAP) estimation simply by
optimizing the data likelihood computed by the model parameters.
At every time point t ∈ [1, T − L + 1], a segment x_i^t = [x_i^t, …, x_i^{t+L−1}]ᵀ generates the independent coefficient vectors s₁^t and s₂^t, respectively. The likelihood of x_i^t is

p(x_i^t | A_i) = |det W_i| Π_{k=1}^{M} p(s_{ik}^t | θ_{ik}),   (5)

where p(s | θ) is the generalized Gaussian density function and θ_i = { θ_{i1}, …, θ_{iM} } is the parameter group of all the coefficients, with the notation { a_u, …, a_v } meaning an ordered set of the elements from index u to v. Assuming independence over time, the probability of the whole signal x_i = { x_i^1, …, x_i^T } is obtained from the marginals of all the possible segments,

p(x_i | A_i) = Π_{t=1}^{T'} p(x_i^t | A_i),   (6)

where, for convenience, T' = T − L + 1. The objective function to be maximized is the multiplication of the data likelihoods of both sound sources, and we denote its log by L:

L = log [ p(x₁ | A₁) p(x₂ | A₂) ].   (7)
Our interest is in adapting x₁^t and x₂^t, for t ∈ [1, T], toward the maximum of L. We introduce a new variable z_i^t = λ_i x_i^t, a scaled value of x_i^t with the contribution factor. The adaptation is done on the values of z_i^t instead of x_i^t, in order to infer the sound sources and their contribution factors simultaneously. The learning rule is derived in a gradient-ascent manner by summing up the gradients of all the segments where the sample lies:
Δz_i^n ∝ ∂L/∂z_i^n = (1/λ_i) Σ_{t=n−L+1}^{n} ∂ log p(x_i^t | A_i) / ∂x_i^n
       = (1/λ_i) Σ_{t=n−L+1}^{n} Σ_{k=1}^{M} φ_{ik}(s_{ik}^t) w_{ik}(n − t + 1),   (8)

where φ_{ik}(s) = ∂ log p(s | θ_{ik})/∂s = −(q_{ik}/σ_{ik}^{q_{ik}}) |s|^{q_{ik}−1} sign(s) is the score function of the generalized Gaussian coefficient density, w_{ik}(m) denotes the m-th element of the k-th ICA filter of source i, and the relations s_i^t = W_i x_i^t and x_i^n = z_i^n / λ_i are used. Note that the constraint y^n = z₁^n + z₂^n always holds during the adaptation, so a learning rule on either z₁^n or z₂^n subsumes the counterpart (Δz₂^n = −Δz₁^n). The overall process of the proposed method is summarized as 4 steps in Figure 2. The figure shows one iteration of the adaptation of each sample.
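The score function that drives the gradient above can be sketched as follows; the scale parameter is fixed to 1 for brevity, and the function names are ours.

```python
import numpy as np

def gg_score(s, q, eps=1e-8):
    # phi(s) = d/ds log p(s) = -q |s|^(q-1) sign(s) for p(s) ~ exp(-|s|^q);
    # eps keeps |s|^(q-1) finite at s = 0 when q < 1.
    return -q * np.sign(s) * (np.abs(s) + eps) ** (q - 1.0)

def segment_loglik_grad(x_seg, W, q):
    # For one segment: s = W x and log p(x) = log|det W| + sum_k log p(s_k),
    # so d log p / dx = W^T phi(s); the det term does not depend on x.
    s = W @ x_seg
    return W.T @ gg_score(s, q)
```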
Figure 2: The overall structure of the proposed method. We are given the single channel data y^t, and we have the estimates of the source signals, x̂_i^t, at every adaptation step. (A) x̂_i^t → ŝ_{ik}^t: at each timepoint, the current estimates of the source signals are passed through the basis filters W_i, generating sparse codes ŝ_{ik}^t that are statistically independent. (B) ŝ_{ik}^t → Δŝ_{ik}^t: the stochastic gradient for each code is obtained by taking the derivative of the log-likelihood. (C) Δŝ_{ik}^t → Δx̂_i^t: the gradient is transformed to the source domain. (D) The individual gradients are combined and added to the current estimates of the source signals.
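One iteration of the four steps can be sketched for all overlapping segments at once. The closing projection is one simple way to keep the constraint y^t = λ₁x̂₁^t + λ₂x̂₂^t satisfied; the paper instead folds the constraint into the gradient itself. Step sizes, scale parameters (taken as 1) and names are illustrative.

```python
import numpy as np

def adapt_once(y, xhat1, xhat2, W1, W2, q1, q2, lam1, lam2, lr=1e-3):
    def loglik_grad(xhat, W, q):
        L = W.shape[0]
        T = len(xhat) - L + 1
        segs = np.stack([xhat[t:t + L] for t in range(T)], axis=1)
        s = W @ segs                                            # (A) sparse codes
        phi = -q * np.sign(s) * (np.abs(s) + 1e-8) ** (q - 1)   # (B) code gradients
        gseg = W.T @ phi                                        # (C) back to source domain
        g = np.zeros_like(xhat)                                 # (D) combine overlaps
        for t in range(T):
            g[t:t + L] += gseg[:, t]
        return g
    xhat1 = xhat1 + lr * loglik_grad(xhat1, W1, q1)
    xhat2 = xhat2 + lr * loglik_grad(xhat2, W2, q2)
    # project back onto the observation constraint y = lam1*x1 + lam2*x2
    resid = y - (lam1 * xhat1 + lam2 * xhat2)
    xhat1 = xhat1 + lam1 / (lam1 ** 2 + lam2 ** 2) * resid
    xhat2 = xhat2 + lam2 / (lam1 ** 2 + lam2 ** 2) * resid
    return xhat1, xhat2
```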
3.2 Estimating λ1 and λ2

Updating the contribution factors can be accomplished by simply finding the maximum
a posteriori values. To simplify the inferring steps, we force the sum of the factors to be
constant: e.g. λ1 + λ2 = 1. Then λ2 is completely dependent on λ1 as λ2 = 1 − λ1, and
we need to consider λ1 only. Given the basis functions W_i and the current estimate of the
sources x̂_i, the posterior probability of λ1 is

    p(λ1 | x̂1, x̂2, W1, W2) ∝ p(x̂1, x̂2 | λ1, W1, W2) p(λ1),                  (9)

where p(λ1) is the prior density function of λ1. The value of λ1 maximizing the posterior
probability also maximizes its log,

    λ1* = argmax_{λ1} [ L + log p(λ1) ],                                     (10)

where L is the log-likelihood of the estimated sources defined in Equation 7. Assuming
that λ1 is uniformly distributed, λ1* = argmax_{λ1} L, whose gradient ∂L/∂λ1 is
calculated by the chain rule through the scaled source estimates (Equations 11 and 12).
Solving ∂L/∂λ1 = 0 subject to λ1 + λ2 = 1 gives λ1 and λ2 in closed form (Equation 13).
These values guarantee the local maxima of L w.r.t. the current estimates of source signals.
The algorithm updates the contribution factors periodically during the learning steps.
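A simple stand-in for this update is a direct one-dimensional search over λ1 ∈ (0, 1). The sketch below assumes the likelihood is evaluated on the scaled source estimates λ1·x̂1 and (1 − λ1)·x̂2 under generalized-Gaussian priors; it is an illustrative approximation, not the paper's closed-form solution in Equation 13:

```python
import numpy as np

def loglik(x, W, q):
    """Generalized-Gaussian log-likelihood of one source estimate, up to constants."""
    s = W @ x
    return -np.sum(np.abs(s) ** q)

def estimate_lambda1(x1_hat, x2_hat, W1, W2, q1=2.0, q2=2.0, grid=None):
    """Grid-search lambda1 in (0, 1) maximizing the joint log-likelihood,
    with lambda2 = 1 - lambda1 enforced by the sum constraint."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    scores = [loglik(lam * x1_hat, W1, q1) + loglik((1.0 - lam) * x2_hat, W2, q2)
              for lam in grid]
    return float(grid[int(np.argmax(scores))])
```

In the symmetric case (identical estimates and filters) the search returns λ1 = 1/2, as the sum constraint would suggest.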
[Figure 3 layout: four panels — (a) rock music, (b) jazz music, (c) male speech, (d) female
speech — each showing columns Signal, Basis Functions, and Coef's PDF. The fitted
generalized-Gaussian exponents span q = 0.34–1.19 for the music sources and q = 0.26–0.41
for the speech sources.]
Figure 3: Waveforms of four sound sources, examples of the learned basis functions (5 were
chosen out of 64), and the corresponding coefficient distributions modeled by generalized
Gaussians. The full set of basis functions is also available at the website.
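The exponents q reported in Figure 3 can be fit to filter-output samples in several ways; the moment-matching sketch below is one generic option (not necessarily the estimator used by the authors). For a zero-mean generalized Gaussian, the ratio E|s| / sqrt(E s²) is a monotone increasing function of q, so q can be recovered by bisection:

```python
import math

def gg_ratio(q):
    """E|s| / sqrt(E s^2) for a zero-mean generalized Gaussian with exponent q."""
    return math.gamma(2.0 / q) / math.sqrt(math.gamma(1.0 / q) * math.gamma(3.0 / q))

def fit_q(samples, lo=0.05, hi=20.0, iters=80):
    """Moment-matching estimate of q: bisect until gg_ratio matches the sample ratio."""
    m1 = sum(abs(s) for s in samples) / len(samples)
    m2 = math.sqrt(sum(s * s for s in samples) / len(samples))
    target = m1 / m2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gg_ratio(mid) < target:
            lo = mid   # the ratio grows with q, so the solution lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

q = 2 gives the Gaussian ratio sqrt(2/π) and q = 1 the Laplacian ratio 1/sqrt(2); exponents well below 1, as in the speech panels of Figure 3, correspond to distributions far sparser than a Laplacian.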
Figure 4: Average powerspectra of the 4 sound sources. The frequency scale ranges over
0–4 kHz (x-axis), since all the signals are sampled at 8 kHz. The powerspectra are averaged
and represented on the y-axis.
4 Experiments and Discussion
We have tested the performance of the proposed method on the single channel mixtures of
four different sound types. They were monaural signals of rock and jazz music, male and
female speech. We used different sets of speech signals for learning basis functions and for
generating the mixtures. For the mixture generation, two sentences of the target speakers
"mcpm0" and "fdaw0", one for each, were selected from the TIMIT speech database. The
training set consisted of 21 sentences for each gender, 3 for each of randomly chosen 7
males (or females) from the same database excluding the 2 target speakers. Rock music
was mainly composed of guitar and drum sounds, and jazz was generated by a wind instrument. Vocal parts of both music sounds were excluded. All signals were downsampled
to 8kHz, from original 44.1kHz (music) and 16kHz (speech) data. The training data were
segmented into 64-sample (8 ms) frames starting at every sample. Audio files for all the experiments
are accessible at the website1 .
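The maximally overlapping framing described here — one 64-sample window beginning at every sample — can be written directly:

```python
def frames(signal, width=64):
    """One `width`-sample window starting at every sample
    (at 8 kHz, 64 samples corresponds to 8 ms)."""
    return [signal[t:t + width] for t in range(len(signal) - width + 1)]
```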
Figure 3 displays the actual sources, adapted basis functions, and their coefficient distributions. Music basis functions exhibit consistent amplitudes with harmonics, and the speech
basis functions are similar to Gabor wavelets. Figure 4 compares 4 sources by the average
spectra. Each covers all the frequency bands, although they are different in amplitude. One
might expect that simple filtering or masking cannot separate the mixed sources clearly.
Before actual separation, the source signals were initialized to the values of the mixture
signal, x̂1 = x̂2 = y, and the initial contribution factors were both set to 1/2 to satisfy
Equation 1. The adaptation was repeated
more than 300 steps on each sample, and the scaling factors were updated every 10 steps.
Table 1 reports the signal-to-noise ratios (SNRs) of the mixed signal (y) and the recovered
results (x̂1, x̂2) with respect to the original sources (x1, x2). In terms of total SNR increase, the
mixtures containing music were recovered more cleanly than the male-female mixture.
Separation of jazz music and male speech was the best, and the waveforms are illustrated
1 http://speech.kaist.ac.kr/~jangbal/ch1bss/
Figure 5: Separation result for the mixture of jazz music and male speech. In the vertical
order: original sources (z1 and z2), mixed signal (z1 + z2), and the recovered signals
(ez1 and ez2).
in Figure 5. We conjecture from the average spectra of the sources in Figure 4 that although
there exists plenty of overlap between jazz and speech, the structures are dissimilar, i.e. the
frequency components of jazz change less, so we were able to obtain relatively good SNR
results. However, rock music exhibits a scattered spectrum and less characteristic structure.
This explains the relatively poorer performance on rock mixtures.
It is very difficult to compare a separation method with other CASA techniques, because
their approaches are so different in many ways that an optimal tuning of their parameters
would be beyond the scope of this paper. However, we compared our method with Wiener
filtering [4], which provides optimal masking filters in the frequency domain if the true spectrogram is given. So, we assumed that the other source was completely known. The filters
were computed every block of 8 ms (64 samples), 0.5 sec, and 1.0 sec. In this case, our
blind results were comparable in SNR with results obtained when the Wiener filters were
computed at 0.5 sec.
In summary, our method has several advantages over traditional approaches to signal separation. They involve either spectral techniques [5, 6] or time-domain nonlinear filtering
techniques [3, 4]. Spectral techniques assume that sources are disjoint in the spectrogram,
which frequently results in audible distortions of the signal in the regions where the assumption is violated. Recent time-domain filtering techniques are based on splitting the whole
signal space into several disjoint subspaces. Although they overcome the limit of spectral
representation, they consider second-order statistics only, such as autocorrelation, which
restricts the separable cases to orthogonal subspaces [4].
Our method avoids these strong assumptions by utilizing a prior set of basis functions that
captures the inherent statistical structures of the source signal. This generative model therefore makes use of spectral and temporal structures at the same time. The constraints are
dictated by the ICA algorithm that forces the basis functions to result in an efficient representation, i.e. linearly independent source coefficients; both the basis functions
Table 1: SNR results. R, J, M, F stand for rock, jazz music, male, and female speech. All the
values are measured in dB. "Mix" columns are the sources that are mixed to y; the snr
columns give the SNR of the mixed signal (y) and of the recovered sources (x̂1, x̂2) with
respect to the original sources (x1, x2).

Mix | snr(y,x1) | snr(x̂1,x1) | snr(y,x2) | snr(x̂2,x2) | Total inc.
R+J |   -3.7    |    3.3     |    3.7    |    7.0     |   10.3
R+M |   -3.7    |    3.1     |    3.7    |    6.8     |    9.9
R+F |   -3.9    |    2.2     |    3.9    |    6.1     |    8.3
J+M |    0.1    |    5.6     |   -0.1    |    5.5     |   11.1
J+F |   -0.1    |    5.1     |    0.1    |    5.3     |   10.4
M+F |   -0.2    |    2.5     |    0.2    |    2.7     |    5.2
and their corresponding pdf are key to obtaining a faithful MAP based inference algorithm.
An important question is how well the training data has to match the test data. We have also
performed experiments with the set of basis functions learned from the test sounds and the
SNR decreased on average by 1dB.
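The SNR columns of Table 1 follow the standard definition snr(x, x̂) = 10 log10(Σ x² / Σ (x − x̂)²); a small sketch of the bookkeeping (the fixed pairing of recovered to original sources is assumed):

```python
import math

def snr_db(reference, estimate):
    """Signal-to-noise ratio of `estimate` against the true `reference`, in dB."""
    num = sum(r * r for r in reference)
    den = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10.0 * math.log10(num / den)

def total_snr_increase(x1, x2, y, x1_hat, x2_hat):
    """Improvement over the unprocessed mixture, summed over both sources
    (the "Total inc." column of Table 1)."""
    return ((snr_db(x1, x1_hat) - snr_db(x1, y))
            + (snr_db(x2, x2_hat) - snr_db(x2, y)))
```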
5 Conclusions
We presented a technique for single channel source separation utilizing the time-domain
ICA basis functions. Instead of traditional prior knowledge of the sources, we exploited
the statistical structures of the sources that are inherently captured by the basis and its
coefficients from a training set. The algorithm recovers original sound streams through
gradient-ascent adaptation steps pursuing the maximum likelihood estimate, constrained by
the parameters of the basis filters and the generalized Gaussian distributions of the filter coefficients. With the separation results, we demonstrated that the proposed method
is applicable to the real world problems such as blind source separation, denoising, and
restoration of corrupted or lost data. Our current research includes the extension of this
framework to perform model comparison to estimate which set of basis functions to use
given a dictionary of basis functions. This is achieved by applying a variational Bayes
method to compare different basis function models to select the most likely source. This
method also allows us to cope with other unknown parameters, such as the number of
sources. Future work will address the optimization of the learning rules towards real-time
processing and the evaluation of this methodology with speech recognition tasks in noisy
environments, such as the AURORA database.
References
[1] G. J. Brown and M. Cooke, "Computational auditory scene analysis," Computer
Speech and Language, vol. 8, no. 4, pp. 297–336, 1994.
[2] P. Comon, "Independent component analysis, A new concept?," Signal Processing,
vol. 36, pp. 287–314, 1994.
[3] E. Wan and A. T. Nelson, "Neural dual extended Kalman filtering: Applications in
speech enhancement and monaural blind signal separation," in Proc. of IEEE Workshop on Neural Networks and Signal Processing, 1997.
[4] J. Hopgood and P. Rayner, "Single channel signal separation using linear time-varying
filters: Separability of non-stationary stochastic signals," in Proc. ICASSP, vol. 3,
(Phoenix, Arizona), pp. 1449–1452, March 1999.
[5] S. T. Roweis, "One microphone source separation," Advances in Neural Information
Processing Systems, vol. 13, pp. 793–799, 2001.
[6] S. Rickard, R. Balan, and J. Rosca, "Real-time time-frequency based blind source
separation," in Proc. of International Conference on Independent Component Analysis
and Signal Separation (ICA2001), (San Diego, CA), pp. 651–656, December 2001.
[7] T.-W. Lee and G.-J. Jang, "The statistical structures of male and female speech signals," in Proc. ICASSP, (Salt Lake City, Utah), May 2001.
[8] A. J. Bell and T. J. Sejnowski, "Learning the higher-order structures of a natural
sound," Network: Computation in Neural Systems, vol. 7, pp. 261–266, July 1996.
[9] T.-W. Lee and M. S. Lewicki, "The generalized Gaussian mixture model using ICA,"
in International Workshop on Independent Component Analysis (ICA'00), (Helsinki,
Finland), pp. 239–244, June 2000.
[10] B. Pearlmutter and L. Parra, "A context-sensitive generalization of ICA," in Proc.
ICONIP, (Hong Kong), pp. 151–157, September 1996.
Visual Development Aids the Acquisition of
Motion Velocity Sensitivities
Robert A. Jacobs
Department of Brain and Cognitive Sciences
University of Rochester
Rochester, NY 14627
[email protected]
Melissa Dominguez
Department of Computer Science
University of Rochester
Rochester, NY 14627
[email protected]
Abstract
We consider the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. Four models were trained to estimate motion
velocities in sequences of visual images. Three of the models were "developmental models" in the sense that the nature of their input changed
during the course of training. They received a relatively impoverished
visual input early in training, and the quality of this input improved as
training progressed. One model used a coarse-to-multiscale developmental progression (i.e. it received coarse-scale motion features early
in training and finer-scale features were added to its input as training
progressed), another model used a fine-to-multiscale progression, and
the third model used a random progression. The final model was nondevelopmental in the sense that the nature of its input remained the same
throughout the training period. The simulation results show that the
coarse-to-multiscale model performed best. Hypotheses are offered to
account for this model?s superior performance. We conclude that suitably designed developmental sequences can be useful to systems learning to estimate motion velocities. The idea that visual development can
aid visual learning is a viable hypothesis in need of further study.
1 Introduction
With relatively few exceptions, relationships between development and learning have
largely been ignored by the neural computation community. This is surprising because development may be nature?s way of biasing biological learning systems so that they achieve
better performance. Development may also represent an effective means for engineers
to bias machine learning systems. Learning systems are inherently faced with the biasvariance dilemma [1]. Systems with little or no bias tend to interpolate in unpredictable
ways and, thus, have highly variable generalization performance. Systems with larger bias,
in contrast, tend to show better generalization performance when exposed to those training
sets that they can adequately learn. Development may be an effective means of adding
suitable bias to a system thereby enhancing the generalization performance of that system.
In previous work, we studied the effects of different types of developmental sequences on
the performances of systems trained to estimate the binocular disparities present in pairs
of visual images [2]. Systems consisted of three components. The first component was a
pair of right-eye and left-eye images. For example, the images may have depicted a light
or dark object against a gray background. The second component was a set of binocular
energy filters. These filters are widely used to model the binocular sensitivities of simple
and complex cells in primary visual cortex of primates [3]. Based on local patches of the
right-eye and left-eye images, each filter acted as a disparity feature detector at a coarse,
medium, or fine scale depending on whether the filter was tuned to a low, medium, or high
spatial frequency, respectively. The third component was an artificial neural network. The
outputs of the binocular energy filters were the inputs to this network. The network was
trained to estimate the disparity of the object which was defined as the amount that the
object was shifted between the right-eye and left-eye images.
A non-developmental system was compared to three developmental systems. The network of the non-developmental system received the outputs of all binocular energy filters
throughout the entire training period. The networks of the developmental systems, in contrast, were trained in three stages. The network of the coarse-to-multiscale system received
the outputs of binocular energy filters tuned to a low spatial frequency during the first training stage. It received the outputs of filters tuned to low and medium spatial frequencies
during the second training stage, and it received the outputs of all filters during the third
training stage. The network of the fine-to-multiscale system was trained in an analogous
way, though its filters were added in the opposite order. This network received the outputs
of filters tuned to a high frequency during the first training stage, and the outputs of medium
and then low frequency filters were added during subsequent stages. The network of the
random developmental model was also trained in stages, though its inputs were chosen at
random at each stage and, thus, were not organized by spatial frequency content.
The results show that the coarse-to-multiscale and fine-to-multiscale systems consistently
outperformed the non-developmental and random developmental systems. The fact that
they outperformed the non-developmental system is important because this demonstrates
that models that undergo a developmental maturation can acquire a more advanced perceptual ability than one that does not. The fact that they outperformed the random developmental system is important because this demonstrates that not all developmental sequences
can be expected to provide performance benefits. To the contrary, only sequences whose
characteristics are matched to the task should lead to superior performance. In conjunction
with other results not described here, these findings suggest that the most successful systems at learning to detect binocular disparities are systems that are exposed to visual inputs
at a single scale early in training, and for which the resolution of their inputs progresses in
an orderly fashion from one scale to a neighboring scale during the course of training.
At a more general level, these results suggest that the idea that visual development aids
visual learning is a viable hypothesis in need of further study. This paper studies this hypothesis in the context of visual motion velocity estimation. Our simulations show that
the tasks of disparity estimation and velocity estimation yield similar, though not identical, patterns of results. Although a developmental approach to the velocity estimation task
is shown to be beneficial, it is not the case that all developmental progressions that lead
to performance advantages on the disparity estimation task also lead to advantages on the
velocity estimation task. In particular, a coarse-to-multiscale developmental system outperformed non-developmental and random developmental systems on the velocity estimation
task, but a fine-to-multiscale system did not. We hypothesize that the performance advantage of the coarse-to-multiscale system relative to the fine-to-multiscale system is due to
the fact that the coarse-to-multiscale system learned to make greater use of motion energy
filters tuned to a low spatial frequency. Analyses suggest that coarse-scale motion features
are more informative for the velocity estimation task than fine-scale features.
2 Developmental and Non-developmental Systems
The structure of the developmental and non-developmental systems was as follows. The
input to each system was a sequence of 88 retinal images where each image was a one-dimensional array 40 pixels in length. As described below, this sequence depicted an object
moving at a constant velocity in front of a stationary background. The retinal array was
treated as if it were shaped like a circle in the sense that the leftmost and rightmost pixels
were regarded as neighbors. This wraparound of the left and right edges was done to
avoid edge artifacts in the spatial dimension. Although a one-dimensional retina is a simplification, its use is justified by the need to keep the simulation times within reason. The
sequence of retinal images was filtered using motion energy filters.
Based on neurophysiological results, Adelson and Bergen [4] proposed motion energy filters as a way of modeling the motion sensitivities of simple and complex cells in primary
visual cortex. A sequence of one-dimensional images can be represented using a two-dimensional array where one dimension encodes space and the other dimension encodes
time. In this case, motion energy filters are two-dimensional filters which extract motion
information in local patches of the spatiotemporal space.
The receptive field profile of a simple cell can be described mathematically as a Gabor
function which is a sinusoid multiplied by a Gaussian envelope. A quadrature pair of such
functions with even and odd phases tuned to leftward (-) and rightward (+) directions of
motion is given by
    g_e^±(x, t) = exp( −x²/(2σ_x²) − t²/(2σ_t²) ) cos( 2π(f_x x ∓ f_t t) ),      (1)

    g_o^±(x, t) = exp( −x²/(2σ_x²) − t²/(2σ_t²) ) sin( 2π(f_x x ∓ f_t t) ),      (2)

where x and t are the spatial and temporal distances to the center of the Gaussian, σ_x²
and σ_t² are the spatial and temporal variances of the Gaussian, and f_x and f_t are the
spatial and temporal frequencies of the sinusoids. The ratio f_t/f_x determines the orientation
of a Gabor function in the spatiotemporal space which, in turn, determines the velocity
sensitivity of the function.
The activity of a simple cell is given by the square of the convolution of the cell?s receptive
field profile with the spatiotemporal pattern. The activities of simple cells with even and
odd phases are summed in order to form the activity of a complex cell. This activity is
known as a motion energy.
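A minimal numerical check of this construction (grid size, bandwidths, and the sign convention for direction are assumptions for illustration): a quadrature pair tuned to rightward motion should yield far more energy for a rightward-drifting grating than the leftward-tuned pair does:

```python
import numpy as np

def gabor_pair(fx, ft, sx, st, size=32, direction=1):
    """Even/odd spatiotemporal Gabor quadrature pair (cf. Equations 1 and 2).

    direction=+1 is taken here to mean rightward tuning, giving sinusoid
    phase 2*pi*(fx*x - ft*t); this sign convention is an assumption.
    """
    x = np.arange(size) - size / 2.0
    X, T = np.meshgrid(x, x, indexing="ij")   # space on axis 0, time on axis 1
    envelope = np.exp(-X**2 / (2 * sx**2) - T**2 / (2 * st**2))
    phase = 2 * np.pi * (fx * X - direction * ft * T)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(patch, fx, ft, sx, st, direction):
    """Sum of squared quadrature responses: the Adelson-Bergen motion energy."""
    g_even, g_odd = gabor_pair(fx, ft, sx, st, patch.shape[0], direction)
    return (patch * g_even).sum() ** 2 + (patch * g_odd).sum() ** 2
```

Because the even and odd responses are squared and summed, the energy is invariant to the spatial phase of the stimulus, which is the defining complex-cell property.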
In our simulations, we used a subset of the possible receptive-field locations in the two-dimensional (40 pixels × 88 time frames) spatiotemporal space. This subset formed
a 20 × 4 uniform grid such that receptive fields were centered on odd-numbered pixels
and odd-numbered time frames. This grid was located in the center of the space with respect to the temporal dimension. An advantage of this choice of locations was that edge
artifacts were avoided because all receptive-fields fell entirely within the spatiotemporal
space.
Fifteen complex cells corresponding to three spatial frequencies and five temporal frequencies were centered at each receptive-field location. The spatial and temporal frequencies
were each separated by an octave. Temporal frequencies were chosen so that the set of
cells at each spatial frequency had the same pattern of velocity tunings. Specifically, the
sets tuned to low (0.0625 cycles/pixel), medium (0.125 cycles/pixel), and high (0.25 cycles/pixel) spatial frequencies had velocity tunings of 0.25, 0.5, 1.0, 2.0, and 4.0 pixels per
time frame. All cells were tuned to rightward motion because we restricted our data sets to
only include objects that were moving to the right. A cell?s spatial and temporal standard
deviations were set to be inversely proportional to its spatial and temporal frequencies,
respectively. The outputs of the complex cells within each spatial frequency band were
normalized using a softmax nonlinearity. Consequently, complex cells tended to respond
to relative contrast in the image sequence rather than absolute contrast [5] [6].
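The normalization can be sketched as a standard softmax over the energies within one spatial-frequency band (the numerically stabilized form below is a common implementation choice; the paper does not give its exact parameterization):

```python
import numpy as np

def softmax_normalize(energies):
    """Softmax over one band's motion energies: outputs are positive,
    sum to one, and preserve which cell is relatively most active."""
    z = np.asarray(energies, dtype=float)
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```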
The normalized outputs of the complex cells were the inputs to an artificial neural network.
The network had 1200 input units (the complex cells had 80 receptive-field locations and
there were 15 cells at each location). The network?s hidden layer contained 18 hidden
units which were organized into 3 groups of 6 units each. The connectivity of the hidden
units was set so that each group had a limited receptive field, and so that neighboring
groups had overlapping receptive fields. A group of hidden units received inputs from
thirty-two receptive field locations at the complex cell level, and the receptive fields of
neighboring groups overlapped by eight receptive-field locations. The hidden units used a
logistic activation function. The output layer consisted of a single linear unit; this unit's
output was an estimate of the object velocity depicted in the sequence of retinal images.
The weights of an artificial neural network were initialized to small random values, and
were adjusted during the course of training to minimize a sum of squared error cost function
using a conjugate gradient optimization procedure [7]. Weight sharing was implemented at
the hidden unit level so that corresponding units within each group of hidden units had the
same incoming and outgoing weight values, and so that a hidden unit had the same set of
weight values from each receptive field location at the complex unit level. This provided
the network with a degree of translation invariance, and also dramatically decreased the
number of modifiable weight values in the network. It therefore decreased the number of
data items needed to train the network, and the amount of time needed to train the network.
Models were trained and tested using separate sets of training and test data items. Each
set contained 250 randomly generated items. Training was terminated after 100 iterations
through the training set. The results reported below are based on the data items from the
test set.
Three developmental systems and one non-developmental system were simulated. The
coarse-to-multiscale system, or model C2M, was trained using a coarse-to-multiscale developmental sequence which was implemented as follows. The training period was divided
into three stages. During the first stage, the neural network portion of the model only received the outputs of complex cells tuned to the low spatial frequency (the outputs of other
complex cells were set to zero). During the second stage, the network received the outputs
of complex cells tuned to low and medium spatial frequencies; it received the outputs of all
complex cells during the third stage. The training of the fine-to-multiscale system, or model
F2M, was identical to that of model C2M except that its training used a fine-to-multiscale
developmental sequence. During the first stage of training, its network received the outputs
of complex cells tuned to the high spatial frequency. This network received the outputs
of complex cells tuned to high and medium spatial frequencies during the second stage,
and received the outputs of all complex cells during the third stage. The training of the
random developmental system, or model RD, also used a developmental sequence, though
this sequence was generated randomly and, thus, was not based on the spatial frequency
tunings of the complex cells. The collection of complex cells was randomly partitioned
into three equal-sized subsets with the constraint that each subset included one-third of the
cells at each receptive-field location. During the first stage of training, the neural network
portion of the model only received the outputs of the complex cells in the first subset. It
received the outputs of the cells in the first and second subsets during the second stage of
training, and received the outputs of all complex cells during the third stage. In contrast,
the training period of the non-developmental system, or model ND, was not divided into
separate stages; its neural network received the outputs of all complex cells throughout the
entire training period.
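The staged schedules can be implemented as stage-dependent 0/1 masks over the 1200-dimensional complex-cell vector. The sketch below assumes one particular layout of the vector (cells grouped by location, then by spatial-frequency band), which the text does not specify:

```python
import numpy as np

def stage_mask(schedule, stage, n_bands=3, cells_per_band=5, n_locations=80):
    """Binary mask selecting the complex cells active at a given training stage.

    Band order is [low, medium, high] spatial frequency; "c2m" switches bands
    on coarse-to-multiscale, "f2m" fine-to-multiscale. Masked (zeroed) cells
    correspond to the complex-cell outputs "set to zero" in the staged training.
    """
    orders = {"c2m": [0, 1, 2], "f2m": [2, 1, 0]}
    active = orders[schedule][:stage]
    band = np.repeat(np.arange(n_bands), cells_per_band)   # bands within one location
    mask_one_location = np.isin(band, active).astype(float)
    return np.tile(mask_one_location, n_locations)
```

A random-developmental mask would instead partition the cells at random, activating one third of the cells at each receptive-field location per stage.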
Solid object data item
Noisy object data item
Figure 1: Ten frames of an image sequence from the solid object data set (top) and ten
frames of an image sequence from the noisy object data set (bottom).
3 Data Sets and Simulation Results
The performances of the four models were evaluated on two data sets. In all cases the
images were gray scale with luminance values between 0 and 1, and motion velocities were
rightward with magnitudes between 0 and 4 pixels per time frame. Fifteen simulations of
each model on each data set were conducted.
In the solid object data set, images consisted of a moving light or dark object in front of a
stationary gray background. The object's gray-scale values were randomly chosen to either
be in the range from 0.0 to 0.1 or from 0.9 to 1.0, whereas the gray-scale value of the
background was always 0.5. The size of the object was randomly chosen to be an integer
between 6 and 12 pixels, its initial location was a randomly chosen pixel on the retina, and
its velocity was randomly chosen to be a real value between 0 and 4 pixels per time frame.
Given a sequence of images, the task of a model was to estimate the object's velocity. The
top portion of Figure 1 gives an example of ten frames of an image sequence from the solid
object data set.
The bar graph in Figure 2 illustrates the results. The horizontal axis gives the model, and
the vertical axis gives the root mean squared error (RMSE) on the data items from the test
set at the end of training (the error bars give the standard error of the mean). The labels for
the developmental models C2M, F2M, and RD include a number. Recall that the training of
these models was divided into three training stages (or developmental stages). The number
in the label gives the length of developmental stages 1 and 2 (the length of developmental
stage 3 can be calculated using the fact that the entire training period lasted 100 iterations).
For example, the label "C2M-5" corresponds to a version of model C2M in which the
Figure 2: The root mean squared errors (RMSE) on the test set data items for model ND,
the best performing version of model RD, and different versions of models C2M and F2M
after training on the solid object data set (the error bars give the standard error of the mean).
first stage was 5 iterations, the second stage was 5 iterations, and the third stage was 90
iterations. In regard to model RD, we simulated four versions of this model (RD-5, RD-10, RD-20, and RD-30). For the sake of brevity, only the version that performed best is
included in the graph.
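The two summary statistics reported in the figures, the root mean squared error on the test items and the standard error of the mean across the fifteen simulation runs (the error bars), can be sketched as follows. This is a minimal illustration of the standard formulas; the function names are ours, not from the paper.

```python
import math

def rmse(estimates, targets):
    """Root mean squared error over a set of test items."""
    assert len(estimates) == len(targets)
    sq = [(e - t) ** 2 for e, t in zip(estimates, targets)]
    return math.sqrt(sum(sq) / len(sq))

def sem(values):
    """Standard error of the mean across simulation runs
    (what the error bars in the bar graphs report)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return math.sqrt(var / n)
```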
Model C2M significantly outperformed all other models. The version of this model which
performed best was version C2M-20 which had an 11.5% smaller generalization error than
model ND (t = 2.50, p < 0.02). In addition, C2M-20 had a 9.6% smaller error than the best
version of model F2M (t = 3.57, p < 0.01), and a 7.2% smaller error than the best version
of model RD (t = 2.30, p < 0.05).
The images in the second data set, referred to as the noisy object data set, were meant to
resemble random-dot kinematograms frequently used in behavioral experiments. Images
contained a noisy object which was moving to the right and a noisy background which
was stationary. The gray-scale values of the object pixels and the background pixels were
set to random numbers between 0 and 1. The size of the object was randomly chosen to
be an integer between 6 and 12 pixels, its initial location was a randomly chosen pixel on
the retina, and its velocity was randomly chosen to be an integer between 0 and 4 pixels
per time frame. As before, the task was to map an image sequence to an estimate of an
object velocity. The bottom portion of Figure 1 gives an example of ten frames of an image
sequence from the noisy object data set.
The results are shown in Figure 3. Model C2M, once again, outperformed the other models.
Relative to model ND, all versions of model C2M showed superior performance (ND vs.
C2M-5: t = 2.69, p < 0.02; ND vs. C2M-10: t = 2.78, p < 0.01; ND vs. C2M-20: t = 3.03,
p < 0.01; ND vs. C2M-30: t = 4.14, p < 0.001). The version of model C2M
which performed best was version C2M-30. On average, this version had an 8.9% smaller
generalization error than model ND, a 6.1% smaller error than the best version of model
F2M, and a 4.3% smaller error than the best version of model RD.
Figure 3: The root mean squared errors (RMSE) on the test set data items for model ND,
the best performing version of model RD, and different versions of models C2M and F2M
after training on the noisy object data set (the error bars give the standard error of the mean).
Why did model C2M show the best performance? Simulation results described in Jacobs
and Dominguez [8] suggest that coarse-scale motion features are more informative for the
velocity estimation task than fine-scale features. For example, networks that received only
the outputs of complex cells tuned to a low spatial frequency consistently outperformed
networks that received only the outputs of mid frequency complex cells or only the outputs
of high frequency complex cells. We speculate that coarse-scale motion features are more
informative for a number of reasons. First, complex cells tuned to the lowest spatial frequency have the largest receptive fields. As discussed by Weiss and Adelson [9], motion
signals tend to be less ambiguous when the stimulus is viewed for a long duration and more
ambiguous when the stimulus is viewed for a short duration. This type of reasoning also
applies to the activities of complex cells with receptive fields in the spatiotemporal domain.
That is, there is comparatively less ambiguity in the activities of complex cells with larger
receptive fields than in the activities of cells with smaller receptive fields. Because cells
tuned to a low spatial frequency tend to have larger receptive fields than cells tuned to a
high spatial frequency, low frequency tuned cells tend to be more reliable for the purposes
of motion velocity estimation. Second, model C2M may have benefited from the fact that
complex cells with large, overlapping receptive fields provide a high resolution coarse-code
of the spatiotemporal space [10]-[12]. This code could provide model C2M with accurate
information as to the location of the moving object at each moment in time. For example,
the activities of the population of these cells may have coded with high accuracy the fact
that the moving object was at location x1 at time t1 and at location x2 at time t2. If so, the
model's neural network could have easily learned to accurately estimate the object velocity
by calculating (x2 - x1) / (t2 - t1). Model C2M would have an advantage over other models because it received this high resolution coarse-code throughout training. In contrast,
model F2M, for example, received early in training only the outputs of complex cells with
smaller, less-overlapping receptive fields. The activities of a population of these cells form
a lower resolution coarse-code of the spatiotemporal space.
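To make this reasoning concrete, the following sketch shows a hypothetical population-vector readout of object position from coarse-coded cell activities, followed by the velocity calculation described above. The decoding scheme and all names are our own illustration, not part of the authors' model.

```python
def decode_position(activities, centers):
    """Population-vector readout: position estimated as the
    activity-weighted mean of the cells' receptive-field centers
    (a hypothetical decoding scheme for a coarse code)."""
    total = sum(activities)
    return sum(a * c for a, c in zip(activities, centers)) / total

def velocity_estimate(x1, t1, x2, t2):
    """Velocity in pixels per frame from two decoded positions."""
    return (x2 - x1) / (t2 - t1)
```

With large, overlapping receptive fields, many cells respond at every object position, so the weighted mean can localize the object with higher resolution than any single cell's tuning width.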
As described above, in earlier work we found that the most successful systems at learning
a binocular disparity estimation task were those that: (1) received inputs at a single frequency scale early in training, and (2) for which the resolution of their inputs progressed in
an orderly fashion from one scale to a neighboring scale during the course of training [2].
Condition (1) allowed a system to combine and compare input features at an early training
stage without the need to compensate for the fact that these features could be at different
spatial scales. If condition (2) was satisfied, when a system received inputs at a new spatial
scale, it was close to a scale with which the system was already familiar. Although not
described here (see Jacobs and Dominguez [8]), we tested the importance on the motion
velocity estimation task for the resolution of a system's inputs to progress in an orderly
fashion from one scale to a neighboring scale. The results suggest that this factor is moderately important, but not highly important, for a developmental system learning to estimate
motion velocities. Overall, it is more important for a system to receive the outputs of the
low spatial frequency complex cells as early in training as possible.
Based on the entire set of simulations, we conclude that suitably designed developmental
sequences can be useful to systems learning to estimate motion velocities. The idea that
visual development can aid visual learning is a viable hypothesis in need of further study.
Acknowledgments
This work was supported by NIH research grant RO1-EY13149.
References
[1] Geman, S., Bienenstock, E., and Doursat, R. (1992) Neural networks and the bias/variance
dilemma. Neural Computation, 4, 1-58.
[2] Dominguez, M. and Jacobs, R.A. (2003) Developmental constraints aid the acquisition of binocular disparity sensitivities. Neural Computation, in press.
[3] Ohzawa, I., DeAngelis, G.C., and Freeman, R.D. (1990) Stereoscopic depth discrimination in
the visual cortex: Neurons ideally suited as disparity detectors. Science, 249, 1037-1041.
[4] Adelson, E.H. and Bergen, J.R. (1985) Spatiotemporal energy models for the perception of
motion. Journal of the Optical Society of America A, 2, 284-299.
[5] Heeger, D.J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience,
9, 181-197.
[6] Nowlan, S.J. and Sejnowski, T.J. (1994) Filter selection model for motion segmentation and
velocity integration. Journal of the Optical Society of America A, 11, 3177-3200.
[7] Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P. (1992) Numerical Recipes in
C: The Art of Scientific Computing. Cambridge, UK: Cambridge University Press.
[8] Jacobs, R.A. and Dominguez, M. (2003) Visual development and the acquisition of motion
velocity sensitivities. Neural Computation, in press.
[9] Weiss, Y. and Adelson, E.H. (1998) Slow and smooth: A Bayesian theory for the combination
of local motion signals in human vision. Center for Biological and Computational Learning
Paper Number 158, Massachusetts Institute of Technology, Cambridge, MA.
[10] Milner, P.M. (1974) A model for visual shape recognition. Psychological Review, 81, 521-535.
[11] Hinton, G.E. (1981) Shape representation in parallel systems. In A. Drina (Ed.), Proceedings of
the Seventh International Joint Conference on Artificial Intelligence.
[12] Ballard, D.H. (1986) Cortical connections and parallel processing: Structure and function. Behavioral and Brain Sciences, 9, 67-120.
| 2225 |@word version:17 nd:12 suitably:3 simulation:8 jacob:5 fifteen:2 thereby:1 solid:6 moment:1 initial:2 disparity:9 tuned:17 rightmost:1 surprising:1 nowlan:1 activation:1 numerical:1 subsequent:1 informative:3 shape:2 hypothesize:1 designed:3 v:4 stationary:3 discrimination:1 intelligence:1 item:9 short:1 filtered:1 coarse:17 location:14 five:1 viable:3 combine:1 behavioral:2 expected:1 frequently:1 brain:2 freeman:1 little:1 unpredictable:1 provided:1 matched:1 medium:7 lowest:1 finding:1 temporal:9 demonstrates:2 uk:1 unit:13 grant:1 before:1 local:3 studied:1 limited:1 range:1 acknowledgment:1 thirty:1 procedure:1 gabor:2 significantly:1 melissa:1 numbered:2 suggest:5 close:1 selection:1 twodimensional:2 context:1 map:1 center:3 duration:2 resolution:6 array:3 regarded:1 population:2 analogous:1 milner:1 hypothesis:6 overlapped:1 velocity:27 recognition:1 located:1 geman:1 bottom:2 cycle:3 developmental:39 moderately:1 ideally:1 trained:8 exposed:2 dilemma:2 rightward:3 easily:1 joint:1 represented:1 america:2 cat:1 train:2 separated:1 effective:2 sejnowski:1 deangelis:1 artificial:4 whose:1 larger:3 widely:1 ability:1 noisy:8 final:1 sequence:22 advantage:5 neighboring:5 achieve:1 recipe:1 object:29 depending:1 odd:4 received:24 progress:2 implemented:2 c:1 resemble:1 direction:1 filter:18 centered:2 human:1 generalization:5 biological:2 mathematically:1 adjusted:1 early:7 purpose:1 estimation:12 outperformed:7 label:3 largest:1 gaussian:3 always:1 rather:1 avoid:1 conjunction:1 consistently:2 lasted:1 contrast:6 sense:3 detect:1 bergen:2 el:1 vetterling:1 entire:4 hidden:8 bienenstock:1 pixel:17 overall:1 orientation:1 development:9 spatial:28 summed:1 softmax:1 integration:1 art:1 field:21 equal:1 once:1 shaped:1 identical:2 adelson:4 progressed:3 stimulus:2 few:1 retina:3 randomly:10 interpolate:1 familiar:1 phase:2 highly:2 light:2 accurate:1 edge:3 initialized:1 circle:1 psychological:1 modeling:1 earlier:1 cost:1 deviation:1 subset:6 uniform:1 
successful:2 conducted:1 seventh:1 front:2 reported:1 spatiotemporal:9 international:1 sensitivity:6 connectivity:1 squared:4 again:1 ambiguity:1 satisfied:1 cognitive:1 account:1 retinal:4 speculate:1 performed:4 root:3 portion:4 parallel:2 rochester:6 rmse:5 minimize:1 square:1 formed:1 accuracy:1 variance:2 largely:1 characteristic:1 yield:1 bayesian:1 accurately:1 finer:1 detector:2 tended:1 sharing:1 ed:1 against:1 energy:10 acquisition:3 frequency:30 massachusetts:1 recall:1 organized:2 segmentation:1 impoverished:1 maturation:1 response:1 improved:1 wei:2 done:1 though:4 robbie:1 evaluated:1 binocular:9 stage:27 horizontal:1 multiscale:16 overlapping:3 logistic:1 quality:1 artifact:2 gray:6 scientific:1 effect:1 ohzawa:1 consisted:3 normalized:2 adequately:1 sinusoid:2 during:19 ambiguous:2 leftmost:1 octave:1 motion:27 reasoning:1 image:22 nih:1 superior:3 discussed:1 onedimensional:1 cambridge:3 tuning:3 rd:12 grid:2 nonlinearity:1 had:11 dot:1 moving:6 cortex:4 showed:1 leftward:1 greater:1 period:6 signal:2 bcs:1 smooth:1 long:1 compensate:1 divided:3 coded:1 enhancing:1 vision:1 iteration:5 represent:1 normalization:1 cell:47 receive:1 justified:1 background:6 fine:10 whereas:1 decreased:2 addition:1 biasvariance:1 envelope:1 doursat:1 fell:1 tend:5 undergo:1 contrary:1 integer:3 opposite:1 idea:3 whether:1 ignored:1 useful:2 dramatically:1 amount:2 dark:2 mid:1 band:1 ten:4 shifted:1 stereoscopic:1 neuroscience:1 per:4 modifiable:1 group:6 four:3 luminance:1 graph:2 sum:1 respond:1 throughout:4 patch:2 entirely:1 layer:2 simplification:1 activity:9 constraint:2 encodes:2 sake:1 aspect:1 performing:2 optical:2 relatively:2 acted:1 department:2 c2m:30 combination:1 conjugate:1 beneficial:1 smaller:8 ro1:1 partitioned:1 dominguez:5 primate:1 restricted:1 turn:1 needed:2 end:1 multiplied:1 eight:1 progression:5 top:2 include:2 calculating:1 society:2 comparatively:1 added:3 already:1 receptive:21 primary:2 striate:1 gradient:1 distance:1 separate:2 
simulated:2 reason:2 length:3 code:4 relationship:1 ratio:1 acquire:1 robert:1 vertical:1 convolution:1 neuron:1 hinton:1 frame:10 community:1 wraparound:1 pair:3 connection:1 learned:2 bar:4 below:2 perception:2 pattern:3 biasing:1 reliable:1 suitable:1 treated:1 advanced:1 technology:1 eye:6 inversely:1 axis:2 extract:1 kinematograms:1 faced:1 review:1 relative:3 proportional:1 degree:1 offered:1 translation:1 course:4 changed:1 supported:1 bias:5 institute:1 neighbor:1 absolute:1 benefit:2 regard:1 dimension:4 calculated:1 depth:1 cortical:1 collection:1 avoided:1 keep:1 orderly:3 incoming:1 conclude:2 why:1 nature:3 learn:1 ballard:1 inherently:1 complex:31 domain:1 did:2 terminated:1 profile:2 allowed:1 quadrature:1 benefited:1 referred:1 fashion:3 ny:2 aid:5 slow:1 heeger:1 perceptual:1 third:8 remained:1 adding:1 importance:1 magnitude:1 illustrates:1 suited:1 flannery:1 depicted:3 neurophysiological:1 visual:19 contained:3 applies:1 corresponds:1 determines:2 teukolsky:1 ma:1 sized:1 viewed:2 consequently:1 content:1 included:2 specifically:1 except:1 engineer:1 invariance:1 exception:1 meant:1 brevity:1 outgoing:1 tested:2 |
A Differential Semantics for Jointree
Algorithms
James D. Park and Adnan Darwiche
Computer Science Department
University of California, Los Angeles, CA 90095
{jd,darwiche}@cs.ucla.edu
Abstract
A new approach to inference in belief networks has been recently
proposed, which is based on an algebraic representation of belief
networks using multi-linear functions. According to this approach,
the key computational question is that of representing multi-linear
functions compactly, since inference reduces to a simple process of
evaluating and differentiating such functions. We show here that
mainstream inference algorithms based on jointrees are a special
case of this approach in a very precise sense. We use this result to
prove new properties of jointree algorithms, and then discuss some
of its practical and theoretical implications.
1 Introduction
It was recently shown that the probability distribution of a belief network can be
represented using a multi-linear function, and that most probabilistic queries of
interest can be retrieved directly from the partial derivatives of this function [2].
Although the multi-linear function has an exponential number of terms, it can
be represented using a small arithmetic circuit in certain situations [3].1 Once
a belief network is represented as an arithmetic circuit, probabilistic inference is
then performed by evaluating and differentiating the circuit, using a very simple
procedure which resembles back-propagation in neural networks.
We show in this paper that mainstream inference algorithms based on jointrees [14,
8] are a special case of the arithmetic-circuit approach proposed in [2]. Specifically,
we show that each jointree is an implicit representation of an arithmetic circuit; that
the inward pass in jointree propagation evaluates this circuit; and that the outward
pass differentiates the circuit. Using these results, we prove new useful properties
of jointree propagation. We also suggest a new interpretation of the process of
factoring graphical models into jointrees, as a process of factoring exponentially-sized
multi-linear functions into arithmetic circuits of smaller size.
1 For example, it was shown recently that real-world belief networks with treewidth up
to 60 can be compiled into arithmetic circuits with few thousand nodes [3]. Such networks
have local structure, and are outside the scope of mainstream algorithms for inference in
belief networks whose complexity is exponential in treewidth.
A      B      θ_{B|A}
true   true   θ_{b|a} = .2
true   false  θ_{b̄|a} = .8
false  true   θ_{b|ā} = .7
false  false  θ_{b̄|ā} = .3

A      θ_A
true   θ_a = .6
false  θ_ā = .4

A      C      θ_{C|A}
true   true   θ_{c|a} = .8
true   false  θ_{c̄|a} = .2
false  true   θ_{c|ā} = .15
false  false  θ_{c̄|ā} = .85

Figure 1: The CPTs of belief network B ← A → C.
This paper is structured as follows. Sections 2 and 3 are dedicated to a review of
inference approaches based on arithmetic circuits and jointrees. Section 4 shows
that the jointree approach is a special case of the arithmetic?circuit approach, and
discusses some practical implications of this finding. Finally, Section 5 closes with a
new perspective on factoring graphical models. Proofs of all theorems can be found
in the long version of this paper [11].
2 Belief networks as multi-linear functions
A belief network is a factored representation of a probability distribution. It consists
of two parts: a directed acyclic graph (DAG) and a set of conditional probability
tables (CPTs). For each node X and its parents U, we have a CPT that specifies
the distribution of X given each instantiation u of the parents; see Figure 1.2
A belief network is a representational factorization of a probability distribution,
not a computational one. That is, although the network compactly represents the
distribution, it needs to be processed further if one is to obtain answers to arbitrary
probabilistic queries. Mainstream algorithms for inference in belief networks operate on the network to generate a computational factorization, allowing one to answer
queries in time which is linear in the factorization size. A most influential computational factorization of belief networks is the jointree [14, 8, 6]. Standard jointree
factorizations are structure-based: their size depends only on the network topology
and is invariant to local CPT structure. This observation has triggered much
research for alternative, finer-grained factorizations, since real-world networks can
exhibit significant local structure that leads to smaller factorizations if exploited.
We discuss next one of the latest proposals in this direction, which calls for using
arithmetic circuits as a computational factorization of belief networks [2]. This
proposal is based on viewing each belief network as a multi-linear function, which
can be represented compactly using an arithmetic circuit. The multi-linear function
itself contains two types of variables. First, evidence indicators, where for each
variable X in the network, we have a variable λ_x for each value x of X. Second,
network parameters, where for each variable X and its parents U in the network,
we have a variable θ_{x|u} for each value x of X and instantiation u of U.
The multi-linear function has a term for each instantiation of the network variables,
which is constructed by multiplying all evidence indicators and network parameters
that are consistent with that instantiation. For example, the multi-linear function
of the network in Figure 1 has eight terms corresponding to the eight instantiations
of variables A, B, C: f = λ_a λ_b λ_c θ_a θ_{b|a} θ_{c|a} + λ_a λ_b λ_c̄ θ_a θ_{b|a} θ_{c̄|a} + ... + λ_ā λ_b̄ λ_c̄ θ_ā θ_{b̄|ā} θ_{c̄|ā}.
We will often refer to such a multi-linear function as the network polynomial.
2 Variables are denoted by upper-case letters (A) and their values by lower-case letters
(a). Sets of variables are denoted by bold-face upper-case letters (A) and their instantiations are denoted by bold-face lower-case letters (a). For a variable A with values true
and false, we use a to denote A = true and ā to denote A = false. Finally, for a variable X
and its parents U, we use θ_{x|u} to denote the CPT entry corresponding to Pr(x | u).
Figure 2: On the left: An arithmetic circuit which computes the function
λ_a λ_b θ_a θ_{b|a} + λ_a λ_b̄ θ_a θ_{b̄|a} + λ_ā λ_b θ_ā θ_{b|ā} + λ_ā λ_b̄ θ_ā θ_{b̄|ā}. The circuit is a DAG, where
leaf nodes represent function variables and internal nodes represent arithmetic operations. On the right: A belief network structure and its corresponding jointree.
Given the network polynomial f , we can answer any query with respect to the
belief network. Specifically, let e be an instantiation of some network variables,
and suppose we want to compute the probability of e. We can do this by simply
evaluating the polynomial f while setting each evidence indicator λ_x to 1 if x
is consistent with e, and to 0 otherwise. For the network in Figure 1, we can
compute the probability of evidence e = bc̄ by evaluating its polynomial above under
λ_a = 1, λ_ā = 1, λ_b = 1, λ_b̄ = 0, λ_c = 0, λ_c̄ = 1. This leads to θ_a θ_{b|a} θ_{c̄|a} + θ_ā θ_{b|ā} θ_{c̄|ā},
which equals the probability of b, c̄ in this case. We use f(e) to denote the result
of evaluating the polynomial f under evidence e as given above.
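A minimal sketch of this evaluation for the network of Figure 1 follows. The CPT values are the ones listed in the figure; the dictionary-based representation and function names are our own.

```python
from itertools import product

# Parameters of the network B <- A -> C from Figure 1.
theta_A = {True: 0.6, False: 0.4}
theta_B_given_A = {(True, True): 0.2, (False, True): 0.8,
                   (True, False): 0.7, (False, False): 0.3}   # key: (b, a)
theta_C_given_A = {(True, True): 0.8, (False, True): 0.2,
                   (True, False): 0.15, (False, False): 0.85}  # key: (c, a)

def f(evidence):
    """Evaluate the network polynomial under evidence e: a term
    contributes iff all its evidence indicators are 1, i.e. iff the
    instantiation (a, b, c) is consistent with the evidence."""
    total = 0.0
    for a, b, c in product([True, False], repeat=3):
        consistent = all(evidence.get(var, val) == val
                         for var, val in (('A', a), ('B', b), ('C', c)))
        if consistent:
            total += (theta_A[a] * theta_B_given_A[(b, a)]
                      * theta_C_given_A[(c, a)])
    return total
```

For example, `f({'B': True, 'C': False})` sums exactly the two surviving terms θ_a θ_{b|a} θ_{c̄|a} + θ_ā θ_{b|ā} θ_{c̄|ā} = .6·.2·.2 + .4·.7·.85 = .262.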
This algebraic representation of belief networks is attractive as it allows us to obtain
answers to a large number of probabilistic queries directly from the derivatives of
the network polynomial [2]. For example, the posterior marginal Pr(x|e) for a
variable X ∉ E equals (1/f(e)) ∂f(e)/∂λ_x, where ∂f(e)/∂λ_x is the partial
derivative of f wrt λ_x evaluated at e. Second, the probability of evidence e after
having retracted the value of some variable X from e, Pr(e − X), equals
Σ_x ∂f(e)/∂λ_x. Third, the posterior marginal Pr(x, u|e) for a variable X and its
parents U equals (θ_{x|u}/f(e)) ∂f(e)/∂θ_{x|u}.
The network polynomial has an exponential number of terms, yet one can represent
it compactly in certain cases using an arithmetic circuit; see Figure 2. The (first)
partial derivatives of an arithmetic circuit can all be computed simultaneously in
time linear in the circuit size [2, 12]. The procedure resembles the back-propagation
algorithm for neural networks as it evaluates the circuit in a single upward pass,
and then differentiates it through a single downward pass.
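The two passes can be sketched as follows for circuits over addition and multiplication nodes. This is our own minimal implementation, not the authors' code; nodes are processed in topological order so that shared sub-circuits of the DAG are each handled once, and the whole procedure runs in time linear in the circuit size.

```python
import math

class Node:
    def __init__(self, op, children=(), value=None):
        self.op = op                    # 'input', '+', or '*'
        self.children = list(children)
        self.value = value              # set on input nodes before evaluating
        self.deriv = 0.0

def topo_order(root):
    """Children-before-parents ordering of the circuit DAG."""
    order, seen = [], set()
    def visit(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for c in n.children:
            visit(c)
        order.append(n)                 # postorder: children first
    visit(root)
    return order

def evaluate(root):
    """Single upward pass: compute every node's value bottom-up."""
    for n in topo_order(root):
        if n.op == '+':
            n.value = sum(c.value for c in n.children)
        elif n.op == '*':
            n.value = math.prod(c.value for c in n.children)
    return root.value

def differentiate(root):
    """Single downward pass: partial derivative of the circuit output
    with respect to every node (back-propagation over + and *)."""
    order = topo_order(root)
    for n in order:
        n.deriv = 0.0
    root.deriv = 1.0
    for n in reversed(order):           # parents before children
        for c in n.children:
            if n.op == '+':
                c.deriv += n.deriv
            elif n.op == '*':           # multiply by the sibling values
                c.deriv += n.deriv * math.prod(
                    s.value for s in n.children if s is not c)
```

On the circuit of Figure 2 with evidence entered on the indicator inputs, `evaluate` returns f(e) and, after `differentiate`, each indicator λ_x holds ∂f(e)/∂λ_x on its `deriv` field.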
The main computational question is then that of generating the smallest arithmetic
circuit that computes the network polynomial. A structure-based approach for
this has been given in [2], which is guaranteed to generate a circuit whose size is
bounded by O(n exp(w)), where n is the number of nodes in the network and w
is its treewidth. A more recent approach, however, which exploits local structure
has been presented in [3] and was shown experimentally to generate small arithmetic circuits for networks whose treewidth is up to 60. As we show in the rest of
this paper, the process of factoring a belief network into a jointree is yet another
method for generating an arithmetic circuit for the network. Specifically, we show
that the jointree structure is an implicit representation of such a circuit, and that
jointree propagation corresponds to circuit evaluation and differentiation. Moreover, the difference between Shenoy-Shafer and Hugin propagation turns out to be
a difference in the numeric scheme used for circuit differentiation [11].
3 Jointree Algorithms
We now review jointree algorithms, which are quite influential in graphical models.
Let B be a belief network. A jointree for B is a pair (T , L), where T is a tree and
L is a function that assigns labels to nodes in T . A jointree must satisfy three
properties: (1) each label L(i) is a set of variables in the belief network; (2) each
network variable X and its parents U (a family) must appear together in some label
L(i); (3) if a variable appears in the labels of i and j, it must also appear in the
label of each node k on the path connecting them. The label of edge ij in T is
defined as L(i) ∩ L(j). We will refer to the nodes of a jointree (and sometimes their
labels) as clusters. We will also refer to the edges of a jointree (and sometimes their
labels) as separators. Figure 2 depicts a belief network and one of its jointrees.
Jointree algorithms start by constructing a jointree for a given belief network [14, 8,
6]. They also associate tables (also called potentials) with clusters and separators.3
The conditional probability table (CPT or CP Table) of each variable X with parents
U, denoted θ_{X|U}, is assigned to a cluster that contains X and U. In addition, an
evidence table over variable X, denoted λ_X, is assigned to a cluster that contains X.
Figure 2 depicts the assignments of evidence and CP tables to clusters. Evidence e
is entered into a jointree by initializing evidence tables as follows: we set λ_X(x) to
1 if x is consistent with evidence e, and we set λ_X(x) to 0 otherwise.
Given some evidence e, a jointree algorithm propagates messages between clusters.
After passing two messages per edge in the jointree, one can compute the marginals
Pr(C, e) for every cluster C. There are two main methods for propagating messages
in a jointree: the Shenoy-Shafer architecture [14] and the Hugin architecture [8].
Shenoy-Shafer propagation proceeds as follows [14]. First, evidence e is entered
into the jointree. A cluster is then selected as the root and message propagation
proceeds in two phases, inward and outward. In the inward phase, messages are
passed toward the root. In the outward phase, messages are passed away from the
root. Cluster i sends a message to cluster j only when it has received messages
from all its other neighbors k. A message from cluster i to cluster j is a table M_ij
defined as follows: M_ij = Σ_{C\S} Φ_i Π_{k≠j} M_ki, where C are the variables of cluster
i, S are the variables of separator ij, and Φ_i is the multiplication of all evidence
and CP tables assigned to cluster i. Once message propagation is finished, we have
Pr(C, e) = Φ_i Π_k M_ki, where C are the variables of cluster i.
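For a concrete instance, consider a two-cluster jointree with cluster i = {A} (holding tables λ_A and θ_A), cluster j = {A, B} (holding λ_B and θ_{B|A}), and separator {A}, using the parameters of Figure 1. With cluster {A} as root, propagation reduces to a single inward message. This sketch uses dictionary tables of our own design and assumes no evidence on A (so λ_A ≡ 1).

```python
theta_A = {'a': 0.6, 'abar': 0.4}
theta_B_A = {('b', 'a'): 0.2, ('bbar', 'a'): 0.8,
             ('b', 'abar'): 0.7, ('bbar', 'abar'): 0.3}

def cluster_marginal_A(evidence_b=None):
    """Pr(A, e) at the root cluster {A}: Phi_i times the single
    inward message M_ji(a) = sum_b Phi_j(a, b)."""
    lam_B = {b: 1.0 if evidence_b in (None, b) else 0.0
             for b in ('b', 'bbar')}
    # Phi_j = lambda_B * theta_{B|A}; marginalize B out of cluster j.
    M_ji = {a: sum(lam_B[b] * theta_B_A[(b, a)] for b in ('b', 'bbar'))
            for a in ('a', 'abar')}
    # Phi_i = theta_A (times lambda_A, which is all 1s here).
    return {a: theta_A[a] * M_ji[a] for a in ('a', 'abar')}
```

With evidence B = b this yields Pr(a, b) = .6·.2 = .12 and Pr(ā, b) = .4·.7 = .28, whose sum .40 is Pr(b).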
Hugin propagation proceeds similarly to Shenoy-Shafer by entering evidence; selecting a cluster as root; and propagating messages in two phases, inward and outward
[8]. The Hugin method, however, differs in some major ways. It maintains a table
Φ_ij with each separator, whose entries are initialized to 1s. It also maintains a table
Φ_i with each cluster i, initialized to the multiplication of all CPTs and evidence
tables assigned to cluster i. Cluster i passes a message to neighboring cluster j only
when i has received messages from all its other neighbors k. When cluster i is ready
to send a message to cluster j, it does the following. First, it saves the table of
separator ij into Φ_ij^old. Second, it computes a new separator table Φ_ij = Σ_{C\S} Φ_i,
where C are the variables of cluster i and S are the variables of separator ij. Third,
it computes a message to cluster j: M_ij = Φ_ij / Φ_ij^old. Finally, it multiplies the computed
message into the table of cluster j: Φ_j = Φ_j M_ij. After the inward and outward
passes of Hugin propagation are completed, we have: Pr(C, e) = Φ_i, where C are
the variables of cluster i.
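The same two-cluster example ({A} — {A, B}, root {A}, Figure 1 parameters) under Hugin propagation, showing the saved separator copy, the division, and the multiplicative absorption. This is our own sketch, not the authors' code; a real implementation must also guard against zero entries in the saved separator, which cannot occur in this small example.

```python
theta_A = {'a': 0.6, 'abar': 0.4}
theta_B_A = {('a', 'b'): 0.2, ('a', 'bbar'): 0.8,
             ('abar', 'b'): 0.7, ('abar', 'bbar'): 0.3}

def hugin(evidence_b=None):
    """Return (Phi_i, Phi_j) after the inward and outward passes;
    both cluster tables then hold Pr(cluster, e)."""
    avals, bvals = ('a', 'abar'), ('b', 'bbar')
    phi_i = dict(theta_A)
    phi_j = {(a, b): theta_B_A[(a, b)] *
             (0.0 if evidence_b is not None and b != evidence_b else 1.0)
             for a in avals for b in bvals}
    phi_sep = {a: 1.0 for a in avals}       # separator initialized to 1s

    # Inward pass j -> i: save, marginalize, divide, absorb.
    old = dict(phi_sep)
    phi_sep = {a: sum(phi_j[(a, b)] for b in bvals) for a in avals}
    phi_i = {a: phi_i[a] * phi_sep[a] / old[a] for a in avals}

    # Outward pass i -> j: same steps in the other direction.
    old = dict(phi_sep)
    phi_sep = {a: phi_i[a] for a in avals}  # sum over C\S = {} of phi_i
    phi_j = {(a, b): phi_j[(a, b)] * phi_sep[a] / old[a]
             for a in avals for b in bvals}
    return phi_i, phi_j
```

After both passes with evidence B = b, Φ_i holds Pr(A, b) and Φ_j holds Pr(A, B, b), illustrating that Hugin leaves the joint marginals directly in the cluster tables.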
3 A table is an array which is indexed by variable instantiations. Specifically, a table Φ
over variables X is indexed by the instantiations x of X. Its entries Φ(x) are in [0, 1].
4 Jointrees as arithmetic circuits
We now show that every jointree (together with a root cluster and a particular
assignment of evidence and CP tables to clusters) corresponds precisely to an arithmetic circuit that computes the network polynomial. We also show that the inward
pass of the Shenoy-Shafer architecture evaluates this circuit, while the outward pass
differentiates it. We show a similar result for the Hugin architecture.
Definition 1 Given a root cluster, a particular assignment of evidence and CP
tables to clusters, the arithmetic circuit embedded in a jointree is defined as follows:4
Nodes: The circuit includes: an output addition node f ; an addition node s for each
instantiation of a separator S; a multiplication node c for each instantiation of a
cluster C; an input node λ_x for each instantiation x of variable X; an input node
θ_{x|u} for each instantiation xu of family XU.
Edges: The children of the output node f are the multiplication nodes generated by
the root cluster; the children of an addition node s are all compatible nodes generated
by the child cluster; the children of a multiplication node c are all compatible nodes
generated by child separators, and all compatible input nodes assigned to cluster C.
Hence, separators contribute addition nodes and clusters contribute multiplication
nodes. Moreover, the structure of the jointree dictates how these nodes are connected into a circuit. The arithmetic circuit in Figure 2 is embedded in the jointree
A — AB, with cluster A as the root, and with tables λ_A, θ_A assigned to cluster A
and tables λ_B and θ_{B|A} assigned to cluster B.
Theorem 1 The circuit embedded in a jointree computes the network polynomial.
Therefore, by constructing a jointree one is generating a compact representation of
the network polynomial in terms of an arithmetic circuit.
We are now ready to state our basic results on the differential semantics of jointree
propagation, but we need some notational conventions first. In the following three
theorems: f denotes the circuit embedded in a jointree or its (unique) output node;
s denotes a separator instantiation or the addition node generated by that instantiation; and c denotes a cluster instantiation or the multiplication node generated by
that instantiation. Moreover, the value that a circuit node v takes under evidence
e is denoted v(e). Recall that a circuit (or network polynomial) is evaluated under
evidence e by setting each input λ_x to 1 if x is consistent with e; and to 0 otherwise. Finally, recall that ∂f/∂v represents the derivative of the circuit output with
respect to node v. Our first result relates to Shenoy-Shafer propagation.
Theorem 2 The messages produced using Shenoy-Shafer propagation on a jointree
under evidence e have the following semantics. For each inward message M_ij, we
have M_ij(s) = s(e). For each outward message M_ji, we have M_ji(s) = ∂f(e)/∂s.
Hence, if we interpret separator instantiations as addition nodes in a circuit as given
by Definition 1, we get that a message directed towards the jointree root contains
the values of these addition nodes, while a message directed outward from the root
contains the partial derivatives of the circuit output with respect to these nodes.
Shenoy-Shafer propagation does not compute derivatives with respect to input
nodes λ_x and θ_x|u, but these can be obtained using local computations as follows.
⁴ Given a root cluster, one can direct the jointree by having arrows point away from the
root, which also defines a parent/child relationship between clusters and separators.
Theorem 3 If evidence table λ_X is assigned to cluster i with variables C:

  ∂f(e)/∂λ_x = Σ_{C\X} [ Π_j M_ji ] Π_{φ ≠ λ_X} φ(x),   (1)

where φ ranges over all evidence and CP tables assigned to cluster i. Moreover, if
CPT θ_X|U is assigned to cluster i with variables C:

  ∂f(e)/∂θ_x|u = Σ_{C\XU} [ Π_j M_ji ] Π_{φ ≠ θ_X|U} φ(xu),   (2)

where φ ranges over all evidence and CP tables assigned to cluster i.
Therefore, even though Shenoy-Shafer propagation does not fully differentiate the
embedded circuit, the differentiation process can be completed through local computations after propagation has finished.⁵
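To make the two-pass computation concrete, here is a minimal sketch (not the authors' jointree implementation) of evaluating an arithmetic circuit and recovering the derivative of the output with respect to every node in a single backward sweep. The network A -> B, its parameters and the node layout are invented for illustration.

```python
# Sketch: evaluate an arithmetic circuit, then differentiate it fully with
# one backward pass. Node kinds are 'in', 'add' and 'mul'; children always
# precede parents in the node list, and f is the last node.

def eval_circuit(nodes, inputs):
    """Forward pass: compute the value of every node."""
    val = {}
    for i, (kind, arg) in enumerate(nodes):
        if kind == 'in':
            val[i] = inputs[arg]
        elif kind == 'add':
            val[i] = sum(val[c] for c in arg)
        else:  # 'mul'
            v = 1.0
            for c in arg:
                v *= val[c]
            val[i] = v
    return val

def diff_circuit(nodes, val):
    """Backward pass: df/d(node) for every node."""
    d = {i: 0.0 for i in range(len(nodes))}
    d[len(nodes) - 1] = 1.0
    for i in reversed(range(len(nodes))):
        kind, arg = nodes[i]
        if kind == 'in':
            continue
        for c in arg:
            if kind == 'add':
                d[c] += d[i]
            else:  # 'mul': multiply by the values of the other children
                p = d[i]
                for o in arg:
                    if o != c:
                        p *= val[o]
                d[c] += p
    return d

# f = lam_a*th_a*(lam_b*th_b_a + lam_nb*th_nb_a)
#   + lam_na*th_na*(lam_b*th_b_na + lam_nb*th_nb_na)
nodes = [('in', 'lam_a'), ('in', 'lam_na'), ('in', 'lam_b'), ('in', 'lam_nb'),
         ('in', 'th_a'), ('in', 'th_na'), ('in', 'th_b_a'), ('in', 'th_nb_a'),
         ('in', 'th_b_na'), ('in', 'th_nb_na'),
         ('mul', (2, 6)), ('mul', (3, 7)), ('add', (10, 11)),
         ('mul', (0, 4, 12)),
         ('mul', (2, 8)), ('mul', (3, 9)), ('add', (14, 15)),
         ('mul', (1, 5, 16)),
         ('add', (13, 17))]

inputs = dict(th_a=0.6, th_na=0.4, th_b_a=0.9, th_nb_a=0.1,
              th_b_na=0.2, th_nb_na=0.8,
              lam_a=1.0, lam_na=1.0, lam_b=1.0, lam_nb=0.0)  # evidence: b

val = eval_circuit(nodes, inputs)
d = diff_circuit(nodes, val)
f_e = val[18]       # Pr(b) = 0.62
pr_a_b = d[0]       # df/d lam_a  = Pr(a, b) = 0.54
pr_not_b = d[3]     # df/d lam_nb = 0.38, the flipped-evidence probability
```

Under evidence b, the forward pass gives f(e) = Pr(b) = 0.62, while the backward pass simultaneously yields the derivative with respect to every indicator, so the flipped evidence Pr(not-b) = 0.38 is read off without re-propagation.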
We now discuss some applications of the partial derivatives with respect to evidence
indicators λ_x and network parameters θ_x|u.
Fast retraction & evidence flipping. Suppose jointree propagation has been
performed using evidence e, which gives us direct access to the probability of e.
Suppose now we are interested in the probability of a different evidence e', which
results from changing the value of some variable X in e to a new value x. The
probability of e' in this case is equal to ∂f(e)/∂λ_x [2], which can be obtained as given
by Equation 1. The ability to perform this computation efficiently is crucial for
algorithms that try to approximate the maximum a posteriori hypothesis (MAP) using
local search [9, 10]. Another application of this derivative is in computing the
probability of evidence e', which results from retracting the value of some variable
X from e: Pr(e') = Σ_x ∂f(e)/∂λ_x. This computation is key to analyzing evidence
conflict, as it allows us to determine the extent to which one piece of evidence is
contradicted by the remaining pieces.
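The retraction identity can be checked directly on a toy network polynomial. The chain A -> B and its parameter values below are hypothetical; the shortcut used is that f is multi-linear, so a partial derivative with respect to an indicator can be read off by clamping the indicators of that variable.

```python
# Hypothetical chain A -> B with invented parameters. Because the network
# polynomial f is multi-linear, df/d lam_x equals f evaluated with lam_x = 1
# and the other indicator of variable X set to 0.

TH = dict(a=0.6, na=0.4, b_a=0.9, nb_a=0.1, b_na=0.2, nb_na=0.8)

def f(lam):
    """Network polynomial of the chain A -> B."""
    return (lam['a'] * TH['a'] * (lam['b'] * TH['b_a'] + lam['nb'] * TH['nb_a'])
            + lam['na'] * TH['na'] * (lam['b'] * TH['b_na'] + lam['nb'] * TH['nb_na']))

def d_lam(x, evidence):
    """df/d lam_x under the given evidence (multi-linearity trick)."""
    lam = dict(evidence)
    for key in (('a', 'na') if x in ('a', 'na') else ('b', 'nb')):
        lam[key] = 0.0
    lam[x] = 1.0
    return f(lam)

e = dict(a=1.0, na=0.0, b=1.0, nb=0.0)   # evidence: a and b observed
pr_e = f(e)                               # Pr(a, b) = 0.54
# retracting B from e: Pr(a) = sum over values x of B of df(e)/d lam_x
pr_after_retracting_b = d_lam('b', e) + d_lam('nb', e)   # 0.54 + 0.06 = 0.60
```

Summing the two derivatives recovers Pr(a) = 0.60 without redoing inference under the retracted evidence.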
(e)
Sensitivity analysis & parameter learning. The derivative ?Pr
??x|u is essential for sensitivity analysis?it is the basis for an efficient approach that identifies minimal network parameters changes that are necessary to satisfy constraints
on probabilistic queries [1]. This derivative is also crucial for gradient ascent approaches for learning network parameters as it is required to compute the gradient
⁵ Hugin propagation also corresponds to circuit evaluation/differentiation:

Theorem 4 Cluster tables, separator tables and messages produced using Hugin propagation under evidence e have the following semantics. For table Φ_i of cluster i with variables
C: Φ_i(c) = c(e) ∂f(e)/∂c. For table Φ_ij of separator ij with variables S: Φ_ij(s) = s(e) ∂f(e)/∂s.
For each inward message M_ij, we have M_ij(s) = s(e). For each outward message M_ji, we
have M_ji(s) = ∂f(e)/∂s if s(e) ≠ 0.

Again, Hugin propagation does not compute derivatives with respect to input nodes λ_x
and θ_x|u. Even for addition and multiplication nodes, it only retains derivatives multiplied
by values. Hence, if we want to recover the derivative with respect to, say, multiplication
node c, we must know the value of this node and it must be different from zero. In such a
case, we have ∂f(e)/∂c = Φ_i(c)/c(e), where Φ_i is the table associated with the cluster i
that generates node c. One can also compute the quantity v ∂f/∂v for input nodes using
equations similar to those in Theorem 3. But such quantities will be useful for obtaining
derivatives only if the values of such input nodes are not zero. Hence, Shenoy-Shafer
propagation is more informative than Hugin propagation as far as the computation of
derivatives is concerned.
used for deciding moves in the search space [13]. This derivative equals ∂f(e)/∂θ_x|u, and
can be obtained as given by Equation 2. The only other method we are aware
of to compute this derivative (beyond the one in [2]) is the one using the identity
∂Pr(e)/∂θ_x|u = Pr(x, u, e)/θ_x|u, which requires θ_x|u ≠ 0 [13]. Hence, our results
seem to suggest the first general approach for computing this derivative using standard jointree propagation.
Bounding rounding errors. Jointree propagation gives exact results only when
infinite-precision arithmetic is used. In practice, however, finite-precision floating-point arithmetic is typically used, in which case the differential semantics of jointree
propagation can be used to bound the rounding error in the computed probability
of evidence. See the full paper [11] for details on computing this bound.
5 A new perspective on factoring graphical models
We have shown in this paper that each jointree can be viewed as an implicit representation of an arithmetic circuit which computes the network polynomial, and that
jointree propagation corresponds to an evaluation and differentiation of the circuit.
These results have been useful in unifying the circuit approach presented in [2] with
jointree approaches, and in uncovering more properties of jointree propagation.
Another outcome of these results relates to the level at which it is useful to phrase
the problem of factoring graphical probabilistic models. Specifically, the perspective
we are promoting here is that probability distributions defined by graphical models
should be viewed as multi-linear functions, and the construction of jointrees should
be viewed as a process of constructing arithmetic circuits that compute these functions. That is, the fundamental object being factored is a multi-linear function,
and the fundamental result of the factorization is an arithmetic circuit. A graphical
model is a useful abstraction of the multi-linear function, and a jointree is a useful
structure for embedding the arithmetic circuit.
This view of factoring is useful since it allows us to cast the factoring problem in
more refined terms, which puts us in a better position to exploit the local structure
of graphical models in the factorization process. Note that the topology of a graphical model defines the form of the multi-linear function, while the model's local
structure (as exhibited in its CPTs) constrains the values of variables appearing
in the function. One can factor a multi-linear function without knowledge of such
constraints, but the resulting factorizations will not be optimal. For a dramatic
example, consider a fully connected network with variables X_1, ..., X_n, where all
parameters are equal to 1/2. Any jointree for the network will have a cluster of size
n, leading to O(exp(n)) complexity. There is, however, a circuit of O(n) size here,
since the network polynomial can be easily factored as f = (1/2)^n Π_{i=1}^n (λ_{x_i} + λ_{x̄_i}).
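A quick sketch of the point: the factored form is evaluable in time linear in n, even though any jointree for this network has an exponential-size cluster.

```python
# Evaluate f = (1/2)^n * prod_i (lam_xi + lam_nxi) in O(n). With no evidence
# all indicators are 1 and f = 1; clamping every variable to its positive
# value gives (1/2)^n, the probability of that complete instantiation.

def f_factored(lam_pos, lam_neg):
    n = len(lam_pos)
    v = 0.5 ** n
    for lp, ln in zip(lam_pos, lam_neg):
        v *= lp + ln
    return v

n = 20
no_evidence = f_factored([1.0] * n, [1.0] * n)    # 1.0
all_positive = f_factored([1.0] * n, [0.0] * n)   # (1/2)**20
```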
Hence, in the presence of local structure, it appears more promising to factor the
graphical model into the more refined arithmetic circuit since not every arithmetic
circuit can be embedded in a jointree. This promise is made apparent by the results
in [3], which we sketch next. First, the multi-linear function of a belief network
is "encoded" using a propositional theory, which is expressive enough to capture
the form of the multi-linear function in addition to constraints on its variables.
The theory is then compiled into a special logical form, known as deterministic
decomposable negation normal form. An arithmetic circuit is finally extracted from
that form. The method was able to generate relatively small arithmetic circuits for
a significant suite of real?world belief networks with treewidths up to 60.
It is worth mentioning here that the above perspective is in harmony with recent
approaches that represent probabilistic models using algebraic decision diagrams
(ADDs), citing the promise of ADDs in exploiting local structure [5]. ADDs and
related representations, such as edge-valued decision diagrams, are known to be
compact representations of multi-linear functions. Moreover, each of these representations can be expanded in linear time into an arithmetic circuit that satisfies
some strong properties [4]. Hence, such representations are special cases of arithmetic circuits as well.
We finally note that the relationship between multi-linear functions (polynomials in
general) and arithmetic circuits is a classical subject of algebraic complexity theory
[15]. In this field of complexity, computational problems are expressed as polynomials, and a central question is that of determining the size of the smallest arithmetic
circuit that computes a given polynomial, leading to the notion of circuit complexity. Using this notion, it is then meaningful to talk about the circuit complexity
of a graphical model: the size of the smallest arithmetic circuit that computes the
multi-linear function induced by the model.
Acknowledgment This work has been partially supported by NSF grant IIS-9988543 and MURI grant N00014-00-1-0617.
References
[1] H. Chan and A. Darwiche. When do numbers really matter? JAIR, 17:265-287, 2002.
[2] A. Darwiche. A differential approach to inference in Bayesian networks. In UAI-00, pages 123-132, 2000. To appear in JACM.
[3] A. Darwiche. A logical approach to factoring belief networks. In KR-02, pages 409-420, 2002.
[4] A. Darwiche. On the factorization of multi-linear functions. Technical Report D-128, UCLA, Los Angeles, CA 90095, 2002.
[5] J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier. SPUDD: Stochastic planning using decision diagrams. In UAI-99, pages 279-288, 1999.
[6] C. Huang and A. Darwiche. Inference in belief networks: A procedural guide. IJAR, 15(3):225-263, 1996.
[7] M. Iri. Simultaneous computation of functions, partial derivatives and estimates of rounding error. Japan J. Appl. Math., 1:223-252, 1984.
[8] F. V. Jensen, S. L. Lauritzen, and K. G. Olesen. Bayesian updating in recursive graphical models by local computation. Comp. Stat. Quart., 4:269-282, 1990.
[9] J. Park. MAP complexity results and approximation methods. In UAI-02, pages 388-396, 2002.
[10] J. Park and A. Darwiche. Approximating MAP using stochastic local search. In UAI-01, pages 403-410, 2001.
[11] J. Park and A. Darwiche. A differential semantics for jointree algorithms. Technical Report D-118, UCLA, Los Angeles, CA 90095, 2001.
[12] G. Rote. Path problems in graphs. Computing Suppl., 7:155-189, 1990.
[13] S. Russell, J. Binder, D. Koller, and K. Kanazawa. Local learning in probabilistic networks with hidden variables. In UAI-95, pages 1146-1152, 1995.
[14] P. P. Shenoy and G. Shafer. Propagating belief functions with local computations. IEEE Expert, 1(3):43-52, 1986.
[15] J. von zur Gathen. Algebraic complexity theory. Ann. Rev. Comp. Sci., 3:317-347, 1988.
Adaptive Classification by Variational Kalman
Filtering
Peter Sykacek
Department of Engineering Science
University of Oxford
Oxford, OX1 3PJ, UK
[email protected]
Stephen Roberts
Department of Engineering Science
University of Oxford
Oxford, OX1 3PJ, UK
[email protected]
Abstract
We propose in this paper a probabilistic approach for adaptive inference
of generalized nonlinear classification that combines the computational
advantage of a parametric solution with the flexibility of sequential sampling techniques. We regard the parameters of the classifier as latent
states in a first order Markov process and propose an algorithm which
can be regarded as variational generalization of standard Kalman filtering. The variational Kalman filter is based on two novel lower bounds
that enable us to use a non-degenerate distribution over the adaptation
rate. An extensive empirical evaluation demonstrates that the proposed
method is capable of inferring competitive classifiers both in stationary
and non-stationary environments. Although we focus on classification,
the algorithm is easily extended to other generalized nonlinear models.
1 Introduction
The demand for adaptive learning methods, e.g. for use in brain computer interfaces (BCIs)
[15] has recently triggered a considerable interest in such algorithms. We may approach
adaptive learning with algorithms that were designed for stationary environments and use
learning rates to make these methods adaptive. These approaches can be traced back to
early work on learning algorithms (e.g. [1]). A more recent account to this approach is
[17], who combines the probabilistic method of sequential variational inference ([9]) and a
forgetting factor to obtain an adaptive learning method. Probabilistic or Bayesian methods
allow also for a completely different interpretation of adaptive learning. We may regard the
model coefficients as latent (i.e. unobserved) states of a first order Markov process.
The posterior distribution at state n,

  p(w_n | y_1, ..., y_n) ∝ p(y_n | w_n) ∫ p(w_n | w_{n-1}) p(w_{n-1} | y_1, ..., y_{n-1}) dw_{n-1},   (1)

summarizes all information obtained about the model. This posterior and the conditional distribution,
p(w_n | w_{n-1}), represent the prior for the following state. The conditional distribution can be thought of as
additive process or state noise with precision α. Predictions are obtained by a probabilistic
observation model p(y_n | w_n). Using this model, we obtain an appropriate adaptation
rate by hierarchical Bayesian inference of the process noise precision α. Equation (1)
suggests that we may interpret adaptive Bayesian inference as a generalization of the well
extended Kalman filtering to obtain a Laplace approximation of the posterior over and
maximum likelihood II ([3]) for inference of the adaptation rate. Another generalization of
Kalman filtering are the recently quite popular particle filters (e.g. [7]). Being Monte Carlo
methods, particle filters have over Laplace approximations the advantage of much greater
flexibility. This comes however at the expense of a higher representational and computational complexity. To combine the flexibility of particle filtering with the computational
advantage of parametric methods, we propose a variational approximation (e.g. [11] , [2]
and [8]) for inference of the Markov process in Equation (1). Unlike maximum likelihood
II, the variational Kalman filter allows us to have a non degenerate distribution over the
process noise precision. We derive in this paper a variational Kalman filter classifier and
show with an extensive empirical evaluation that the resulting classifiers obtain excellent
generalization accuracies both in stationary and non-stationary domains.
2 Methods
2.1 A generalized nonlinear classifier
Classification is a prediction problem, where some regressor, x, predicts the expectation
of a response variable t. Since a categorical polytomous solution is easily recovered
from dichotomous solutions ([16], pages 44-45), we restrict all further discussions
to dichotomous classification using binary responses. We thus have only one degree of freedom and predict the binary probability, P(t = 1 | x), which depends on the model
parameters w. To obtain a flexible discriminant, we use a generalized nonlinear model, i.e.
a radial basis function (RBF) network ([14] and [5]), with logistic output transformation
(Equation (3)):

  a = wᵀφ(x),   (2)
  P(t = 1 | x) = σ(a) = 1 / (1 + exp(-a)).   (3)

The classifier has a nonlinear feature space φ(x) which for reasons of adaptivity depends
on x, and a linear mapping into the latent space a. We allow for Gaussian basis functions, i.e. φ_j(x) = exp(-ρ ||x - μ_j||²), or thin plate splines, i.e. φ_j(x) = ||x - μ_j||² log ||x - μ_j||. Both basis functions are parameterized by their center locations μ_j.
Since we want to have a simple unimodal posterior over model parameters, we update the
coefficients of the basis set randomly according to a Metropolis-Hastings kernel ([13])
and solve for the conditional posterior over w analytically.
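As a sketch of the model in Equations (2)-(3), the following computes both basis types and the logistic output; the precision ρ, the centers and the weights are illustrative values, not the settings used in the paper.

```python
import math

# Illustrative RBF classifier: feature map (both basis types) plus the
# logistic output of Equations (2)-(3). All numeric values are invented.

def gaussian_basis(x, centers, rho=1.0):
    """phi_j(x) = exp(-rho * ||x - mu_j||^2)."""
    return [math.exp(-rho * sum((xi - mi) ** 2 for xi, mi in zip(x, m)))
            for m in centers]

def thin_plate_basis(x, centers):
    """phi_j(x) = ||x - mu_j||^2 * log ||x - mu_j||  (0 at the center)."""
    out = []
    for m in centers:
        r2 = sum((xi - mi) ** 2 for xi, mi in zip(x, m))
        out.append(0.0 if r2 == 0.0 else r2 * math.log(math.sqrt(r2)))
    return out

def predict(w, x, centers):
    phi = gaussian_basis(x, centers)
    a = sum(wj * pj for wj, pj in zip(w, phi))   # Equation (2)
    return 1.0 / (1.0 + math.exp(-a))            # Equation (3)

centers = [(0.0, 0.0), (1.0, 1.0)]
p = predict([2.0, -2.0], (0.0, 0.0), centers)    # roughly 0.85
```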
2.2 The variational Kalman filter
In order to ease discussion of adaptive inference, we illustrate the dependencies implied
by Equation (1) in figure 1 as a directed acyclic graph (DAG). In accordance with Kalman
filtering, we assume a Gaussian posterior at time n-1 with mean ŵ_{n-1} and precision S_{n-1},
and zero mean Gaussian state noise with isotropic precision α. Inference of α is based on
a "flat" proper Gamma prior specified by parameters a and b. In order to obtain reasonable
posteriors over α, we follow [10] and assume constant adaptation within a window of size
W. The proposed variational Bayesian approach ignores the anti-causal information flow
and is thus based on maximizing a lower bound on the logarithmic model evidence of a
windowed Kalman filter. Following these assumptions, we obtain the expression for the
log evidence in Equation (4) by substituting the generalized nonlinear model (Equations
(2) to (3)) into the formulation of adaptive Bayesian learning (1). We have then to make all
[Figure 1: a directed acyclic graph over α, w_{n-1}, w_n and the observation y_n.]
Figure 1: This figure illustrates adaptive inference as a directed acyclic graph. The coefficients of the classifier, w, are assumed to be Gaussian, following a first order Markov
process. The hyperparameter α is given a Gamma prior specified by parameters a and b.
distributions explicit and integrate over all model coefficients, which is done analytically
over all prior states.
(4)
The structure of Equation (4) suggests that the approximate posterior q(w_n) can be chosen
to be Gaussian and the approximate posterior q(α) can be chosen to be a Gamma distribution. These functional forms do however not simply result from a mean field approximation
of the posterior as q(w_n)q(α). In order to obtain the required conjugacy, we have
to use lower bounds for the probability of the target label, and for the Gaussian state
transition term, for both w_n and α.
2.3 Variational lower bounds
In order to achieve conjugacy with a Gaussian distribution, we use the lower bound for the
logistic sigmoid proposed in [9],

  σ(z) ≥ σ(ξ) exp{ (z - ξ)/2 - λ(ξ)(z² - ξ²) },  with  λ(ξ) = tanh(ξ/2) / (4ξ),   (5)

in which ξ are the variational parameters of a locally linear expansion in z², one for
every prediction contained in the window. In order to get expressions that
are conjugate with a Gamma distribution over the process noise precision α, we derive two
novel lower bounds. Assuming a d-dimensional parameter vector w, we get
two bounds, Equations (6) and (7), which are expressions in α and log α and thus conjugate with a Gamma distribution. Both
bounds are expanded in the identical variational parameter, which is justified since both are linear
expansions and maximization must thus lead to identical values. Using these
lower bounds together with a mean field assumption, q(w_n, α) = q(w_n)q(α), and the usual
Jensen inequalities, we immediately obtain a negative free energy as a lower bound of the
log evidence in Equation (4). For reasons of brevity we do not include this expression here.
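The sigmoid bound of Equation (5) is the Jaakkola-Jordan bound, which can be checked numerically; the helper below uses the standard form with λ(ξ) = tanh(ξ/2)/(4ξ) and is exact at ξ = ±z. The evaluation points are arbitrary.

```python
import math

# Numerical check of the Jaakkola-Jordan sigmoid bound:
# sigma(z) >= sigma(xi) * exp((z - xi)/2 - lam(xi) * (z**2 - xi**2)).

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def lam(xi):
    # tanh(xi/2) / (4 xi); the limit at xi -> 0 is 1/8
    return 0.25 / xi * math.tanh(0.5 * xi) if xi != 0.0 else 0.125

def lower_bound(z, xi):
    return sigma(xi) * math.exp((z - xi) / 2.0 - lam(xi) * (z * z - xi * xi))

# exact when the variational parameter matches the argument
tight = abs(lower_bound(1.3, 1.3) - sigma(1.3)) < 1e-12
# valid (never above the sigmoid) at several arbitrary points
valid = all(lower_bound(z, 0.7) <= sigma(z) + 1e-12
            for z in [-3.0, -1.0, 0.0, 0.5, 2.0])
```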
2.4 Parameter updates
In order to distinguish between the parameters of the prior and posterior distributions, we
henceforth denote the latter with a superscript. Inference requires maximizing the negative
free energy with respect to all variational parameters: the coefficients of the Gaussian
distribution q(w_n), the parameters ξ in the bounds of the logistic sigmoid, the coefficients
of the Gamma posterior q(α) over the noise process precision, and the variational parameter
of the Gamma conjugacy bounds. Maximization with respect to q(w_n) results in a Gaussian
distribution with updated precision and mean, given in Equation (8). Maximization with respect
to q(α) results in a Gamma distribution with updated location and scale parameters, given in
Equation (9). According to [9], maximization with respect to ξ leads to the update in
Equation (10), and maximization with respect to the variational parameter of the conjugacy
bounds leads for both bounds to the update in Equation (11).
In order to allow the basis mapping in Equation (2) to track modifications in the input data
distributions, we propose a perturbation of the basis centers drawn from a Gaussian, and
accept the proposal according to the Metropolis-Hastings acceptance probability in
Equation (12), which is formed from the corresponding negative free energies. If we assume
that the negative free energy describes the log evidence exactly, this is a
Metropolis-Hastings kernel ([13]) that leaves the marginal posterior over the basis centers
invariant. We could thus represent the marginal posterior with random samples. For
computational reasons however, we use the scheme only for random updates of the basis
centers. An algorithm for parameter inference will first propose a random update of the
basis and then iterate maximizations according to Equations (8) to (11) until we observe
convergence of the negative free energy. Alternatively we can use a fixed number of
iterations, for which our experiments suggest that a few iterations suffice.
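For intuition, here is a one-dimensional sketch of the inner variational iteration for a single observation, using the sigmoid bound of Equation (5) in the stationary setting of [9]; the full algorithm instead updates a multivariate Gaussian, alternates these steps with the Gamma updates for α, and wraps them in the Metropolis-Hastings basis update. All numbers are illustrative.

```python
import math

# 1-D Jaakkola-Jordan variational update for Bayesian logistic regression:
# alternate the xi update with the Gaussian mean/precision updates.

def lam(xi):
    return 0.25 / xi * math.tanh(0.5 * xi) if xi != 0.0 else 0.125

def sequential_update(m0, s0, phi, t, iters=20):
    """Posterior mean/precision of a scalar weight after observing label
    t in {0, 1} for regressor value phi, from prior N(m0, 1/s0)."""
    m, s = m0, s0
    for _ in range(iters):
        xi = math.sqrt(phi * phi * (1.0 / s + m * m))  # variational param
        s = s0 + 2.0 * lam(xi) * phi * phi             # precision update
        m = (s0 * m0 + (t - 0.5) * phi) / s            # mean update
    return m, s

# observing t = 1 pulls the mean positive and increases the precision
m, s = sequential_update(m0=0.0, s0=1.0, phi=1.0, t=1)
```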
2.5 Model predictions
Since we do not know the response t when predicting, we have to sum the negative
free energy over t. This results in a new expression for the posterior mean which we obtain
from Equation (8) by dropping the term that depends on t. Due to the dependency on ξ,
maximization with respect to q(w_n) has to alternate with maximization with respect to ξ,
the latter again being done according to Equation (10). Having reached convergence, we
obtain an approximate log probability for t by taking the expectation of the bound of the
sigmoid in Equation (5) with respect to q(w_n) and maximizing with respect to ξ, which
yields Equation (13). Exponentiating the approximate log probabilities results in a
sub-probability measure over t, with the difference from unity representing an
3 Experiments
All experiments reported in this section use a model with Gaussian basis functions of fixed
precision. For updating the basis, we use zero mean Gaussian random variates. The initial
prior over parameters is a zero mean Gaussian with isotropic precision. For maximizing the
negative free energy we use a fixed number of iterations. The first experiment aims at
obtaining a parametrization for a, b and the window length W that allows us to make
inferences of the process noise that are insensitive to the actual "drift" of the problem.
We use for that purpose the test set from the synthetic problem in [16]¹. The samples of
this balanced problem are reshuffled such that consecutive class labels differ. In order to
get a non-stationarity, we swap the class labels in the second half of the data. The results
shown in figure 2 are obtained with a fixed Gamma prior over α. We propose these settings
together with a moderate window size, because this is a good compromise between fast
tracking and high stationary accuracy.
We are now ready to compare the algorithm with an equivalent static classifier using several public data sets and classification of single trial EEG which, due to learning effects in
humans, is known to be non-stationary. In order to avoid that the model has an influence on
¹ This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/.
[Figure 2: two panels plotting results for window sizes 1, 5, 10, 15 and 20 over samples 0 to 1000.]
Figure 2: Results obtained on Ripley's synthetic data set with swapped class labels after
sample 500. The top graph shows the expected value of the precision of the noise process
for different window sizes (i.e. for different numbers of samples used for inferring the
adaptation rate). The bottom graph shows the instantaneous generalization accuracy
estimated in a sliding window. The prior over α is a Gamma distribution.
the results, we compare the generalization accuracy of the variational Kalman filter classifier (vkf) with an identical non-adaptive model. Inference of the static model is based on
sequential variational learning ([9]). We obtain sequential variational inference (svi) from
our approach by setting the process noise precision α in Equation (1) to infinity. The comparisons are evaluated for
significance using McNemar's test, a method for analyzing paired results that is suggested
in [16]. The comparison uses vehicle data2 , satellite image data, Johns Hopkins University
ionosphere data, balance scale weight and distance data and the wine recognition database,
all taken from the StatLog database which is available at the UCI repository ([4]). The
satellite image data set is used as is provided with 4435 samples in the training and 2000
samples in the test set. Vehicle data are merged such that we have 500 samples in the training and 252 in the test set. The other data were split into two equal sized data sets, which
were both used as training and independent test sets respectively. We also use the pima
diabetes data set from [16]3 . Table 1 compares the generalization accuracies (in fractions)
obtained with the variational Kalman filter with generalization accuracies obtained with
sequential variational inference. The probability of the null hypothesis that both
classifiers are equal suggests that only the differences for the Balance scale and the Pima
Indian data sets are significant, with either method being better in one case. Since the generalization accuracies of both methods are almost identical, we conclude that if applied to
² Vehicle data was donated to StatLog by the Turing Institute Glasgow, Scotland.
³ This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/.
Data set            vkf   svi   p
J.H.U. ionosphere   0.87  0.88  0.41
Satellite image     0.81  0.81  0.29
Balance scale       0.89  0.87  0.03
Pima diabetes       0.76  0.80  0.03
Vehicle             0.77  0.77  0.42
Wine                0.97  0.95  0.25

Table 1: Generalization accuracies obtained with the variational Kalman filter (vkf) and
sequential variational inference (svi); the third column lists the probability of the null
hypothesis that both classifiers are equal.
Cognitive task            vkf   svi   p
rest/move, no feedback    0.69  0.61  0.00
rest/move, feedback       0.71  0.70  0.39
move/math, no feedback    0.69  0.62  0.00
move/math, feedback       0.64  0.60  0.00

Table 2: Generalization accuracies obtained for classification of single trial EEG show that
the variational Kalman filter significantly improves the results in three out of four cases;
the third column lists the probability of the null hypothesis that both classifiers are equal.
stationary problems, we may expect the variational Kalman filter to obtain generalization
accuracies that are similar to those of static methods.
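McNemar's test compares two classifiers through their disagreements on a paired test set; a common form with continuity correction is sketched below, with invented counts (b and c are the off-diagonal disagreement counts, not the paper's data).

```python
import math

# McNemar's test with continuity correction for paired classifier results.

def mcnemar_p(b, c):
    """b: cases only classifier 1 errs; c: cases only classifier 2 errs.
    Returns the p-value of the chi-square statistic with 1 dof."""
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1.0) ** 2 / (b + c)
    # P(X > chi2) for chi-square with 1 dof, via the complementary error fn
    return math.erfc(math.sqrt(chi2 / 2.0))

p_equal = mcnemar_p(30, 28)   # near-symmetric disagreements -> large p
p_diff = mcnemar_p(40, 10)    # asymmetric disagreements -> small p
```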
In order to assess the variational Kalman filter on a non-stationary problem, we apply it to classification of single-trial EEG, a problem which is part of BCIs. The data for this experiment were obtained from eight untrained subjects who perform two different task combinations (rest EEG vs. imagined movements, and imagined movements vs. a mathematical task), once without and once with visual feedback. For each cognitive experiment, each pair of tasks is repeated ten times. We classify on a one-second basis and thus have * samples per subject and task combination. The regressors in this experiment are three reflection
coefficients (a parametrization of autoregressive models; see, e.g., [18]). The comparison in Table 2 reports within-subject results obtained by two-fold cross testing. Using half of the data, we allow for convergence of the methods before estimating the generalization accuracy on the other half of the data. The generalization accuracies in Table 2 are averaged across subjects. We obtain in three out of four experiments a significant improvement with the variational Kalman filter.
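The two-fold cross-testing protocol above can be sketched as follows; `fit` and `accuracy` are hypothetical stand-ins for whatever classifier interface is used, not part of the paper:

```python
# Hypothetical sketch of two-fold cross testing: train (allow the adaptive
# classifier to converge) on one half of the data, estimate generalization
# accuracy on the other half, then swap the halves, and average the scores.

def two_fold_accuracy(data, labels, fit, accuracy):
    mid = len(data) // 2
    halves = [(slice(0, mid), slice(mid, None)),
              (slice(mid, None), slice(0, mid))]
    scores = []
    for train, test in halves:
        model = fit(data[train], labels[train])
        scores.append(accuracy(model, data[test], labels[test]))
    return sum(scores) / len(scores)

# Toy usage with a majority-class "classifier":
fit = lambda X, y: max(set(y), key=y.count)            # model = majority label
accuracy = lambda m, X, y: sum(l == m for l in y) / len(y)
X = list(range(8))
y = [0, 0, 0, 1, 0, 0, 0, 1]
print(two_fold_accuracy(X, y, fit, accuracy))  # → 0.75
```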
4 Discussion
We propose in this paper a parametric approach for adaptive inference in nonlinear classification. Our algorithm can be regarded as a variational generalization of Kalman filtering, which we obtain by using two novel lower bounds that allow us to have a non-degenerate distribution over the adaptation rate. Inference is done by iteratively maximizing a lower bound of the log evidence. As a result we obtain an approximate posterior that is a product of a multivariate Gaussian and a Gamma distribution. Our simulations have shown that the approach is capable of inferring classifiers that have good generalization performance both in stationary and non-stationary domains. In situations with moderately sized latent spaces, e.g. in the BCI experiments reported above, prediction and parameter updates can be done in real time on conventional PCs. Although we focus on classification, the algorithm is based on general ideas and thus is easily applicable to other generalized nonlinear models.
Acknowledgements
We would like to express gratitude to the anonymous reviewers of this paper for their
valuable suggestions for improving the paper. Peter Sykacek is currently supported by
grant Nr. F46/399 kindly provided by the BUPA foundation.
References
[1] S.-I. Amari. A theory of adaptive pattern classifiers. IEEE Transactions on Electronic Computers, 16:299-307, 1967.
[2] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. 15th Conf. on Uncertainty in AI, 1999.
[3] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985.
[4] C. L. Blake and C. J. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. University of California, Irvine, Dept. of Information and Computer Sciences.
[5] D. S. Broomhead and D. Lowe. Multivariable functional interpolation and adaptive networks. Complex Systems, 2:321-355, 1988.
[6] J. F. G. de Freitas, M. Niranjan, and A. H. Gee. Regularisation in sequential learning algorithms. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems (NIPS 10), pages 458-464, 1998.
[7] A. Doucet, J. F. G. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
[8] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12, pages 449-455, 2000.
[9] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25-37, 2000.
[10] A. H. Jazwinski. Adaptive filtering. Automatica, pages 475-485, 1969.
[11] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. MIT Press, Cambridge, MA, 1999.
[12] R. E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME, J. Basic Eng., 82:35-45, 1960.
[13] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087-1091, 1953.
[14] J. Moody and C. J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281-294, 1989.
[15] W. Penny, S. Roberts, E. Curran, and M. Stokes. EEG-based communication: a pattern recognition approach. IEEE Trans. Rehab. Eng., pages 214-216, 2000.
[16] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
[17] Masa-aki Sato. Online model selection based on the variational Bayes. Neural Computation, pages 1649-1681, 2001.
[18] P. Sykacek and S. Roberts. Bayesian time series classification. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 937-944. MIT Press, 2002.
Learning in Zero-Sum Team Markov Games
Using Factored Value Functions
Michail G. Lagoudakis
Department of Computer Science
Duke University
Durham, NC 27708
[email protected]
Ronald Parr
Department of Computer Science
Duke University
Durham, NC 27708
[email protected]
Abstract
We present a new method for learning good strategies in zero-sum
Markov games in which each side is composed of multiple agents collaborating against an opposing team of agents. Our method requires full
observability and communication during learning, but the learned policies can be executed in a distributed manner. The value function is represented as a factored linear architecture and its structure determines the
necessary computational resources and communication bandwidth. This
approach permits a tradeoff between simple representations with little or
no communication between agents and complex, computationally intensive representations with extensive coordination between agents. Thus,
we provide a principled means of using approximation to combat the
exponential blowup in the joint action space of the participants. The approach is demonstrated with an example that shows the efficiency gains
over naive enumeration.
1 Introduction
The Markov game framework has received increased attention as a rigorous model for
defining and determining optimal behavior in multiagent systems. The zero-sum case, in
which one side's gains come at the expense of the other's, is the simplest and best understood case1. Littman [7] demonstrated that reinforcement learning could be applied to
Markov games, albeit at the expense of solving one linear program for each state visited
during learning. This computational (and conceptual) burden is probably one factor behind
the relative dearth of ambitious Markov game applications using reinforcement learning.
In recent work [6], we demonstrated that many previous theoretical results justifying the
use of value function approximation to tackle large MDPs could be generalized to Markov
games. We applied the LSPI reinforcement learning algorithm [5] with function approximation to a two-player soccer game and a router/server flow control problem and derived
very good results. While the theoretical results [6] are general and apply to any reinforcement learning algorithm, we preferred to use LSPI because LSPI?s efficient use of data
meant that we solved fewer linear programs during learning.
1 The term Markov game in this paper refers to the zero-sum case unless stated otherwise.
Since soccer, routing, and many other natural applications of the Markov game framework
tend to involve multiple participants, it would be very useful to generalize recent advances
in multiagent cooperative MDPs [2, 4] to Markov games. These methods use a factored
value function architecture and determine the optimal action using a cost network [1] and a
communication structure which is derived directly from the structure of the value function.
LSPI has been successfully combined with such methods; in empirical experiments, the
number of state visits required to achieve good performance scaled linearly with the number
of agents despite the exponential growth in the joint action space [4].
In this paper, we integrate these ideas and we present an algorithm for learning good strategies for a team of agents that plays against an opponent team. In such games, players within
one team collaborate, whereas players in different teams compete. The key component of
this work is a method for computing efficiently the best strategy for a team, given an approximate factored value function which is a linear combination of features defined over
the state space and subsets of the joint action space for both sides. This method integrated
within LSPI yields a computationally efficient learning algorithm.
2 Markov Games
A two-player zero-sum Markov game is defined as a 6-tuple $(S, A, O, P, R, \gamma)$, where: $S = \{s_1, s_2, \ldots, s_n\}$ is a finite set of game states; $A = \{a_1, a_2, \ldots, a_m\}$ and $O = \{o_1, o_2, \ldots, o_l\}$ are finite sets of actions, one for each player; $P$ is a Markovian state transition model: $P(s, a, o, s')$ is the probability that $s'$ will be the next state of the game when the players take actions $a$ and $o$ respectively in state $s$; $R$ is a reward (or cost) function: $R(s, a, o)$ is the expected one-step reward for taking actions $a$ and $o$ in state $s$; and $\gamma \in (0, 1]$ is the discount factor for future rewards. We will refer to the first player as the maximizer and the second player as the minimizer2. Note that if either player is permitted only a single action, the Markov game becomes an MDP for the other player.

A policy $\pi$ for a player in a Markov game is a mapping $\pi : S \to \Delta(A)$, which yields probability distributions over the maximizer's actions for each state in $S$. Unlike MDPs, the optimal policy for a Markov game may be stochastic, i.e., it may define a mixed strategy for every state. By convention, for any policy $\pi$, $\pi(s)$ denotes the probability distribution over actions in state $s$ and $\pi(s, a)$ denotes the probability of action $a$ in state $s$.

The maximizer is interested in maximizing its expected, discounted return in the minimax sense, that is, assuming the worst case of an optimal minimizer. Since the underlying rewards are zero-sum, it is sufficient to view the minimizer as acting to minimize the maximizer's return. For any policy $\pi$, we can define $Q^\pi(s, a, o)$ as the expected total discounted reward of the maximizer when following policy $\pi$ after the players take actions $a$ and $o$ for the first step. The corresponding fixed point equation for $Q^\pi$ is:

$$Q^\pi(s, a, o) = R(s, a, o) + \gamma \sum_{s' \in S} P(s, a, o, s') \min_{o' \in O} \sum_{a' \in A} Q^\pi(s', a', o') \, \pi(s', a') .$$
Given any Q function, the maximizer can choose actions so as to maximize its value:

$$V(s) = \max_{\pi'(s) \in \Delta(A)} \min_{o \in O} \sum_{a \in A} Q(s, a, o) \, \pi'(s, a) . \qquad (1)$$

We will refer to the policy $\pi'$ chosen by Eq. (1) as the minimax policy with respect to $Q$.

2 Because of the duality, we adopt the maximizer's point of view for presentation.

This policy can be determined in any state $s$ by solving the following linear program:

Maximize:    $V(s)$
Subject to:  $\forall a \in A, \ \pi'(s, a) \ge 0$
             $\sum_{a \in A} \pi'(s, a) = 1$
             $\forall o \in O, \ V(s) \le \sum_{a \in A} Q(s, a, o) \, \pi'(s, a) .$
If $Q = Q^\pi$, the minimax policy is an improved policy compared to $\pi$. A policy iteration algorithm can be implemented for Markov games in a manner analogous to policy iteration for MDPs by fixing a policy $\pi_i$, solving for $Q^{\pi_i}$, choosing $\pi_{i+1}$ as the minimax policy with respect to $Q^{\pi_i}$ and iterating. This algorithm converges to the optimal minimax policy $\pi^*$.
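The minimax policy above is the solution of a small linear program per state. As a dependency-free illustration, the following sketch approximates the same minimax mixed strategy with fictitious play instead of an LP solver; the function, its parameters, and the payoff matrix are illustrative, not from the paper:

```python
# Fictitious play for a zero-sum matrix game: each side repeatedly
# best-responds to the opponent's empirical action frequencies. The
# maximizer's empirical frequencies converge to a minimax mixed strategy,
# the quantity the per-state linear program computes exactly.
# Q[a][o] is the maximizer's payoff for actions (a, o) in a fixed state s.

def fictitious_play(Q, iterations=20000):
    m, l = len(Q), len(Q[0])
    max_counts = [0] * m   # how often each maximizer action was a best response
    min_counts = [0] * l   # same for the minimizer
    max_counts[0] = min_counts[0] = 1
    for _ in range(iterations):
        # maximizer best-responds to the minimizer's empirical strategy
        a = max(range(m), key=lambda i: sum(Q[i][j] * min_counts[j] for j in range(l)))
        # minimizer best-responds to the maximizer's empirical strategy
        o = min(range(l), key=lambda j: sum(Q[i][j] * max_counts[i] for i in range(m)))
        max_counts[a] += 1
        min_counts[o] += 1
    t = sum(max_counts)
    return [c / t for c in max_counts]

# Matching pennies: the minimax policy mixes (close to) uniformly.
pi = fictitious_play([[1.0, -1.0], [-1.0, 1.0]])
print([round(p, 2) for p in pi])
```

An exact implementation would solve the linear program of the text with an LP solver; fictitious play is used here only to keep the sketch self-contained.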
3 Least Squares Policy Iteration (LSPI) for Markov Games
In practice, the state/action space is too large for an explicit representation of the Q function. We consider the standard approach of approximating the Q function as the linear combination of $k$ basis functions $\phi_j$ with weights $w_j$, that is, $\hat{Q}(s, a, o) = \phi(s, a, o)^\top w$. With this representation, the minimax policy $\pi$ for the maximizer is determined by

$$\pi(s) = \arg\max_{\pi(s) \in \Delta(A)} \min_{o \in O} \sum_{a \in A} \pi(s, a) \, \phi(s, a, o)^\top w ,$$

and can be computed by solving the following linear program:

Maximize:    $V(s)$
Subject to:  $\forall a \in A, \ \pi(s, a) \ge 0$
             $\sum_{a \in A} \pi(s, a) = 1$
             $\forall o \in O, \ V(s) \le \sum_{a \in A} \pi(s, a) \, \phi(s, a, o)^\top w .$
We chose the LSPI algorithm to learn the weights $w$ of the approximate value function. Least-Squares Policy Iteration (LSPI) [5] is an approximate policy iteration algorithm that learns policies using a corpus of stored samples. LSPI also applies, with minor modifications, to Markov games [6]. In particular, at each iteration, LSPI evaluates the current policy using the stored samples and keeps the learned weights to represent implicitly the improved minimax policy for the next iteration by solving the linear program above. The modified update equations account for the minimizer's action and the distribution over next maximizer actions, since the minimax policy is, in general, stochastic. More specifically, at each iteration LSPI maintains a matrix $\hat{A}$ and a vector $\hat{b}$, which are updated as follows:

$$\hat{A} \leftarrow \hat{A} + \phi(s, a, o) \Big( \phi(s, a, o) - \gamma \sum_{a' \in A} \pi(s', a') \, \phi(s', a', o') \Big)^{\!\top} , \qquad \hat{b} \leftarrow \hat{b} + \phi(s, a, o) \, r ,$$

for any sample $(s, a, o, r, s')$. The policy $\pi(s')$ for state $s'$ is computed using the linear program above. The action $o'$ is the minimizing opponent action in computing $\pi(s')$ and can be identified by the tight constraint on $V(s')$. The weight vector $w$ is computed at the end of each iteration as the solution to $\hat{A} w = \hat{b}$. The key step in generalizing LSPI to team Markov games is finding efficient means to perform these operations despite the exponentially large joint action space.
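A minimal sketch of the modified update, assuming a `phi(s, a, o)` feature function and an already-computed minimax policy `pi_next` and minimizing opponent action `o_min` at the successor state (all names and the interface are illustrative):

```python
def lspi_update(A, b, phi, s, a, o, r, s_next, pi_next, o_min, actions, gamma=0.9):
    """One-sample update of A-hat and b-hat for the Markov-game variant of LSPI."""
    f = phi(s, a, o)
    # expected (discounted) next feature vector under the stochastic minimax policy
    f_next = [gamma * sum(pi_next[a2] * phi(s_next, a2, o_min)[j] for a2 in actions)
              for j in range(len(f))]
    for i in range(len(f)):
        for j in range(len(f)):
            A[i][j] += f[i] * (f[j] - f_next[j])
        b[i] += f[i] * r
    return A, b

# Tiny demo with one constant feature: A accumulates 1 - gamma = 0.1 (up to
# float rounding) and b accumulates the reward r = 1.0.
A, b = [[0.0]], [0.0]
phi = lambda s, a, o: [1.0]
lspi_update(A, b, phi, 's', 0, 0, 1.0, 's2', {0: 1.0}, 0, [0])
print(A[0][0], b[0])
```

Solving the accumulated system $\hat{A} w = \hat{b}$ at the end of the iteration then gives the new weights.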
4 Least Squares Policy Iteration for Team Markov Games
A team Markov game is a Markov game where a team of $N$ maximizers plays against a team of $M$ minimizers. Maximizer $i$ chooses actions from $A_i$, so the team chooses actions $\bar{a} = (a_1, a_2, \ldots, a_N)$ from $\bar{A} = A_1 \times A_2 \times \cdots \times A_N$, where $a_i \in A_i$. Minimizer $i$ chooses actions from $O_i$, so the minimizer team chooses actions $\bar{o} = (o_1, o_2, \ldots, o_M)$ from $\bar{O} = O_1 \times O_2 \times \cdots \times O_M$, where $o_i \in O_i$. Consider now an approximate value function $\hat{Q}(s, \bar{a}, \bar{o})$. The minimax policy $\pi$ for the maximizer team in any given state $s$ can be computed (naively) by solving the following linear program:

Maximize:    $V(s)$
Subject to:  $\forall \bar{a} \in \bar{A}, \ \pi(s, \bar{a}) \ge 0$
             $\sum_{\bar{a} \in \bar{A}} \pi(s, \bar{a}) = 1$
             $\forall \bar{o} \in \bar{O}, \ V(s) \le \sum_{\bar{a} \in \bar{A}} \pi(s, \bar{a}) \, \hat{Q}(s, \bar{a}, \bar{o}) .$

Since $|\bar{A}|$ is exponential in $N$ and $|\bar{O}|$ is exponential in $M$, the linear program above has an exponential number of variables and constraints and would be intractable to solve, unless we make certain assumptions about $\hat{Q}$. We assume a factored approximation [2] of the Q function, given as a linear combination of $k$ localized basis functions. Each basis function can be thought of as an individual player's perception of the environment, so each $\phi_j$ need not depend upon every feature of the state or the actions taken by every player in the game. In particular, we assume that each $\phi_j$ depends only on the actions of a small subset of maximizers $A_j$ and minimizers $O_j$, that is, $\phi_j = \phi_j(s, \bar{a}_j, \bar{o}_j)$, where $\bar{a}_j \in \bar{A}_j$ and $\bar{o}_j \in \bar{O}_j$ ($\bar{A}_j$ is the joint action space of the players in $A_j$ and $\bar{O}_j$ is the joint action space of the players in $O_j$). For example, if $\phi_4$ depends only on the actions of maximizers $\{4, 5, 8\}$ and the actions of minimizers $\{3, 2, 7\}$, then $\bar{a}_4 \in A_4 \times A_5 \times A_8$ and $\bar{o}_4 \in O_3 \times O_2 \times O_7$. Under this locality assumption, the approximate (factored) value function is

$$\hat{Q}(s, \bar{a}, \bar{o}) = \sum_{j=1}^{k} \phi_j(s, \bar{a}_j, \bar{o}_j) \, w_j ,$$

where the assignments to the $\bar{a}_j$'s and $\bar{o}_j$'s are consistent with $\bar{a}$ and $\bar{o}$. Given this form of the value function, the linear program can be simplified significantly. We look at the constraints for the value of the state first:
$$V(s) \le \sum_{\bar{a} \in \bar{A}} \pi(s, \bar{a}) \sum_{j=1}^{k} \phi_j(s, \bar{a}_j, \bar{o}_j) \, w_j$$

$$V(s) \le \sum_{j=1}^{k} w_j \sum_{\bar{a} \in \bar{A}} \pi(s, \bar{a}) \, \phi_j(s, \bar{a}_j, \bar{o}_j)$$

$$V(s) \le \sum_{j=1}^{k} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \sum_{\bar{a}' \in \bar{A} \setminus \bar{A}_j} \pi(s, \bar{a})$$

$$V(s) \le \sum_{j=1}^{k} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) ,$$

where each $\pi_j(s, \bar{a}_j)$ defines a probability distribution over the actions of the players that appear in $\phi_j$. From the last expression, it is clear that we can use the $\pi_j(s, \bar{a}_j)$ as the variables of the linear program. The number of these variables will typically be much smaller than the number of variables $\pi(s, \bar{a})$, depending on the size of the $\bar{A}_j$'s. However, we must add constraints to ensure that the local probability distributions $\pi_j(s)$ are consistent with a global distribution over the entire joint action space $\bar{A}$. The first set of constraints are the standard ones for any probability distribution:

$$\forall\, j = 1, \ldots, k : \quad \sum_{\bar{a}_j \in \bar{A}_j} \pi_j(s, \bar{a}_j) = 1$$

$$\forall\, j = 1, \ldots, k : \quad \forall\, \bar{a}_j \in \bar{A}_j , \ \pi_j(s, \bar{a}_j) \ge 0 .$$
For consistency, we must ensure that all marginals over common variables are identical:

$$\forall\, 1 \le j < h \le k : \ \forall\, \bar{a}' \in \bar{A}_j \cap \bar{A}_h , \qquad \sum_{\bar{a}'_j \in \bar{A}_j \setminus \bar{A}_h} \pi_j(s, \bar{a}_j) = \sum_{\bar{a}'_h \in \bar{A}_h \setminus \bar{A}_j} \pi_h(s, \bar{a}_h) .$$
These constraints are sufficient if the running intersection property is satisfied by the $\pi_j(s)$'s [3]. If not, it is possible that the resulting $\pi_j(s)$'s will not be consistent with any global distribution even though they are locally consistent. However, the running intersection property can be enforced by introducing certain additional local distributions in the set of $\pi_j(s)$'s. This can be achieved using a variable elimination procedure.

First, we establish an elimination order for the maximizers and we let $H_1$ be the set of all $\pi_j(s)$'s and $L = \emptyset$. At each step $i$, some agent $i$ is eliminated and we let $E_i$ be the set of all distributions in $H_i$ that involve the actions of agent $i$ or have empty domain. We then create a new distribution $\mu_i$ over the actions of all agents that appear in $E_i$ and we place $\mu_i$ in $L$. We then create $\mu_i'$, defined as the distribution over the actions of all agents that appear in $\mu_i$ except agent $i$. Next, we update $H_{i+1} = H_i \cup \{\mu_i'\} \setminus E_i$ and repeat until all agents have been eliminated. Note that $H_N$ will necessarily be empty and $L$ will contain at most $N$ new local probability distributions. We can manipulate the elimination order in an attempt to keep the distributions in $L$ small (local); however, their size will be exponential in the induced tree width. As with Bayes nets, the existence and hardness of discovering efficient elimination orderings will depend upon the topology. The set $H_1 \cup L$ of local probability distributions satisfies the running intersection property, and so we can proceed with this set instead of the original set of $\pi_j(s)$'s and apply the constraints listed above. Even though we are only interested in the $\pi_j(s)$'s, the existence of the additional distributions in the linear program will ensure that the $\pi_j(s)$'s will be globally consistent.

The number of constraints needed for the local probability distributions is much smaller than the original number of constraints. In summary, the new linear program will be:
Maximize:    $V(s)$
Subject to:  $\forall j = 1, \ldots, k : \ \forall \bar{a}_j \in \bar{A}_j, \ \pi_j(s, \bar{a}_j) \ge 0$
             $\forall j = 1, \ldots, k : \ \sum_{\bar{a}_j \in \bar{A}_j} \pi_j(s, \bar{a}_j) = 1$
             $\forall 1 \le j < h \le k : \ \forall \bar{a}' \in \bar{A}_j \cap \bar{A}_h, \ \sum_{\bar{a}'_j \in \bar{A}_j \setminus \bar{A}_h} \pi_j(s, \bar{a}_j) = \sum_{\bar{a}'_h \in \bar{A}_h \setminus \bar{A}_j} \pi_h(s, \bar{a}_h)$
             $\forall \bar{o} \in \bar{O} : \ V(s) \le \sum_{j=1}^{k} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) .$

At this point we have eliminated the exponential dependency from the number of variables and partially from the number of constraints. The last set of (exponentially many) constraints can be replaced by a single non-linear constraint:

$$V(s) \le \min_{\bar{o} \in \bar{O}} \sum_{j=1}^{k} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) .$$
We now show how this non-linear constraint can be turned into a number of linear constraints which is not exponential in $M$ in general. The main idea is to embed a cost network inside the linear program [2]. In particular, we define an elimination order for the $o_i$'s in $\bar{o}$ and, for each $o_i$ in turn, we push the $\min$ operator for just $o_i$ as far inside the summation as possible, keeping only terms that have some dependency on $o_i$ or no dependency on any of the opponent team actions. We replace this smaller $\min$ expression over $o_i$ with a new function $f_i$ (represented by a set of new variables in the linear program) that depends on the other opponent actions that appear in this $\min$ expression. Finally, we introduce a set of linear constraints for the value of $f_i$ that express the fact that $f_i$ is the minimum of the eliminated expression in all cases. We repeat this elimination process until all $o_i$'s and therefore all $\min$ operators are eliminated.

More formally, at step $i$ of the elimination, let $B_i$ be the set of basis functions that have not been eliminated up to that point and $F_i$ be the set of the new functions that have not been eliminated yet. For simplicity, we assume that the elimination order is $o_1, o_2, \ldots, o_M$ (in practice the elimination order needs to be chosen carefully in advance, since a poor elimination ordering could have serious adverse effects on efficiency). At the very beginning of the elimination process, $B_1 = \{\phi_1, \phi_2, \ldots, \phi_k\}$ and $F_1$ is empty. When eliminating $o_i$ at step $i$, define $E_i \subseteq B_i \cup F_i$ to be those functions that contain $o_i$ in their domain or have no dependency on any opponent action. We generate a new function $f_i(\bar{o}_i)$ that depends on all the opponent actions that appear in $E_i$ excluding $o_i$:

$$f_i(\bar{o}_i) = \min_{o_i \in O_i} \Bigg( \sum_{\phi_j \in E_i} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) + \sum_{f_k \in E_i} f_k(\bar{o}_k) \Bigg) .$$

We introduce a new variable in the linear program for each possible setting of the domain $\bar{o}_i$ of the new function $f_i(\bar{o}_i)$. We also introduce a set of constraints for these variables:

$$\forall\, o_i \in O_i , \ \forall\, \bar{o}_i : \quad f_i(\bar{o}_i) \le \sum_{\phi_j \in E_i} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) + \sum_{f_k \in E_i} f_k(\bar{o}_k) .$$

These constraints ensure that the new function is the minimum over the possible choices for $o_i$. Now, we define $B_{i+1} = B_i \setminus E_i$ and $F_{i+1} = F_i \setminus E_i \cup \{f_i\}$ and we continue with the elimination of action $o_{i+1}$. Notice that $o_i$ does not appear anywhere in $B_{i+1}$ or $F_{i+1}$. Notice also that $f_M$ will necessarily have an empty domain and it is exactly the value of the state, $f_M = V(s)$. Summarizing everything, the reduced linear program is:
Maximize:    $f_M$
Subject to:  $\forall j = 1, \ldots, k : \ \forall \bar{a}_j \in \bar{A}_j, \ \pi_j(s, \bar{a}_j) \ge 0$
             $\forall j = 1, \ldots, k : \ \sum_{\bar{a}_j \in \bar{A}_j} \pi_j(s, \bar{a}_j) = 1$
             $\forall 1 \le j < h \le k : \ \forall \bar{a}' \in \bar{A}_j \cap \bar{A}_h, \ \sum_{\bar{a}'_j \in \bar{A}_j \setminus \bar{A}_h} \pi_j(s, \bar{a}_j) = \sum_{\bar{a}'_h \in \bar{A}_h \setminus \bar{A}_j} \pi_h(s, \bar{a}_h)$
             $\forall i, \ \forall o_i \in O_i, \ \forall \bar{o}_i : \ f_i(\bar{o}_i) \le \sum_{\phi_j \in E_i} w_j \sum_{\bar{a}_j \in \bar{A}_j} \phi_j(s, \bar{a}_j, \bar{o}_j) \, \pi_j(s, \bar{a}_j) + \sum_{f_k \in E_i} f_k(\bar{o}_k)$

Notice that the exponential dependency in $N$ and $M$ has been eliminated. The total number of variables and/or constraints is now exponentially dependent only on the number of players that appear together as a group in any of the basis functions or the intermediate functions and distributions. It should be emphasized that this reduced linear program solves the same problem as the naive linear program and yields the same solution (albeit in a factored form).
To complete the learning algorithm, the update equations of LSPI must also be modified. For any sample $(s, \bar{a}, \bar{o}, r, s')$, the naive form would be

$$\hat{A} \leftarrow \hat{A} + \phi(s, \bar{a}, \bar{o}) \Big( \phi(s, \bar{a}, \bar{o}) - \gamma \sum_{\bar{a}' \in \bar{A}} \pi(s', \bar{a}') \, \phi(s', \bar{a}', \bar{o}') \Big)^{\!\top} , \qquad \hat{b} \leftarrow \hat{b} + \phi(s, \bar{a}, \bar{o}) \, r .$$

The action $\bar{o}'$ is the minimizing opponent's action in computing $\pi(s')$. Unfortunately, the number of terms in the summation within the first update equation is exponential in $N$. However, the vector $\phi(s, \bar{a}, \bar{o}) - \gamma \sum_{\bar{a}' \in \bar{A}} \pi(s', \bar{a}') \, \phi(s', \bar{a}', \bar{o}')$ can be computed on a component-by-component basis, avoiding this exponential blowup. In particular, the $j$-th component is:

$$\phi_j(s, \bar{a}_j, \bar{o}_j) - \gamma \sum_{\bar{a}' \in \bar{A}} \pi(s', \bar{a}') \, \phi_j(s', \bar{a}'_j, \bar{o}'_j)
= \phi_j(s, \bar{a}_j, \bar{o}_j) - \gamma \sum_{\bar{a}'_j \in \bar{A}_j} \phi_j(s', \bar{a}'_j, \bar{o}'_j) \sum_{\bar{a}'' \in \bar{A} \setminus \bar{A}_j} \pi(s', \bar{a}')
= \phi_j(s, \bar{a}_j, \bar{o}_j) - \gamma \sum_{\bar{a}'_j \in \bar{A}_j} \phi_j(s', \bar{a}'_j, \bar{o}'_j) \, \pi_j(s', \bar{a}'_j) ,$$

which can be easily computed without exponential enumeration.
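The factored computation above can be sketched directly: the sum over the exponential joint action space collapses to a sum over the small local space of the players in $\phi_j$, weighted by the local marginal $\pi_j$. The function names and interface below are illustrative assumptions, not the paper's code:

```python
# Factored update component: phi_j is a local basis function, pi_j the local
# marginal over the maximizers appearing in phi_j, and local_action_sets the
# per-player action lists for exactly those maximizers.
import itertools

def factored_component(phi_j, pi_j, local_action_sets,
                       s, a_j, o_j, s_next, o_j_next, gamma=0.9):
    total = 0.0
    # iterate only over joint actions of the players appearing in phi_j,
    # not over the full (exponential) joint action space
    for a_j_next in itertools.product(*local_action_sets):
        total += phi_j(s_next, a_j_next, o_j_next) * pi_j(s_next, a_j_next)
    return phi_j(s, a_j, o_j) - gamma * total

# With a constant feature over a 2x2 local action space and a uniform local
# marginal, the component is 1 - gamma * 1 = 0.1 (up to float rounding):
phi_j = lambda s, a, o: 1.0
pi_j = lambda s, a: 0.25
print(factored_component(phi_j, pi_j, [[0, 1], [0, 1]], 's', (0, 0), 0, 's2', 0))
```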
A related question is how to find $\bar{o}'$, the minimizing opponent's joint action in computing $\pi(s')$. This can be done after the linear program is solved by going through the $f_i$'s in reverse order (compared to the elimination order) and finding the choice for $o_i$ that imposes a tight constraint on $f_i(\bar{o}_i)$, conditioned on the minimizing choice for $\bar{o}_i$ that has been found so far. The only complication is that the linear program has no incentive to maximize $f_i(\bar{o}_i)$ unless it contributes to maximizing the final value. Thus, a constraint that appears to be tight may not correspond to the actual minimizing choice. The solution to this is to do a forward pass first (according to the elimination order) marking the $f_i(\bar{o}_i)$'s that really come from tight constraints. Then, the backward pass described above will find the true minimizing choices by using only the marked $f_i(\bar{o}_i)$'s.

The last question is how to sample an action $\bar{a}$ from the global distribution defined by the smaller distributions. We begin with all actions uninstantiated and we go through all $\pi_j(s)$'s. For each $j$, we marginalize out the instantiated actions (if any) from $\pi_j(s)$ to generate the conditional probability and then we sample jointly the actions that remain in the distribution. We repeat with the next $j$ until all actions are instantiated. Notice that this operation can be performed in a distributed manner; that is, at execution time only agents whose actions appear in the same $\pi_j(s)$ need to communicate to sample actions jointly. This communication structure is directly derived from the structure of the basis functions.
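The sequential sampling scheme above can be sketched as follows: each local distribution is conditioned on the actions already fixed, and the remaining players appearing in it then sample jointly. The data layout (tables mapping joint local actions to probabilities) is an illustrative assumption:

```python
# Sample a joint action from local distributions pi_j. Each local
# distribution is a (players, table) pair: players is a tuple of player ids,
# table maps a tuple of their actions to a probability.
import random

def sample_joint_action(local_dists):
    assignment = {}
    for players, table in local_dists:
        # keep only entries consistent with already-instantiated players
        consistent = {acts: p for acts, p in table.items()
                      if all(assignment.get(pl, a) == a
                             for pl, a in zip(players, acts))}
        z = sum(consistent.values())
        # sample the remaining players jointly from the conditional
        r, cum = random.random() * z, 0.0
        for acts, p in consistent.items():
            cum += p
            if r <= cum:
                assignment.update(zip(players, acts))
                break
        else:  # numerical safety: fall back to the last consistent entry
            assignment.update(zip(players, acts))
    return assignment

random.seed(0)
dists = [((1, 2), {(0, 0): 0.5, (1, 1): 0.5}),            # players 1, 2 correlated
         ((2, 3), {(0, 0): 1.0, (0, 1): 0.0,
                   (1, 0): 0.0, (1, 1): 1.0})]            # player 3 copies player 2
print(sample_joint_action(dists))
```

Only agents sharing a local distribution need to communicate, mirroring the distributed execution described in the text.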
5 An Example
The algorithm has been implemented and is currently being tested on a large flow control
problem with multiple routers and servers. Since experimental results are still in progress,
we demonstrate the efficiency gained over exponential enumeration with an example. Consider a problem with N = 5 maximizers and M = 4 minimizers. Assume also that each
maximizer or minimizer has 5 actions to choose from. The naive solution would require
solving a linear program with 3126 variables and 3751 constraints for any representation
of the value function. Consider now the following factored value function:
b a
Q(s,
?, o?) = ?1 (s, a1 , a2 , o1 , o2 )w1 + ?2 (s, a1 , a3 , o1 , o3 )w2 +
?3 (s, a2 , a4 , o3 )w3 + ?4 (s, a3 , a5 , o4 )w4 + ?5 (s, a1 , o3 , o4 )w5 .
These basis functions satisfy the running intersection property (there is no cycle of length
longer than 3), so there is no need for additional probability distributions. Using the elimination order {o4 , o3 , o1 , o2 } for the cost network, the reduced linear program contains only
121 variables and 215 constraints (we present only the 80 constraints on the value of the
state that demonstrate the variable elimination procedure, omitting the common constrains
for validity and consistency of the local probability distributions):
Maximize: f2

Subject to:

∀ o4 ∈ O4, ∀ o3 ∈ O3:
f4(o3) ≤ Σ_{(a3,a5) ∈ A3×A5} w4 φ4(s, a3, a5, o4) π4(s, a3, a5)
       + Σ_{a1 ∈ A1} w5 φ5(s, a1, o3, o4) π5(s, a1)

∀ o3 ∈ O3, ∀ o1 ∈ O1:
f3(o1) ≤ Σ_{(a1,a3) ∈ A1×A3} w2 φ2(s, a1, a3, o1, o3) π2(s, a1, a3)
       + Σ_{(a2,a4) ∈ A2×A4} w3 φ3(s, a2, a4, o3) π3(s, a2, a4) + f4(o3)

∀ o1 ∈ O1, ∀ o2 ∈ O2:
f1(o2) ≤ Σ_{(a1,a2) ∈ A1×A2} w1 φ1(s, a1, a2, o1, o2) π1(s, a1, a2) + f3(o1)

∀ o2 ∈ O2:
f2 ≤ f1(o2)
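The sizes quoted in this example can be reproduced with a little bookkeeping. The sketch below counts only the value constraints produced by eliminating the minimizer variables; the naive-LP figures assume a mixed strategy over all joint maximizer actions, one value constraint per joint minimizer action, plus nonnegativity and normalization (my reading of how those counts arise, not code from the paper):

```python
def count_value_constraints(scopes, order, domain=5):
    """Count LP constraints generated by variable elimination.

    `scopes` lists, for each basis function, the set of minimizer
    variables it mentions; `order` is the elimination order.  Eliminating
    a variable o gathers every factor mentioning o, introduces a new
    function over the union of their remaining minimizer variables, and
    adds one constraint per joint value of o and that new scope.
    """
    factors = [frozenset(s) for s in scopes]
    total = 0
    for o in order:
        touching = [f for f in factors if o in f]
        new_scope = frozenset().union(*touching) - {o}
        total += domain ** (1 + len(new_scope))
        factors = [f for f in factors if o not in f] + [new_scope]
    return total

# Minimizer-variable scopes of phi1..phi5 in the factored value function.
scopes = [{'o1', 'o2'}, {'o1', 'o3'}, {'o3'}, {'o4'}, {'o3', 'o4'}]
print(count_value_constraints(scopes, ['o4', 'o3', 'o1', 'o2']))  # 25+25+25+5 = 80

# Naive LP size for comparison.
print(5 ** 5 + 1)           # 3126 variables: joint-action strategy + value
print(5 ** 4 + 5 ** 5 + 1)  # 3751 constraints: value + nonnegativity + normalization
```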
6 Conclusion
We have presented a principled approach to the problem of solving large team Markov
games that builds on recent advances in value function approximation for Markov games
and multiagent coordination in reinforcement learning for MDPs. Our approach permits
a tradeoff between simple architectures with limited representational capability and sparse
communication and complex architectures with rich representations and more complex coordination structure. It is our belief that the algorithm presented in this paper can be used
successfully in real-world, large-scale domains where the available knowledge about the
underlying structure can be exploited to derive powerful and sufficient factored representations.
Acknowledgments
This work was supported by NSF grant 0209088. We would also like to thank Carlos Guestrin for
helpful discussions.
Morton-Style Factorial Coding of Color in
Primary Visual Cortex
Javier R. Movellan
Institute for Neural Computation
University of California San Diego
La Jolla, CA 92093-0515
[email protected]
Thomas Wachtler
Sloan Center for Theoretical Neurobiology
The Salk Institute
La Jolla, CA 92037, USA
[email protected]
Thomas D. Albright
Howard Hughes Medical Institute
The Salk Institute
La Jolla, CA 92037, USA
[email protected]
Terrence Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
La Jolla, CA 92037, USA
[email protected]
Abstract
We introduce the notion of Morton-style factorial coding and illustrate
how it may help understand information integration and perceptual coding in the brain. We show that by focusing on average responses one
may miss the existence of factorial coding mechanisms that become only
apparent when analyzing spike count histograms. We show evidence
suggesting that the classical/non-classical receptive field organization in
the cortex effectively enforces the development of Morton-style factorial
codes. This may provide some cues to help understand perceptual coding in the brain and to develop new unsupervised learning algorithms.
While methods like ICA (Bell & Sejnowski, 1997) develop independent
codes, in Morton-style coding the goal is to make two or more external
aspects of the world become independent when conditioning on internal
representations.
In this paper we introduce the notion of Morton-style factorial coding and illustrate how it
may help analyze information integration and perceptual organization in the brain. In the
neurosciences factorial codes are often studied in the context of mean tuning curves. A
tuning curve is called separable if it can be expressed as the product of terms selectively
influenced by different stimulus dimensions. Separable tuning curves are taken as evidence of factorial coding mechanisms. In this paper we show that by focusing on average
responses one may miss the existence of factorial coding mechanisms that become only
apparent when analyzing spike count histograms.
Morton (1969) analyzed a wide variety of psychophysical experiments on word perception
and showed that they could be explained using a model in which stimulus and context
have separable effects on perception. More precisely, in Morton's model the joint effect of
stimulus and context on a perceptual representation can be obtained by multiplying terms
selectively controlled by stimulus and by context, i.e.,
P(R_k | S_i, C_j) = s_ik c_jk / Σ_l s_il c_jl,        (1)

where P(R_k | S_i, C_j) is the empirical probability of perceiving the perceptual alternative R_k in response to stimulus S_i in context C_j, s_ik represents the support of stimulus S_i for percept R_k, and c_jk the support of the context C_j for percept R_k. Massaro (1987b, 1987a, 1989a) has
shown that this form of factorization describes accurately a wide variety of psychophysical
studies in domains such as word recognition, phoneme recognition, audiovisual speech
recognition, and recognition of facial expressions.
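The multiplicative integration rule is short to state in code. The sketch below (array shapes and the additive comparison model are illustrative choices, not the authors' implementation) computes the full table of response probabilities from stimulus and context supports:

```python
import numpy as np

def morton_massaro(stim_support, ctx_support):
    """Morton-Massaro factorial integration (illustrative sketch).

    stim_support: (I, K) array of supports of stimulus i for percept k.
    ctx_support:  (J, K) array of supports of context j for percept k.
    Returns the (I, J, K) array of response probabilities obtained by
    multiplying the two supports and normalizing over percepts.
    """
    prod = stim_support[:, None, :] * ctx_support[None, :, :]
    return prod / prod.sum(axis=-1, keepdims=True)

def additive_comparison(stim_support, ctx_support):
    """Same parameter count, but supports summed instead of multiplied."""
    s = stim_support[:, None, :] + ctx_support[None, :, :]
    return s / s.sum(axis=-1, keepdims=True)
```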
Morton-style factorial codes used to be taken as evidence for a feedforward coding mechanism (Massaro, 1989b) but Movellan & McClelland (2001) showed that neural networks
with feedback connections can develop factorial codes when they follow an architectural
constraint named "channel separability". Channel separability is defined as follows: First
we identify the neurons which have a direct influence on the observed responses (e.g., the
set of neurons that affect an electrode). For a given set of response units, the stimulus
channel is defined as the set of units modulated by the stimulus provided the response specification units are excised from the rest of the network. The context channel is the set of
units modulated by the context provided the response units are excised from the rest of
the network. Two channels are called separable if they have no units in common. Channel separability implies that the influences of an information source upon the channel of
another information source should be mediated via the response specification units (see
Figure 1). While the models used in Movellan and McClelland (2001) are a simplification of actual neural circuits, the analysis suggests that the form of separability expressed
in the the Morton-Massaro model may be a useful paradigm for the study of information
integration in the brain. Indeed it is quite remarkable that the functional organization of
cortex into classical/non-classical receptive fields provides a separable architecture (See
Figure 1). Such organization may be nature?s way of enforcing Morton-style perceptual
coding. In this paper we present evidence in favor of this view by investigating how color
is encoded in primary visual cortex.
It is well known that stimuli of equal chromaticity can evoke different color percepts, depending on the visual context (Wesner & Shevell, 1992; Brown & MacLeod, 1997). Context-dependent responses to color stimuli have been found in V4 (Zeki, 1983). More recently, the last three authors of this article investigated the chromatic tuning properties of
V1 cells in response to stimuli presented in different chromatic contexts (Wachtler, Sejnowski, & Albright, 2003). The experiment showed that the background color, outside
the cell's classical receptive field, had a significant effect on the response to colors inside
the receptive field. No attempt was made to model the form of such influence. In this
paper we analyze quantitatively the results of that experiment and show that a large proportion of these neurons adhered to the Morton-Massaro law, i.e., stimulus and context had a
separable influence on the spike count histograms of these cells.
1 Methods
The animal preparation and methods of this experiment are described in Wachtler et al.
(in press) in great detail. Here we briefly describe the portion of the experiment relevant
to us. Two adult female rhesus monkeys were used in the study. Extracellular potentials
from single isolated neurons were recorded from two macaque monkeys. The monkeys
were awake and were required to fixate a small fixation target for the duration of each trial
(2500 ms.). Amplified electrical activity from the cortex was passed to a data acquisition
system for spike detection and sorting. Once a neuron was isolated, its receptive field was
determined using flashed and moving bars of different size, orientation, and color. All the
[Figure 1 appears here. Its diagram labels: stimulus channel, context (background) channel, response specification units, stimulus/context relays and sensors, input electrode, stimulus, and background.]
Figure 1: Left: A network with separable context and stimulus processing channels. Right:
The arrows connecting the stimulus to the unit in the center represent the classical receptive
field of that unit. External inputs affecting the classical receptive field are called "stimuli" and all the other inputs are called "background". In this preparation the stimulus and background channels are separable.
neurons recorded had receptive fields at eccentricities between and .
Once the receptive fields were located, the color tuning of the neurons was mapped by
flashing 8 stimuli of different chromaticity. The stimuli were homogeneous color squares,
centered on and at least twice as large as the receptive field of the neuron under study. They
were flashed for 500 ms. Chromaticity was defined in a color space similar to the one used
in Derrington, Krauskopf, and Lennie (1984). Cone excitations were calculated on the basis
of the human cone fundamentals proposed by Stockman (Stockman, MaCleod, & Johnson,
1993). The origin of the color space corresponded to a homogeneous gray background to
which the animal had been adapted (luminance 48 cd/m²). The three coordinate axes of the
color space corresponded to L versus M-cone contrast, S-cone contrast, and achromatic luminance. The 8 color stimuli were isoluminant with the gray background, had a fixed color
contrast (distance from origin of color space) and had chromatic directions corresponding
to polar angles .
After several presentations of the stimuli, the chromatic directions for which the neurons
showed a clear response were determined, and one of them was selected as the second
background condition. In the second condition, the color of the background changed during
stimulus presentation (i.e., for 500 ms) to a different color. This color was isoluminant with
the gray background, was in the direction of a stimulus color to which the cell showed clear
response, but was of lower chromatic contrast than the stimulus colors. In subsequent trials
combinations of the 8 stimulus and 2 background conditions were presented in random
order.
For each trial we recorded the number of spikes in a 100 ms window starting 50 ms after
stimulus onset. This time window was chosen because color tuning was usually more
pronounced in the first response phase as compared to later periods of the response and
because it maximized the effects of context. Data were recorded for a total of 94 units. Of
these, 20 neurons were selected for having the strongest background effect and a minimum
of 16 trials per condition. No other criteria were used for the selection of these neurons.
2 Results
Figure 2 shows example tuning curves of 4 different neurons. The thick lines represent
the average response for a particular color stimulus in the plane defined by the first two
chromatic axis. The dark curve represents responses for the gray background condition.
The light curve represents responses for the color background condition. The boxes around
the tuning curves represent average response rates as a function of stimulus onset for the
two background conditions.
Testing whether a code is factorial is like testing for the absence of interaction terms in
Analysis of Variance (ANOVA). The complexity (i.e., degrees of freedom) of an ANOVA
model without interaction terms is identical to the complexity of the Morton-Massaro
model. When testing for interaction effects we analyze whether the addition of interaction terms provides significant improvement on data fit over a simple additive model. In
our case we investigate whether the addition of non-factorial terms provides a significant
improvement on data fit over the factorial Morton-Massaro model. For each neuron there
were 8 stimulus conditions, 2 background conditions, and 10 response alternatives, one per
bin in the spike count histogram. The probabilities of the spike count histogram add up to
one; thus, there is a total of 8 × 2 × (10 − 1) = 144 independent probability estimates per neuron. In this case the Morton-Massaro model requires (8 + 2 − 1) × (10 − 1) = 81 parameters (Movellan & McClelland, 2001), thus there is a total of 144 − 81 = 63 nonfactorial terms.
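The degrees of freedom behind this test can be checked directly; the arithmetic below is a sketch of the bookkeeping (variable names are mine, and the factorial-model parameter count follows the Movellan-McClelland parameterization cited in the text):

```python
# Stimulus conditions, background conditions, and spike-count bins.
I, J, K = 8, 2, 10

saturated = I * J * (K - 1)               # independent probabilities in the data
factorial_params = (I + J - 1) * (K - 1)  # free parameters of the factorial model
nonfactorial_terms = saturated - factorial_params

print(saturated, factorial_params, nonfactorial_terms)  # 144 81 63
```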
For each neuron we fitted Morton-Massaro's model and performed a standard likelihood
test to see whether the additional nonfactorial terms improved data fit significantly (i.e.,
whether the deviations from the Morton-Massaro factorial model were significant). We
found that of the 20 neurons only 5 showed significant deviations from the Morton-Massaro model (chi-square test, 63 degrees of freedom). While the Morton-Massaro
model had 81 parameters many of them were highly redundant. We also evaluated a 30 parameter version of the model by performing PCA independently on the stimulus and on the
context parameters of the full model and deleting coefficients with small eigenvalues. The
30 parameter model provided fits almost indistinguishable from the 81 parameter model. In
this case only 4 neurons showed significant deviations from the model (chi-square, 124 df,
). On a pool of 20 neurons compliant with the Morton-Massaro model one would
expect the test to mistakenly reject 1 neuron by chance. Rejection of 4 or more neurons out
of 20 is not inconsistent with the idea that all the neurons were in fact compliant with the
Morton-Massaro model (binomial test).
Figure 3 shows the obtained and predicted spike count histograms for a typical neuron. The
top row represents the 8 stimulus conditions with gray background. The bottom row shows
the 8 conditions with color background. Lines represent spike count histograms predicted
by the Morton-Massaro model, dots represent obtained spike count histograms.
In order to test the statistical power of the likelihood-ratio test, we generated 20 neurons
with random histograms. The histograms were unimodal, with peak response randomly
selected between 0 and 9, with fall-offs similar to those found in the actual neurons and
with the same number of observations per condition as in the actual neurons. We then fitted
the 81-parameter Morton-Massaro model to each of these neurons and tested it using a
likelihood ratio test. All the simulated neurons exhibited statistically significant deviations from the model (chi-square, 63 df), suggesting that the test was quite sensitive.
Finally, for comparison purposes we tested a model of information integration that uses the
same number of parameters as the Morton-Massaro model but in which the stimulus and
context terms are combined additively instead of multiplicatively, i.e.,

P(R_k | S_i, C_j) = (s_ik + c_jk) / Σ_l (s_il + c_jl).        (2)
Figure 2: Effect of the stimulus and background on the chromatic mean tuning curves of
4 neurons. The thick dark and light lines show mean responses in the isoluminant plane
(x axis: L-M cone variation; y axis: S cone variation) for the two background conditions.
Black: gray background; Light: colored background. The 8 boxes around each tuning
curve shows the average response rate as a function of the time from stimulus onset for the
two background conditions.
Figure 3: Predicted (lines) and obtained (dots) spike count histograms for a typical neuron.
The horizontal axis represents spike counts in a 100 ms window. The vertical axis represents probabilities. Each row represents a different background condition. Each column
represents a different stimulus condition.
After fitting the new model, we performed a likelihood-ratio test. 80 % of the neurons
).
showed significant deviations from this model (chi-square, 63 df,
3 Relation to Tuning Curve Separability
In neuroscience separability is commonly studied in the context of mean tuning curves. For
example, a tuning curve is called (multiplicatively) separable if the conditional expected
value of a neuron?s response can be decomposed as the product of two different factors each
selectively influenced by a single stimulus dimension. An important aspect of the MortonMassaro model is that it applies to entire response histograms, not to expected values. If the
Morton-Massaro model holds, then separability appears in the following sense: If we are
allowed to see the response histograms for all the stimuli in background condition A and
the response histogram for a reference stimulus in background condition B, then it should
be possible to predict the response histograms for any stimulus in background condition B.
For example, by looking at the top row of Figure 3 and one of the cells of the bottom row of Figure 3, it should be possible to reproduce all the other cells in the bottom row.
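This prediction follows directly from the factorial form: the context enters every stimulus histogram through the same per-bin multiplicative ratio, so one reference stimulus in the new background fixes that ratio. A sketch of this consequence of the model (not the authors' code):

```python
import numpy as np

def predict_other_background(p_a, p_b_ref, ref=0):
    """Predict background-B histograms from background-A histograms plus
    one reference stimulus measured in background B (illustrative sketch).

    p_a:     (I, K) histograms P_A(k | i) for every stimulus i in A.
    p_b_ref: (K,)   histogram  P_B(k | ref) for the reference stimulus in B.
    Under the factorial model P(k | i, j) ~ s_ik * c_jk, so
        P_B(k | i)  is proportional to  P_A(k | i) * P_B(k | ref) / P_A(k | ref).
    """
    ratio = p_b_ref / p_a[ref]
    pred = p_a * ratio
    return pred / pred.sum(axis=-1, keepdims=True)
```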
Obviously if we can predict response histograms then we can also predict tuning curves,
since they are based on averages of response histograms. Most importantly, there are forms
of separability of the tuning curve that become only apparent when studying the entire
response histogram. Figure 4 illustrates this fact with an example. The curve shows the
tuning curves of a particular neuron from an experiment fitted using the Morton-Massaro
model. These curves were obtained by fitting the entire spike count histograms for each
stimulus and background condition, and then obtaining the mean response for the predicted
histograms. The large open circles represent the obtained average responses. The dots
represent 95 % confidence intervals around those responses. Note that the two tuning
curves do not appear separable in a discernable way (it is not possible to predict curve B by
looking at curve A and a single point of curve B). Separability becomes only apparent when
the entire histogram is analyzed, not just the tuning curves based on response averages.
Figure 4: Tuning curves for a typical neuron as predicted by the Morton-Massaro model.
The two curves represent the average response of the neuron to isoluminant stimuli, for
two different background conditions. The elongated curve corresponds to the homogeneous
gray background and the circular curve to the colored background. The open dots are the
obtained mean responses. The dots represent 95 % confidence intervals of those responses.
Note that the predicted curves do not appear separable in a classic sense. However, since they are generated by Morton's model, the underlying code is factorial. This becomes apparent
4 Discussion
We introduced the notion of Morton-style factorial coding and illustrated how it may help
analyze information integration and perceptual organization in the brain. We showed that
by focusing on average responses one may miss the existence of factorial coding mechanisms that become only apparent when analyzing spike count histograms. The results of
our study suggest that V1 represents color using a Morton-style factorial code. This may
provide some cues to help understand perceptual coding in the brain and to develop new
unsupervised learning algorithms. While methods like ICA (Bell & Sejnowski, 1997) develop independent codes, in Morton-style coding the goal is to make two or more external
aspects of the world become independent when conditioning on internal representations.
Morton-style coding is optimal when the statistics of stimulus and background exhibit a
particular property: when conditioning on each possible response category (i.e., spike
counts) the empirical likelihood ratios of stimulus and background factorize. Our study
suggests that Morton coding of color in natural scenes should be optimal or approximately
optimal, a prediction that can be tested via statistical analysis of color in natural scenes.
Acknowledgments
This project was supported by NSF's grant ITR IIS-0223052.
5 References
Bell, A., & Sejnowski, T. (1997). The "independent components" of natural scenes are edge filters. Vision Research, 37(23), 3327–3338.
Brown, R. O., & MacLeod, D. I. A. (1997). Color appearance depends on the variance of surround colors. Current Biology, (7), 844–849.
Derrington, A. M., Krauskopf, J., & Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 241–265.
Domingos, P., & Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Journal of Machine Learning, 29, 103–130.
Massaro, D. W. (1987a). Categorical perception: A fuzzy logical model of categorization behavior. In S. Harnad (Ed.), Categorical perception. Cambridge, England: Cambridge University Press.
Massaro, D. W. (1987b). Speech perception by ear and eye: A paradigm for psychological research. Hillsdale, NJ: Erlbaum.
Massaro, D. W. (1989a). Perceiving talking faces. Cambridge, Massachusetts: MIT Press.
Massaro, D. W. (1989b). Testing between the TRACE model and the fuzzy logical model of speech perception. Cognitive Psychology, 21, 398–421.
Morton, J. (1969). The interaction of information in word recognition. Psychological Review, 76, 165–178.
Movellan, J. R., & McClelland, J. L. (2001). The Morton-Massaro law of information integration: Implications for models of perception. Psychological Review, (1), 113–148.
Stockman, A., MacLeod, D. I. A., & Johnson, N. E. (1993). Spectral sensitivities of the human cones. Journal of the Optical Society of America A, (10), 2491–2521.
Wachtler, T., Sejnowski, T. J., & Albright, T. D. (2003). Representation of color stimuli in awake macaque primary visual cortex. Neuron, 37, 1–20.
Wesner, M. F., & Shevell, S. K. (1992). Color perception within a chromatic context: Changes in red/green equilibria caused by noncontiguous light. Vision Research, (32), 1623–1634.
Zeki, S. (1983). Colour coding in cerebral cortex: the responses of wavelength selective and colour-coded cells in monkey visual cortex to changes in wavelength composition. Neuroscience, 9, 767–781.
Effects of Firing Synchrony on Signal
Propagation in Layered Networks
G. T. Kenyon,l E. E. Fetz,2 R. D. Puffl
1 Department
of Physics FM-15, 2Department of Physiology and Biophysics SJ-40
University of Washington, Seattle, Wa. 98195
ABSTRACT
Spiking neurons which integrate to threshold and fire were used
to study the transmission of frequency modulated (FM) signals
through layered networks. Firing correlations between cells in the
input layer were found to modulate the transmission of FM signals under certain dynamical conditions. A tonic level of activity
was maintained by providing each cell with a source of Poissondistributed synaptic input. When the average membrane depolarization produced by the synaptic input was sufficiently below
threshold, the firing correlations between cells in the input layer
could greatly amplify the signal present in subsequent layers. When
the depolarization was sufficiently close to threshold, however, the
firing synchrony between cells in the initial layers could no longer
effect the propagation of FM signals. In this latter case, integrateand-fire neurons could be effectively modeled by simpler analog
elements governed by a linear input-output relation.
1
Introduction
Physiologists have long recognized that neurons may code information in their instantaneous firing rates. Analog neuron models have been proposed which assume
that a single function (usually identified with the firing rate) is sufficient to characterize the output state of a cell. We investigate whether biological neurons may
use firing correlations as an additional method of coding information. Specifically,
we use computer simulations of integrate-and-fire neurons to examine how various
levels of synchronous firing activity affect the transmission of frequency-modulated
Kenyon, Fetz and Puff
(FM) signals through layered networks. Our principal observation is that for certain
dynamical modes of activity, a sufficient level of firing synchrony can considerably
amplify the conduction of FM signals. This work is partly motivated by recent
experimental results obtained from primary visual cortex [1, 2] which report the
existence of synchronized stimulus-evoked oscillations (SEO's) between populations
of cells whose receptive fields share some attribute.
2
Description of Simulation
For these simulations we used integrate-and-fire neurons as a reasonable compromise between biological accuracy and mathematical convenience. The subthreshold
membrane potential of each cell is governed by an over-damped second-order differential equation with source terms to account for synaptic input:
(1)
where φ_k is the membrane potential of cell k, N is the number of cells, T_kj is the synaptic weight from cell j to cell k, t_j are the firing times for the jth cell, T_p is the synaptic weight of the Poisson-distributed input source, P_k are the firing times of the Poisson-distributed input, and τ_r and τ_d are the rise and decay times of the EPSP.
The Poisson-distributed input represents the synaptic drive from a large presynaptic
population of neurons.
Equation 1 is augmented by a threshold firing condition
φ_k(t) ≥ θ(t − t_k),    (2)

then cell k fires at time t. Here θ(t − t_k) is the threshold of the kth cell, and τ_a is the absolute refractory period. If condition (2) does not hold, then φ_k continues to be governed by
equation 1.
The threshold is ∞ during the absolute refractory period and decays exponentially during the relative refractory period:

θ(t − t_k) = ∞,                                  if t − t_k < τ_a;
θ(t − t_k) = θ_p e^{−(t − t_k)/τ_p} + θ_0,       otherwise,    (3)

where θ_0 is the resting threshold value, θ_p is the maximum increase of θ during the relative refractory period, and τ_p is the time constant characterizing the relative refractory period.
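The threshold mechanism of eqs. (2)–(3) can be illustrated with a small simulation. The sketch below is a simplification rather than the paper's model: the membrane is reduced to a first-order leaky integrator (the paper uses an over-damped second-order equation), each EPSP is an instantaneous jump of (1/100)θ_0, and the parameter names (tau_m, dt, T) are our own assumptions.

```python
import numpy as np

def simulate_if_cell(spike_inputs, T_syn=0.01, tau_m=5.0, theta0=10.0,
                     theta_p=1.0, tau_p=1.0, tau_a=1.0, dt=0.01, T=100.0):
    """Integrate-to-threshold cell with the refractory threshold of eq. (3):
    infinite for t - t_k < tau_a, then theta_p * exp(-(t - t_k)/tau_p) + theta0.
    Returns the list of output spike times (msec)."""
    n = int(T / dt)
    drive = np.zeros(n)
    for t_in in spike_inputs:                  # each input adds an EPSP jump
        idx = int(t_in / dt)
        if idx < n:
            drive[idx] += T_syn * theta0       # EPSP amplitude (1/100) theta0
    phi, last_spike, out = 0.0, -np.inf, []
    for i in range(n):
        t = i * dt
        phi += dt * (-phi / tau_m) + drive[i]  # leaky integration
        if t - last_spike < tau_a:             # absolute refractory period
            continue
        theta = theta_p * np.exp(-(t - last_spike) / tau_p) + theta0
        if phi >= theta:
            out.append(t)
            last_spike = t
            phi = -1.0                         # small after-hyperpolarization
    return out
```

Driving the cell with a sufficiently dense input train produces tonic firing, with interspike intervals bounded below by the absolute refractory period.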
2.1
Simulation Parameters
τ_r and τ_d are set to 0.2 msec and 1 msec, respectively. T_p and T_kj are always (1/100)θ_0. This strength was chosen as typical of synapses in the CNS. To sustain
Figure 1: Example membrane potential trajectories for two different modes of
activity. EPSP's arrive at mean frequency, LIm, that is higher for mode I (a) than for
mode II (b). Dotted line below threshold indicates asymptotic membrane potential.
activity, during each interval τ_d, a cell must receive ≈ (θ_0/T_p) = 100 Poisson-distributed inputs. Resting potential is set to 0.0 mV and θ_0 to 10 mV. After firing, φ and dφ/dt are set to 0.0 mV and −1.0 mV/msec, which simulates a small hyperpolarization. τ_a and τ_p were each set to 1 msec, and θ_p to 1.0 mV.
3
Response Properties of Single Cells
Figure 1 illustrates membrane potential trajectories for two modes of activity. In
mode I (fig. 1a), synaptic input drives the membrane potential to an asymptotic value (dotted line) within one standard deviation of θ_0. In mode II (fig. 1b), the asymptotic membrane potential is more than one standard deviation below θ_0.
Figure 2 illustrates the change in average firing rate produced by an EPSP, as
measured by a cross-correlation histogram (CCH) between the Poisson source and
the target cell. In mode I (fig. 2a), the CCH is characterized by a primary peak
followed by a period of reduced activity. The derivative of the EPSP, when measured in units of θ_0, approximates the peak magnitude of the CCH. In mode II (fig. 2b), the CCH peak is not followed by a period of reduced activity. The EPSP itself, measured in units of θ_0 and divided by τ_d, predicts the peak magnitude of the CCH. The transform between the EPSP and the resulting change in firing rate
has been discussed by several authors [3, 4]. Figures 2c and 2d show the cumulative area (CUSUM) between the CCH and the baseline firing rate. The CUSUM
asymptotes to a finite value, Δ, which can be interpreted as the average number of additional firings produced by the EPSP.

Δ increases with EPSP amplitude in a manner which depends on the mode of
activity (fig. 2e). In mode II, the response is amplified for large inputs (concave
up). In mode I, the response curve is concave down. The amplified response to large
inputs during mode II activity is understandable in terms of the threshold crossing
mechanism. Populations of such cells should respond preferentially to synchronous
synaptic input [5].
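The CCH and CUSUM measures used in this section can be computed directly from spike-time lists. The sketch below is a generic implementation under our own illustrative parameter choices (window and bin width are not taken from the paper):

```python
import numpy as np

def cross_correlogram(source, target, window=6.0, bin_width=0.2):
    """Cross-correlation histogram (CCH): rate of target spikes as a
    function of time since each source spike, in spikes/msec."""
    edges = np.arange(0.0, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    target = np.asarray(target)
    for s in source:
        rel = target[(target >= s) & (target < s + window)] - s
        counts += np.histogram(rel, bins=edges)[0]
    rate = counts / (len(source) * bin_width)
    return edges[:-1] + bin_width / 2.0, rate

def cusum(rate, baseline, bin_width=0.2):
    """Cumulative area between the CCH and the baseline firing rate; its
    asymptote estimates the mean number of extra firings per input EPSP."""
    return np.cumsum((rate - baseline) * bin_width)
```

With a quiet baseline, the final CUSUM value counts the average number of target spikes evoked per source event, playing the role of Δ above.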
Figure 2: Response to EPSP for two different modes of activity. a) and b)
Cross-correlogram with Poisson input source. Mode I and mode II respectively.
c) and d) CUSUM computed from a) and b). e) Δ vs. EPSP amplitude for both
modes of activity.
4
Analog Neuron Models
The histograms shown in Figures 2a,b may be used to compute the impulse response
kernel, U, for a cell in either of the two modes of activity, simply by subtracting the
baseline firing rate and normalizing to a unit impulse strength. If the cell behaves
as a linear system in response to a small impulse, U may be used to compute the
response of the cell to any time-varying input. In terms of U, the change in firing
rate, δF, produced by an external source of Poisson-distributed impulses arriving with an instantaneous frequency F_t(t) is given by

δF(t) = T_t ∫ U(t − t′) F_t(t′) dt′,    (4)

where T_t is the amplitude of the incoming EPSP's. For the layered network used in
our simulations, equation 4 may be generalized to yield an iterative relation giving
the signal in one layer in terms of the signal in the previous layer:

δF_{i+1}(t) = N T_{i+1,i} ∫ U(t − t′) δF_i(t′) dt′.    (5)
Figure 3: Signal propagation in mode I network. a) Response in first three layers
due to a single impulse delivered simultaneously to all cells in the first layer. Ratio
of common to independent input given by percentages at top of figure. First row
corresponds to input layer. Firing synchrony does not affect signal propagation through mode I cells. Prediction of the analog neuron model (solid line) gives a good description of signal propagation at all synchrony levels tested. b) Synchrony between cells in the same layer measured by the MCH. Firing synchrony within a layer
increases with layer depth for all initial values of the synchrony in the first layer.
where δF_i is the change in instantaneous firing rate for cells in the ith layer, T_{i+1,i} is the synaptic weight between layer i and i + 1, and N is the number of cells per layer. Equation 5 follows from an equivalent analog neuron model with a linear input-output relation. This convolution method has been proposed previously [6].
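A minimal sketch of this convolution model follows. The impulse-response kernel U below is a toy difference of exponentials normalized to unit impulse strength, standing in for the kernel measured from the CCH; the weight is chosen so that the product of weight and cell count gives a layer gain of one, which is an assumption for illustration rather than a value from the paper.

```python
import numpy as np

def layer_response(delta_f, kernel, weight, n_cells, dt):
    """delta_F_{i+1} = N * T * (U convolved with delta_F_i), cf. eq. (5)."""
    return n_cells * weight * np.convolve(delta_f, kernel)[: len(delta_f)] * dt

dt = 0.1                                     # msec
t = np.arange(0.0, 10.0, dt)
U = np.exp(-t / 1.0) - np.exp(-t / 0.2)      # toy impulse-response kernel
U /= U.sum() * dt                            # normalize to unit impulse strength

signal = np.zeros_like(t)
signal[0] = 1.0 / dt                         # unit impulse in the input layer

layers = [signal]
for _ in range(3):                           # propagate through three layers
    layers.append(layer_response(layers[-1], U, weight=0.02, n_cells=50, dt=dt))
```

With unit layer gain, each successive layer preserves the integrated area of the signal while smearing it in time, which is the linear behavior the analog model predicts for asynchronous input.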
5
Effects of Firing Synchrony on Signal Propagation
A layered network was designed such that the cells in the first layer receive impulses
from both common and independent sources. The ratio of the two inputs was
adjusted to control the degree of firing synchrony between cells in the initial layer.
Each cell in a given layer projects to all the cells in the succeeding layer with equal strength, (1/100)θ_0. All simulations use 50 cells per layer.

Figure 3a shows the response of cells in the mode I state to a single impulse of strength (1/100)θ_0 delivered simultaneously to all the cells in the first layer. In this and
all subsequent figures, successive layers are shown from top to bottom and synchrony
(defined as the fraction of common input for cells in the first layer) increases from
Figure 4: Signal propagation in mode II network. Same organization as fig. 3.
a) At initial levels of synchrony above ≈ 30%, signal propagation is amplified significantly. The propagation of relatively asynchronous signals is still adequately described by the analog neuron model. b) Firing synchrony within a layer increases with layer depth for initial synchrony levels above ≈ 30%. Below this level, synchrony within a layer decreases with layer depth.
left to right. Figure 3a shows that signals propagate through layers of interneurons
with little dependence on firing synchrony. The solid line is the prediction from an
equivalent analog neuron model with a linear input-output relation (eq. 5). At all
levels of input synchrony, signal propagation is reasonably well approximated by
the simplified model.
Firing synchrony between cells in the same layer may be measured using a mass
correlogram (MCH). The MCH is defined as the auto-correlation of the population
spike record, which combines the individual spike records of all cells in a given layer.
Figure 3b shows that for all initial levels of synchrony produced in the input layer,
the intra-layer firing synchrony increased rapidly with layer depth.
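The MCH defined above can be sketched as follows; the window and bin width are our own illustrative choices, not values from the paper.

```python
import numpy as np

def mass_correlogram(spike_trains, window=4.0, bin_width=0.2):
    """MCH: auto-correlation histogram of the combined population spike
    record. spike_trains is a list of per-cell spike-time arrays (msec)."""
    pop = np.sort(np.concatenate([np.asarray(tr, dtype=float)
                                  for tr in spike_trains]))
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for i, s in enumerate(pop):
        lo = np.searchsorted(pop, s - window)
        hi = np.searchsorted(pop, s + window)
        rel = np.delete(pop[lo:hi], i - lo) - s   # drop the spike itself
        counts += np.histogram(rel, bins=edges)[0]
    return edges[:-1] + bin_width / 2.0, counts
```

For nearly synchronous trains the histogram mass concentrates around zero lag, which is how the intra-layer synchrony in fig. 3b would register.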
The simulations were repeated using an identical network, but with the tonic level
of input reduced sufficiently to fix the cells in the mode II state (fig. 4). In contrast
with the mode I case, the effect of firing synchrony is substantial. When firing is
asynchronous only a weak impulse response is present in the third layer (fig. 4a,
bottom left), as predicted by the analog neuron model (eq. 5). For levels of input
synchrony above ~ 30%, however, the response in the third layer is substantially
more prominent. A similar effect occurs for synchrony within a layer. At input
Figure 5: Propagation of sinusoidal signals . Similar organization to figs. 3,4. Top
row shows modulation of input sources. a) Mode I activity. Signal propagation is
not significantly influenced by the level of firing synchrony. Analog neuron model
(solid line) gives reasonable prediction of signal tranmission. b) Mode II activity.
At initial levels of firing synchrony above ≈ 30%, signal propagation is amplified.
The propagation of asynchronous signals is still well described by the analog neuron
model. Period of applied oscillation = 10 msec.
synchrony levels below ≈ 30%, firing synchrony between cells in the same layer
(fig. 4b) falls off in successive layers. Above this level, however, synchrony grows
rapidly from layer to layer.
To confirm that our results are not limited to the propagation of signals generated
by a single impulse, oscillatory signals were produced by sinusoidally modulating
the firing rates of both the common and independent input sources to the first layer
(fig. 5). In the mode I state (fig. 5a), we again find that firing synchrony does
not significantly alter the degree of signal penetration. The solid line shows that
signal transmission is adequately described by the simplified model (eqs. 4,5). In
the mode II case, however, firing synchrony is seen to have an amplifying effect
on sinusoidal signals as well (fig. 5b). Although the propagation of asynchronous
signals is well described by the analog neuron model, at higher levels of synchrony
propagation is enhanced.
6
Discussion
It is widely accepted that biological neurons code information in their spike density or firing rate. The degree to which the firing correlations between neurons can
code additional information by modulating the transmission of FM signals, depends
strongly on dynamical factors. We have shown that for cells whose average membrane potential is sufficiently below the threshold for firing, spike correlations can
significantly enhance the transmission of FM signals. We have also shown that the
propagation of asynchronous signals is well described by analog neuron models with
linear transforms. These results may be useful for understanding the role played by
synchronized SEO's in primary visual cortex [1, 2]. Such signals may be propagated
more effectively to subsequent processing areas as a consequence of their relative
synchronization.
These observations may also pertain to the neural mechanisms underlying the increased levels of synchronous discharge of cerebral cortex cells observed in slow wave
sleep [7]. Another relevant phenomenon is the spread of synchronous discharge from
an epileptic focus; the extent to which synchronous activity is propagated through
surrounding areas may be modulated by changing their level of activation through
voluntary effort or changing levels of arousal. These physiological phenomena may
involve mechanisms similar to those exhibited by our network model.
Acknowledgements
This work is supported by an NIH pre-doctoral training grant in molecular biophysics (grant # T32-GM 08268) and by the Office of Naval Research (contract
# N 00018-89-J-1240).
References
[1] C. M. Gray, P. Konig, A. K. Engel, W. Singer, Nature 338:334-337 (1989)
[2] R. Eckhorn, R. Bauer, W. Jordan, M. Brosch, W. Kruse, H. J. Reitboeck, Bio. Cyber. 60:121-130 (1988)
[3] E. E. Fetz, B. Gustafsson, J. Physiol. 341:387-410 (1983)
[4] P. A. Kirkwood, J. Neurosci. Meth. 1:107-132 (1979)
[5] M. Abeles, Local Cortical Circuits: Studies of Brain Function. Springer, New York, Vol. 6 (1982)
[6] E. E. Fetz, Neural Information Processing Systems. American Institute of Physics (1988)
[7] H. Noda, W. R. Adey, J. Neurophysiol. 23:672-684 (1970)
Transductive and Inductive Methods for
Approximate Gaussian Process Regression
Anton Schwaighofer¹,²
¹ TU Graz, Institute for Theoretical Computer Science
Inffeldgasse 16b, 8010 Graz, Austria
http://www.igi.tugraz.at/aschwaig

Volker Tresp²
² Siemens Corporate Technology CT IC4
Otto-Hahn-Ring 6, 81739 Munich, Germany
http://www.tresp.org
Abstract
Gaussian process regression allows a simple analytical treatment of exact Bayesian inference and has been found to provide good performance,
yet scales badly with the number of training data. In this paper we compare several approaches towards scaling Gaussian processes regression
to large data sets: the subset of representers method, the reduced rank
approximation, online Gaussian processes, and the Bayesian committee machine. Furthermore we provide theoretical insight into some of
our experimental results. We found that subset of representers methods
can give good and particularly fast predictions for data sets with high
and medium noise levels. On complex low noise data sets, the Bayesian
committee machine achieves significantly better accuracy, yet at a higher
computational cost.
1 Introduction
Gaussian process regression (GPR) has demonstrated excellent performance in a number
of applications. One unpleasant aspect of GPR is its scaling behavior with the size of the
training data set N. In direct implementations, training time increases as O(N³), with a memory footprint of O(N²). The subset of representers method (SRM), the reduced rank
approximation (RRA), online Gaussian processes (OGP) and the Bayesian committee machine (BCM) are approaches to solving the scaling problems based on a finite dimensional
approximation to the typically infinite dimensional Gaussian process.
The focus of this paper is on providing a unifying view on the methods and analyze their
differences, both from an experimental and a theoretical point of view. For all of the discussed methods, we also examine asymptotic and actual runtime and investigate the accuracy versus speed trade-off. A major difference of the methods discussed here is that
the BCM performs transductive learning, whereas RRA, SRM and OGP methods perform
induction style learning. By transduction 1 we mean that a particular method computes a
test set dependent model, i.e. it exploits knowledge about the location of the test data in its
approximation. As a consequence, the BCM approximation is calculated when the inputs
to the test data are known. In contrast, inductive methods (RRA, OGP, SRM) build a model
solely on basis of information from the training data.
In Sec. 1.1 we will briefly introduce Gaussian process regression (GPR). Sec. 2 presents the
various inductive approaches to scaling GPR to large data, Sec. 3 follows with transductive
approaches. In Sec. 4 we give an experimental comparison of all methods and an analysis
of the results. Conclusions are given in Sec. 5.
1.1 Gaussian Process Regression
We consider Gaussian process regression (GPR) on a set of training data D = {(x_i, y_i)}_{i=1}^N, where targets are generated from an unknown function f via y_i = f(x_i) + e_i, with independent Gaussian noise e_i of variance σ². We assume a Gaussian process prior on f(x_i), meaning that functional values f(x_i) on points {x_i}_{i=1}^N are jointly Gaussian distributed, with zero mean and covariance matrix (or Gram matrix) K^N. K^N itself is given by the kernel (or covariance) function k(·, ·), with K^N_{ij} = k(x_i, x_j). The Bayes optimal estimator f̂(x) = E(f(x) | D) takes on the form of a weighted combination of kernel functions [4] on training points x_i:

f̂(x) = Σ_{i=1}^N w_i k(x, x_i).    (1)
The weight vector w = (w_1, …, w_N)′ is the solution to the system of linear equations

(K^N + σ² 1) w = y,    (2)

where 1 denotes the unit matrix and y = (y_1, …, y_N)′. Mean and covariance of the GP prediction f* on a set of test points x*_1, …, x*_T can be written conveniently as

E(f* | D) = K^{*N} w   and   cov(f* | D) = K^{**} − K^{*N} (K^N + σ² 1)^{−1} (K^{*N})′,    (3)

with K^{**}_{ij} = k(x*_i, x*_j), where K^{*N} denotes the matrix of kernel evaluations between test and training points. Eq. (2) shows clearly what problem we may expect with large training data sets: the solution to a system of N linear equations requires O(N³) operations, and the size of the Gram matrix K^N may easily exceed the memory capacity of an average workstation.
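The exact predictor of eqs. (1)–(3) can be sketched in a few lines. The squared exponential kernel and its parameters here are illustrative assumptions; the cubic cost of the linear solve is the scaling problem the rest of the paper addresses.

```python
import numpy as np

def rbf(X1, X2, d=1.0):
    """Squared exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 d^2))."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * d * d))

def gpr_predict(X, y, Xstar, noise=0.1, d=1.0):
    """Exact GP regression following eqs. (2)-(3):
    O(N^3) time and O(N^2) memory in the number of training points."""
    KN = rbf(X, X, d)
    A = KN + noise * np.eye(len(X))          # K^N + sigma^2 * 1
    w = np.linalg.solve(A, y)                # eq. (2)
    KsN = rbf(Xstar, X, d)                   # test-training kernel matrix
    mean = KsN @ w                           # predictive mean, eq. (3)
    cov = rbf(Xstar, Xstar, d) - KsN @ np.linalg.solve(A, KsN.T)  # eq. (3)
    return mean, cov
```

With a small noise level, predicting at the training inputs approximately reproduces the training targets, as the interpolating behavior of eq. (3) suggests.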
2 Inductive Methods for Approximate GPR
2.1 Reduced Rank Approximation (RRA)
Reduced rank approximations focus on ways of efficiently solving the system of linear
equations Eq. (2), by replacing the kernel matrix K^N with some approximation K̃^N. Williams and Seeger [12] use the Nyström method to calculate an approximation to the first B eigenvalues and eigenvectors of K^N. Essentially, the Nyström method performs an eigendecomposition of the B × B covariance matrix K^B, obtained from a set of B basis points selected at random out of the training data. Based on the eigendecomposition of K^B,
¹ Originally, the differences between transductive and inductive learning were pointed out in statistical learning theory [10]. Inductive methods minimize the expected loss over all possible test sets, whereas transductive methods minimize the expected loss for one particular test set.
one can compute approximate eigenvalues and eigenvectors of K^N. In a special case, this reduces to

K^N ≈ K̃^N = K^{NB} (K^B)^{−1} (K^{NB})′,    (4)
where K^B is the kernel matrix for the set of basis points, and K^{NB} is the matrix of kernel evaluations between training and basis points. Subsequently, this can be used to obtain an approximate solution w̃ of Eq. (1) via the matrix inversion lemma in O(NB²) instead of O(N³).
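A sketch of this reduced rank solve follows. The random basis selection and the kernel are our own illustrative choices; the Woodbury identity turns the N × N solve into a B × B one.

```python
import numpy as np

def rbf(X1, X2, d=1.0):
    return np.exp(-(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
                  / (2.0 * d * d))

def rra_weights(X, y, B, noise, kernel):
    """Weights of eq. (1) with K^N replaced by the Nystrom approximation
    K^{NB} (K^B)^{-1} (K^{NB})' of eq. (4); the matrix inversion lemma
    reduces the cost from O(N^3) to O(N B^2)."""
    rng = np.random.default_rng(0)           # fixed seed for reproducibility
    idx = rng.choice(len(X), size=B, replace=False)
    KNB = kernel(X, X[idx])                  # N x B
    KB = kernel(X[idx], X[idx])              # B x B
    # (K_tilde^N + noise * I)^{-1} y via the Woodbury identity:
    inner = noise * KB + KNB.T @ KNB         # only a B x B system
    return (y - KNB @ np.linalg.solve(inner, KNB.T @ y)) / noise
```

When B = N the approximation is exact, so the reduced rank weights coincide with the direct solution of eq. (2); for B < N they only approximate it.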
2.2 Subset of Representers Method (SRM)
Subset of representers methods replace Eq. (1) by a linear combination of kernel functions
on a set of B basis points, leading to an approximate predictor
f̂(x) = Σ_{i=1}^B β_i k(x, x_i),    (5)

with an optimal weight vector

β = (σ² K^B + (K^{NB})′ K^{NB})^{−1} (K^{NB})′ y.    (6)

Note that Eq. (5) becomes exact if the kernel function allows a decomposition of the form k(x_i, x_j) = K^{iB} (K^B)^{−1} (K^{jB})′.
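Eqs. (5)–(6) can be sketched directly; the kernel and basis indices below are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, d=1.0):
    return np.exp(-(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
                  / (2.0 * d * d))

def srm_predict(X, y, Xstar, basis_idx, noise, kernel):
    """Subset-of-representers prediction: the weight vector of eq. (6)
    plugged into the reduced expansion of eq. (5)."""
    Xb = X[basis_idx]
    KB = kernel(Xb, Xb)                      # B x B
    KNB = kernel(X, Xb)                      # N x B
    beta = np.linalg.solve(noise * KB + KNB.T @ KNB, KNB.T @ y)  # eq. (6)
    return kernel(Xstar, Xb) @ beta          # eq. (5)
```

A useful sanity check: with all training points as basis, eq. (6) algebraically reduces to the exact GPR mean of eq. (3).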
In practical implementation, one may expect different performance depending on the
choice of the B basis points x 1 xB . Different approaches for basis selection have been
used in literature, we will discuss them in turn.
Obviously, one may select the basis points at random (SRM Random) out of the training
set. While this produces no computational overhead, the prediction outcome may be suboptimal.
In the sparse greedy matrix approximation (SRM SGMA, [6]) a subset of B basis kernel
functions is selected such that all kernel functions on the training data can be well approximated by linear combinations of the selected basis kernels 2 . If proximity in the associated
reproducing kernel Hilbert space (RKHS) is chosen as the approximation criterion, the optimal linear combination (for a given basis set) can be computed analytically. Smola and
Schölkopf [6] introduce a greedy algorithm that finds a near optimal set of basis functions, where the algorithm has the same asymptotic complexity O(NB²) as the SRM Random method.
Whereas the SGMA basis selection focuses only on the representation power of kernel
functions, one can also design a basis selection scheme that takes into account the full
likelihood model of the Gaussian process. The underlying idea of the greedy posterior
approximation algorithm (SRM PostApp, [7]) is to compare the log posterior of the subset
of representers method and the full Gaussian process log posterior. One thus can select
basis functions in such a fashion that the SRM log posterior best approximates 3 the full
GP log posterior, while keeping the total number of basis functions B minimal. As for the
case of SGMA, this algorithm can be formulated such that its asymptotic computational
complexity is O(NB²), where B is the total number of basis functions selected.
2.3 Online Gaussian Processes
Csat?o and Opper [2] present an online learning scheme that focuses on a sparse model of
the posterior process that arises from combining a Gaussian process prior with a general
² This method was not developed particularly for GPR, yet we expect this basis selection scheme to be superior to a purely random choice.
³ However, Rasmussen [5] noted that Smola and Bartlett [7] falsely assume that the additive constant terms in the log likelihood remain constant during basis selection.
likelihood model of data. The posterior process is assumed to be Gaussian and is modeled
by a set of basis vectors. Upon arrival of a new data point, the updated (possibly nonGaussian) posterior process is being projected to the closest (in a KL-divergence sense)
Gaussian posterior. If this projection induces an error above a certain threshold, the newly
arrived data point will be included in the set of basis vectors. Similarly, basis vectors with
minimum contribution to the posterior process may be removed from the basis set.
3 Transductive Methods for Approximate GPR
In order to derive a transductive kernel classifier, we rewrite the Bayes optimal prediction
Eq. (3) as follows:
E(f*|D) = K^{**} ( K^{**} + K^{*N} cov(y|f*)^{−1} (K^{*N})′ )^{−1} K^{*N} cov(y|f*)^{−1} y    (7)
Here, cov(y|f*) is the covariance obtained when predicting training observations y given the functional values f* at the test points:

cov(y|f*) = K^N + σ² 1 − (K^{*N})′ (K^{**})^{−1} K^{*N}.    (8)
Mind that this matrix can be written down without actual knowledge of f*.
Examining Eq. (7) reveals that the Bayes optimal prediction of Eq. (3) can be expressed as a weighted sum of kernel functions on test points. In Eq. (7), the term cov(y|f*)^{−1} y gives a weighting of training observations y: training points which cannot be predicted well from the functional values of the test points are given a lower weight. Data points which are "closer" to the test points (in the sense that they can be predicted better) obtain a higher weight than data which are remote from the test points.
Eq. (7) still involves the inversion of the N × N matrix cov(y|f*) and thus does not make a practical method. By using different approximations for cov(y|f*), we obtain different transductive methods, which we shall discuss in the next sections.
Note that in a Bayesian framework, transductive and inductive methods are equivalent, if
we consider matching models (the true model for the data is in the family of models we
consider for learning). Large data sets reveal more of the structure of the true model, but for
computational reasons, we may have to limit ourselves to models with lower complexity.
In this case, transductive methods allow us to focus on the actual region of interest, i.e. we
can build models that are particularly accurate in the region where the test data lies.
3.1 Transductive SRM
For large sets of test data, we may assume cov(y|f*) to be a diagonal matrix, cov(y|f*) ≈ σ² 1, meaning that the test values f* allow a perfect prediction of training observations (up to noise). With this approximation, Eq. (7) reduces to the prediction of a subset of representers method (see Sec. 2.2) where the test points are used as the set of basis points (SRM Trans).
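The transductive SRM can be sketched as the SRM weight equation with the test points as basis; the kernel below is an illustrative assumption.

```python
import numpy as np

def rbf(X1, X2, d=1.0):
    return np.exp(-(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
                  / (2.0 * d * d))

def srm_transductive(X, y, Xstar, noise):
    """Transductive SRM: eq. (6) with the test points themselves as basis,
    i.e. cov(y|f*) approximated by the diagonal matrix sigma^2 * 1."""
    KB = rbf(Xstar, Xstar)                   # T x T, test points as basis
    KNB = rbf(X, Xstar)                      # N x T
    beta = np.linalg.solve(noise * KB + KNB.T @ KNB, KNB.T @ y)
    return KB @ beta                         # predictions at the test points
```

In the degenerate case where the test set equals the training set, the basis spans all training points and the prediction coincides with the exact GPR mean.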
3.2 Bayesian Committee Machine (BCM)
For a smaller number of test data, assuming a diagonal matrix for cov(y|f*) (as for the transductive SRM method) seems unreasonable. Instead, we can use the less stringent assumption of cov(y|f*) being block diagonal. After some matrix manipulations, we obtain the following approximation for Eq. (7) with block diagonal cov(y|f*):

Ê(f*|D) = C^{−1} Σ_{i=1}^M cov(f*|D^i)^{−1} E(f*|D^i)    (9)

C = cov(f*|D)^{−1} = −(M − 1) (K^{**})^{−1} + Σ_{i=1}^M cov(f*|D^i)^{−1}.    (10)
This is equivalent to the Bayesian committee machine (BCM) approach [8]. In the BCM,
the training data D are partitioned into M disjoint sets D 1 D M of approximately same
size (?modules?), and M GPR predictors are trained on these subsets. In the prediction
stage, the BCM calculates the unknown responses f* at a set of test points x*_1, …, x*_T at once. The prediction E(f*|D^i) of GPR module i is weighted by the inverse covariance of its
prediction. An intuitively appealing effect of this weighting scheme is that modules which
are uncertain about their predictions are automatically weighted less than modules that are
certain about their predictions.
Very good results were obtained with the BCM with random partitioning [8] into subsets D^i. The block diagonal approximation of cov(y|f*) becomes particularly accurate if each D^i contains data that is spatially separated from other training data. This can be achieved
by pre-processing the training data with a simple k-means clustering algorithm, resulting in
an often drastic reduction of the BCM?s error rates. In this article, we always use the BCM
with clustered data.
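The BCM combination of eqs. (9)–(10) can be sketched as follows. For simplicity the modules here are a sequential split of the data rather than the k-means clustering recommended above, the kernel is an illustrative assumption, and a small jitter term is added for numerical stability.

```python
import numpy as np

def rbf(X1, X2, d=1.0):
    return np.exp(-(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
                  / (2.0 * d * d))

def gp_posterior(X, y, Xstar, noise):
    """Mean and covariance of an exact GP trained on one module's data."""
    A = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xstar, X)
    mean = Ks @ np.linalg.solve(A, y)
    cov = rbf(Xstar, Xstar) - Ks @ np.linalg.solve(A, Ks.T)
    return mean, cov

def bcm_predict(X, y, Xstar, noise=0.1, n_modules=2):
    """BCM combination of eqs. (9)-(10): per-module GP predictions at the
    test points, weighted by their inverse posterior covariances."""
    T = len(Xstar)
    jitter = 1e-8 * np.eye(T)                # numerical stabilizer
    parts = np.array_split(np.arange(len(X)), n_modules)
    C = -(n_modules - 1) * np.linalg.inv(rbf(Xstar, Xstar) + jitter)
    weighted = np.zeros(T)
    for idx in parts:
        m, S = gp_posterior(X[idx], y[idx], Xstar, noise)
        S_inv = np.linalg.inv(S + jitter)
        C += S_inv                           # eq. (10)
        weighted += S_inv @ m
    return np.linalg.solve(C, weighted)      # eq. (9)
```

With a single module the weighting collapses and the BCM returns the exact GP prediction, which makes a convenient consistency check.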
4 Experimental Comparison
In this section we will present an evaluation of the different approximation methods discussed in Sec. 2 and 3 on four data sets. In the ABALONE data set [1] with 4177 examples,
the goal is to predict the age of abalones based on 8 inputs. The KIN8NM data set 4 represents the forward dynamics of an 8-link all-revolute robot arm, based on 8192 examples.
The goal is to predict the distance of the end-effector from a target, given the twist angles
of the 8 links as features. KIN40K represents the same task, yet has a lower noise level
than KIN8NM and contains 40 000 examples. Data set ART with 50000 examples was
used extensively in [8] and describes a nonlinear map with 5 inputs with a small amount of
additive Gaussian noise.
For all data sets, we used a squared exponential kernel of the form k(x_i, x_j) =
exp(-||x_i - x_j||^2 / (2d^2)), where the kernel parameter d was optimized individually for each
method. To allow a fair comparison, the subset selection methods SRM SGMA and SRM
PostApp were forced to select a given number B of basis functions (instead of using the
stopping criteria proposed by the authors of the respective methods). Thus, all methods
form their predictions as a linear combination of exactly B basis functions.
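For concreteness, the squared exponential kernel above can be written in a few lines. This is a sketch with our own naming, assuming the form k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2d^2)):

```python
import math

def sq_exp_kernel(x_i, x_j, d):
    """Squared exponential kernel k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 d^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-sq_dist / (2.0 * d * d))
```

A larger kernel width d makes distant inputs look more similar, which is why d has to be tuned per method: it directly affects how well a small set of B basis functions can cover the data.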
Table 1 shows the average remaining variance 5 in a 10-fold cross validation procedure on
all data sets. For each of the methods, we have run experiments with different kernel width
d. In Table 1 we list only the results obtained with optimal d for each method.
On the ABALONE data set (very high level of noise), all of the tested methods achieved
almost identical performance, both with B 200 and B 1000 basis functions. For all
other data sets, significant performance differences were observed. Out of the inductive
4 From the DELVE archive http://www.cs.toronto.edu/~delve/
5 Remaining variance = 100 * MSE_model / MSE_mean, where MSE_mean is the MSE obtained from using the mean of training targets as the prediction for all test data. This gives a measure of performance that is independent of data scaling.
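The performance measure of footnote 5 is easy to state in code. A minimal sketch (our own function name; here MSE_mean is computed from the given targets):

```python
def remaining_variance(y_true, y_pred):
    """Remaining variance in per cent: 100 * MSE_model / MSE_mean (footnote 5)."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    mse_model = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mse_mean = sum((t - mean_y) ** 2 for t in y_true) / n
    return 100.0 * mse_model / mse_mean
```

Predicting the target mean for every test point gives 100 per cent remaining variance; a perfect predictor gives 0, so lower is better throughout Table 1.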
                  Abalone         KIN8NM          KIN40K           ART
Method           200    1000     200    1000     200    1000     200    1000
SRM PostApp     42.81  42.81    13.79   7.84     9.49   2.36     3.91   1.12
SRM SGMA        42.83  42.81    21.84   8.70    18.32   4.25     5.62   1.79
SRM Random      42.86  42.82    22.34   9.01    18.77   4.39     5.87   1.79
RRA Nyström     42.98  41.10     N/A    N/A      N/A    N/A      N/A    N/A
Online GP       42.87   N/A     16.49   N/A     10.36   N/A      5.37   N/A
BCM             42.86  42.81    10.32   8.31     2.81   0.83     0.27   0.20
SRM Trans       42.93  42.79    21.95   9.79    16.47   4.25     5.15   1.64
Table 1: Remaining variance obtained with different GPR approximation methods on four
data sets, with different numbers of basis functions selected (200 or 1000). Remaining variance is given in per cent, averaged over 10-fold cross validation. Marked in bold are results that are significantly better (with a significance level of 99% or
above in a paired t-test) than any of the other methods.
methods (SRM SGMA, SRM Random, SRM PostApp, RRA Nyström), best performance was
always achieved with SRM PostApp. Using the results in a paired t-test showed that this
was significant at a level of 99% or above. Online Gaussian processes 6 typically performed
slightly worse than SRM PostApp. Furthermore, we observed certain problems with the
RRA Nyström method. On all but the ABALONE data set, the weights w~ took on values in the
range of 10^3 or above, leading to poor performance. For this reason, the results for RRA
Nyström were omitted from Table 1. Further comments on these problems will be given in
Sec. 4.2.
Comparing induction and transduction methods, we see that the BCM performs significantly better than any inductive method in most cases. Here, the average MSE obtained
with the BCM was only a fraction (25-30%) of the average MSE of the best inductive
method. By a paired t-test we confirmed that the BCM is significantly better than all other
methods on the KIN40K and ART data sets, with significance level of 99% or above. On
the KIN8NM data set (medium noise level) we observed a case where SRM PostApp performed best. We attribute this to the fact that k-means clustering was not able to find well
separated clusters. This reduces the performance of the BCM, since the block diagonal
approximation of Eq. (8) becomes less accurate (see Sec. 3.2). Note that all transductive
methods necessarily lose their advantage over inductive methods, when the allowed model
complexity (that is, the number of basis functions) is increased.
We further noticed that, on the KIN40K and ART data sets, SRM Trans consistently outperformed SRM Random, despite SRM Trans being the most simplistic transductive method.
The difference in performance was only small, yet significant at a level of 99%.
As mentioned above, we did not make use of the stopping criterion proposed for the SRM
PostApp method, namely the relative gap between SRM log posterior and the log posterior
of the full Gaussian process model. In [7], the authors suggest that the gap is indicative of
the generalization performance of the SRM model and use a gap of 2.5% in their experiments. In contrast, we did not observe any correlation between the gap and the generalization performance in our experiments. For example, selecting 200 basis points out of the
KIN40K data set gave a gap of 1%, indicating a good fit. As shown in Table 1, a significantly better error was achieved with 1000 basis functions (giving a gap of 3.5 × 10^-4).
Thus, it remains open how one can automatically choose an appropriate basis set size B.
6 Due to the numerically demanding approximations, runtime of the OGP method for B = 1000 is rather long. We thus only list results for B = 200 basis functions.
Method           Memory         Comp. cost:       Comp. cost:     Runtime
                 consumption    Initialization    Prediction      (KIN40K)
Exact GPR        O(N^2)         O(N^3)            O(N)            N/A
RRA Nyström      O(NB)          O(NB^2)           O(N)            4 min
SRM Random       O(NB)          O(NB^2)           O(B)            3 min
SRM Trans        O(NB)          O(NB^2)           O(B)            3 min
SRM SGMA         O(NB)          O(NB^2)           O(B)            7 h
SRM PostApp      O(NB)          O(NB^2)           O(B)            11 h
Online GP        O(B^2)         O(NB^2)           O(B)            est. 150 h
BCM              O(B^2)         O(NB^2)           O(NB)           30 min

Table 2: Memory consumption, asymptotic computational cost, and actual runtime for different GP approximation methods with N training data points and B basis points,
B ≪ N. For the BCM, we assume here that training and test data are partitioned
into modules of size B. Asymptotic costs for prediction show the cost per test
point. The actual runtime is given for the KIN40K data set, with 36000 training
examples, 4000 test patterns, and B = 1000 basis functions for each method.
4.1 Computational Cost
Table 2 shows the asymptotic computational cost for all approximation methods we have
described in Sec. 2 and 3. The subset of representers methods (SRM) show the most favorable cost for the prediction stage, since the resulting model consists only of B basis
functions with their associated weight vector. Table 2 also lists the actual runtime 7 for
one (out of 10) cross validation runs on the KIN40K data set. Here, methods with the same
asymptotic complexity exhibit runtimes ranging from 3 minutes to 150 hours. For the SRM
methods, most of this time is spent for basis selection (SRM PostApp and SRM SGMA). We
thus consider the slow basis selection as the bottleneck for SRM methods when working
with larger number of basis functions or larger data sets.
4.2 Problems with RRA Nyström
As mentioned in Sec. 4, we observed that the weights w~ in RRA Nyström take on values in
the range of 10^3 or above on the data sets KIN8NM, KIN40K and ART. This can be explained
by considering the perturbation of linear systems. RRA Nyström solves Eq. (2) with an
approximate K~_N instead of K_N, thus calculating an approximate w~ instead of the true w.
Using matrix perturbation theory, we can show that the relative error of the approximate w~
is bounded by

    ||w~ - w|| / ||w~|| ≤ max_i |λ_i - λ~_i| / (λ~_i + σ²),                   (11)

where λ_i and λ~_i denote the eigenvalues of K_N resp. K~_N. A closer look at the Nyström approximation [11] revealed that already for moderately complex data sets, such as KIN8NM,
it tends to underestimate eigenvalues of the Gram matrix, unless a very high number of
basis points is used. If in addition a rather low noise variance is assumed, we obtain a
very high value for the error bound in Eq. (11), confirming our observations in the experiments. Methods to overcome the problems associated with the Nystr?om approximation are
currently being investigated [11].
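The effect described above can be reproduced numerically. The sketch below (our own naming, with illustrative eigenvalues) evaluates the right-hand side of Eq. (11) for underestimated eigenvalues and shows how a small assumed noise variance inflates the bound:

```python
def rra_error_bound(eigs_true, eigs_approx, noise_var):
    """Right-hand side of Eq. (11): max_i |lambda_i - lambda~_i| / (lambda~_i + sigma^2)."""
    return max(abs(t - a) / (a + noise_var)
               for t, a in zip(eigs_true, eigs_approx))

# Nystrom-style underestimated eigenvalues (toy values):
eigs_true = [1.0, 0.1, 0.01]
eigs_approx = [0.9, 0.05, 0.001]
high_noise = rra_error_bound(eigs_true, eigs_approx, noise_var=1.0)
low_noise = rra_error_bound(eigs_true, eigs_approx, noise_var=1e-4)
```

With noise_var = 1.0 the bound stays below 0.1, while with noise_var = 1e-4 it exceeds 8 for the same eigenvalue error, mirroring the observation that a low assumed noise variance makes the approximate weights unreliable.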
7 Runtime was logged on Linux PCs with AMD Athlon 1 GHz CPUs, with all methods implemented in Matlab and optimized with the Matlab profiler.
5 Conclusions
Our results indicate that, depending on the computational resources and the desired accuracy, one may select methods as follows: If the major concern is speed of prediction, one is
well advised to use the subset of representers method with basis selection by greedy posterior approximation. This method may be expected to give results that are significantly
better than other (inductive) methods. While being painfully slow during basis selection,
the resulting models are compact, easy to use and accurate. Online Gaussian processes
achieve a slightly worse accuracy, yet they are the only (inductive) method that can easily
be adapted for general likelihood models, such as classification and regression with non-Gaussian noise. A generalization of the BCM to non-Gaussian likelihood models has been
presented in [9].
On the other hand, if accurate predictions are the major concern, one may expect best results
with the Bayesian committee machine. On complex low noise data sets (such as KIN40K
and ART) we observed significant advantages in terms of prediction accuracy, giving an
average mean squared error that was only a fraction (25-30%) of the error achieved by
the best inductive method. For the BCM, one must take into account that it is a transduction scheme, thus prediction time and memory consumption are larger than those of SRM
methods.
Although all discussed approaches scale linearly in the number of training data, they exhibit
significantly different runtime in practice. For the experiments reported in this paper
(running 10-fold cross validation on given data) the Bayesian committee machine is about
one order of magnitude slower than an SRM method with randomly chosen basis; SRM
with greedy posterior approximation is again an order of magnitude slower than the BCM.
Acknowledgements Anton Schwaighofer gratefully acknowledges support through an
Ernst-von-Siemens scholarship.
References
[1] Blake, C. and Merz, C. UCI repository of machine learning databases. 1998.
[2] Csató, L. and Opper, M. Sparse online Gaussian processes. Neural Computation, 14(3):641-668, 2002.
[3] Leen, T. K., Dietterich, T. G., and Tresp, V., eds. Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[4] MacKay, D. J. Introduction to Gaussian processes. In C. M. Bishop, ed., Neural Networks and Machine Learning, vol. 168 of NATO ASI Series. Series F, Computer and Systems Sciences. Springer Verlag, 1998.
[5] Rasmussen, C. E. Reduced rank Gaussian process learning, 2002. Unpublished manuscript.
[6] Smola, A. and Schölkopf, B. Sparse greedy matrix approximation for machine learning. In P. Langley, ed., Proceedings of ICML00. Morgan Kaufmann, 2000.
[7] Smola, A. J. and Bartlett, P. Sparse greedy Gaussian process regression. In [3], pp. 619-625.
[8] Tresp, V. A Bayesian committee machine. Neural Computation, 12(11):2719-2741, 2000.
[9] Tresp, V. The generalized Bayesian committee machine. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 130-139. Boston, MA, USA, 2000.
[10] Vapnik, V. N. The Nature of Statistical Learning Theory. Springer Verlag, 1995.
[11] Williams, C. K., Rasmussen, C. E., Schwaighofer, A., and Tresp, V. Observations on the Nyström method for Gaussian process prediction. Tech. rep., available from the authors' web pages, 2002.
[12] Williams, C. K. I. and Seeger, M. Using the Nyström method to speed up kernel machines. In [3], pp. 682-688.
Functional Classification of Neurons
Elad Schneidman,1,2 William Bialek,1 and Michael J. Berry II2
1
Department of Physics and 2 Department of Molecular Biology
Princeton University, Princeton NJ 08544, USA
{elads,wbialek,berry}@princeton.edu
Abstract
A population of neurons typically exhibits a broad diversity of responses
to sensory inputs. The intuitive notion of functional classification is that
cells can be clustered so that most of the diversity is captured by the identity of the clusters rather than by individuals within clusters. We show
how this intuition can be made precise using information theory, without any need to introduce a metric on the space of stimuli or responses.
Applied to the retinal ganglion cells of the salamander, this approach recovers classical results, but also provides clear evidence for subclasses
beyond those identified previously. Further, we find that each of the ganglion cells is functionally unique, and that even within the same subclass
only a few spikes are needed to reliably distinguish between cells.
1 Introduction
Neurons exhibit an enormous variety of shapes and molecular compositions. Already in his
classical work, Cajal [1] recognized that the shapes of cells can be classified, and he identified many of the cell types that we recognize today. Such classification is fundamentally
important, because it implies that instead of having to describe ~10^12 individual neurons,
a mature neuroscience might need to deal only with a few thousand different classes of
nominally identical neurons. There are three broad methods of classification: morphological, molecular, and functional. Morphological and molecular classification are appealing
because they deal with relatively fixed properties, but ultimately the functional properties
of neurons are the most important, and neurons that share the same morphology or molecular markers need not embody the same function. With attention to arbitrary detail, every
neuron will be individual, while a coarser view might overlook an important distinction; a
quantitative formulation of the classification problem is essential.
The vertebrate retina is an attractive example: its anatomy is well studied and highly ordered, containing repeated micro-circuits that look out at different angles in visual space
[1, 2, 3]; its overall function (vision) is clear, giving the experimenter better intuition
about relevant stimuli; and responses of many of its output neurons, ganglion cells, can
be recorded simultaneously using a multi-electrode array, allowing greater control of experimental variables than possible with serial recordings [4]. Here we exploit this favorable
experimental situation to highlight the mathematical questions that must lie behind any attempt at classification.
Functional classification of retinal ganglion cells typically has consisted of finding qualitatively different responses to simple stimuli. Classes are defined by whether ganglion cells
fire spikes at the onset or offset of a step of light or both (ON, OFF, ON/OFF cells in frog
[5]) or whether they fire once or twice per cycle of a drifting grating (X, Y cells in cat [6]).
Further elaborations exist. In the frog, the literature reports 1 class of ON-type ganglion
cell and 4 or 5 classes of OFF-type [7]. The salamander has been reported to have only 3 of
these OFF-type ganglion cells [8]. The classes have been distinguished using stimuli such
as diffuse flashes of light, moving bars, and moving spots. The results are similar to earlier
work using more exotic stimuli [9]. In some cases, there is very close agreement between
anatomical and functional classes, such as the (?,?) and (Y,X) cells in the cat. However,
the link between anatomy and function is not always so clear.
Here we show how information theory allows us to define the problem of classification
without any a priori assumptions regarding which features of visual stimulus or neural
response are most significant, and without imposing a metric on these variables. All notions
of similarity emerge from the joint statistics of neurons in a population as they respond to
common stimuli. To the extent that we identify the function of retinal ganglion cells as
providing the brain with information about the visual world, then our approach finds exactly
the classification which captures this functionality in a maximally efficient manner. Applied
to experiments on the tiger salamander retina, this method identifies the major types of
ganglion cells in agreement with traditional methods, but on a finer level we find clear
structure within a group of 19 fast OFF cells that suggests at least 5 functional subclasses.
More profoundly, even cells within a subclass are very different from one another, so that on
average the ganglion cell responses to the simplified visual stimuli we have used provide
~6 bits/sec of information about cell identity within our population of 21 cells. This is
sufficient to identify uniquely each neuron in an "elementary patch" of the retina within one
second, and a typical pair of cells can be distinguished reliably by observing an average of
just two or three spikes.
2 Theory
Suppose that we could give a complete characterization, for each neuron i = 1, 2, ..., N
in a population, of the probability P (r|~s, i) that a stimulus ~s will generate the response
r. Traditional approaches to functional classification introduce (implicitly or explicitly) a
parametric representation for the distributions P (r|~s, i) and then search for clusters in this
parameter space. For visual neurons we might assume that responses are determined by the
projection of the stimulus movie ~s onto a single template or receptive field ~fi, P(r|~s, i) =
F(r; ~fi · ~s); classifying neurons then amounts to clustering the receptive fields. But it is
not possible to cluster without specifying what it means for these vectors to be similar; in
this case, since the vectors come from the space of stimuli, we need a metric or distortion
measure on the stimuli themselves. It seems strange that classifying the responses of visual
neurons requires us to say in advance what it means for images or movies to be similar. 1
Information theory suggests a formulation that does not require us to measure similarity
among either stimuli or responses. Imagine that we present a stimulus ~s and record the
response r from a single neuron in the population, but we don't know which one. This response tells us something about the identity of the cell, and on average this can be quantified
1 If all cells are selective for a small number of commensurate features, then the set of vectors ~fi
must lie on a low dimensional manifold, and we can use this selectivity to guide the clustering. But
we still face the problem of defining similarity: even if all the receptive fields in the retina can be
summarized meaningfully by the diameters of the center and surround (for example), why should we
believe that Euclidean distance in this two dimensional space is a sensible metric?
as the mutual information between responses and identity (conditional on the stimulus),
    I(r; i|~s) = (1/N) Σ_{i=1}^{N} Σ_r P(r|~s, i) log2 [P(r|~s, i) / P(r|~s)]   bits,     (1)

where P(r|~s) = (1/N) Σ_{i=1}^{N} P(r|~s, i). The mutual information I(r; i|~s) measures the
extent to which different cells in the population produce reliably distinguishable responses
to the same stimulus; from Shannon's classical arguments [10] this is the unique measure of
these correlations which is consistent with simple and plausible constraints. It is natural to
ask this question on average in an ensemble of stimuli P (~s) (ideally the natural ensemble),
    ⟨I(r; i|~s)⟩~s = (1/N) Σ_{i=1}^{N} ∫ [d~s] P(~s) Σ_r P(r|~s, i) log2 [P(r|~s, i) / P(r|~s)];     (2)

⟨I(r; i|~s)⟩~s is invariant under all invertible transformations of r or ~s.
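Once the conditional response distributions have been tabulated, Eq. (1) can be evaluated directly. A minimal sketch (our own naming; cond[i][r] stands for P(r|s, cell i) at one fixed stimulus s, with cells equally likely a priori):

```python
import math

def identity_info(cond):
    """I(r; i|s) of Eq. (1), in bits, for one stimulus s.

    cond[i][r] = P(r | s, cell i); the N cells are weighted uniformly.
    """
    n = len(cond)
    n_r = len(cond[0])
    # P(r|s) = (1/N) sum_i P(r|s, i)
    p_r = [sum(cond[i][r] for i in range(n)) / n for r in range(n_r)]
    info = 0.0
    for i in range(n):
        for r in range(n_r):
            if cond[i][r] > 0.0:
                info += (cond[i][r] / n) * math.log2(cond[i][r] / p_r[r])
    return info
```

Two cells with identical response distributions yield 0 bits, while two cells with perfectly distinguishable responses yield 1 bit, the maximum for N = 2.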
Because information is mutual, we also can think of ⟨I(r; i|~s)⟩~s as the information that
cellular identity provides about the responses we will record. But now it is clear what we
mean by classifying the cells: If there are clear classes, then we can predict the responses
to a stimulus just by knowing the class to which a neuron belongs rather than knowing its
unique identity. Thus we should be able to find a mapping i ? C of cells into classes
C = 1, 2, ? ? ? , K such that hI(r; C|~s)i~s is almost as large as hI(r; i|~s)i~s , despite the fact
that the number of classes K is much less than the number of cells N .
Optimal classifications are those which use the K different class labels to capture as much
information as possible about the stimulus-response relation, maximizing ⟨I(r; C|~s)⟩~s at
fixed K. More generally we can consider soft classifications, described by probabilities
P (C|i) of assigning each cell to a class, in which case we would like to capture as much
information as possible about the stimulus-response relation while constraining the amount
of information that class labels provide directly about identity, I(C; i). In this case our
optimization problem becomes, with β as a Lagrange multiplier,

    max_{P(C|i)} [ ⟨I(r; C|~s)⟩~s − β I(C; i) ].     (3)
This is a generalization of the information bottleneck problem [11].
Here we confine ourselves to hard classifications, and use a greedy agglomerative algorithm
[12] which starts with K = N and makes mergers which at every step provide the smallest
reduction in I(r; C|~s). This information loss on merging cells (or clusters) i and j is given
by
    D(i, j) ≡ ΔI_ij(r; C|~s) = ⟨D_JS[P(r|~s, i) || P(r|~s, j)]⟩~s,     (4)

where D_JS is the Jensen-Shannon divergence [13] between the two distributions, or equivalently the information that one sample provides about its source distribution in the case
of just these two alternatives. The matrix of "distances" ΔI_ij characterizes the similarities
among neurons in pairwise fashion.
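The Jensen-Shannon divergence in Eq. (4) is a symmetrized, bounded relative entropy. A minimal sketch over discrete response distributions (our own naming):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence D_JS[p || q] in bits (0 <= D_JS <= 1)."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence in bits; terms with a_i = 0 contribute 0.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0.0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike the KL divergence it is symmetric and remains finite even when the two distributions have disjoint support, which is what makes it usable as the pairwise "distance" ΔI_ij between cells.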
Finally, if cells belong to clear classes, then we ought to be able to replace each cell by a
typical or average member of the class without sacrificing function. In this case function is
quantified by asking how much information cells provide about the visual scene. There is a
strict complementarity of the information measures: information that the stimulus/response
relation provides about the identity of the cell is exactly information about the visual scene
which will be lost if we don't know the identity of the cells [14]. Our information theoretic
approach to classification of neurons thus produces classes such that replacing cells with
average class members provides the smallest loss of information about the sensory inputs.
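The greedy agglomerative algorithm described above can be sketched generically: repeatedly merge the pair of clusters whose merger costs the least, pooling their data. The code below is a schematic with our own naming; any pairwise cost, such as the stimulus-averaged Jensen-Shannon divergence of Eq. (4), can be plugged in as `distance`:

```python
def greedy_cluster(items, distance, n_clusters):
    """Agglomerative clustering: start with every item as its own cluster and
    repeatedly merge the pair whose merger has the smallest cost."""
    clusters = [[i] for i in range(len(items))]   # indices of original items
    pooled = [list(x) for x in items]             # pooled data per cluster
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = distance(pooled[a], pooled[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)            # merge cluster b into a
        pooled[a] += pooled.pop(b)
    return clusters
```

With a toy cost (absolute difference of cluster means), the items [0.0], [0.1], [5.0], [5.1] collapse into the two expected groups {0, 1} and {2, 3}.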
3 The responses of retinal ganglion cells to identical stimuli
We recorded simultaneously 21 retinal ganglion cells from the salamander using a multielectrode array.2 The visual stimulus consisted of 100 repeats of a 20 s segment of spatially
uniform flicker (see fig. 1a), in which light intensity values were randomly selected every
30 ms from a Gaussian distribution having a mean of 4 mW/mm2 and an RMS contrast of
18%. Thus, the photoreceptors were presented with exactly the same visual stimulus, and
the movie is many correlation times in duration, so we can replace averages over stimuli by
averages over time (ergodicity). A 3 s sample of the ganglion cell?s responses to the visual
stimulus is shown in Fig. 1b. There are times when many of the cells fire together, while at
other times only a subset of these cells is active. Importantly, the same neuron may be part
of different active groups at different times.
[Figure 1: panels a-d; axis labels include "mean contrast", "cell rank order (by rate)", "firing rate (spikes/s)", "Information rate (bits/s)", and "time relative to spike (ms)"; scale bar 500 ms.]
Figure 1: Responses of salamander ganglion cells to modulated uniform field intensity. a: The
retina is presented with a series of uniform intensity "images". The intensity modulation is Gaussian
white noise distributed. b: A 3 sec segment of the (concurrent) responses of 21 ganglion cells to
repeated presentation of the stimulus. The rasters are ordered from bottom to top according to the
average firing rate of the neurons (over the whole movie). c: Firing rate and Information rates of the
different cells as a function of their rank, ordered by their firing rate. d: The average stimulus pattern
preceding a spike for each of the different cells. Traditionally, these would be classified as 1 ON cell,
1 slow-OFF cell and 19 fast-OFF cells.
On a finer time scale than shown here, the latency of the responses of the single neurons
and their spiking patterns differ across time. To analyze the responses of the different
2 The retina is isolated from the eye of the larval tiger salamander (Ambystoma tigrinum) and perfused in Ringer's medium. Action potentials were measured extracellularly using a multi-electrode
array [4], while light was projected from a computer monitor onto the photoreceptor layer. Because
erroneously sorted spikes would strongly affect our results, we were very conservative in our identification of cleanly isolated cells.
neurons, we discretize the spike trains into time bins of size Δt. We examine the response
in windows of time having length T, so that an individual neural response r becomes a
binary "word" W with T/Δt "letters".3
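The discretization into binary words can be written compactly. A sketch (our own function name), mapping spike times in one window of length T to a word of T/Δt letters:

```python
def spike_word(spike_times, t_start, T, dt):
    """Binarize a spike train: letter is 1 if at least one spike falls in that bin."""
    n_letters = int(round(T / dt))
    word = [0] * n_letters
    for t in spike_times:
        if t_start <= t < t_start + T:
            word[int((t - t_start) / dt)] = 1
    return word
```

Varying dt and T (e.g. one 10 ms bin versus five 2 ms bins per window) probes how robust the clustering is to the choice of response parameters, as examined in Fig. 2c.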
Since the cells in Fig. 1b are ordered according to their average firing rate, it is clear that
there is no "simple" grouping of the cells' responses with respect to this response parameter;
firing rates range continuously from 1 to 7 spikes per second (Fig. 1c). Similarly, the rate
of information (estimated according to [15]) that the cells encode about the same stimulus
also ranges continuously from 3 to 20 bits/s. We estimate the average stimulus pattern
preceding a spike for each of the cells, the spike triggered average (STA), shown in Fig. 1d.
According to traditional classification based on the STA, one of the cells is an ON cell, one
is a slow OFF cells and 19 belong to the fast OFF class [16]. While it may be possible to
separate the 19 waveforms of the fast OFF cells into subgroups, this requires assumptions
about what stimulus features are important. Furthermore, there is no clear standard for
ending such subclassification.
4 Clustering of the ganglion cells' responses into functional types
To classify these ganglion cells, we solved the information theoretic optimization problem
described above. Figure 2a shows the pairwise distances D(i, j) among the 21 cells, ordered
by their average firing rates; again, firing rate alone does not cluster the cells. The result of
the greedy clustering of the cells is shown by a binary dendrogram in Fig. 2b.
[Figure 2: panels a–d; axes include distance (bits/s), normalized information about identity, and number of clusters; legend entries: 1x10ms, 2x5ms, 5x2ms, 1x10ms nn.]

Figure 2: Clustering ganglion cell responses. a: Average distances between the cells' responses;
cells are ordered by their average firing rate. b: Dendrogram of cell clustering. Cell names correspond to their firing-rate rank. The height of a merge reflects the distance between merged elements.
c: The information that the cells' responses convey about the clusters at every stage of the clustering
in (b), normalized to the total information that the responses convey about cell identity. Using different response segment parameters or clustering methods (e.g., nearest neighbor) results in very similar
behavior. d: Reordering of the distance matrix in (a) according to the tree structure given in (b).
The greedy agglomerative approximation [12] starts from every cell as a single cluster.
We iteratively merge the clusters c_i and c_j which have the minimal value of D(c_i, c_j),
and display this distance, or information loss, as the height of the merger in Fig. 2b. We
pool their spike trains together as the responses of the new cell class. We then re-estimate
the distances between clusters and repeat the procedure until we obtain a single cluster that
contains all cells. Fig. 2c shows the compression in information achieved by each of the
mergers: for each number of clusters, we plot the mutual information between the clusters
and the responses, ⟨I(r; C|s)⟩_s, normalized by the information that the response conveys
about the full set of cells, ⟨I(r; i|s)⟩_s. The clustering structure and the information curve in
Fig. 2c are robust (up to one cell difference in the final dendrogram) to changes in the word
size and bin size used; we even obtain the same results with a nearest-neighbor clustering
based on D(i, j). This suggests that the top 7 mergers in Fig. 2b (which correspond to the
bottom 7 points in panel c) separate significantly different subgroups. Two of these mergers,
which correspond to the rightmost branches of the dendrogram, separate out the ON and
slow OFF cells. The remaining 5 clusters are subclasses of fast OFF cells. However, Fig. 2d,
which shows the dissimilarity matrix from panel a reordered by the result of the clustering,
demonstrates that while there is clear structure within the cell population, the subclasses
are not sharply distinct.

Footnote 3: As any fixed choice of T and Δt is arbitrary, we explore a range of these parameters.
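The greedy merging procedure can be sketched as follows. This is a simplified illustration: the paper merges the pair of clusters with minimal information loss D(c_i, c_j), re-estimated from the pooled spike trains after each merge, whereas here we substitute an average-linkage distance over a fixed precomputed matrix as a stand-in.

```python
# Greedy agglomerative clustering sketch: start from singleton clusters,
# repeatedly merge the closest pair, and record the merge height
# (the dendrogram of Fig. 2b).

def greedy_cluster(dist):
    """dist: symmetric n x n matrix (list of lists).
    Returns a list of (cluster_a, cluster_b, height) merges."""
    clusters = [frozenset([i]) for i in range(len(dist))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = (sum(dist[i][j] for i in clusters[a] for j in clusters[b])
                     / (len(clusters[a]) * len(clusters[b])))
                if best is None or d < best[2]:
                    best = (a, b, d)
        a, b, h = best
        merges.append((clusters[a], clusters[b], h))
        merged = clusters[a] | clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return merges
```

In the paper's setting, each merge height would instead be the information loss incurred by pooling the two clusters' responses.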
How many types are there?
While one might be happy with classifying the fast OFF cells into 5 subclasses, we further
asked whether the cells within a subclass are reliably distinguishable from one another;
that is, are the bottom mergers in Fig. 2b-c significant? To this end we randomly split each
of the 21 cells into 2 halves (of 50 repeats each), or "siblings", and re-clustered. Figure
3a shows the resulting dendrogram of this clustering, indicating that the cells are reliably
distinguishable from one another: the nearest neighbor of each new half-cell is its own
sibling, and (almost) all of the first-layer mergers are of the corresponding siblings (the
only mismatch is of a sibling merging with a neighboring full cell and then with the other
sibling). Figure 3b shows the very different cumulative probability distributions of pairwise
distances among the parent cells and of the distances between siblings.
[Figure 3: panels a (dendrogram, distance in bits/s) and b (cumulative distribution of average distance between cells in bits/s, "siblings" vs. all pairs).]

Figure 3: Every cell is different from the others. a: Clustering of cell responses after randomly
splitting every cell into 2 "siblings". The nearest neighbor of each of the new cells is its sibling and
(except for one case) so is the first merge. From the second level upwards, the tree is identical to
Fig. 2b (up to symmetry of tree plotting). b: Cumulative distribution of pairwise distances between
cells. The distances between siblings are easily discriminated from the continuous distribution of
values of all the (real) cells.
How significant are the differences between the cells?
It might be that cells are distinguishable, but only after observing their responses for very
long times. Since 1 bit is needed to reliably distinguish between a pair of cells, Fig. 3b
shows that more than 90% of the pairs are reliably distinguishable within 2 seconds or less.
This result is especially striking given the low mean spike rate of these cells; clearly, at
times when none of the cells is spiking, it is impossible to distinguish between them. To
place the information about identity on an absolute scale, we compare it to the entropy
of the responses at each time, using 10 ms segments of the responses at each time during
the stimulus (Fig. 4a). Most of the points lie close to the origin, but many of them reflect
discrete times when the responses of the neurons are very different and hence highly informative about cell identity: under the conditions of our experiment, roughly 30% of the
response variability among cells is informative about their identity.⁴ On average, observing
a single neural response gives about 6 bits/s about the identity of the cells within this population. We also computed the average number of spikes per cell which we need to observe
to distinguish reliably between cells i and j,

    n_d(i, j) = (1/2)(r̄_i + r̄_j) / D(i, j),                                        (5)

where r̄_i is the average spike rate of cell i in the experiment. Figure 4b shows the cumulative probability distribution of the values of n_d. Evidently, more than 80% of the pairs are
reliably distinguishable after observing, on average, only 3 spikes from one of the neurons.
Since ganglion cells fire in bursts, this suggests that most cells are reliably distinguishable
based on a single firing "event"! We also show that for the 11 most similar cells (those in
the left subtree in Fig. 2b) only a few more spikes, or one extra firing event, are required to
reliably distinguish them.
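Eq. (5) is a one-liner; a minimal sketch with illustrative names (rates in spikes/s, D in bits/s):

```python
# Average number of spikes needed to distinguish cells i and j, Eq. (5):
# the mean of the two firing rates divided by the information distance.

def n_d(rate_i, rate_j, D_ij):
    return 0.5 * (rate_i + rate_j) / D_ij
```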
[Figure 4: panels a (information about identity in 10 ms response segment, bits, vs. entropy of 10 ms response segments, bits) and b (cumulative distribution of n_d in spikes, all pairs vs. subtree pairs).]

Figure 4: High diversity among cells. a: The average information that a response segment conveys
about the identity of the cell as a function of the entropy of the responses. Every point stands for a
time point along the stimulus. Results shown are for 2-letter words of 5 ms bins; similar behavior is
observed for different word sizes and bins. b: Cumulative distribution of the average number of spikes
that are needed to distinguish between a pair of cells.
5 Discussion
We have identified a diversity of functional types of retinal ganglion cells by clustering
them to preserve information about their identity. Beyond the easy classification of the major types of salamander ganglion cells (fast OFF, slow OFF, and ON), in agreement with
traditional methods, we have found clear structure within the fast OFF cells that suggests
at least 5 more functional classes. Furthermore, we found evidence that each cell is functionally unique. Even under this relatively simple stimulus, the analysis revealed that the
cell responses convey ≈6 bits/s of information about cell identity within this population of
21 cells. Ganglion cells in the salamander interact with each other and collect information
from a ≈250 μm radius; given the density of ganglion cells, the observed rate implies that
a single ganglion cell can be discriminated from all the cells in this "elementary patch"
within 1 s. This is a surprising degree of diversity, given that 19 cells in our sample would
traditionally be viewed as nominally the same.

One might wonder if our choice of uniform flicker limits the results of our classification.
However, we found that this stimulus was rich enough to distinguish every ganglion cell in
our data set. It is likely that stimuli with spatial structure would reveal further differences.
Using a larger collection of cells will enable us to explore the possibility that there is a
continuum of unique functional units in the retina.

How might the brain make use of this diversity? Several alternatives are conceivable. By
comparing the spiking of closely related cells, it might be possible to achieve much finer
discrimination among stimuli that tend to activate both cells. Diversity can also improve
the robustness of retinal signalling: as the retina is constantly setting its adaptive state in
response to statistics of the environment that it cannot estimate without some noise, maintaining functional diversity can guard against adaptation that overshoots its optimum. Finally, great functional diversity opens up additional possibilities for learning strategies, in
which downstream neurons select the most useful of their inputs rather than merely summing
over identical inputs to reduce their noise. The example of the invertebrate retina demonstrates that nature can construct neural circuits with almost crystalline reproducibility from
synapse to synapse. This suggests that the extreme diversity found here in the vertebrate
retina may not be the result of some inevitable sloppiness of neural development, but rather
the result of evolutionary selection of a different strategy for representing the visual world.

Footnote 4: Since the cells receive the same stimulus and often possess shared circuitry, an efficiency as high
as 100% is very unlikely.
References
[1] Cajal, S.R., Histologie du système nerveux de l'homme et des vertébrés. Paris: Maloine (1911).
[2] Dowling, J., The Retina: An Approachable Part of the Brain. Cambridge, MA: Belknap Press
(1987).
[3] Masland, R.H., Nat. Neurosci., 4: 877-886 (2001).
[4] Meister, M., Pine, J. & Baylor, D.A., J. Neurosci. Methods. 51: 95-106 (1994).
[5] Hartline, H.K., Am. J. Physiol., 121: 400-415 (1937).
[6] Hochstein, S. & Shapley, R.M., J. Physiol., 262: 265-84 (1976).
[7] Grosser, O.-J. & Grosser-Cornehls, U., in Frog Neurobiology, ed: R. Llinas, Precht, W.: 297-385,
Springer-Verlag: New York (1976).
[8] Grosser-Cornehls, U. & Himstedt, W., Brain Behav. Evol. 7: 145-168 (1973).
[9] Lettvin, J.Y., Maturana, H.R., McCulloch, W.S. & Pitts, W.H., Proc. I.R.E., 47: 1940-51 (1959).
[10] Shannon, C.E. & Weaver, W., The Mathematical Theory of Communication. Univ. of Illinois Press (1949).
[11] Tishby, N., Pereira, F. & Bialek, W., in Proceedings of The 37th Allerton conference on communication, control & computing, Univ. of Illinois (1999). see also arXiv: physics/0004057.
[12] Slonim, N. & Tishby, N., NIPS 12, 617?623 (2000).
[13] Lin, J., IEEE IT, 37, 145?151 (1991).
[14] Schneidman, E., Brenner, N., Tishby N., de Ruyter van Steveninck, R. & Bialek, W. NIPS 13:
159-165 (2001). see also arXiv: physics/0005043.
[15] Strong, S.P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W., Phys. Rev. Lett. 80, 197?
200 (1998). see also arXiv: cond-mat/9603127.
[16] Keat, J., Reinagel, P., Reid, R.C. & Meister, M., Neuron 30, 803-817 (2001).
Support Vector Machines for Multiple-Instance Learning
Stuart Andrews, Ioannis Tsochantaridis and Thomas Hofmann
Department of Computer Science, Brown University, Providence, RI 02912
{stu,it,th}@cs.brown.edu
Abstract
This paper presents two new formulations of multiple-instance
learning as a maximum margin problem. The proposed extensions
of the Support Vector Machine (SVM) learning approach lead to
mixed integer quadratic programs that can be solved heuristically.
Our generalization of SVMs makes a state-of-the-art classification
technique, including non-linear classification via kernels, available
to an area that up to now has been largely dominated by special
purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing
and document categorization.
1 Introduction
Multiple-instance learning (MIL) [4] is a generalization of supervised classification
in which training class labels are associated with sets of patterns, or bags, instead of
individual patterns. While every pattern may possess an associated true label, it is
assumed that pattern labels are only indirectly accessible through labels attached to
bags. The law of inheritance is such that a set receives a particular label, if at least
one of the patterns in the set possesses the label. In the important case of binary
classification, this implies that a bag is "positive" if at least one of its member
patterns is a positive example. MIL differs from the general set-learning problem in
that the set-level classifier is by design induced by a pattern-level classifier. Hence
the key challenge in MIL is to cope with the ambiguity of not knowing which of the
patterns in a positive bag are the actual positive examples and which ones are not.
The MIL setting has numerous interesting applications. One prominent application is the classification of molecules in the context of drug design [4]. Here,
each molecule is represented by a bag of possible conformations. The efficacy of
a molecule can be tested experimentally, but there is no way to control for individual conformations. A second application is in image indexing for content-based
image retrieval. Here, an image can be viewed as a bag of local image patches [9]
or image regions. Since annotating whole images is far less time consuming then
marking relevant image regions, the ability to deal with this type of weakly annotated data is very desirable. Finally, consider the problem of text categorization for
which we are the first to apply the MIL setting. Usually, documents which contain
a relevant passage are considered to be relevant with respect to a particular cate-
gory or topic, yet class labels are rarely available on the passage level and are most
commonly associated with the document as a whole. Formally, all of the above
applications share the same type of label ambiguity which in our opinion makes a
strong argument in favor of the relevance of the MIL setting.
We present two approaches to modify and extend Support Vector Machines (SVMs)
to deal with MIL problems. The first approach explicitly treats the pattern labels
as unobserved integer variables, subjected to constraints defined by the (positive)
bag labels. The goal then is to maximize the usual pattern margin, or soft-margin,
jointly over hidden label variables and a linear (or kernelized) discriminant function. The second approach generalizes the notion of a margin to bags and aims at
maximizing the bag margin directly. The latter seems most appropriate in cases
where we mainly care about classifying new test bags, while the first approach
seems preferable whenever the goal is to derive an accurate pattern-level classifier.
In the case of singleton bags, both methods are identical and reduce to the standard
soft-margin SVM formulation.
Algorithms for the MIL problem were first presented in [4, 1, 7]. These methods (and
related analytical results) are based on hypothesis classes consisting of axis-aligned
rectangles. Similarly, methods developed subsequently (e.g., [8, 12]) have focused
on specially tailored machine learning algorithms that do not compare favorably in
the limiting case of the standard classification setting. A notable exception is [10].
More recently, a kernel-based approach has been suggested which derives MI-kernels
on bags from a given kernel defined on the pattern-level [5]. While the MI-kernel
approach treats the MIL problem merely as a representational problem, we strongly
believe that a deeper conceptual modification of SVMs as outlined in this paper is
necessary. However, we share the ultimate goal with [5], which is to make state-of-the-art kernel-based classification methods available for multiple-instance learning.
2 Multiple-Instance Learning
In statistical pattern recognition, it is usually assumed that a training set of labeled patterns is available where each pair (x_i, y_i) ∈ R^d × Y has been generated
independently from an unknown distribution. The goal is to induce a classifier, i.e.,
a function from patterns to labels f : R^d → Y. In this paper, we will focus on
the binary case of Y = {-1, 1}. Multiple-instance learning (MIL) generalizes this
problem by making significantly weaker assumptions about the labeling information. Patterns are grouped into bags and a label is attached to each bag and not
to every pattern. More formally, given is a set of input patterns x_1, ..., x_n grouped
into bags B_1, ..., B_m, with B_I = {x_i : i ∈ I} for given index sets I ⊆ {1, ..., n} (typically non-overlapping). With each bag B_I is associated a label Y_I. These labels
are interpreted in the following way: if Y_I = -1, then y_i = -1 for all i ∈ I, i.e., no
pattern in the bag is a positive example. If on the other hand Y_I = 1, then at least
one pattern x_i ∈ B_I is a positive example of the underlying concept. Notice that
the information provided by the label is asymmetric in the sense that a negative
bag label induces a unique label for every pattern in a bag, while a positive label
does not. In general, the relation between pattern labels y_i and bag labels Y_I can be
expressed compactly as Y_I = max_{i∈I} y_i or alternatively as a set of linear constraints

    Σ_{i∈I} (y_i + 1)/2 ≥ 1, ∀I s.t. Y_I = 1,   and   y_i = -1, ∀i ∈ I, ∀I s.t. Y_I = -1.   (1)

Finally, let us call a discriminant function f : X → R MI-separating with respect to
a multiple-instance data set if sgn max_{i∈I} f(x_i) = Y_I holds for all bags B_I.
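The label semantics above can be made concrete with a small sketch. The function names are ours, not from the paper; labels live in {-1, +1}.

```python
# Bag label Y_I is the maximum of its (possibly hidden) pattern labels,
# and the constraints of Eq. (1) test whether a pattern labeling is
# consistent with a given bag label.

def bag_label(pattern_labels):
    """Y_I = max_{i in I} y_i."""
    return max(pattern_labels)

def consistent(pattern_labels, Y):
    """Check the Eq. (1) constraints for one bag with label Y."""
    if Y == -1:
        return all(y == -1 for y in pattern_labels)
    # positive bag: at least one y_i = +1, i.e. sum (y_i + 1)/2 >= 1
    return sum((y + 1) // 2 for y in pattern_labels) >= 1
```

This captures the asymmetry of the labels: a negative bag pins down every pattern label, while a positive bag only demands at least one positive witness.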
[Figure 1: two panels, (a) and (b), showing negative patterns and numbered positive-bag patterns on either side of a separating hyperplane.]

Figure 1: Large margin classifiers for MIL. Negative patterns are denoted by "-"
symbols, positive bag patterns by numbers that encode the bag membership. The
figure to the left sketches the mi-SVM solution while the figure to the right shows
the MI-SVM solution.
3 Maximum Pattern Margin Formulation of MIL
We omit an introduction to SVMs and refer the reader to the excellent books on this
topic, e.g. [11]. The mixed-integer formulation of MIL as a generalized soft-margin
SVM can be written as follows in primal form:

    mi-SVM:   min_{{y_i}} min_{w,b,ξ}  (1/2) ||w||² + C Σ_i ξ_i                       (2)

    s.t.  ∀i:  y_i(⟨w, x_i⟩ + b) ≥ 1 - ξ_i,   ξ_i ≥ 0,   y_i ∈ {-1, 1},   and (1) holds.

Notice that in the standard classification setting, the labels y_i of training patterns
would simply be given, while in (2) labels y_i of patterns x_i not belonging to
any negative bag are treated as unknown integer variables. In mi-SVM one thus
maximizes a soft-margin criterion jointly over possible label assignments as well as
hyperplanes. Figure 1(a) illustrates this idea for the separable case: we are looking
for an MI-separating linear discriminant such that there is at least one pattern from
every positive bag in the positive halfspace, while all patterns belonging to negative
bags are in the negative halfspace. At the same time, we would like to achieve the
maximal margin with respect to the (completed) data set obtained by imputing
labels for patterns in positive bags in accordance with Eq. (1).
This is similar to the approach pursued in [6] and [3] for transductive inference. In
the latter case, patterns are either labeled or unlabeled. Unlabeled data points are
utilized to refine the decision boundary by maximizing the margin on all data points.
While the labeling for each unlabeled pattern can be carried out independently in
transductive inference, labels of patterns in positive bags are coupled in MIL through
the inequality constraints.
The mi-SVM formulation leads to a mixed integer programming problem. One has
to find both the optimal labeling and the optimal hyperplane. On a conceptual level
this mixed integer formulation captures exactly what MIL is about, i.e. to recover
the unobserved pattern labels and to simultaneously find an optimal discriminant.
Yet, this poses a computational challenge since the resulting mixed integer programming problem cannot be solved efficiently with state-of-the-art tools, even for
moderate size data sets. We will present an optimization heuristic in Section 5.
4 Maximum Bag Margin Formulation of MIL
An alternative way of applying maximum margin ideas to the MIL setting is to
extend the notion of a margin from individual patterns to sets of patterns. It is
natural to define the functional margin of a bag with respect to a hyperplane by
    γ_I := Y_I max_{i∈I} (⟨w, x_i⟩ + b).                                              (3)
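Eq. (3) can be computed directly; a minimal sketch with illustrative names, using plain Python lists for patterns:

```python
# The functional margin of a bag is Y_I times the maximal output
# over its member patterns, Eq. (3).

def bag_margin(w, b, bag_patterns, Y):
    """gamma_I = Y_I * max_{i in I} (<w, x_i> + b)."""
    return Y * max(sum(wk * xk for wk, xk in zip(w, x)) + b
                   for x in bag_patterns)
```

A positive value means the bag is on the correct side of the (unit-functional) margin; for a negative bag the max picks out the "least negative" pattern.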
This generalization reflects the fact that predictions for bag labels take the form
Y_I = sgn max_{i∈I}(⟨w, x_i⟩ + b). Notice that for a positive bag the margin is defined by
the margin of the "most positive" pattern, while the margin of a negative bag is defined by the "least negative" pattern. The difference between the two formulations of
maximum-margin problems is illustrated in Figure 1. For the pattern-centered mi-SVM formulation, the margin of every pattern in a positive bag matters, although
one has the freedom to set the label variables so as to maximize the margin. In
the bag-centered formulation, only one pattern per positive bag matters, since it
will determine the margin of the bag. Once these "witness" patterns have been
identified, the relative position of other patterns in positive bags with respect to
the classification boundary becomes irrelevant. Using the above notion of a bag
margin, we define an MIL version of the soft-margin classifier by
    MI-SVM:   min_{w,b,ξ}  (1/2) ||w||² + C Σ_I ξ_I                                   (4)

              s.t.  ∀I:  Y_I max_{i∈I} (⟨w, x_i⟩ + b) ≥ 1 - ξ_I,   ξ_I ≥ 0.

For negative bags one can unfold the max operation by introducing one inequality
constraint per pattern, yet with a single slack variable ξ_I. Hence the constraints on
negative bag patterns, where Y_I = -1, read as -⟨w, x_i⟩ - b ≥ 1 - ξ_I, ∀i ∈ I.

For positive bags, we introduce a selector variable s(I) ∈ I which denotes the
pattern selected as the positive "witness" in B_I. This will result in constraints
⟨w, x_{s(I)}⟩ + b ≥ 1 - ξ_I. Thus we arrive at the following equivalent formulation:

    min_s min_{w,b,ξ}  (1/2) ||w||² + C Σ_I ξ_I                                       (5)

    s.t.  ∀I:  Y_I = -1  ∧  -⟨w, x_i⟩ - b ≥ 1 - ξ_I, ∀i ∈ I,                          (6)
          or   Y_I = 1   ∧  ⟨w, x_{s(I)}⟩ + b ≥ 1 - ξ_I,   and  ξ ≥ 0.

In this formulation, every positive bag B_I is thus effectively represented by a single
member pattern x_{s(I)}. Notice that "non-witness" patterns (x_i, i ∈ I with
i ≠ s(I)) have no impact on the objective.

For given selector variables, it is straightforward to derive the dual objective function,
which is very similar to the standard SVM Wolfe dual. The only major difference
is that the box constraints for the Lagrange parameters α are modified compared
to the standard SVM solution, namely one gets

    0 ≤ α_I ≤ C,  for I s.t. Y_I = 1,   and   0 ≤ Σ_{i∈I} α_i ≤ C,  for I s.t. Y_I = -1.   (7)
Hence, the influence of each bag is bounded by C.
5 Optimization Heuristics
As we have shown, both formulations, mi-SVM and MI-SVM, can be cast as mixed-integer programs. In deriving optimization heuristics, we exploit the fact that for
    initialize y_i = Y_I for all i ∈ I
    REPEAT
        compute SVM solution w, b for data set with imputed labels
        compute outputs f_i = ⟨w, x_i⟩ + b for all x_i in positive bags
        set y_i = sgn(f_i) for every i ∈ I, Y_I = 1
        FOR (every positive bag B_I)
            IF (Σ_{i∈I} (1 + y_i)/2 == 0)
                compute i* = argmax_{i∈I} f_i
                set y_{i*} = 1
            END
        END
    WHILE (imputed labels have changed)
    OUTPUT (w, b)

Figure 2: Pseudo-code for mi-SVM optimization heuristics (synchronous update).
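The alternating scheme of Figure 2 can be sketched as follows. To keep the sketch self-contained we substitute a regularized least-squares fit for the inner soft-margin QP; in practice one would call an actual SVM solver. Function names and the convergence test are our assumptions.

```python
import numpy as np

def fit_linear(X, y, reg=1e-3):
    """Least-squares stand-in for the inner QP (an assumption; the
    paper solves a soft-margin SVM here)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ y)

def decision(w, X):
    X = np.asarray(X, dtype=float)
    return X @ w[:-1] + w[-1]

def mi_svm_heuristic(X, bags, bag_labels, max_iter=50):
    """bags: list of index lists; bag_labels: per-bag labels in {-1, +1}."""
    y = np.empty(len(X))
    for I, Y in zip(bags, bag_labels):
        y[np.asarray(I)] = Y                 # initialize y_i = Y_I
    w = fit_linear(X, y)
    for _ in range(max_iter):
        w = fit_linear(X, y)
        f = decision(w, X)
        y_new = y.copy()
        for I, Y in zip(bags, bag_labels):
            idx = np.asarray(I)
            if Y == 1:
                lab = np.where(f[idx] > 0, 1.0, -1.0)
                if not np.any(lab == 1.0):   # enforce Eq. (1): keep a witness
                    lab[np.argmax(f[idx])] = 1.0
                y_new[idx] = lab
        if np.array_equal(y_new, y):         # imputed labels unchanged
            break
        y = y_new
    return w
```

With a (possibly kernelized) SVM solver in place of `fit_linear`, this reproduces the synchronous update of Figure 2, including the rule that forces the pattern with maximal output to be positive when a positive bag would otherwise contain no positive label.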
    initialize x̄_I = Σ_{i∈I} x_i / |I| for every positive bag B_I
    REPEAT
        compute QP solution w, b for data set with
            positive examples {x̄_I : Y_I = 1}
        compute outputs f_i = ⟨w, x_i⟩ + b for all x_i in positive bags
        set x̄_I = x_{s(I)}, s(I) = argmax_{i∈I} f_i for every I, Y_I = 1
    WHILE (selector variables s(I) have changed)
    OUTPUT (w, b)

Figure 3: Pseudo-code for MI-SVM optimization heuristics (synchronous update).
given integer variables, i.e. the hidden labels in mi-SVM and the selector variables
in MI-SVM, the problem reduces to a QP that can be solved exactly. Of course, all
the derivations also hold for general kernel functions K .
A general scheme for a simple optimization heuristic may be described as follows.
Alternate the following two steps: (i) for given integer variables, solve the associated
QP and find the optimal discriminant function, (ii) for a given discriminant, update
one, several, or all integer variables in a way that (locally) minimizes the objective.
The latter step may involve the update of a label variable Yi of a single pattern in miSVM, the update of a single selector variable 8(I) in MI-SVM, or the simultaneous
update of all integer variables. Since the integer variables are essentially decoupled
given the discriminant (with the exception of the bag constraints in mi-SVM), this
can be done very efficiently. Also notice that we can re-initialize the QP-solver
at every iteration with the previously found solution, which will usually result in
a significant speed-up. In terms of initialization of the optimization procedure,
we suggest imputing positive labels for patterns in positive bags as the initial
configuration in mi-SVM. In MI-SVM, x_I is initialized as the centroid of the bag
patterns. Figures 2 and 3 summarize pseudo-code descriptions of the algorithms
utilized in the experiments.
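The mi-SVM label-update step of Figure 2 can likewise be sketched; the SVM training itself is omitted, and the discriminant outputs f_i are assumed given:

```python
# Sketch of the mi-SVM label update (Figure 2): impute y_i = sgn(f_i) for
# patterns in positive bags, then repair any positive bag left with no
# positive pattern by flipping the pattern with the maximal output.

def update_labels(outputs_per_bag):
    """outputs_per_bag maps a positive-bag id to the list of f_i values of
    its patterns. Returns bag id -> list of imputed labels in {-1, +1}."""
    labels = {}
    for bag_id, outputs in outputs_per_bag.items():
        y = [1 if fi > 0 else -1 for fi in outputs]
        if all(yi == -1 for yi in y):            # bag constraint violated
            i_star = max(range(len(outputs)), key=outputs.__getitem__)
            y[i_star] = 1                         # force one witness
        labels[bag_id] = y
    return labels

print(update_labels({"B1": [-0.3, 0.7], "B2": [-0.9, -0.1]}))
# {'B1': [-1, 1], 'B2': [-1, 1]}
```

Note how bag B2, whose patterns all received negative outputs, still ends up with exactly one imputed positive label, as the bag constraint requires.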
There are many possibilities to refine the above heuristic strategy, for example, by
starting from different initial conditions, by using branch and bound techniques to
explore larger parts of the discrete part of the search space, by performing stochastic updates (simulated annealing) or by maintaining probabilities on the integer
variables in the spirit of deterministic annealing. However, we have been able to
achieve competitive results even with the simpler optimization heuristics, which
validate the maximum margin formulation of SVM. We will address further algorithmic
improvements in future work.

              MUSK1   MUSK2
  EM-DD [12]   84.8    84.9
  DD [9]       88.0    84.0
  MI-NN [10]   88.9    82.5
  IAPR [4]     92.4    89.2
  mi-SVM       87.4    83.6
  MI-SVM       77.9    84.3

Table 1: Accuracy results for various methods on the MUSK data sets.
6
Experimental Results
We have performed experiments on various data sets to evaluate the proposed techniques and compare them to other methods for MIL. As a reference method we
have implemented the EM Diverse Density (EM-DD) method [12], for which very
competitive results have been reported on the MUSK benchmark.¹
6.1
MUSK Data Set
The MUSK data sets are the benchmark data sets used in virtually all previous
approaches and have been described in detail in the landmark paper [4]. Both
data sets, MUSK1 and MUSK2, consist of descriptions of molecules using multiple
low-energy conformations. Each conformation is represented by a 166-dimensional
feature vector derived from surface properties. MUSK1 contains on average approximately 6 conformations per molecule, while MUSK2 has on average more than
60 conformations in each bag. The averaged results of ten 10-fold cross-validation
runs are summarized in Table 1. The SVM results are based on an RBF kernel
K(x, y) = exp(-gamma ||x - y||^2) with coarsely optimized gamma. For both MUSK1 and
MUSK2 data sets, mi-SVM achieves competitive accuracy values. While MI-SVM
outperforms mi-SVM on MUSK2, it is significantly worse on MUSK1. Although
both methods fail to achieve the performance of the best method (iterative APR)²,
they compare favorably with other approaches to MIL.
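The RBF kernel used above can be evaluated directly; the gamma value in this sketch is illustrative, not the coarsely optimized one from the experiments:

```python
import math

def rbf_kernel(x, y, gamma):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((0.0, 0.0), (0.0, 0.0), 0.5))  # 1.0 (identical points)
print(rbf_kernel((1.0, 0.0), (0.0, 0.0), 0.5))  # exp(-0.5), about 0.6065
```

Because K depends only on the distance ||x - y||, it can replace the inner product <w, x> in either heuristic without changing the integer-variable updates.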
6.2
Automatic Image Annotation
We have generated new MIL data sets for an image annotation task. The original
data are color images from the Corel data set that have been preprocessed and
segmented with the Blobworld system [2]. In this representation, an image consists
of a set of segments (or blobs), each characterized by color, texture and shape
descriptors. We have utilized three different categories ("elephant", "fox", "tiger")
in our experiments. In each case, the data sets have 100 positive and 100 negative
example images. The latter have been randomly drawn from a pool of photos of
other animals. Due to the limited accuracy of the image segmentation, the relatively
small number of region descriptors, and the small training set size, this ends up being
quite a hard classification problem. We are currently investigating alternative image
¹ However, the description of EM-DD in [12] seems to indicate that the authors used
the test data to select the optimal solution obtained from multiple runs of the algorithm.
In the pseudo-code formulation of EM-DD, D_i is used to compute the error for the i-th
data fold, where it should in fact be D_t = D - D_i (using the notation of [12]). We have
used the corrected version of the algorithm in our experiments and have obtained accuracy
numbers using EM-DD that are more in line with previously published results.
² Since the IAPR (iterative axis-parallel rectangle) methods in [4] have been specifically
designed and optimized for the MUSK classification task, the superiority of APR should
not be interpreted as a failure.
  Data Set                                   mi-SVM                MI-SVM
  Category   inst/feat   EM-DD    linear   poly   rbf    linear   poly   rbf
  Elephant   1391/230     78.3     82.2    78.1   80.0    81.4    79.0   73.1
  Fox        1320/230     56.1     58.2    55.2   57.9    57.8    59.4   58.8
  Tiger      1220/230     72.1     78.4    78.1   78.9    84.0    81.6   66.6

Table 2: Classification accuracy of different methods on the Corel image data sets.
  Data Set                                   mi-SVM                MI-SVM
  Category   inst/feat   EM-DD    linear   poly   rbf    linear   poly   rbf
  TST1       3224/6668    85.8     93.6    92.5   90.4    93.9    93.8   93.7
  TST2       3344/6842    84.0     78.2    75.9   74.3    84.5    84.4   76.4
  TST3       3246/6568    69.0     87.0    83.3   69.0    82.2    85.1   77.4
  TST4       3391/6626    80.5     82.8    80.0   69.6    82.4    82.9   77.3
  TST7       3367/7037    75.4     81.3    78.7   81.3    78.0    78.7   64.5
  TST9       3300/6982    65.5     67.5    65.6   55.2    60.2    63.7   57.0
  TST10      3453/7073    78.5     79.6    78.3   52.6    79.5    81.0   69.1

Table 3: Classification accuracy of different methods on the TREC9 document
categorization sets.
representations in the context of applying MIL to content-based image retrieval
and automated image indexing, for which we hope to achieve better (absolute)
classification accuracies. However, these data sets seem legitimate for a comparative
performance analysis. The results are summarized in Table 2. They show that both
mi-SVM and MI-SVM achieve a similar accuracy and outperform EM-DD by a few
percent. While MI-SVM performed marginally better than mi-SVM, both heuristic
methods were susceptible to nearby local minima. Evidence of this effect
was observed through experimentation with asynchronous updates, as described in
Section 5, where we varied the number of integer variables updated at each iteration.
6.3
Text Categorization
Finally, we have generated MIL data sets for text categorization. Starting from
the publicly available TREC9 data set, also known as OHSUMED, we have split
documents into passages using overlapping windows of at most 50 words each.
The original data set consists of several years of selected MEDLINE articles. We
have worked with the 1987 data set used as training data in the TREC9 filtering
task which consists of approximately 54,000 documents. MEDLINE documents are
annotated with MeSH terms (Medical Subject Headings), each defining a binary
concept. The total number of MeSH terms in TREC9 was 4903. While we are
currently performing a larger scale evaluation of MIL techniques on the full data
set, we report preliminary results here on a smaller, randomly subsampled data
set. We have been using the first seven categories of the pre-test portion with at
least 100 positive examples. Compared to the other data sets the representation is
extremely sparse and high-dimensional, which makes this data an interesting additional benchmark. Again, using linear and polynomial kernel functions, which are
generally known to work well for text categorization, both methods show improved
performance over EM-DD in almost all cases. No significant difference between the
two methods is clearly evident for the text classification task.
7
Conclusion and Future Work
We have presented a novel approach to multiple-instance learning based on two
alternative generalizations of the maximum margin idea used in SVM classification.
Although these formulations lead to hard mixed-integer problems, even simple local optimization heuristics already yield quite competitive results compared to the
baseline approach. We conjecture that better optimization techniques, which can,
for example, avoid unfavorable local minima, may further improve the classification
accuracy. Ongoing work will also extend the experimental evaluation to include
larger scale problems.
As far as the MIL research problem is concerned, we have considered a wider range
of data sets and applications than is usually done and have been able to obtain
very good results across a variety of data sets. We strongly suspect that many
MIL methods have been optimized to perform well on the MUSK benchmark and
we plan to make the data sets used in the experiments available to the public to
encourage further empirical comparisons.
Acknowledgments
This work was sponsored by an NSF-ITR grant, award number IIS-0085836.
References
[1] P. Auer. On learning from multi-instance examples: Empirical evaluation of a theoretical approach. In Proc. 14th International Conf. on Machine Learning, pages
21-29. Morgan Kaufmann, San Francisco, CA, 1997.
[2] C. Carson, M. Thomas, S. Belongie, J. M. Hellerstein, and J. Malik. Blobworld: A
system for region-based image indexing and retrieval. In Proceedings Third International Conference on Visual Information Systems. Springer, 1999.
[3] A. Demirez and K. Bennett. Optimization approaches to semisupervised learning.
In M. Ferris, O. Mangasarian, and J. Pang, editors, Applications and Algorithms of
Complementarity. Kluwer Academic Publishers, Boston, 2000.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the multiple instance
problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997.
[5] T. Gartner, P. A. Flach, A. Kowalczyk, and A. J. Smola. Multi-instance kernels. In
Proc. 19th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2002.
[6] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings 16th International Conference on Machine Learning, pages
200-209. Morgan Kaufmann, San Francisco, CA, 1999.
[7] P. M. Long and L. Tan. PAC learning axis-aligned rectangles with respect to product
distributions from multiple-instance examples. In Proc. Computational Learning Theory, 1996.
[8] O. Maron and T. Lozano-Perez. A framework for multiple-instance learning. In
Advances in Neural Information Processing Systems, volume 10. MIT Press, 1998.
[9] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification. In Proc. 15th International Conf. on Machine Learning, pages 341-349. Morgan
Kaufmann, San Francisco, CA, 1998.
[10] J. Ramon and L. De Raedt. Multi-instance neural networks. In Proceedings of ICML-2000, Workshop on Attribute-Value and Relational Learning, 2000.
[11] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
[12] Qi Zhang and Sally A. Goldman. EM-DD: An improved multiple-instance learning
technique. In Advances in Neural Information Processing Systems, volume 14. MIT
Press, 2002.
Circuit Model of Short-Term Synaptic Dynamics
Shih-Chii Liu, Malte Boegershausen, and Pascal Suter
Institute of Neuroinformatics
University of Zurich and ETH Zurich
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland
[email protected]
Abstract
We describe a model of short-term synaptic depression that is derived
from a silicon circuit implementation. The dynamics of this circuit model
are similar to the dynamics of some present theoretical models of short-term depression, except that the recovery dynamics of the variable describing the depression are nonlinear and also depend on the presynaptic frequency. The equations describing the steady-state and transient responses of this synaptic model fit the experimental results obtained from
a fabricated silicon network consisting of leaky integrate-and-fire neurons and different types of synapses. We also show experimental data
demonstrating the possible computational roles of depression. One possible role of a depressing synapse is that the input can quickly bring the
neuron up to threshold when the membrane potential is close to the resting potential.
1 Introduction
Short-term synaptic dynamics have been observed in many parts of the cortical system [Stratford et al., 1998, Varela et al., 1997, Tsodyks et al., 1998]. The functionality
of short-term synaptic dynamics has been implicated in various cortical models [Senn
et al., 1998, Chance et al., 1998, Matveev and Wang, 2000], along with the processing
capabilities of a network with dynamic synapses [Tsodyks et al., 1998, Maass and Zador,
1999]. The introduction of these dynamic synapses into hardware implementations of recurrent neuronal networks allows a wide range of operating regimes, especially in the case
of time-varying inputs.
In this work, we describe a model that was derived from a circuit implementation of short-term depression. The circuit implementation was initially described by [Rasche and Hahnloser, 2001], but the dynamics were not analyzed in their work. We also compare the
dynamics of the circuit model of depression with the equations of one of the theoretical
models frequently used in network simulations [Abbott et al., 1997,Varela et al., 1997] and
show examples of transient and steady-state responses of this synaptic circuit to inputs of
different statistical distributions.
This circuit has been included in a silicon network of leaky integrate-and-fire neurons together with other short-term dynamic synapses like facilitation synapses. We also show
experimental data from the chip that demonstrate the possible computational roles of depression. We postulate that one possible role of depression is to bring the neuron's response
quickly up to threshold if the membrane potential of the neuron was close to the resting
potential. We also mapped a proposed cortical model of direction-selectivity that uses depressing synapses onto this chip. The results are qualitatively similar to the results obtained
in the original work [Chance et al., 1998].
The similarity of the circuit responses to the responses from Abbott and colleagues' synaptic model means that we can use these VLSI networks of integrate-and-fire (I/F) neurons as
an alternative to computer simulations of dynamical networks composed of large numbers
of integrate-and-fire neurons using synapses with different time constants. The outputs of
such networks can also be used to interface with neural wetware. An infrastructure for a reprogrammable, reconfigurable, multi-chip neuronal system is being developed along with
a user-defined interface so that the system is easily accessible to a naive user.
2 Comparisons between Models of Depression
We compare the circuit model with the theoretical model from [Abbott et al., 1997] describing synaptic depression and facilitation. Similar comparisons with [Tsodyks and Markram,
1997] give the same conclusions. Here, we only describe the circuit model for synaptic
depression. The equivalent model for facilitation is described elsewhere [Liu, 2002].
2.1 Theoretical Model of Depression
In the model from [Abbott et al., 1997], the synaptic strength is described by g_s D, where
D is a variable between 0 and 1 that describes the amount of depression (D = 1 means no
depression) and g_s is the maximum synaptic strength. The recovery dynamics of D are

    tau_d dD/dt = 1 - D,                                        (1)

where tau_d is the recovery time of the depression. The update equation for D right after a
spike at time t_sp is

    D -> d D,                                                   (2)

where d (< 1) is the factor by which D is decreased right after the spike and t_sp is
the time of the spike. The average steady-state value of depression for a regular spike train
with a rate f is

    D_ss = (1 - e^{-1/(f tau_d)}) / (1 - d e^{-1/(f tau_d)}).   (3)
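As a numerical check, the steady-state expression of Eq. 3 can be compared with a direct simulation of Eqs. 1 and 2; the update is taken to be multiplicative (D -> dD), and the parameter values below are illustrative only:

```python
import math

def steady_state_D(f, tau_d, d, n_spikes=2000):
    """Iterate Eqs. 1-2 for a regular spike train of rate f and return D
    just before a spike: multiply by d at each spike, then let D recover
    toward 1 for an interval 1/f."""
    D = 1.0
    for _ in range(n_spikes):
        D *= d                                    # Eq. 2: D -> d*D at a spike
        # Eq. 1 between spikes: D(t) = 1 - (1 - D) * exp(-t / tau_d)
        D = 1.0 - (1.0 - D) * math.exp(-1.0 / (f * tau_d))
    return D

f, tau_d, d = 20.0, 0.3, 0.8
closed_form = (1 - math.exp(-1 / (f * tau_d))) / (1 - d * math.exp(-1 / (f * tau_d)))
print(abs(steady_state_D(f, tau_d, d) - closed_form) < 1e-9)  # True
```

The iteration converges geometrically to the fixed point, which is exactly the closed-form value in Eq. 3.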
2.2 Circuit Model of Depressing Synapse
In this circuit model of synaptic depression, the equation that describes the recovery
dynamics of the depressing variable D is nonlinear. This nonlinearity comes about because
the exponential dynamics in Eq. 1 were replaced with the dynamics of the current through
a single diode-connected transistor. Hence, the equation describing the recovery of D (derived from the circuit in the region where a transistor operates in the subthreshold region, that is, where
the current is exponential in the gate voltage of the transistor) can be formulated as

    tau_d' dD/dt = 1 - D^kappa,                                 (4)

where tau_d' is the equivalent of tau_d in Eq. 1 and kappa (a transistor parameter) is less than 1.
The maximum value of D is 1. The update equation remains as before:

    D -> d D.                                                   (5)
[Figure 1 graphics: (a) circuit schematic with transistors M1-M7, nodes V_pre, V_d, V_a, V_x, V_gain, currents I_d, I_r, I_syn, and capacitors C and C2; (b) traces of V_x for V_d = 0.26, 0.28, 0.3 V.]
Figure 1: Schematic for a depressing synapse circuit and responses to a regular input spike
train. (a) Depressing synapse circuit. A bias voltage determines the synaptic conductance,
while the depression term D is exponential in the voltage V_x. The subcircuit
consisting of transistors M1, M2, and M3 controls the dynamics of V_x. The presynaptic
input V_pre goes to the gate terminal of M2, which acts like a switch. When there is a presynaptic
spike, a quantity of charge (determined by V_d) is removed from the node V_x. In between
spikes, V_x recovers to the voltage V_a through the diode-connected transistor M1. When
there is no spike, V_x is around V_a. When the presynaptic input comes from a regular spike
train, V_x decreases with each spike and recovers in between spikes. It reaches a steady-state value, as shown in (b). During the spike, transistor M4 turns on and the synaptic
weight current I_d charges up the membrane potential of the neuron through the current-mirror circuit consisting of M5-M7 and the capacitor C2. We can convert this current
source into a synaptic current I_syn with some gain and a 'time constant' by adjusting the
voltage V_gain, which sets the decay dynamics of I_syn through the current-mirror equation. In a normal synapse circuit (that is, one without short-term dynamics), the synaptic
weight is controlled by an external bias voltage. (b) Input spike train at a
frequency of 20 Hz (bottom curve) and corresponding response V_x (top curve) of the circuit
for V_d = 0.26, 0.28, 0.3 V. The diode-connected transistor M1 has nonlinear dynamics. The
recovery time of the depressing variable depends on the distance of the present value of V_x
from V_a. The recovery rate of V_x increases for a larger difference between V_x and V_a.
2.2.1 Circuit
Equations 4 and 5 are derived from the circuit in Fig. 1. The operation of this circuit is
described in the caption. The detailed analysis leading to the differential equations for D
is described in [Liu, 2002]. The voltage V_x codes for D. The conductance is set by a
weight bias, while the dynamics of D are set by both V_d and V_a. The time taken for the
present value of V_x to return to V_a is determined by the current dynamics of the diode-connected transistor M1 and by V_a; the recovery time constant tau_d' of D is thus set by V_a.
The synaptic weight is described by the current I_d in Fig. 1(a):

    I_d = I_w D,                                                (6)

where I_w is the synaptic strength and D = e^{kappa (V_x - V_a)/U_T}. The synaptic
current to the neuron is then a current source I_d which lasts for the duration of the pulse
width of the presynaptic spike. However, we can set a longer time constant for the synaptic
current by adjusting V_gain, which controls the current-mirror circuit described in the
caption of Fig. 1.
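The subthreshold relation invoked in this derivation (drain current exponential in the gate voltage) can be illustrated numerically; the constants below are generic illustrative values, not those of the fabricated chip:

```python
import math

U_T = 0.0256          # thermal voltage at room temperature, volts
kappa = 0.7           # illustrative subthreshold slope factor (< 1)
I0 = 1e-15            # illustrative pre-exponential current, amps

def subthreshold_current(V_gate):
    # I = I0 * exp(kappa * V_gate / U_T): exponential in the gate voltage
    return I0 * math.exp(kappa * V_gate / U_T)

# An increase of (U_T / kappa) * ln(10) in gate voltage scales the current by 10x.
dV = U_T / kappa * math.log(10.0)
ratio = subthreshold_current(0.3 + dV) / subthreshold_current(0.3)
print(round(ratio, 6))  # 10.0
```

This exponential voltage-to-current relation is what makes a fixed charge packet removed from V_x act as a multiplicative update on D.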
[Figure 2 graphics: (a) input spike train; (b) V_m (mV) traces for Abbott's model and the circuit model; (c) D traces; time axis 0-4000 ms.]
Figure 2: Comparison between the outputs of the two models of depression. An optimization algorithm was used to determine the parameters of the models so that the least-square
error in the difference between the EPSPs from the two models was at a minimum. The
corresponding D distribution is shown in (c). (a) Poisson-distributed input with an initial
frequency of 40 Hz and an end frequency of 1 Hz. (b) The EPSP responses of both models
were identical. (c) The D values were almost identical except in the region
where D is close
to 1.
It is difficult to compute a closed-form solution for Eq. 4 for an arbitrary value of kappa (a transistor
parameter which is less than 1). This value also changes under different operating conditions
and between transistors fabricated in different processes. Hence, we solve for D in the
case of kappa = 1/2, given that the last spike occurred at t_sp:

    2 [sqrt(D(t_sp)) - sqrt(D(t))] + 2 ln[(1 - sqrt(D(t_sp))) / (1 - sqrt(D(t)))]
        = (t - t_sp) / tau_d'.                                  (7)

When D is far from its recovered value of 1, we can approximate its recovery dynamics by
tau_d' dD/dt = 1 (irrespective of kappa) and, solving for D, we get

    D(t) = D(t_sp) + (t - t_sp) / tau_d'.

In this regime, D follows a linear trajectory. Note that the same is true of Eq. 1 when
D << 1.
[Figure 3 graphics: (a) neuron response (V) vs. time (s) for a Poisson spike train; (b) EPSP amplitude (V) vs. frequency (Hz), with curves for V_d = 0.2 V, V_a = 1.01 V; V_d = 0.4 V, V_a = 1.03 V; V_d = 0.6 V, V_a = 1.15 V; and a non-depressing synapse.]
Figure 3: Transient EPSP responses to a 10 Hz Poisson-distributed train (a) and dependence
of steady-state EPSP responses on the input frequency for different values of depression (b).
The data were measured from the fabricated circuit. In (a), the amplitude of the EPSP decreases with each incoming input spike, clearly showing the effect of synaptic depression.
In (a), the EPSP amplitude depends on the occurrence of the previous spike. The asterisks
are the fits of the circuit model to the peak value of each EPSP. The fits give a kappa value
of 0.79. The input is the bottom curve of the plot. (b) Steady-state EPSP amplitude versus frequency for a Poisson-distributed input. The solid lines are fits from the theoretical
equation.
3 Comparison between Models
We compare the two models by looking at how D changes in response to a Poisson-distributed input whose frequency varied from 40 Hz to 1 Hz, as shown in Fig. 2. We used a
simple linear differential equation to describe the dynamics of the membrane potential V_m:

    tau_m dV_m/dt = -V_m + R_m I_syn,

where tau_m is the membrane time constant and I_syn is the synaptic current. We ran an optimization algorithm on the parameters in the two models so that the least-square error
between the EPSP outputs of both models was at a minimum. In this case, the EPSP responses were identical (Fig. 2(b)) and the corresponding values (Fig. 2(c)) were almost
identical except in the region where was close to the maximum value. We performed the
same comparison with Tsodyks and Markram?s model and the results were similar. Hence,
the circuit model can be used to describe short-term synaptic depression in a network simulation. However, the nonlinear recovery dynamics of the circuit model leads a different
functional dependence of the average steady-state EPSP on the frequency of a regular input
spike train.
4 Circuit Response
The data in the figures in the remainder of this paper are obtained from a fabricated
silicon network of aVLSI integrate-and-fire neurons of the type described in [Boahen,
1997, Van Schaik, 2001, Indiveri, 2000, Liu et al., 2001] with different types of synapses.
4.1 Transient Response
We first measured the transient response of the neuron when stimulated by a 10 Hz Poissondistributed input through the depressing synapse. We tuned the parameters of the synapse
and the leak current so that the membrane potential did not build up to threshold. This data
is shown in Fig. 3(a). The fit (marked with asterisks with in the figure) using Eq. 6 along
with computed from Eq. 7, describes the experimental data well.
4.2 Steady-State Response
The equation describing the dependence of the steady-state value of D on the presynaptic
frequency can easily be determined in the case of a regular spiking input of rate f by using
Eqs. 5 and 7. The resulting expression is somewhat complicated, but by using the reduced
dynamics expression (tau_d' dD/dt = 1), we obtain a simpler expression for D_ss:

    D_ss = 1 / ((1 - d) tau_d' f).                              (8)
% 4
This equation shows that the steady-state and hence, the steady-state EPSP amplitude is
inversely dependent on the presynaptic rate % . The form of the curve is similar to the results
obtained in the work of [Abbott et al., 1997] where the data can be fitted with Eq. 3.
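Assuming the reduced dynamics described above (linear recovery between spikes, multiplicative update at each spike), the inverse dependence of the steady-state depression on rate can be checked by iterating the per-spike map; the parameters are illustrative:

```python
def reduced_steady_state(f, tau_d_prime, d, n_spikes=500):
    """Per-spike map for the linear-recovery regime: between spikes D grows
    by 1/(f*tau_d'), at a spike D is multiplied by d. Returns D just before
    a spike, after the map has converged to its fixed point."""
    D = 0.0
    for _ in range(n_spikes):
        D = d * D + 1.0 / (f * tau_d_prime)
    return D

f, tau, d = 50.0, 1.0, 0.8
D_ss = reduced_steady_state(f, tau, d)
print(abs(D_ss - 1.0 / ((1.0 - d) * tau * f)) < 1e-9)  # True
```

Doubling the input rate halves the fixed point, which is the inverse-frequency dependence seen in the steady-state EPSP data.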
From the chip, we measured the steady-state EPSP amplitudes using a Poisson-distributed
train whose frequency varied over a range of 3 Hz to 50 Hz in steps of 1 Hz. Each frequency
interval lasted 15 s and the EPSP amplitude was averaged in the last 5 s to obtain the steadystate value. Four separate trials were performed and the resulting mean and the variance of
the measurements are shown in Fig. 3(b). The parameters from the fits using the response
data to a regular spiking input were used to generate the fitted curve to the data in Fig. 3(b).
The values from the fits give recovery time constants from 1-3 s and d values varying
between 0.02 and 0.04.
5 Role of Synaptic Depression
Different computational roles have been proposed for networks which incorporate synaptic
depression. In this section, we describe some measurements which illustrate the postulated
roles of depression. The direction-selective model of [Chance et al., 1998], which makes
use of the phase-advance property of depressing synapses, has been mapped onto a
neuron on our chip, and the direction-selective results were qualitatively similar.
Depressing synapses have also been implicated in cortical gain control [Abbott et al., 1997].
A depressing synapse acts like a transient detector to changes in frequency (or a first derivative filter). A synapse with short-term depression responds equally to equal percentage rate
changes in its input at different firing rates. We demonstrate the gain-control mechanism
of short-term depression by measuring the neuron's response to step changes in input frequency from 10 Hz to 20 Hz to 40 Hz. Each step change represents the same rate change
in input frequency. These results are shown in Fig. 4(a) for a regular train and in (b) for a
Poisson-distributed train. Each frequency epoch lasted 3 s so the synaptic strength should
have reached steady-state before the next increase in input frequency.
For both panels in Fig. 4, the top curve shows the response of the neuron when stimulated
by the input (bottom curve) through a depressing synapse (top curve) and a non-depressing
synapse (middle curve). Figure 4(a) clearly shows the transient increase in the firing
rate of the neuron right after each step increase in input frequency when stimulated through a depressing synapse, and the subsequent adaptation of its firing rate to a steady-state value.
The steady-state firing rate of the neuron with a depressing synapse is less dependent on the
[Figure 4 here: membrane potential Vm (V) vs. Time (s); panel (a) regular spike train, panel (b) Poisson spike train.]
Figure 4: Response of neuron to changes in input frequency (bottom curve) when stimulated through a depressing synapse (top curve) and a non-depressing synapse (middle
curve). The neuron was stimulated for three frequency intervals (10 Hz to 20 Hz to 40 Hz)
lasting 3 s each. (a) Response of neuron using a regular spiking input. The steady-state firing rate of the neuron increased almost linearly with the input frequency when stimulated
through the non-depressing synapse. In the depressing-synapse curve, there is a transient
increase in the neuron?s firing rate before the rate adapted to steady-state. (b) Response of
neuron using a Poisson-distributed input. The parameters for both types of synapses were
tuned so that the steady-state firing rates were about the same at the end of each frequency
interval for both synapses. Notice that during the 10 Hz interval, the neuron quickly built
up to threshold if it was stimulated through the depressing synapse.
absolute input frequency when compared to the firing rate of the neuron when stimulated
through the non-depressing synapse. In the latter case, the firing rate of the neuron is
approximately linear in the input rate.
The data in Fig. 4(b) obtained from a Poisson-distributed train show an obvious difference
between the responses of the depressing and non-depressing synapse. In the depressing-synapse case, the neuron quickly reached threshold for a 10 Hz input, while it remained subthreshold in the non-depressing case until the input had increased to 20 Hz. This suggests
that a potential role of a depressing synapse is to drive a neuron quickly to threshold when
its membrane potential is far away from its threshold.
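The equal response to equal percentage rate changes can be reproduced with the same kind of simple depression model. This is again an illustrative sketch with hypothetical parameters, not the measured chip: after each doubling of a regular input rate, the instantaneous synaptic drive jumps by roughly the same factor before adapting back toward a nearly rate-independent steady state.

```python
import math

def depressing_drive(rates, epoch_s=3.0, tau_rec=2.0, u=0.5):
    """Per-spike synaptic drive u*x*rate for a regular train whose rate
    steps through the given epochs (e.g. 10 -> 20 -> 40 Hz)."""
    x, t, t_last, drives = 1.0, 0.0, 0.0, []
    for r in rates:
        for _ in range(int(epoch_s * r)):
            t += 1.0 / r
            x = 1.0 - (1.0 - x) * math.exp(-(t - t_last) / tau_rec)  # recovery
            drives.append((r, u * x * r))   # instantaneous drive at this spike
            x *= (1.0 - u)                  # depletion
            t_last = t
    return drives

drives = depressing_drive([10.0, 20.0, 40.0])
last = {r: [d for rr, d in drives if rr == r][-1] for r in (10.0, 20.0, 40.0)}
first = {r: [d for rr, d in drives if rr == r][0] for r in (20.0, 40.0)}
jump_20 = first[20.0] / last[10.0]   # transient jump at the 10 -> 20 Hz step
jump_40 = first[40.0] / last[20.0]   # transient jump at the 20 -> 40 Hz step
```

With these parameters the two jumps come out nearly identical, while the steady-state drives at the end of each epoch are almost equal: the synapse signals relative rather than absolute rate changes.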
6 Conclusion
We described a model of synaptic depression that was derived from a circuit implementation. This circuit model has nonlinear recovery dynamics in contrast to current theoretical
models of dynamic synapses. It gives qualitatively similar results when compared to the
model of Abbott and colleagues. Measured data from a chip with aVLSI integrate-and-fire
neurons and dynamic synapses show that this network can be used to simulate the responses
of dynamic networks with short-term dynamic synapses. Experimental results suggest that
depressing synapses can be used to drive a neuron quickly up to threshold if its membrane
potential is at the resting potential. The silicon networks provide an alternative to computer
simulation of spike-based processing models with different time constant synapses because
they run in real-time and the computational time does not scale with the size of the neuronal
network.
Acknowledgments
This work was supported in part by the Swiss National Foundation Research SPP grant.
We acknowledge Kevan Martin, Pamela Baker, and Ora Ohana for many discussions on
dynamic synapses.
References
[Abbott et al., 1997] Abbott, L., Sen, K., Varela, J., and Nelson, S. (1997). Synaptic depression and cortical gain control. Science, 275(5297):220–223.
[Boahen, 1997] Boahen, K. A. (1997). Retinomorphic Vision Systems: Reverse Engineering the Vertebrate Retina. PhD thesis, California Institute of Technology, Pasadena, CA.
[Chance et al., 1998] Chance, F., Nelson, S., and Abbott, L. (1998). Synaptic depression and the temporal response characteristics of V1 cells. Journal of Neuroscience, 18(12):4785–4799.
[Indiveri, 2000] Indiveri, G. (2000). Modeling selective attention using a neuromorphic aVLSI device. Neural Computation, 12(12):2857–2880.
[Liu, 2002] Liu, S.-C. (2002). Dynamic synapses and neuron circuits for mixed-signal processing. EURASIP Journal on Applied Signal Processing: Special Issue. Submitted.
[Liu et al., 2001] Liu, S.-C., Kramer, J., Indiveri, G., Delbrück, T., Burg, T., and Douglas, R. (2001). Orientation-selective aVLSI spiking neurons. Neural Networks: Special Issue on Spiking Neurons in Neuroscience and Technology, 14(6/7):629–643.
[Maass and Zador, 1999] Maass, W. and Zador, A. (1999). Computing and learning with dynamic synapses. In Maass, W. and Bishop, C. M., editors, Pulsed Neural Networks, chapter 6, pages 157–178. MIT Press, Boston, MA. ISBN 0-262-13350-4.
[Matveev and Wang, 2000] Matveev, V. and Wang, X. (2000). Differential short-term synaptic plasticity and transmission of complex spike trains: to depress or to facilitate? Cerebral Cortex, 10(11):1143–1153.
[Rasche and Hahnloser, 2001] Rasche, C. and Hahnloser, R. (2001). Silicon synaptic depression. Biological Cybernetics, 84(1):57–62.
[Senn et al., 1998] Senn, W., Segev, I., and Tsodyks, M. (1998). Reading neuronal synchrony with depressing synapses. Neural Computation, 10(4):815–819.
[Stratford et al., 1998] Stratford, K., Tarczy-Hornoch, K., Martin, K., Bannister, N., and Jack, J. (1998). Excitatory synaptic inputs to spiny stellate cells in cat visual cortex. Nature, 382:258–261.
[Tsodyks and Markram, 1997] Tsodyks, M. and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA, 94(2).
[Van Schaik, 2001] Van Schaik, A. (2001). Building blocks for electronic spiking neural networks. Neural Networks, 14(6/7):617–628. Special Issue on Spiking Neurons in Neuroscience and Technology.
[Varela et al., 1997] Varela, J., Sen, K., Gibson, J., Fost, J., Abbott, L., and Nelson, S. (1997). A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. Journal of Neuroscience, 17(20):7926–7940.
Learning with Multiple Labels
Rong Jin*
*School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
[email protected]
Zoubin Ghahramani†
†Gatsby Computational Neuroscience Unit
University College London
London WCIN 3AR, UK
[email protected]
Abstract
In this paper, we study a special kind of learning problem in which
each training instance is given a set of (or distribution over)
candidate class labels and only one of the candidate labels is the
correct one. Such a problem can occur, e.g., in an information
retrieval setting where a set of words is associated with an image,
or if classes labels are organized hierarchically. We propose a
novel discriminative approach for handling the ambiguity of class
labels in the training examples. The experiments with the proposed
approach over five different UCI datasets show that our approach is
able to find the correct label among the set of candidate labels and
actually achieve performance close to the case when each training
instance is given a single correct label. In contrast, naIve methods
degrade rapidly as more ambiguity is introduced into the labels.
1
Introduction
Supervised and unsupervised learning problems have been extensively studied in the
machine learning literature. In supervised classification each training instance is
associated with a single class label, while in unsupervised classification (i.e.
clustering) the class labels are not known. There has recently been a great deal of
interest in partially- or semi-supervised learning problems, where the training data is
a mixture of both labeled and unlabelled cases. Here we study a new type of semi-supervised learning problem.
We generalize the notion of supervision by thinking of learning problems where
multiple candidate class labels are associated with each training instance, and it is
assumed that only one of the candidates is the correct label. For a supervised
classification problem, the set of candidate class labels for every training instance
contains only one label, while for an unsupervised learning problem, the set of
candidate class labels for each training instance counts in all the possible class
labels. For a learning problem with the mixture of labeled and unlabelled training
data, the number of candidate class labels for every training instance can be either
one or the total number of different classes.
Here we study the general setup, i.e. a learning problem when each training instance
is assigned to a subset of all the class labels (later, we further generalize this to
include arbitrary distributions over the class labels). For example, there may be 10
different classes and each training instance is given two candidate class labels and
one of the two given labels is correct. This learning problem is more difficult than
supervised classification because for each training example we don't know which
class among the given set of candidate classes is actually the target. For easy
reference, we called this class of learning problems 'multiple-label' problems.
In practice, many real problems can be formalized as a 'multiple-label' problem. For
example, the problem of having several different class labels for a single training
example can be caused by the disagreement between several assessors. 1 Consider
the scenario when two assessors are hired to label the training data and sometimes
the two assessors give different class labels to the same training example. In this
case, we will have two class labels for a single training instance and don't know
which, if any, is actually correct. Another scenario that can cause multiple class
labels to be assigned to a single training example is when there is a hierarchical
structure over the class labels and some of the training data are given the labels of
the internal nodes in the hierarchy (i.e. superclasses) instead of the labels of the leaf
nodes (subclasses). Such hierarchies occur, for example, in bioinformatics where
proteins are regularly classified into superfamilies and families. For such
hierarchical labels, we can treat the label of internal nodes as a set of the labels on
the leaf nodes.
2
Related Work
First of all, we need to distinguish this 'multiple-label' problem from the problem
where the classes are not mutually exclusive and therefore each training example is
allowed several class labels [4]. There, even though each training example can have
multiple class labels, all the assigned class labels are actually correct labels while in
'multiple-label' problems only one of the assigned multiple labels is the target label
for the training instance.
The essential difficulty of 'multiple-label' problems comes from the ambiguity in
the class labels for training data, i.e. among the several labels assigned to every
training instance only one is presumed to be the correct one and unfortunately we
are not informed which one is the target label. A similar difficulty appears in the
problem of classification from labeled and unlabeled training data. The difference
between the 'multiple-label' problem and the labeled/unlabeled classification
problem is that in the former only a subset of the class labels can be the candidate
for the target label, while in the latter any class label can be the candidate. As will
be shown later, this constraint makes it possible for us to build up a purely
discriminative approach while for learning problems using unlabeled data people
usually take a generative approach and model properties of the input distribution.
In contrast to the 'multiple-label' problem, there is a set of problems named
'multiple-instance' problems [3] where instances are organized into 'bags' of
several instances, and a class label is tagged for every bag of instances. In the
'multiple-instance' problem, at least one of the instances within each bag
corresponds to the label of the bag and all other instances within the bag are just
noise. The difference between 'multiple-label' problems and 'multiple-instance'
problems is that for 'multiple-label' problems the ambiguity lies on the side of class
labels while for 'multiple-instance' problem the ambiguity comes from the instances
within the bag.
1 Observer disagreement has been modeled using the EM algorithm [1]. Our multiple-label framework differs in that we don't know which observer assigned which label to
each case. This would be an interesting direction to extend our framework.
The most related work to this paper is [6], where a similar problem is studied using
the logistic regression method. Our framework is completely general for any
discriminative model and incorporates non-uniform 'prior' on the labels.
3
Formal Description of the 'Multiple-label' Problem
As described in the introduction, for a 'multiple-label' problem, each training
instance is associated with a set of candidate class labels, only one of which is the
target label for that instance. Let Xi be the input for the i-th training example, and Si
be the set of candidate class labels for the i-th training example. Our goal is to find
the model parameters θ* ∈ Θ in some class of models M, i.e. a parameterized
classifier with parameters θ which maps inputs to labels, so that the predicted class
label y for the i-th training example has a high probability of being a member of the set
Si. More formally, using the maximum likelihood criterion and the assumption of
i.i.d. assignments, this goal can be simply stated as

θ* = arg max_θ Σ_i log p(y ∈ Si | x_i, θ) = arg max_θ Σ_i log Σ_{y∈Si} p(y | x_i, θ)    (1)
4
Description of the Discriminative Model for the
'Multiple-label' Problem
Before discussing the discriminative model for the 'multiple-label' problem, let's
look at the standard discriminative model for supervised classification. Let p̂(y | x_i)
stand for some given conditional distribution of class labels for the training instance
x_i and p(y | x_i, θ) be the model-based conditional distribution for the training data x_i
to have the class label y. A common and sensible criterion for finding model
parameters θ* is to minimize the KL divergence between the given conditional
distributions and the model-based distributions, i.e.

θ* = arg min_θ Σ_i Σ_y p̂(y | x_i) log [ p̂(y | x_i) / p(y | x_i, θ) ]    (2)
For supervised learning problems, the class label for every training instance is
known. Therefore, the given conditional distribution of the class label for every
training instance is a delta function, p̂(y | x_i) = δ(y, y_i), where y_i is the given class
label for the i-th instance. With this, it can be easily shown that Eqn. (2) simplifies
to the maximum likelihood criterion. For the 'multiple-label' problem, each
training instance x_i is assigned a set of candidate class labels Si and therefore Eqn.
(2) can be rewritten as:
θ* = arg min_θ Σ_i Σ_{y∈Si} p̂(y | x_i) log [ p̂(y | x_i) / p(y | x_i, θ) ]    (3)

with the constraints

∀i: Σ_{y∈Si} p̂(y | x_i) = 1.    (4)
In the 'multiple-label' problem the distribution of class labels p̂(y | x_i) is unknown
except for the constraint that the target class label for every training example is a
member of the corresponding set of candidate class labels. A simple solution to the
problem of unknown label distribution is to assume it is uniform, i.e.
p̂(y | x_i) = p̂(y' | x_i) for any y, y' ∈ Si. Then, Eqn. (3) can be simplified to:
θ* = arg min_θ { Σ_i (1/|Si|) Σ_{y∈Si} log [ (1/|Si|) / p(y | x_i, θ) ] } = arg max_θ { Σ_i (1/|Si|) Σ_{y∈Si} log p(y | x_i, θ) }    (5)
which corresponds to minimizing the KL divergence (2) to a uniform distribution over Si. For
the case of multiple assessors giving differing labels to the data, discussed in the
introduction, this corresponds to concatenating the labeled data sets. Standard
learning algorithms can be applied to learn the conditional model p(y | x, θ). For
later reference, we call this simple idea the 'Naive Model'.
A better solution than the 'Naive Model' is to disambiguate the label association,
i.e. to find which label among the given set is more appropriate than the others and
use the appropriate label for training. It turns out that it is possible to apply the EM
algorithm [2] to accomplish this goal, resulting in a procedure which iterates
between disambiguating and classifying. Starting with the assumption that every
class label within the set is equally likely, we train a conditional model p(y | x, θ).
Then, with the help of this conditional model, we estimate the label distribution
p̂(y | x_i) for each data point. With these label distributions, we refit the conditional
model p(y | x, θ), and so on. More formally, this idea can be expressed as follows:
First, we estimate the conditional model based on the assumed or estimated label
distribution according to Eqn. (3). This step corresponds to the M-step in the EM
algorithm. Then, in the E-step, new label distributions are estimated by maximizing
Eqn. (3) w.r.t. p̂(y | x_i) under the constraints (4), resulting in:
p̂(y | x_i) = p(y | x_i, θ) / Σ_{y'∈Si} p(y' | x_i, θ)   if y ∈ Si,   and 0 otherwise.    (6)
Importantly, this procedure optimizes the objective function in Eqn. (1), by the
usual EM proof. The negative of the KL divergence in Eqn. (3) is a lower bound on
the log likelihood (1) by Jensen's inequality. Substituting Eqn. (6) for p̂(y | x_i) into
(3) we obtain equality. For easy reference, we call this model the 'EM Model'.
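The EM loop just described can be sketched with a softmax (maximum entropy) classifier as the conditional model p(y | x, θ). This is a minimal illustration, not the authors' code: the M-step here is a few gradient-ascent steps on the weighted log-likelihood rather than a full refit, and all names and settings are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def em_multiple_label(X, cand_sets, n_classes, n_em=15, n_grad=100, lr=0.1):
    """EM for the multiple-label problem: alternate between refitting the
    classifier (M-step) and re-estimating the label distribution over each
    candidate set (E-step, Eq. 6)."""
    n, d = X.shape
    mask = np.zeros((n, n_classes))
    for i, s in enumerate(cand_sets):
        mask[i, list(s)] = 1.0
    q = mask / mask.sum(axis=1, keepdims=True)  # start uniform over candidates
    W = np.zeros((d, n_classes))
    for _ in range(n_em):
        for _ in range(n_grad):                 # M-step: gradient ascent on
            p = softmax(X @ W)                  # the soft-target log-likelihood
            W += lr * X.T @ (q - p) / n
        p = softmax(X @ W) * mask               # E-step: renormalize the model
        q = p / p.sum(axis=1, keepdims=True)    # posterior over the candidates
    return W, q
```

On separable synthetic data with one random distractor label per instance, the estimated q concentrates on the correct label for most examples, which is exactly the disambiguation effect measured in the experiments below.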
In some 'multiple-label' problems, information on which class label within the set
Si is more likely to be the correct one can be obtained. For example, if three
assessors manually label the training data, in some cases two assessors will agree on
the class label and the third will not. We should give more weight to the labels that
are agreed on by two assessors and less weight to the labels that are chosen by only
one. To accommodate prior information on the class labels, we generalize the
previous framework so that the estimated label distribution p̂(y | x_i) has low
relative entropy with respect to the prior on the class labels. Therefore, the objective function
(1) and its EM bound (3) can be modified to be
θ* = arg min_θ { Σ_i Σ_{y∈Si} p̂(y | x_i) log [ p̂(y | x_i) / π_{i,y} ] − Σ_i Σ_{y∈Si} p̂(y | x_i) log p(y | x_i, θ) }    (7)

where π_{i,y} is the prior probability for the i-th training example to have class label y.
The first term in the objective function (7) encourages the estimated label
distribution to be consistent with the prior distribution of class labels and the second
term encourages the prediction of the model to be consistent with the estimated
label distribution. The objective (7) is an upper bound on −Σ_i log Σ_{y∈Si} π_{i,y} p(y | x_i, θ).
When there is no prior information about which class label within the given set is
preferable, we can set π_{i,y} = 1/|Si| and Eqn. (7) becomes

θ* = arg min_θ { Σ_i Σ_{y∈Si} p̂(y | x_i) log [ p̂(y | x_i) / (1/|Si|) ] − Σ_i Σ_{y∈Si} p̂(y | x_i) log p(y | x_i, θ) }
   = arg min_θ { Σ_i Σ_{y∈Si} p̂(y | x_i) log [ p̂(y | x_i) / p(y | x_i, θ) ] + Σ_i log |Si| }
   = arg min_θ { Σ_i Σ_{y∈Si} p̂(y | x_i) log [ p̂(y | x_i) / p(y | x_i, θ) ] }    (7')
Eqn. (7') is identical to Eqn. (3), which shows that when there is no prior
knowledge on the class label distribution, we revert back to the 'EM Model'.
Again we can optimize Eqn. (7) using the EM algorithm, estimating the label
distribution p̂(y | x_i) in the E-step and fitting any standard discriminative model for
p(y | x, θ) in the M-step. The label distribution that optimizes (7) in the E-step is:

p̂(y | x_i) = π_{i,y} p(y | x_i, θ) / Σ_{y'∈Si} π_{i,y'} p(y' | x_i, θ)   if y ∈ Si,   and 0 otherwise.

As we would expect, the label distribution p̂(y | x_i) trades off both the prior π_{i,y} and the model-based
prediction p(y | x_i, θ). We will call this model the 'EM+Prior Model'.
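The prior-weighted E-step can be written compactly (a sketch; the array names are ours): renormalize the product of prior and model posterior over each candidate set.

```python
import numpy as np

def e_step_with_prior(p_model, prior, mask):
    """Responsibilities for the 'EM+Prior Model': renormalize
    prior * model posterior over each candidate set.
    p_model, prior, mask: (n, n_classes) arrays; mask is 1 on Si."""
    w = prior * p_model * mask
    return w / w.sum(axis=1, keepdims=True)
```

With a uniform prior over the candidates this reduces exactly to the 'EM Model' E-step of Eq. (6); a prior that is larger on one candidate shifts the responsibility toward it.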
The 'EM+Prior Model' can also be interpreted from the viewpoint of a graphical
model. The basic idea is illustrated in Figure 1, where the random variable t_i represents the
event that the true label y_i belongs to the label set Si. For the 'EM+Prior' model, π_{i,y}
actually plays the role of a likelihood or noise model, where p(y ∈ Si | x_i, θ) in
Eqn. (1) is replaced as in Eqn. (8). From this point of view, generalizing to Bayesian
learning and regression is easy.

[Figure 1: Diagram for the graphical-model interpretation of the 'EM+Prior' model.]

P(t_i = 1 | x_i, θ) = Σ_{y∈Si} P(t_i = 1 | y) p(y | x_i, θ) = Σ_{y∈Si} π_{i,y} p(y | x_i, θ)    (8)

5
Experiments
The goal of our experiments is to answer the following questions:
1. Is the 'EM Model' better than the 'Naive Model'? The difference between the
'EM Model' and the 'Naive Model' for the 'multiple-label' problem is that the
'Naive Model' makes no effort to find the correct label within the given label set
while the 'EM Model' applies the EM algorithm to clarify the ambiguity in the class
label. Therefore, in this experiment, we need to justify empirically whether the
effort in disambiguating class labels is effective.
2. Will prior knowledge help the model? The difference between the 'EM Model'
and the 'EM+Prior Model' is that the 'EM+Prior Model' takes advantage of prior
knowledge on the distribution of class labels for instances. However, since
sometimes the prior knowledge on the class label can be misleading, we need to test
the robustness of the 'EM+Prior Model' to such noisy prior knowledge.
5.1
Experimental Data
Since there don't exist standard data sets with training instances assigned to
multiple class labels, we actually create several data sets with multiple class labels
from the UCI classification datasets. To make our experiments more realistic, we
tried two different methods of creating datasets with multiple class labels:
• Random Distractors. For every training instance, in addition to the original
assigned label, several randomly selected labels are added to the label candidate set.
We varied the number of added classes to test reliability of our algorithm.
• Naive Bayes Distractors. In the previous method, the added class labels are
randomly selected and therefore independent of the original class label. However,
we usually expect that distractors in the candidate set will be correlated with
the original label. To simulate this realistic situation, we use the output of a Naive
Bayes (NB) classifier as an additional member of the class label candidate set.¹
First, a NaIve Bayes classifier using Gaussian generation models is trained on the
dataset. Then, the trained NB classifier is asked to predict the class label of the
training data. When the output of the NB classifier differs from the original label, it
is added as a candidate label. Otherwise, a randomly selected label is added to the
candidate set. Since the NB classifier errors are not completely random, they should
have some correlation with the originally assigned labels.
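The random-distractor construction above can be sketched as follows (the function and variable names are ours, not from the paper's code):

```python
import numpy as np

def add_random_distractors(y, n_classes, n_extra=1, seed=0):
    """Build candidate label sets: the true label plus n_extra distinct
    randomly chosen distractor labels per training instance."""
    rng = np.random.default_rng(seed)
    cand_sets = []
    for yi in y:
        others = [c for c in range(n_classes) if c != yi]
        extra = rng.choice(others, size=n_extra, replace=False)
        cand_sets.append({int(yi), *(int(c) for c in extra)})
    return cand_sets
```

The NB-distractor variant replaces the random draw by the prediction of a trained Naive Bayes classifier whenever that prediction disagrees with the assigned label.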
In these experiments we chose a simple maximum entropy (ME) model [5] as the
basic discriminative model, which expresses the conditional probability p(y | x, θ) in an
exponential form, i.e. p(y | x, θ) = exp(θ_y · x) / Z(x), where x is the input feature vector and
Z(x) is the normalization constant which ensures that the conditional probabilities
over all different classes y sum to 1.
Table 1: Information about the five UCI datasets used in the experiments.

Class Name                        ecoli    wine     pendigit   iris     glass
Number of Instances               327      178      2000       154      204
Number of Classes                 5        3        10         3        5
Number of Features                7        13       16         14       10
% NB Output ≠ Assigned Label      15%      8%       22.3%      13.3%    16.6%
Error Rate for ME on clean
data (10-fold cross validation)   12.6%    3.7%     9%         5.7%     9.7%
Five different UCI datasets were selected as the testbed for the experiments.
Information about these datasets is listed in Table 1. For each dataset, the 10-fold
cross validation results for the ME model together with the percentage of time the
NB output differs from the originally assigned label are also listed in Table 1.

5.2
Experiment Results (I): 'Naive Model' vs. 'EM Model'
Table 2 lists the results for the 'Naive Model' and 'EM Model' over a varying
number of additional class labels created by the 'random distractor' and the 'Naive
Bayes distractor'. Since the 'wine' and 'iris' datasets only have 3 different classes, the
maximum number of additional class labels for these two data sets is 1. Therefore, there is no
experiment result for the case of 2 or 3 distractor class labels for 'wine' and 'iris'.
As shown in Table 2, for the random distractor, the 'EM Model' substantially
outperforms the 'Naive Model' in all cases. In particular, for the 'wine' and 'iris'
datasets, introducing an additional class label to every training instance leaves
only one class label out of the class label candidates, and yet the performance of
the 'EM Model' is still close to the case when there are no additional class labels.
1
Naive Bayes distractor should not be confused with the multiple-label Naive Model.
Meanwhile, the 'Naive Model' degrades significantly in both cases, i.e. from 3.7%
to 10.0% for 'wine' and 5.7% to 18.5% for 'iris'. Therefore, we can conclude that
the 'EM Model' is able to reduce the noise caused by randomly added class labels.
Table 2: Average 10-fold cross validation error rates for both the 'Naive Model' and the 'EM Model'.

                 1 extra label        2 extra labels       3 extra labels       1 extra label
                 (random distracter)  (random distracter)  (random distracter)  (NB distracter)
Class Name       Naive     EM         Naive     EM         Naive     EM         Naive     EM
ecoli            17.3%     13.6%      20.7%     14.9%      25.8%     18.3%      22.4%     14.6%
wine             10%       4.4%       -         -          -         -          15.7%     6.8%
pendigit         14.2%     8.9%       15.4%     9.4%       17.6%     11.7%      17.2%     15.4%
iris             18.5%     5.2%       -         -          -         -          18.5%     6.7%
glass            24.9%     12.9%      44.9%     12%        34.6%     33.5%      27.7%     20.6%
Secondly, we compare the performance of these two models over a more realistic
setup for the 'multiple-label' problem, where the distractor identity is correlated with
the true label (simulated by using the NB distractor). Table 1 gives the percentage
of times when the trained Naive Bayes classifier disagreed with the 'true' labels,
which is also the percentage of the additional class labels that are created by the
'Naive Bayes distracter'. The 'NB distracter' entries of Table 2 show the performance of these
two models when the additional class labels are introduced by the 'NB distracter'.
Again, the 'EM Model' is significantly better than the 'Naive Model'. For the datasets
'ecoli', 'wine' and 'iris', the averaged error rates of the 'EM Model' are very close
to the cases when there are no distractor class labels. Therefore, we can conclude
that the 'EM Model' is able to reduce the noise caused not only by random label
ambiguity but also by some systematic label ambiguity.
5.3
Experiment Results (II): 'EM Model' vs. 'EM+Prior Model'
Table 3: Average 10-fold cross validation error rates for the 'EM+Prior Model' over five UCI datasets.
                 1 extra label        2 extra labels       3 extra labels       1 extra label
                 (random distracter)  (random distracter)  (random distracter)  (NB distracter)
Class Name       Perfect   Noisy      Perfect   Noisy      Perfect   Noisy      Perfect   Noisy
ecoli            13.3%     13.3%      13.6%     13.9%      12.6%     13.9%      13.9%     15.3%
wine             3.7%      3.2%       -         -          -         -          5.0%      6.2%
pendigit         8.7%      9.0%       9.0%      9.4%       10.0%     11.0%      13.4%     14.2%
iris             5.2%      18.5%      -         -          -         -          5.2%      6.7%
glass            12.4%     12.9%      12.5%     13.6%      12.4%     16.8%      16.7%     19.0%
In this subsection, we focus on whether the information from a prior distribution on
class labels can improve the performance. In this experiment, we study two cases:
• 'Perfect Case'. Here the guidance of the prior distribution on class labels is
always correct. In our experiments, for every training instance x_i we set the
probability π_{i,y_i} twice as large for the correct label y_i as for the other π_{i,y}, y ≠ y_i.
• 'Noisy Case'. In this case, we only allow the guidance of the prior distribution
on the class label to be correct 70% of the time. With this setup, we are able to see if
the 'EM+Prior Model' is robust to noise in the prior distribution.
Table 3 lists the results for ' EM+Prior Model' under both 'Perfect' and ' Noisy'
situations over five different collections. In the 'perfect case ', the averaged error
rates of 'EM+Prior Model ' are quite close to the case when there is no label
ambiguity at all (see Table 1). Moreover, the performance of the 'Noisy case' is also
close to that of the 'Perfect case ' for most data sets listed in Table 3. Therefore, we
can conclude that our 'EM+Prior Model' is able to take advantage of the pnor
distribution on class labels even when some of the' guidance' is not correct.
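A minimal sketch (in Python, with hypothetical helper names) of how such a prior over an instance's candidate labels could be constructed in the two cases: the favored label gets twice the weight of each other candidate, and in the noisy case the favored label is the true one only with probability `correct_rate`.

```python
import random

def make_prior(candidates, true_label, noisy=False, correct_rate=0.7, rng=random):
    """Build a prior over an instance's candidate labels.

    'Perfect' case: the true label gets twice the probability of each other
    candidate.  'Noisy' case: with probability 1 - correct_rate the doubled
    mass is put on a wrong candidate instead.
    """
    favored = true_label
    if noisy and rng.random() > correct_rate:
        wrong = [y for y in candidates if y != true_label]
        favored = rng.choice(wrong)
    weights = {y: (2.0 if y == favored else 1.0) for y in candidates}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

prior = make_prior(["a", "b", "c"], true_label="a")  # perfect case
# prior["a"] == 0.5; the two distracter labels get 0.25 each
```

This is only an illustration of the weighting scheme described above, not the authors' implementation.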
6
Conclusions and Future Work
We introduced the 'multiple-label' problem and proposed a discriminative
framework that is able to clarify the ambiguity between labels. Although it is
discriminative, this framework is firmly grounded in the EM algorithm for
maximum likelihood estimation. The framework was generalized to take advantage
of prior knowledge on which class label is more likely to be the target label. Our
experiments clearly indicate that the proposed discriminative model is robust to the
addition of noisy class labels and to errors in the prior distribution over class labels.
The idea of this framework, allowing the target distribution p(y|xi) to be inferred
from the classifier itself, can be extended in many different ways. We outline
several promising directions which we hope to explore. (1) It should be possible to
extend this framework to function approximation, where y ∈ ℝ, and ranges or
distributions are given for the target. In this case, it may be useful to
parameterize p(y|xi) to simplify the resulting variational optimization problem.
(2) We have focused on maximum likelihood; however, Bayesian generalizations,
where the goal is to compute a posterior distribution over θ given ambiguously
labeled data, would be interesting. (3) It is possible to use these ideas as a framework
for combining multiple models. Each model is trained on a small labeled data set
and predicts labels on a large unlabeled data set. These predicted labels can be
combined with the small set to form a larger multiply-labeled data set (since not all
models will agree). This larger data set can be used to train a more complex model.
(4) It is possible to extend this framework to handle the presence of label noise and
to combine it with the multiple-instance problem [3].
References
[1] A. P. Dawid and A. M. Skene (1979) Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics 28:20-28.
[2] A. Dempster, N. Laird and D. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39 (Series B), 1-38.
[3] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez (1997) Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2), pp. 31-71.
[4] A. McCallum (1999) Multi-label text classification with a mixture model trained by EM. AAAI'99 Workshop on Text Learning.
[5] S. Della Pietra, V. Della Pietra and J. Lafferty (1997) Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393.
[6] Y. Grandvalet (2002) Logistic regression for partial labels. 9th Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU'02), pp. 1935-1941.
The Decision List Machine
Marina Sokolova
SITE, University of Ottawa
Ottawa, Ont., Canada K1N-6N5
sokolova@site.uottawa.ca

Nathalie Japkowicz
SITE, University of Ottawa
Ottawa, Ont., Canada K1N-6N5
nat@site.uottawa.ca

Mario Marchand
SITE, University of Ottawa
Ottawa, Ont., Canada K1N-6N5
marchand@site.uottawa.ca

John Shawe-Taylor
Royal Holloway, University of London
Egham, UK, TW20-0EX
jst@cs.rhul.ac.uk
Abstract
We introduce a new learning algorithm for decision lists to allow
features that are constructed from the data and to allow a tradeoff between accuracy and complexity. We bound its generalization
error in terms of the number of errors and the size of the classifier
it finds on the training data. We also compare its performance
on some natural data sets with the set covering machine and the
support vector machine.
1
Introduction
The set covering machine (SCM) has recently been proposed by Marchand and
Shawe-Taylor (2001, 2002) as an alternative to the support vector machine (SVM)
when the objective is to obtain a sparse classifier with good generalization. Given
a feature space, the SCM tries to find the smallest conjunction (or disjunction) of
features that gives a small training error. In contrast, the SVM tries to find the
maximum soft-margin separating hyperplane on all the features. Hence, the two
learning machines are fundamentally different in what they are trying to achieve on
the training data.
To investigate if it is worthwhile to consider larger classes of functions than just the
conjunctions and disjunctions that are used in the SCM, we focus here on the class
of decision lists introduced by Rivest (1987) because this class strictly includes both
conjunctions and disjunctions and is strictly included in the class of linear threshold
functions (Marchand and Golea, 1993). Hence, we denote by decision list machine
(DLM) any classifier which computes a decision list of Boolean-valued features,
including features that are possibly constructed from the data. In this paper, we
use the set of features introduced by Marchand and Shawe-Taylor (2001, 2002)
known as data-dependent balls. By extending the sample compression technique
of Littlestone and Warmuth (1986), we bound the generalization error of the DLM
with data-dependent balls in terms of the number of errors and the number of balls
it achieves on the training data. We also show that the DLM with balls can provide
better generalization than the SCM with this same set of features on some natural
data sets.
2
The Decision List Machine
Let x denote an arbitrary n-dimensional vector of the input space X, which could be
an arbitrary subset of ℝⁿ. We consider binary classification problems for which the
training set S = P ∪ N consists of a set P of positive training examples and a set N
of negative training examples. We define a feature as an arbitrary Boolean-valued
function that maps X onto {0, 1}. Given any set H = {hi(x)}, i = 1, ..., |H|, of features hi(x)
and any training set S, the learning algorithm returns a small subset R ⊆ H of
features. Given that subset R, and an arbitrary input vector x, the output f(x) of
the Decision List Machine (DLM) is defined to be:
If (h1 (x)) then b1
Else If (h2 (x)) then b2
...
Else If (hr (x)) then br
Else br+1
where each bi ∈ {0, 1} defines the output of f(x) if and only if hi is the first feature
to be satisfied on x (i.e. the smallest i for which hi(x) = 1). The constant br+1
(where r = |R|) is known as the default value. Note that f computes a disjunction
of the hi's whenever bi = 1 for i = 1...r and br+1 = 0. To compute a conjunction
of hi's, we simply place in f the negation of each hi with bi = 0 for i = 1...r and
br+1 = 1. Note, however, that a DLM f that contains one or many alternations
(i.e. a pair (bi, bi+1) for which bi ≠ bi+1 for i < r) cannot be represented as a (pure)
conjunction or disjunction of hi's (and their negations). Hence, the class of decision
lists strictly includes conjunctions and disjunctions.
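This definition translates directly into code. A minimal sketch (features are represented as arbitrary Boolean predicates; names are illustrative):

```python
def dlm_predict(rules, default, x):
    """Evaluate a decision list: rules is a list of (feature, output) pairs.

    The output of the first satisfied feature wins; otherwise the default
    value b_{r+1} is returned.
    """
    for h, b in rules:
        if h(x):
            return b
    return default

# A disjunction h1(x) or h2(x) as a decision list: all outputs 1, default 0.
rules = [(lambda x: x > 5, 1), (lambda x: x < -5, 1)]
print(dlm_predict(rules, 0, 7))   # -> 1
print(dlm_predict(rules, 0, 0))   # -> 0
```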
From this definition, it seems natural to use the following greedy algorithm for
building a DLM from a training set. For a given set S' = P' ∪ N' of examples
(where P' ⊆ P and N' ⊆ N) and a given set H of features, consider only the
features hi ∈ H which make no errors on either P' or N'. If hi makes no error with
P', let Qi be the subset of examples of N' on which hi makes no errors. Otherwise,
if hi makes no error with N', let Qi be the subset of examples of P' on which hi
makes no errors. In both cases we say that hi is covering Qi. The greedy algorithm
starts with S' = S and an empty DLM. Then it finds the hi with the largest |Qi|
and appends this hi to the DLM. It then removes Qi from S' and repeats, finding
the hk with the largest |Qk|, until either P' or N' is empty. It finally assigns br+1
to the class label of the remaining non-empty set.
Following Rivest (1987), this greedy algorithm is assured to build a DLM that
makes no training errors whenever there exists a DLM on a set E ⊆ H of features
that makes zero training errors. However, this constraint is not really required in
practice, since we do want to permit the user of a learning algorithm to control the
tradeoff between the accuracy achieved on the training data and the complexity
(here, the size) of the classifier. Indeed, a small DLM which makes a few errors
on the training set might give better generalization than a larger DLM (with more
features) which makes zero training errors. One way to include this flexibility is to
early-stop the greedy algorithm when there remain a few more training examples
to be covered. But a further reduction in the size of the DLM can be accomplished
Algorithm BuildDLM(P, N, pp, pn, s, H)
Input: A set P of positive examples, a set N of negative examples, the penalty values
pp and pn, a stopping point s, and a set H = {hi(x)}, i = 1, ..., |H|, of Boolean-valued features.
Output: A decision list f consisting of a set R = {(hi, bi)}, i = 1, ..., r, of features hi with
their corresponding output values bi, and a default value br+1.
Initialization: R = ∅, P' = P, N' = N
1. For each hi ∈ H, let Pi and Ni be respectively the subsets of P' and N'
   correctly classified by hi. For each hi compute Ui, where:
     Ui = max{ |Pi| - pn·|N' - Ni| ,  |Ni| - pp·|P' - Pi| }
2. Let hk be a feature with the largest value of Uk.
3. If (|Pk| - pn·|N' - Nk| ≥ |Nk| - pp·|P' - Pk|) then R = R ∪ {(hk, 1)},
   P' = P' - Pk, N' = Nk.
4. If (|Pk| - pn·|N' - Nk| < |Nk| - pp·|P' - Pk|) then R = R ∪ {(¬hk, 0)},
   N' = N' - Nk, P' = Pk.
5. Let r = |R|. If (r < s and P' ≠ ∅ and N' ≠ ∅) then go to step 1.
6. Set br+1 = ¬br. Return f.

Figure 1: The learning algorithm for the Decision List Machine
by considering features hi that do make a few errors on P' (or N') if many more
examples Qi ⊆ N' (or Qi ⊆ P') can be covered.
Hence, to include this flexibility in choosing the proper tradeoff between complexity
and accuracy, we propose the following modification of the greedy algorithm. For
every feature hi, let us denote by Pi the subset of P' on which hi makes no errors
and by Ni the subset of N' on which hi makes no errors. The above greedy algorithm
considers only features for which we have either Pi = P' or Ni = N'; to
allow small deviations from these choices, we define the usefulness Ui of feature hi
by

  Ui = max{ |Pi| - pn·|N' - Ni| ,  |Ni| - pp·|P' - Pi| }

where pn denotes the penalty of making an error on a negative example, and pp
the penalty of making an error on a positive example.
Hence, each greedy step is modified as follows. For a given set S' = P' ∪ N',
we select the feature hi with the largest value of Ui and append this hi to the
DLM. If |Pi| - pn·|N' - Ni| ≥ |Ni| - pp·|P' - Pi|, we then remove from S'
every example in Pi (since they are correctly classified by the current DLM) and
we also remove from S' every example in N' - Ni (since a DLM with this
feature already misclassifies N' - Ni and, consequently, the training error of
the DLM will not increase if later features err on examples in N' - Ni). Otherwise,
if |Pi| - pn·|N' - Ni| < |Ni| - pp·|P' - Pi|, we then remove from S' the examples in
Ni ∪ (P' - Pi). Hence, we recover the simple greedy algorithm when pp = pn = ∞.
The formal description of our learning algorithm is presented in Figure 1. The
penalty parameters pp and pn and the early stopping point s are the model-selection
parameters that give the user the ability to control the proper tradeoff between the
training accuracy and the size of the DLM. Their values could be determined either
by using k-fold cross-validation, or by computing our bound (see section 4) on
the generalization error. It therefore generalizes the learning algorithm of Rivest
(1987) by providing this complexity-accuracy tradeoff and by permitting the use of
any kind of Boolean-valued features, including those that are constructed from the
data. Finally let us mention that Dhagat and Hellerstein (1994) did propose an
algorithm for learning decision lists of few relevant attributes but this algorithm is
not practical in the sense that it provides no tolerance to noise and does not easily
accommodate parameters to provide a complexity-accuracy tradeoff.
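A runnable sketch of the BuildDLM procedure of Figure 1 (a direct, unoptimized transcription; the representation of examples and features as Python objects is an assumption, not the authors' implementation):

```python
def build_dlm(P, N, pp, pn, s, H):
    """Greedy DLM learner following Figure 1.

    P, N: lists of positive / negative examples; pp, pn: penalties;
    s: maximum number of features; H: list of Boolean feature functions.
    Returns (rules, default) where rules is a list of (feature, output).
    """
    P1, N1, rules = list(P), list(N), []
    while len(rules) < s and P1 and N1:
        best, best_u, best_pos = None, float("-inf"), True
        for h in H:
            Pi = [x for x in P1 if h(x)]        # positives h classifies correctly
            Ni = [x for x in N1 if not h(x)]    # negatives h classifies correctly
            u_pos = len(Pi) - pn * (len(N1) - len(Ni))
            u_neg = len(Ni) - pp * (len(P1) - len(Pi))
            if max(u_pos, u_neg) > best_u:
                best, best_u, best_pos = h, max(u_pos, u_neg), u_pos >= u_neg
        if best is None:
            break
        Pi = [x for x in P1 if best(x)]
        Ni = [x for x in N1 if not best(x)]
        if best_pos:                            # step 3: append (h, 1)
            rules.append((best, 1))
            P1 = [x for x in P1 if x not in Pi]
            N1 = Ni
        else:                                   # step 4: append (not h, 0)
            rules.append((lambda x, h=best: not h(x), 0))
            N1 = [x for x in N1 if x not in Ni]
            P1 = Pi
    default = 1 - rules[-1][1] if rules else 1  # step 6: complement of last output
    return rules, default
```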
3
Data-Dependent Balls
For each training example xi with label yi ∈ {0, 1} and (real-valued) radius ρ, we
define feature hi,ρ to be the following data-dependent ball centered on xi:

  hi,ρ(x) = hρ(x, xi) = { yi   if d(x, xi) ≤ ρ
                        { ȳi   otherwise

where ȳi denotes the Boolean complement of yi, and d(x, x') denotes the distance
between x and x'. Note that any metric can be used for d. So far, we have used
only the L1, L2 and L∞ metrics, but it is certainly worthwhile to try to use metrics
that actually incorporate some knowledge about the learning task. Moreover, we
could use metrics that are obtained from the definition of an inner product k(x, x').
Given a set S of m training examples, our initial set of features consists, in principle,
of H = ∪_{i∈S} ∪_{ρ∈[0,∞[} {hi,ρ}. But obviously, for each training example xi, we need
only consider the set of m - 1 distances {d(xi, xj)}, j ≠ i. This reduces our initial
set H to O(m²) features. In fact, from the description of the DLM in the previous
section, it follows that the ball with the largest usefulness belongs to one of the
following types of balls: type Pi, Po, Ni, or No.
Balls of type Pi (positive inside) are balls having a positive example x for their center
and a radius ρ = d(x, x') - ε given by some negative example x' (which we call a
border point) and a very small positive number ε. Balls of type Po (positive outside)
have a negative example center x and a radius ρ = d(x, x') + ε given by a negative
border x'. Balls of type Ni (negative inside) have a negative center x and a radius
ρ = d(x, x') - ε given by a positive border x'. Balls of type No (negative outside)
have a positive center x and a radius ρ = d(x, x') + ε given by a positive border x'.
This set of features, constructed from the training data, gives the user full control
over the tradeoff between training accuracy and function size.
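A minimal sketch of these ball features and of the O(m²) candidate set (the helper names and epsilon handling are illustrative assumptions):

```python
def ball_feature(center, label, radius, dist):
    """h_{i,rho}: outputs the center's label inside the ball, its complement outside."""
    def h(x):
        return label if dist(x, center) <= radius else 1 - label
    return h

def candidate_balls(examples, labels, dist, eps=1e-6):
    """Candidate radii for a center x_i come from its m-1 distances to the
    other training points, shrunk (border outside) or grown (border inside)
    by a small epsilon."""
    feats = []
    for i, (xi, yi) in enumerate(zip(examples, labels)):
        for j, xj in enumerate(examples):
            if j == i:
                continue
            d = dist(xi, xj)
            feats.append(ball_feature(xi, yi, d - eps, dist))
            feats.append(ball_feature(xi, yi, d + eps, dist))
    return feats
```

With m training examples this enumerates 2m(m-1) features, matching the O(m²) count above.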
4
Bound on the Generalization Error
Note that we cannot use the "standard" VC theory to bound the expected loss of
DLMs with data-dependent features because the VC dimension is a property of a
function class defined on some input domain without reference to the data. Hence,
we propose another approach.
Since our learning algorithm tries to build a DLM with the smallest number of data-dependent balls, we seek a bound that depends on this number and, consequently,
on the number of examples that are used in the final classifier (the hypothesis).
We can thus think of our learning algorithm as compressing the training set into
a small subset of examples that we call the compression set. It was shown by Littlestone and Warmuth (1986) and Floyd and Warmuth (1995) that we can bound
the generalization error of the hypothesis f if we can always reconstruct f from
the compression set. Hence, the only requirement is the existence of such a reconstruction function and its only purpose is to permit the exact identification of the
hypothesis from the compression set and, possibly, additional bits of information.
Not surprisingly, the bound on the generalization error increases rapidly in terms
of these additional bits of information. So we must make minimal usage of them.
We now describe our reconstruction function and the additional information that
it needs to assure, in all cases, the proper reconstruction of the hypothesis from a
compression set. Our proposed scheme works in all cases provided that the learning
algorithm returns a hypothesis that always correctly classifies the compression set
(but not necessarily all of the training set). Hence, we need to add this constraint
in BuildDLM for our bound to be valid but, in practice, we have not seen any
significant performance variation introduced by this constraint. We first describe
the simpler case where only balls of types Pi and Ni are permitted and, later,
describe the additional requirements that are introduced when we also permit balls
of types Po and No .
Given a compression set Λ (returned by the learning algorithm), we first partition it
into four disjoint subsets Cp, Cn, Bp, and Bn consisting of positive ball centers, negative ball centers, positive borders, and negative borders respectively. Each example
in Λ is specified only once. When only balls of type Pi and Ni are permitted, the
center of a ball cannot be the center of another ball, since the center is removed from
the remaining examples to be covered when a ball is added to the DLM. But a center
can be the border of a previous ball in the DLM, and a border can be the border of
more than one ball. Hence, points in Bp ∪ Bn are examples that are borders without
being the center of another ball. Because of the crucial importance of the ordering
of the features in a decision list, these sets do not provide enough information by
themselves to be able to reconstruct the hypothesis. To specify the ordering of each
ball center it is sufficient to provide log₂(r) bits of additional information, where the
number r of balls is given by r = cp + cn for cp = |Cp| and cn = |Cn|. To find the radius ρi for each center xi we start with C'p = Cp, C'n = Cn, B'p = Bp, B'n = Bn, and
proceed sequentially from the first center to the last. If center xi ∈ C'p, then
the radius is given by ρi = min_{xj ∈ C'n ∪ B'n} d(xi, xj) - ε, and we remove center xi from
C'p and any other point from B'p covered by this ball (to find the radii of the other
balls). If center xi ∈ C'n, then the radius is given by ρi = min_{xj ∈ C'p ∪ B'p} d(xi, xj) - ε,
and we remove center xi from C'n and any other point from B'n covered by this
ball. The output bi for each ball hi is 1 if the center xi ∈ Cp, and 0 otherwise.
This reconstructed decision list of balls will be the same as the hypothesis if and
only if the compression set is always correctly classified by the learning algorithm.
Once we can identify the hypothesis from the compression set, we can bound its
generalization error.
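The sequential radius-recovery step just described can be sketched as follows (a hypothetical transcription, assuming a distance function dist and balls of type Pi and Ni only):

```python
def reconstruct_radii(ordered_centers, Cp, Cn, Bp, Bn, dist, eps=1e-6):
    """Recover each ball's radius from the compression set, in list order.

    ordered_centers: the ball centers in decision-list order (as conveyed by
    the extra ordering bits).  Returns a list of (center, radius, output).
    """
    Cp1, Cn1 = set(Cp), set(Cn)
    Bp1, Bn1 = set(Bp), set(Bn)
    balls = []
    for x in ordered_centers:
        if x in Cp1:
            rho = min(dist(x, z) for z in Cn1 | Bn1) - eps
            Cp1.discard(x)
            Bp1 = {z for z in Bp1 if dist(x, z) > rho}  # drop covered borders
            balls.append((x, rho, 1))
        else:
            rho = min(dist(x, z) for z in Cp1 | Bp1) - eps
            Cn1.discard(x)
            Bn1 = {z for z in Bn1 if dist(x, z) > rho}
            balls.append((x, rho, 0))
    return balls
```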
Theorem 1 Let S = P ∪ N be a training set of positive and negative examples
of size m = mp + mn. Let A be the learning algorithm BuildDLM that uses
data-dependent balls of type Pi and Ni for its set of features, with the constraint
that the returned function A(S) always correctly classifies every example in the
compression set. Suppose that A(S) contains r balls, makes kp training errors
on P and kn training errors on N (with k = kp + kn), and has a compression set
Λ = Cp ∪ Cn ∪ Bp ∪ Bn (as defined above) of size λ = cp + cn + bp + bn. With
probability 1 - δ over all random training sets S of size m, the generalization error
er(A(S)) of A(S) is bounded by

  er(A(S)) ≤ 1 - exp( -(1/(m - λ - k)) · [ ln Bλ + ln(r!) + ln(1/δλ) ] )

where

  δλ := (6/π²)⁶ · [ (cp+1)(cn+1)(bp+1)(bn+1)(kp+1)(kn+1) ]^(-2) · δ

and, writing C(n, k) for the binomial coefficient "n choose k",

  Bλ := C(mp, cp) · C(mn, cn) · C(mp - cp, bp) · C(mn - cn, bn) · C(mp - cp - bp, kp) · C(mn - cn - bn, kn).
Proof Let X be the set of training sets of size m. Let us first bound the probability
Pm := P{S ∈ X : er(A(S)) ≥ ε | m(S) = m}, given that m(S) is fixed to some value
m := (m, mp, mn, cp, cn, bp, bn, kp, kn). For this, denote by Ep the subset
of P on which A(S) makes an error, and similarly for En. Let I be the message of
log₂(r!) bits needed to specify the ordering of the balls (as described above). Now
define P'm to be

  P'm := P{S ∈ X : er(A(S)) ≥ ε | Cp = S1, Cn = S2, Bp = S3, Bn = S4,
                                   Ep = S5, En = S6, I = I0, m(S) = m}

for some fixed set of disjoint subsets {Si}, i = 1, ..., 6, of S and some fixed information message I0. Since Bλ is the number of different ways of choosing the different compression subsets and the set of error points in a training set of fixed m, we have:

  Pm ≤ (r!) · Bλ · P'm

where the first factor comes from the additional information that is needed to specify
the ordering of the r balls. Note that the hypothesis f := A(S) is fixed in P'm (because
the compression set is fixed and the required information bits are given). To bound
P'm, we make the standard assumption that each example x is independently and
identically generated according to some fixed but unknown distribution. Let p
be the probability of obtaining a positive example, let α be the probability that
the fixed hypothesis f makes an error on a positive example, and let β be the
probability that f makes an error on a negative example. Let tp := cp + bp + kp and
tn := cn + bn + kn. We then have:

  P'm ≤ C(m - tn - tp, mp - tp) · (1-α)^(mp-tp) (1-β)^(m-tn-mp) · p^(mp-tp) (1-p)^(m-tn-mp)
      ≤ Σ_{m'=tp}^{m-tn} C(m - tn - tp, m' - tp) · (1-α)^(m'-tp) (1-β)^(m-tn-m') · p^(m'-tp) (1-p)^(m-tn-m')
      = [ (1-α)p + (1-β)(1-p) ]^(m-tn-tp)
      = (1 - er(f))^(m-tn-tp)

Consequently:

  Pm ≤ (r!) · Bλ · (1 - ε)^(m-tn-tp).

The theorem is obtained by bounding this last expression by the proposed value for
δλ(m) and solving for ε, since in that case we satisfy the requirement that

  P{S ∈ X : er(A(S)) ≥ ε} = Σ_m Pm · P{S ∈ X : m(S) = m}
                           ≤ Σ_m δλ(m) · P{S ∈ X : m(S) = m}
                           ≤ Σ_m δλ(m) = δ

where the sums are over all possible realizations of m for a fixed mp and mn.
With the proposed value for δλ(m), the last equality follows from the fact that
Σ_{i=1}^{∞} (1/i²) = π²/6.
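Under the statement of Theorem 1, the bound can be evaluated numerically; a sketch (not from the paper) that uses log-gamma to compute logarithms of the binomial coefficients without overflow:

```python
from math import lgamma, log, exp, pi

def ln_choose(n, k):
    """Natural log of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def dlm_bound(m_p, m_n, c_p, c_n, b_p, b_n, k_p, k_n, r, delta=0.05):
    """Theorem 1 risk bound for a DLM with balls of type Pi and Ni."""
    m = m_p + m_n
    lam = c_p + c_n + b_p + b_n          # compression set size (lambda)
    k = k_p + k_n                        # number of training errors
    ln_B = (ln_choose(m_p, c_p) + ln_choose(m_n, c_n)
            + ln_choose(m_p - c_p, b_p) + ln_choose(m_n - c_n, b_n)
            + ln_choose(m_p - c_p - b_p, k_p) + ln_choose(m_n - c_n - b_n, k_n))
    # ln(1/delta_lambda) from the theorem's definition of delta_lambda
    ln_inv_delta = (-6 * log(6 / pi**2)
                    + 2 * sum(log(v + 1) for v in (c_p, c_n, b_p, b_n, k_p, k_n))
                    + log(1 / delta))
    return 1 - exp(-(ln_B + lgamma(r + 1) + ln_inv_delta) / (m - lam - k))
```

As the theorem suggests, the bound shrinks with fewer balls, fewer training errors, and a smaller compression set.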
The use of balls of type Po and No introduces a few more difficulties, which are
taken into account by sending more bits to the reconstruction function. First, the
center of a ball of type Po or No can be used for more than one ball, since the
covered examples lie outside the ball. Hence, the number r of balls can now exceed
cp + cn = c. To specify r, we can send log₂(λ) bits. Then, for each ball,
we can send log₂(c) bits to specify which center the ball is using, and another bit
to specify whether the examples covered are inside or outside the ball. Using the same
notation as before, the radius ρi of a center xi of a ball of type Po is given by
ρi = max_{xj ∈ C'n ∪ B'n} d(xi, xj) + ε, and for a center xi of a ball of type No, its radius is
given by ρi = max_{xj ∈ C'p ∪ B'p} d(xi, xj) + ε. With these modifications, the same proof
as for Theorem 1 can be used to obtain the next theorem.
Theorem 2 Let A be the learning algorithm BuildDLM that uses data-dependent
balls of type Pi, Ni, Po, and No for its set of features. Consider all the definitions
used for Theorem 1, with c := cp + cn. With probability 1 - δ over all random training
sets S of size m, we have

  er(A(S)) ≤ 1 - exp( -(1/(m - λ - k)) · [ ln Bλ + ln λ + r ln(2c) + ln(1/δλ) ] )
Basically, our bound states that good generalization is expected when we can find a
small DLM that makes few training errors. In principle, we could use it as a guide
for choosing the model selection parameters s, pp , and pn since it depends only on
what the hypothesis has achieved on the training data.
5
Empirical Results on Natural data
We have compared the practical performance of the DLM with the support vector
machine (SVM) equipped with a Radial Basis Function kernel of variance 1/?. The
data sets used and the results obtained are reported in Table 1. All these data
sets where obtained from the machine learning repository at UCI. For each data
set, we have removed all examples that contained attributes with unknown values
(this has reduced substantially the ?votes? data set) and we have removed examples
with contradictory labels (this occurred only for a few examples in the Haberman
data set). The remaining number of examples for each data set is reported in
Table 1. No other preprocessing of the data (such as scaling) was performed. For
all these data sets, we have used the 10-fold cross validation error as an estimate
of the generalization error. The values reported are expressed as the total number
of errors (i.e. the sum of errors over all testing sets). We have ensured that each
training set and each testing set, used in the 10-fold cross validation process, was
the same for each learning machine (i.e. each machine was trained on the same
training sets and tested on the same testing sets).
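This fixed-folds protocol (every learning machine trained and tested on exactly the same splits, with errors summed over all testing sets) can be sketched as follows (function names are hypothetical):

```python
import random

def make_folds(n_examples, n_folds=10, seed=0):
    """Fix the folds once, so every learning machine sees the same splits."""
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_folds] for i in range(n_folds)]

def cv_errors(fit, predict, X, y, folds):
    """Total number of test errors summed over all folds."""
    errors = 0
    for test_idx in folds:
        train_idx = [i for f in folds if f is not test_idx for i in f]
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        errors += sum(predict(model, X[i]) != y[i] for i in test_idx)
    return errors
```

Passing the same `folds` object to each machine's evaluation reproduces the shared-splits comparison described above.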
The results reported for the SVM are only those obtained for the best values of the
kernel parameter γ and the soft-margin parameter C found among an exhaustive
list of many values. The values of these parameters are reported in Marchand and
Shawe-Taylor (2002). The ?size? column refers to the average number of support
vectors contained in SVM machines obtained from the 10 different training sets of
10-fold cross-validation.
We have reported the results for the SCM (Marchand and Shawe-Taylor, 2002) and
the DLM when both machines are equipped with data-dependent balls under the
L2 metric. For the SCM, the T column refers to type of the best machine found
Data Set
Name
#exs
BreastW
683
Votes
52
Pima
768
Haberman
294
Bupa
345
Glass
214
Credit
653
SVM
size errors
58
19
18
3
526 203
146 71
266 107
125 34
423 190
SCM with balls
T p
s errors
c 1.8 2 15
d 0.9 1 6
c 1.1 3 189
c 1.4 1 71
d 2.8 9 106
d ? 2 36
d 1.2 4 194
T
c
s
c
s
c
c
c
DLM with balls
pp
pn s
errors
2.1 1
2
14
0.1 0.3 1
3
1.5 1.5 6
189
2
3
7
65
2
2
4
108
4.8 ? 12 28
1
? 11 197
Table 1: Data sets and results for SVMs, SCMs, and DLMs.
(c for conjunction, and d for disjunction), the p column refers the best value found
for the penalty parameter, and the s column refers the the best stopping point in
terms of the number of balls. The same definitions applies also for DLMs except
that two different penalty values (pp and pn ) are used. In the T column of the DLM
results, we have specified by s (simple) when the DLM was trained by using only
balls of type Pi and Ni and by c (complex) when the four possible types of balls
where used (see section 3). Again, only the values that gave the smallest 10-fold
cross-validation error are reported.
The most striking feature in Table 1 is the level of sparsity achieved by the SCM and
the DLM in comparison with the SVM. This difference is huge. The other important
feature is that DLMs often provide slightly better generalization than SCMs and
SVMs. Hence, DLMs can provide a good alternative to SCMs and SVMs.
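For intuition, a data-dependent ball under the L2 metric — the feature type with which both the SCM and the DLM above are equipped — can be sketched as below. The function name and the `positive` flag are illustrative, not taken from the papers: the flag simply produces the complement feature that outputs 1 outside the ball.

```python
import math

def l2_ball_feature(center, radius, positive=True):
    # A ball centered on a training example: h(x) = 1 inside the L2 ball,
    # 0 outside; positive=False yields the complement feature.
    def h(x):
        inside = math.dist(x, center) <= radius
        return int(inside if positive else not inside)
    return h
```

A conjunction (or disjunction, or decision list) of a handful of such features is what makes the SCM and DLM so much sparser than an SVM with hundreds of support vectors.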
Acknowledgments
Work supported by NSERC grant OGP0122405 and, in part, by the EU under the
NeuroCOLT2 Working Group, No EP 27150.
References
Aditi Dhagat and Lisa Hellerstein. PAC learning with irrelevant attributes. In
Proc. of the 35th Annual Symposium on Foundations of Computer Science, pages
64–74. IEEE Computer Society Press, Los Alamitos, CA, 1994.
Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the
Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, 1986.
Mario Marchand and Mostefa Golea. On learning simple neural concepts: from
halfspace intersections to neural decision lists. Network: Computation in Neural
Systems, 4:67–85, 1993.
Mario Marchand and John Shawe-Taylor. Learning with the set covering machine.
In Proceedings of the Eighteenth International Conference on Machine Learning
(ICML 2001), pages 345–352, 2001.
Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of
Machine Learning Research (to appear), 2002.
Ronald L. Rivest. Learning decision lists. Machine Learning, 2:229–246, 1987.
Field-Programmable Learning Arrays
Seth Bridges, Miguel Figueroa, David Hsu, and Chris Diorio
Department of Computer Science and Engineering
University of Washington
114 Sieg Hall, Box 352350
Seattle, WA 98195-2350
{seth,miguel,hsud,diorio}@cs.washington.edu
Abstract
This paper introduces the Field-Programmable Learning Array, a new
paradigm for rapid prototyping of learning primitives and machinelearning algorithms in silicon. The FPLA is a mixed-signal counterpart
to the all-digital Field-Programmable Gate Array in that it enables rapid
prototyping of algorithms in hardware. Unlike the FPGA, the FPLA is
targeted directly for machine learning by providing local, parallel, online analog learning using floating-gate MOS synapse transistors. We
present a prototype FPLA chip comprising an array of reconfigurable
computational blocks and local interconnect. We demonstrate the viability of this architecture by mapping several learning circuits onto the
prototype chip.
1 Introduction
Implementing machine-learning algorithms in VLSI is a logical step toward enabling real-time or mobile applications of these algorithms [1]. Several machine-learning architectures
such as neural networks and Bayes nets map naturally to VLSI, because each uses many
simple elements in parallel and computes using only local information. Such algorithms,
when implemented in VLSI, can leverage the inherent parallelism offered by the millions of
transistors on a single silicon die. Depending on the design technique, hardware implementations of learning algorithms can realize significant performance increases over standard
computers in terms of speed or power consumption.
Despite the benefits of implementing machine-learning algorithms in VLSI, several issues
have kept hardware implementations from penetrating mainstream machine learning. First,
many previous hardware systems were not scalable due to the size of many primary components such as digital multipliers or digital-to-analog converters[2, 3]. Second, many
systems such as [4] have inflexible circuit topologies, allowing them to be used for only
very specific problems. Third, many hardware learning systems did not comprise a complete solution with on-chip learning [5] and often required external weight updates[3, 6]. In
addition to these problems of scalability and inflexibility, perhaps the biggest impediment
to implementing learning in VLSI is that designing VLSI chips is a time-consuming and
error-prone process. All current VLSI learning implementations required a detailed knowledge of analog and digital circuit design. This prerequisite knowledge impedes hardware
development by a hardware novice; indeed, the design process can challenge even the most
experienced circuit designer. Because we make extensive use of floating-gate synapse transistors [1] in our learning circuits to enable local adaptation, the design process becomes
even more difficult due to slow and inaccurate simulation of these devices.
A reconfigurable learning system would solve these problems by allowing rapid prototyping and flexibility in learning system hardware. Also, reconfigurability allows the system to
adapt to changes in the problem definition. For example, a designer can trade input dimensionality for resolution by reallocating FPLA resources, even after the implementation is
complete. A custom VLSI solution would not allow such tradeoffs after fabrication. When
combined with a simple user interface, a reconfigurable learning system can enable anyone
with a machine-learning background to express his/her ideas in hardware.
In this paper, we propose a mixed analog-digital Field-Programmable Learning Array
(FPLA), a reconfigurable system for rapid prototyping of machine-learning algorithms in
hardware. The FPLA enables the design cycle shown in Figure 1(a) in which the designer
expresses a machine-learning problem as an algorithm, compiles that representation into
an FPLA configuration, and prototypes the algorithm in an FPLA. The FPLA is similar in
concept to all-digital Field-Programmable Gate Arrays (FPGA), in that they both enable
reconfigurable computation and prototyping using arrays of simple elements and reconfigurable wiring. Unlike previous reconfigurable hardware learning solutions [3, 4, 6, 7], the
FPLA is a general-purpose prototyping tool and does not target one specific architecture.
Moreover, our FPLA supports on-chip adaptation and enables rapid prototyping of a large
class of learning algorithms.
We have implemented a prototype core for an FPLA. Our chip comprises a small (2 × 2)
array of Programmable Learning Blocks (PLBs) as well as a simple interconnect structure
to allow the PLBs to communicate in an all-to-all fashion. Our results show that this prototype system achieves its design goal of enabling rapid prototyping of floating-gate learning
circuits by implementing learning circuits known in the literature as well as new circuits
prototyped for the first time.
The remainder of the paper proceeds as follows. In section 2, we discuss the proposed
FPLA architecture, as well as the subset that is our prototype. Section 3 shows results from
our test chip of the prototype design. Section 4 concludes with a discussion of improvements that we are making to the design and opportunities for future work.
2 FPLA Architecture
2.1 An FPLA Architecture
Our proposed FPLA architecture, shown in Figure 1(b), has four properties that enable
machine learning: 1) a core comprising an array of Programmable Learning Blocks to
compute machine-learning functions, 2) reconfigurable interconnect to enable inter-PLB
communication, 3) the ability to compute with sufficient accuracy, and 4) a simple and
well-defined user interface.
The first two properties are dimensions of the FPLA design space, where tradeoffs between
them result in varying levels of flexibility and functionality at the cost of area and power.
The FPLA core determines the system?s functionality. For example, in a task-oriented
FPLA, the PLBs that compose the core should allow high-level functions such as multiplication and outer-product learning. Likewise, to develop new learning algorithms in silicon,
the PLBs should allow lower-level functions such as current mirrors, differential pairs, and
current sources.
In addition to a multi-functional core, a reconfigurable learning array requires flexible interconnect that provides good local connectivity between neighboring PLBs and global
[Figure 1 diagram: (a) design flow from problem description through hardware compilation to a configured and then trained FPLA; (b) FPLA architecture showing the PLB array, local and global interconnect, DACs with input filtering, and ADCs with output filtering.]
Figure 1: (a) FPLA-Based Design Flow. A user programs a machine-learning algorithm and tests it
using standard software tools (e.g. Matlab). The design compiler transforms this code into an FPLA
configuration, which is then downloaded to the chip. At this point, the FPLA runs the algorithm on
a training data set and performs on-chip learning. (b) Proposed FPLA Architecture. The architecture
comprises an array of Programmable Learning Blocks (PLBs), a flexible interconnect, and support
circuitry on the periphery. Local interconnect enables efficient, low-cost communication between
adjacent PLBs. Global interconnect enables distant PLBs to communicate, albeit at a higher cost.
interconnect for long-range connections. The global interconnect must be sparse because
of area constraints in VLSI chips, but flexible enough to allow a wide range of PLB connectivity. Local connectivity is critical to enable the creation of complex learning primitives
from combinations of PLBs and the implementation of large classes of machine-learning
algorithms that exhibit strong local computation.
Analog and mixed signal VLSI systems are typically plagued by offsets and device mismatch. Even though accurate systems are possible[8], the accuracy usually comes at the
cost of increased power consumption and die area. The adaptive properties of floating-gate
transistors can overcome these intrinsic accuracy limitations[9], therefore enabling mixed
analog-digital computation to obtain the best combination of power, area, scalability, and
performance.
A user interface for an FPLA comprises two different components: a design compilation
and configuration tool, and a chip interface that provides both digital and analog I/O. An
FPLA design compiler allows a user to compile an abstract expression of an algorithm (e.g.
Matlab code) to an FPLA configuration. The chip interface provides digital I/O to interface
with standard computers and surrounding digital circuitry, as well as analog I/O to interface
with signals from sensors such as vision chips and implantable devices.
2.2 Prototype Chip
As a first step in designing an FPLA, we built a prototype focusing on the PLB design and
local interconnect. Our design comprises a 2 × 2 array of PLBs interconnected in an all-to-all fashion. The system I/O comprises digital input for programming and bidirectional
analog input/output for system operation. We show the prototype FPLA architecture and
chip micrograph in Figure 2. We fabricated the chip in the TSMC 0.35 µm double-poly, four-metal process available from MOSIS. The FPLA included two pFET PLBs and two nFET
PLBs, each containing 8 uncommitted lines, 4 I/O blocks, and the computational primitives
described below.

[Figure 2 diagram: (a) chip layout showing the pFET and nFET PLBs, the inter-PLB blocks, the configuration shift register and decoder, and the programming logic; (b) chip micrograph.]

Figure 2: (a) Fabricated Chip Architecture. Our prototype FPLA comprises 4 PLBs that contain
simple analog functional primitives. A set of interconnect switches connects the PLBs in an
all-to-all fashion. (b) Chip Micrograph. The chip photograph shows the four PLBs, inter-PLB
blocks, and programming circuitry. The chip was fabricated in the TSMC 0.35 µm double-poly
four-metal process from MOSIS.

The FPLA occupies 2000 µm × 700 µm including the programming 4-to-16 decoder and 108-bit shift register. Through design optimization, we have recently
reduced the size by more than 50%.
Each of the four PLBs comprises computational circuitry and a large switching matrix built
of pass-gates controlled by SRAM. There are two different types of PLBs, the pFET PLB
and the nFET PLB, because nFET and pFET are the two flavors of transistors available
in standard CMOS processes. The computational primitives that compose the PLBs are
two floating-gate transistors, a differential pair, a current mirror, a diode-connected transistor, a bias current source, three transistors with configurable length and width, and two
configurable capacitors. These circuit primitives can be wired into arbitrary configurations
simply by changing the state of the PLB switch matrix. When deciding what functions to
place in the PLBs, our starting point was the decomposition of known primitives [10, 11]
for silicon learning as well as standard analog primitives such as those in Mead?s book
on silicon neural systems [12]. The circuits included in our PLBs are the most common
subcircuits found when decomposing these primitives.
Each of the four PLBs is independent of the others and can be programmed and operated
independently. However, more useful circuits require resources from multiple PLBs. InterPLB blocks provide local connectivity between PLBs where each inter-PLB block is an
array of SRAM pass-gate switches that can connect an uncommitted line in one PLB to
an uncommitted line in another PLB. The six inter-PLB blocks provide a path from one
PLB to any other PLB in the system. To interface with the external world, there are four
I/O connections per PLB, each of which can be configured in one of two ways: as a bare
connection to the pad for voltage inputs or current outputs, or as a voltage output through a
unity-gain buffer. The user configures the FPLA by shifting the configuration bits into the
configuration SRAM, located throughout the PLBs and interconnect.
3 Implementing Machine-Learning Primitives
To show the correct functionality of our chip, we implemented various circuits from the
literature as well as new circuits developed entirely in the FPLA. In the following section,
we show results for three of these circuits.
[Figure 3 diagram: (a) schematic of the custom correlational-learning circuit; (b) schematic of the FPLA implementation; (c) plot of Weq (nA) versus Pr(X|Y) comparing the custom circuit and the FPLA.]
Figure 3: (a) Schematic of the correlational-learning circuit described by Shon and Hsu in [11]. (b)
Schematic of the same circuit as implemented in the FPLA. (c) Experimental results comparing the
performance of the custom circuit against the reconfigurable circuit. We scaled the data to compensate
for differences in operating point between the two implementations. The data reported by Shon and
Hsu is smoother because it is averaged over a larger number of experiments.
3.1 Correlational-Learning Primitive
As a first test of our chip, we implemented the correlational-learning circuit described by
Shon and Hsu in [11]. This circuit learns the conditional probability of a binary event
X given another binary event Y. We show the original circuit in Figure 3(a), and the FPLA
implementation in Figure 3(b).
We implemented this circuit using primitives from two PLBs. We input the signals X and Y
as voltage pulses. Figure 3(c) compares the results from the custom chip to the results
from the FPLA. Both sets of data can be fit by the same four-parameter curve (equation (1)), whose four coefficients are fit constants. We conclude from this experiment that
the correlational-learning circuit, when implemented in the FPLA, operates as the original
circuit. SPICE simulations confirm that the interconnect switches have a negligible effect
on circuit performance.
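A software analogue of what this primitive computes can be sketched as below. This is an illustrative model, not the circuit itself: a stored weight that is nudged toward the indicator of X only on trials where Y occurs converges to Pr(X|Y).

```python
def learn_conditional(events, eta=0.02):
    # Software analogue of the correlational-learning primitive: the stored
    # weight tracks Pr(X | Y) by updating only on trials where Y occurred.
    w = 0.5
    for x, y in events:          # x, y are 0/1 indicators of the two events
        if y:
            w += eta * (x - w)   # running exponential average of X given Y
    return w
```

The learning rate `eta` plays the role of the tunneling/injection strength that sets how fast the floating-gate weight equilibrates.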
3.2 Regression-Learning Primitive
The regression-learning circuit described in this section is a new hardware learning primitive first implemented in the FPLA. The circuit performs regression learning on a set of 2-D
input data. It comprises two correlational learning circuits like the one shown in Figure 4(a)
to encode a differential weight w. The two circuits learn w+ and w− respectively, such that:

    w = w+ − w−.                                         (2)

The circuit operates as follows. We apply a zero-mean input signal x, encoded as a varying
current plus some DC bias current b, to the two inputs of the circuit. The differential
output current of each circuit represents the product of its stored weight with the input
current:

    out+ = w+ (x + b),                                   (3)
    out− = w− (x + b).                                   (4)

The difference of those output currents represents the product of the input current and the
total weight stored on the floating gates:

    out = out+ − out− = (w+ − w−)(x + b) = w (x + b),    (5)
[Figure 4 diagram: (a) regression-learning circuit with current input i = x + b, update control, and current output out = w(x + b); (b) plot of output (nA) versus input (nA).]
Figure 4: (a) Regression Learning Circuit. This circuit is one-half of the regression learning circuit
and learns the positive weight w+. The other half of the circuit is identical but is used to represent the
negative differential weight w−. The difference between the learned weights w+ and w− converges
to the slope of the incoming data. (b) Experimental Data. This data is taken from the FPLA configured
as the circuit on the left. The circuit was shown 388 data points with a slope of 0.5 and zero-mean
Gaussian noise of 5%. The circuit learned a slope of 0.4924.
where the multiplication is performed by the current mirror formed by the input diode and
the floating gate. The output prediction we seek is w x, so we remove the scaled input offset
current w b with a high-pass filter implemented in the test computer:

    w x = out − w b.                                     (6)
Circuit training occurs in a supervised manner. An input x is provided to the circuit, and
the circuit predicts an output w x. The computer running the test compares that predicted
output with the target and feeds an error signal back to the chip. Based on the error signal,
the circuit adapts the weight w. Positive changes in w+ increase w, while positive changes
in w− decrease w. We implement a small weight decay on both synapses. Results from
3.3 Clustering Primitive
We tested a new clustering primitive that is based on the adaptive bump circuit introduced
in [10]. The circuit performs two functions: 1) computes the similarity between an input
and a stored value, and 2) adapts the stored value to decrease its distance to the input. This
adaptive bump circuit exhibits improved adaptation over previous versions [10, 13] due
to the inclusion of the autonulling differential pair[14], shown in Figure 5(a) (top). The
autonulling differential pair ensures that the adaptation process increases the similarity between the stored mean and the input. The data in Figure 5(b) shows the clustering primitive
adapting to an input that is initially distant from the stored value. The result of this adaptation is that over time, the circuit learns to produce a maximal output response at the present
input.
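A software caricature of these two functions might look as below. The Gaussian response and the step-size rule are illustrative choices, not the circuit's actual transfer function; they capture the two behaviors above: a maximal response when input and stored value match, and an adaptation step that vanishes as the stored value approaches the input.

```python
import math

def bump_similarity(x, mu, width=2.0):
    # Bump-like response: maximal when the input x equals the stored value mu.
    return math.exp(-((x - mu) / width) ** 2)

def adapt(mu, x, eta=0.2, width=2.0):
    # Move the stored value toward the input; the step shrinks to zero
    # as mu approaches x (and is weak when the input is far away).
    return mu + eta * (x - mu) * bump_similarity(x, mu, width)
```

Repeatedly presenting the same input drags the stored value toward it, so the similarity peak migrates to the presented input, as in Figure 5(b).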
This circuit was easily prototyped in the FPLA. Creation of a configuration file took less
than one hour, experimental setup took another hour, and data was produced within two
additional hours. Instead of waiting several months for chip fabrication, we were able to
produce experimental results from a chip in under four hours. Also, the results are a more
accurate model of actual circuit behavior than a SPICE simulation.
[Figure 5 diagram: (a) clustering-primitive circuit schematic; (b) plot of Iout (nA) versus V1 − V2 (V), showing the response peak shifting under adaptation.]
Figure 5: (a) Clustering Primitive. This circuit can: 1) compute the similarity between the stored
value and the input, and 2) adapt the stored value to decrease its distance to the input. (b) Experimental Data. This plot shows that circuit adaptation moves the circuit?s peak response toward the
presented input. Adaptation strength decreases as the stored value approaches the input.
4 Future Work
The chip that we developed is effective for prototyping single learning primitives, but is
too small for solving real machine-learning problems. An FPLA whose target is machine-learning algorithms requires PLBs that comprise higher-level functions, such as the primitives presented in the previous section.
To scale up our design for machine-learning applications, we will make the following improvements to our prototype. First, to reduce the size of the PLBs, we will increase the ratio
of computational circuitry to switching circuitry by replacing the low-level functions such
as current mirrors and synapse transistors with higher-level primitives such as those mentioned in the previous section. Second, we will increase the number of PLBs in the design,
which will require an efficient and scalable global interconnect structure. We will base
our revisions on commercial FPGA architectures and other well-known on-chip communication schemes. Third, we will improve the I/O structures to enable multichip systems.
Finally, we have begun work on the design compiler, a software tool that maps machinelearning algorithms to an FPLA configuration.
5 Conclusions
Because of the match between the parallelism offered by hardware and the parallelism
in machine-learning algorithms, mixed analog-digital VLSI is a promising substrate for
machine-learning implementations. However, custom VLSI solutions are costly, inflexible, and difficult to design. To overcome these limitations, we have proposed
Field-Programmable Learning Arrays, a viable reconfigurable architecture for prototyping machine-learning algorithms in hardware. FPLAs combine elements of FPGAs, analog
VLSI, and on-chip learning to provide a scalable and cost-effective solution for learning
in silicon. Our results show that our prototype core and interconnect can effectively implement existing learning primitives and assist in the development of new circuits. An
enhanced version of the FPLA, currently under development, will support complex learning algorithms.
Acknowledgments
This work was supported by ONR grant #N00014-01-1-0566 and an Intel Fellowship.
Chips were fabricated by the MOSIS service.
References
[1] C. Diorio, D. Hsu, and M. Figueroa, "Adaptive CMOS: From biological inspiration to systems-on-a-chip," Proceedings of the IEEE, vol. 90, no. 3, pp. 245–357, 2002.
[2] J. B. Burr, "Digital Neural Network Implementations," in Neural Networks: Concepts, Applications, and Implementations, Volume 2 (P. Antognetti and V. Milutinovic, eds.), pp. 237–285, Prentice Hall, 1991.
[3] S. Satyanarayana, Y. Tsividis, and H. Graf, "A reconfigurable VLSI neural network," IEEE Journal of Solid-State Circuits, vol. 27, January 1992.
[4] R. Coggins, M. Jabri, B. Flower, and S. Pickard, "ICEG morphology classification using an analogue VLSI neural network," in Advances in Neural Information Processing Systems 7, pp. 731–738, MIT Press, 1995.
[5] M. Holler, S. Tam, H. Castro, and R. Benson, "An electrically trainable artificial neural network with 10240 'floating gate' synapses," in Proceedings of the International Joint Conference on Neural Networks (IJCNN89), vol. 2, (Washington, D.C.), pp. 191–196, 1989.
[6] E. K. F. Lee and P. G. Gulak, "A CMOS field programmable analog array," IEEE Journal of Solid-State Circuits, vol. 26, December 1991.
[7] A. Montalvo, R. Gyurcsik, and J. Paulos, "An analog VLSI neural network with on-chip learning," IEEE Journal of Solid-State Circuits, vol. 32, no. 4, 1997.
[8] R. Genov and G. Cauwenberghs, "Stochastic mixed-signal VLSI architecture for high-dimensional kernel machines," in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.
[9] J. Hyde, T. Humes, C. Diorio, M. Thomas, and M. Figueroa, "A floating-gate trimmed, 14-bit, 250 MS/s digital-to-analog converter in standard 0.25 µm CMOS," in Symposium on VLSI Circuits Digest of Technical Papers, pp. 328–331, 2002.
[10] D. Hsu, M. Figueroa, and C. Diorio, "A silicon primitive for competitive learning," in Advances in Neural Information Processing Systems 13 (T. K. Leen, T. G. Dietterich, and V. Tresp, eds.), pp. 713–719, MIT Press, 2001.
[11] A. P. Shon, D. Hsu, and C. Diorio, "Learning spike-based correlations and conditional probabilities in silicon," in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.
[12] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[13] P. Hasler, "Continuous-time feedback in floating-gate MOS circuits," IEEE Transactions on Circuits and Systems II, vol. 48, pp. 56–64, January 2001.
[14] D. Hsu, S. Bridges, and C. Diorio, "Adaptive quantization and density estimation in silicon," 2002. In submission.
1,359 | 2,237 | Reconstructing Stimulus-Driven Neural Networks from Spike Times
Duane Q. Nykamp
UCLA Mathematics Department
Los Angeles, CA 90095
[email protected]
Abstract
We present a method to distinguish direct connections between two neurons from common input originating from other, unmeasured neurons.
The distinction is computed from the spike times of the two neurons in
response to a white noise stimulus. Although the method is based on a
highly idealized linear-nonlinear approximation of neural response, we
demonstrate via simulation that the approach can work with a more realistic, integrate-and-fire neuron model. We propose that the approach
exemplified by this analysis may yield viable tools for reconstructing
stimulus-driven neural networks from data gathered in neurophysiology
experiments.
1 Introduction
The pattern of connectivity between neurons in the brain is fundamental to understanding
the function of the brain's neural networks. Related properties of closely connected neurons,
for example, may lead to inferences on how the observed properties are built or enhanced by
the neural connections. Unfortunately, the complexity of higher organisms makes obtaining
combined functional and connectivity data extraordinarily difficult.
The most common tool for recording in vivo the activity of neurons in higher organisms
is the extracellular electrode. Typically, one uses this electrode to record only the times of
output spikes, or action potentials, of neurons. In such an experiment, the states of the measured neurons remain hidden. The ability to infer connectivity patterns from spike times
alone would greatly expand the attainable connectivity data and provide the opportunity to
better address the link between function and connectivity.
Attempts to infer connectivity from spike time data have focused on second-order statistics
of the spike times of two simultaneously recorded neurons. In particular, the joint peristimulus time histogram (JPSTH) and its integral, the shuffle-corrected correlogram [1, 2, 3]
have become widely used tools to analyze such data.
However, the JPSTH and correlogram cannot distinguish correlations induced by connections between the two measured neurons (direct connection correlations) from correlations
induced by common connections from a third, unmeasured neuron (common input correlations). Inferences from the JPSTH or correlogram about the connections between the two
measured neurons are ambiguous.
Analysis tools such as partial coherence [4] can distinguish between a direct connection
and common input when one can also measure neurons inducing the common input effects.
The distinction of present approach is that all other neurons are unmeasured.
We demonstrate that, by characterizing how each neuron responds to the stimulus, one
may be able to distinguish between direct connection and common input correlations. In
that case, one could determine if a connection existed between two neurons simply by
measuring their spike times in response to a stimulus. Since the properties of the neurons
would be determined by the same measurements, such an analysis would be the basis for
inferring links between connectivity and function.
2 The model
To make the subtle distinction between direct connection correlations and common input
correlations, one needs to exploit an explicit model. The model must be sufficiently simple
so that all necessary model parameters can be determined from experimental measurements. For this reason, the analysis is limited to phenomenological lumped models. We
present analysis based on a linear-nonlinear model of neural response to white noise.
Let the stimulus X be a vector of independent Gaussian random variables with zero mean
and standard deviation $\sigma = 1$. X is a discrete approximation to temporal or spatio-temporal
white noise. Let $R_p^i = 1$ if neuron p spiked at the discrete time point i and be zero otherwise. Let the probability of a spike from a neuron be a linear-nonlinear function of the
stimulus and the previous spike times of the other neurons,
$$\Pr\{R_p^i = 1 \mid X = x,\ R_q = r_q,\ \forall q\} \;=\; g_p\Big(h_p^i \cdot x \;+\; \sum_{q \neq p} \sum_{j>0} \tilde{W}_{qp}^j\, r_q^{i-j}\Big), \qquad (1)$$
where $h_p^i$ is the linear kernel of neuron p shifted i units in time (normalized so that
$\|h_p^i\| = 1$), $g_p(\cdot)$ is its output nonlinearity (representing, for example, its spike generating
mechanism), and $\tilde{W}_{qp}^j$ is a connectivity term representing how a spike of neuron q at a
particular time modifies the response of neuron p after j time steps.
particular time modifies the response of neuron p after j time steps.
The network of Eq. (1) is an extension of the standard linear-nonlinear model of a single
neuron. The linear-nonlinear model of a single neuron can be completely reconstructed
from measured spike times in response to white noise [5]. We will demonstrate that the
network of linear-nonlinear neurons can be similarly analyzed to determine the connectivity
between two measured neurons.
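To make the generative model concrete, Eqs. (1) and (2) can be simulated directly. The sketch below is a minimal Python illustration with two neurons and a scalar white-noise stimulus; all kernels, thresholds, and coupling weights are invented for illustration and are not the paper's parameters.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def g(x, T, sigma):
    """Error-function nonlinearity of Eq. (2): a spike probability in [0, 1]."""
    return 0.5 * (1.0 + math.erf((x - T) / (sigma * math.sqrt(2.0))))

n_steps, k_len = 5000, 8
h = rng.standard_normal((2, k_len))
h /= np.linalg.norm(h, axis=1, keepdims=True)   # ||h_p|| = 1, as in the paper
W21 = {5: 0.6, 6: 0.6}                          # coupling neuron 2 -> neuron 1
T, sigma = (2.0, 2.2), (0.8, 0.8)

x = rng.standard_normal(n_steps)                # discrete white noise, sigma = 1
R = np.zeros((2, n_steps), dtype=int)
for i in range(k_len, n_steps):
    recent = x[i - k_len + 1:i + 1][::-1]       # stimulus window ending at time i
    R[1, i] = rng.random() < g(h[1] @ recent, T[1], sigma[1])
    coupling = sum(w * R[1, i - j] for j, w in W21.items())
    R[0, i] = rng.random() < g(h[0] @ recent + coupling, T[0], sigma[0])

print(R.sum(axis=1))   # spike counts for neurons 1 and 2
```

Because the coupling enters inside $g_p$, neuron 1's spike probability rises 5-6 steps after each spike of neuron 2, which is exactly the direct-connection signature the analysis below tries to recover.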
3 Analysis of model
Let neurons 1 and 2 be the only two measured neurons. The spike times of all other neurons
will remain unmeasured. Given further simplifying assumptions detailed below, we can
isolate the connectivity terms between neurons 1 and 2 ($\tilde{W}_{12}^j$ and $\tilde{W}_{21}^j$). We will outline a
method to determine these connectivity terms from a few statistics of the two measured
spike trains and the white noise stimulus.
3.1 Assumptions
The first assumption is that the coupling is sufficiently weak to justify a first-order approximation in the $\tilde{W}_{qp}^j$. We will neglect all quadratic and higher order terms in the $\tilde{W}_{qp}^j$ with
one important exception. Common input correlations are second order in the $\tilde{W}_{qp}^j$ because
common input requires two connections. Since our analysis must include common input to
the measured neurons, we retain terms containing $\tilde{W}_{p1}^j \tilde{W}_{q2}^k$ with p, q > 2.
The second assumption is that the unmeasured neurons do not respond to essentially identical stimulus features as the measured neurons (1 & 2) or each other. We quantify similarity
to stimulus features by the inner product between linear kernels, $\cos\tilde{\theta}_{pq}^k = h_p^{i-k} \cdot h_q^i$. We
require each $\cos\tilde{\theta}$ to be small so that we can ignore terms of the form $\tilde{W}\cos\tilde{\theta}$. We allow one exception and retain $\tilde{W}_{21}\cos\tilde{\theta}^k$ terms so that no assumption is made on the two
measured neurons.
measured neurons.
Last, we assume the nonlinearity is an error function of the form
x ? T? i
1h
p
?
gp (x) = 1 + erf
2
p 2
Ry
2
with parameters T?p and p , where erf(y) = ?2? 0 e?t dt.
(2)
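Numerically, Eq. (2) is a Gaussian cumulative distribution centered at the threshold $\tilde{T}_p$ with slope set by $\sigma_p$. A quick check with illustrative values (not the paper's):

```python
import math

def g(x, T=1.0, sigma=0.5):
    """Eq. (2): g(x) = (1/2) [1 + erf((x - T) / (sigma * sqrt(2)))]."""
    return 0.5 * (1.0 + math.erf((x - T) / (sigma * math.sqrt(2.0))))

print(g(1.0))                                # exactly 0.5 at the threshold
print(round(g(-5.0), 6), round(g(7.0), 6))   # saturates at 0.0 and 1.0
```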
3.2 Outline of method
The first step in analyzing the network response is to ignore the fact that the neurons are
embedded in a neural network and analyze the spike trains of neurons 1 and 2 as though
each were an isolated linear-nonlinear system. Using the procedure outlined in Ref. [5],
one can determine the effective linear-nonlinear parameters from the average firing rates
($E\{R_1^i\}$ and $E\{R_2^i\}$)$^1$ and the stimulus-spike correlations ($E\{XR_1^i\}$ and $E\{XR_2^i\}$).
These effective linear-nonlinear parameters clearly will not match the parameters for neurons 1 and 2 in the complete system (Eq. (1)). The network connections alter the mean rates
and stimulus-spike correlations of neurons 1 and 2, changing the linear-nonlinear parameters reconstructed from these measurements. Nonetheless, these effective linear-nonlinear
system parameters can be written approximately as combinations of parameters from the
network in Eq. (1).
The connectivity between neurons 1 and 2 can then be determined from the correlation
between their spikes ($E\{R_1^i R_2^{i-k}\}$, measured at different positive and negative delays k) and
the correlation of their spike pairs with the stimulus ($E\{XR_1^i R_2^{i-k}\}$) as follows. Given our
assumptions, we obtain equations linear in $\tilde{W}_{12}^j$, $\tilde{W}_{21}^j$, and $\tilde{W}_{p1}^j \tilde{W}_{q2}^k$. For each delay k, we
obtain three equations: one from $E\{R_1^i R_2^{i-k}\}$, one from the projection of $E\{XR_1^i R_2^{i-k}\}$
onto $E\{XR_1^i\}$, and one from the projection of $E\{XR_1^i R_2^{i-k}\}$ onto $E\{XR_2^{i-k}\}$. At first
glance, it appears that the unknowns greatly outnumber the equations.
However, the system of equations is well-posed because the $\tilde{W}_{p1}^j \tilde{W}_{q2}^k$ appear in the same
combination for each of the three equations at a given delay. In fact, we have only two sets
of unknowns, which can be written as
$$\tilde{W}^k = \begin{cases} \tilde{W}_{12}^{-k} & \text{for } k < 0, \\[2pt] \tilde{W}_{21}^{k} & \text{for } k > 0, \end{cases} \qquad (3)$$
and
$$\tilde{U}^k = \sum_{p>2} \sum_{j,\ell} c_p^{kj\ell}\, \tilde{W}_{p1}^j \tilde{W}_{p2}^{\ell}. \qquad (4)$$
All other parameters in the equations were already determined in the first stage. If N is the
number of delays considered, then we have 3N linear equations and only 2N unknowns.
? k is the direct connection between neurons 1 and 2 (the direction of the conThe factor W
? k is the common input to neuron 2
nection depends on the sign of the delay k). The factor U
and neuron 1 (k times steps delayed) from all other neurons in the network. The weighting
1
E{?} denotes expected value.
?
(ckj?
p ) of its terms depends on the properties of the unmeasured neurons. Fortunately, since
? k as a unit, we don?t need to determine the weighting.
we can treat U
To analyze spike train data, we approximate the statistics $E\{R_1^i\}$, $E\{R_2^i\}$, $E\{XR_1^i\}$,
$E\{XR_2^i\}$, $E\{R_1^i R_2^{i-k}\}$, and $E\{XR_1^i R_2^{i-k}\}$ by averages over an experiment. We then
compute the least-squares fit to solve for approximations of $\tilde{W}$ and $\tilde{U}$. We denote these
approximations (or correlation measures) as W and U, respectively.
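In code, the required statistics are plain empirical averages, and W and U come out of one stacked least-squares solve. The sketch below uses made-up spike trains and a random stand-in for the coefficient matrix (whose true entries come from the first-stage linear-nonlinear fit), so it shows only the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary spike trains standing in for the two measured neurons.
n = 20000
R1 = (rng.random(n) < 0.05).astype(float)
R2 = (rng.random(n) < 0.05).astype(float)

def cross_moment(a, b, k):
    """Empirical estimate of E{a_i b_{i-k}} over the experiment (k >= 0)."""
    return float(np.mean(a[k:] * b[:n - k]))

delays = list(range(1, 11))                      # N = 10 delays
stats = np.array([cross_moment(R1, R2, k) for k in delays])

# Three equations per delay, two unknowns (W^k, U^k) per delay: a 3N x 2N system.
# A and b are placeholders for the coefficients derived from the first stage.
N = len(delays)
A = rng.standard_normal((3 * N, 2 * N))
b = rng.standard_normal(3 * N)
WU, *_ = np.linalg.lstsq(A, b, rcond=None)
W, U = WU[:N], WU[N:]
print(W.shape, U.shape)                          # (10,) (10,)
```

As the paper notes later, the conditioning of this coefficient matrix controls how much measurement noise is amplified in W and U.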
4 Demonstration
We demonstrate the ability of the measures W and U to distinguish direct connection correlations from common input correlations with three example simulations. In the first two
examples, we simulated a network of three coupled linear-nonlinear neurons (Eqs. (1) and
(2)). In the third example, we simulated a pair of integrate-and-fire neurons driven by the
stimulus in a manner similar to the linear-nonlinear neurons. In each example, we measured
only the spike times of neuron 1 and neuron 2.
Since the white noise stimulus does not repeat, one cannot calculate a JPSTH or shuffle-corrected correlogram. Instead, for comparison we calculated the covariance between the
spike times, $C^k = \langle R_1^i R_2^{i-k}\rangle - \langle R_1^i\rangle\langle R_2^{i-k}\rangle$, and a stimulus-independent correlation measure introduced in Ref. [6], $S^k = \langle R_1^i R_2^{i-k}\rangle - \gamma_{21}^k$, where $\langle\cdot\rangle$ represents averaging over the
entire stimulus. The quantity $\gamma_{21}^k$ is the expected value of $\langle R_1^i R_2^{i-k}\rangle$ if neurons 1 and 2
were independent linear-nonlinear systems responding to the same stimulus.
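Both comparison measures are one-liners over the spike trains. This sketch leaves the independent-model prediction subtracted in $S^k$ as a caller-supplied function, since computing it requires the fitted linear-nonlinear parameters; the example data are random and purely illustrative.

```python
import numpy as np

def aligned(R1, R2, k):
    """Pairs R1_i with R2_{i-k} for positive, negative, or zero delay k."""
    if k > 0:
        return R1[k:], R2[:-k]
    if k < 0:
        return R1[:k], R2[-k:]
    return R1, R2

def C(R1, R2, k):
    """Covariance measure C^k = <R1_i R2_{i-k}> - <R1_i><R2_{i-k}>."""
    a, b = aligned(R1, R2, k)
    return float(np.mean(a * b) - np.mean(a) * np.mean(b))

def S(R1, R2, k, independent_prediction):
    """Stimulus-independent measure S^k = <R1_i R2_{i-k}> - prediction(k)."""
    a, b = aligned(R1, R2, k)
    return float(np.mean(a * b) - independent_prediction(k))

rng = np.random.default_rng(2)
R = (rng.random((2, 10000)) < 0.1).astype(float)
print(round(C(R[0], R[1], 3), 4))   # near zero for independent trains
```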
We used spatio-temporal linear kernels of the form
$$h_p(j, t) = t\, e^{-t/\tau_h}\, e^{-|j|^2/40}\, \sin\big((j_1 \cos\phi_p + j_2 \sin\phi_p) f_p + k_p\big) \qquad (5)$$
for t > 0 ($h_p = 0$ otherwise), where $j = (j_1, j_2)$ denotes a discrete space point. For the
linear-nonlinear simulations, we sampled this function on a 20 × 20 × 20 grid in space and
time, normalizing the resulting vector to obtain the unit vector $h_p^i$. The kernels were chosen
to be caricatures of receptive fields of simple cells in visual cortex. The only geometry of
the kernels that appears in the equations is their inner products $\cos\tilde{\theta}_{pq}^k = h_p^{i-k} \cdot h_q^i$.
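The Gabor-like kernel of Eq. (5) is straightforward to build and normalize on the grid; the sketch below also evaluates the zero-shift overlap $\cos\tilde{\theta}$ between two kernels, the quantity the assumptions of Sec. 3.1 require to be small. The grid centering is a choice made here for illustration.

```python
import numpy as np

def make_kernel(phi, f, k, tau_h=1.0, n_space=20, n_time=20):
    """Sampled, unit-normalized spatio-temporal kernel in the spirit of Eq. (5)."""
    coords = np.arange(n_space) - n_space // 2
    j1, j2 = np.meshgrid(coords, coords, indexing="ij")
    spatial = np.exp(-(j1**2 + j2**2) / 40.0) \
        * np.sin((j1 * np.cos(phi) + j2 * np.sin(phi)) * f + k)
    t = np.arange(1, n_time + 1, dtype=float)
    temporal = t * np.exp(-t / tau_h)
    h = temporal[:, None, None] * spatial[None, :, :]
    return h / np.linalg.norm(h)

h1 = make_kernel(phi=0.0, f=0.5, k=0.0)
h2 = make_kernel(phi=np.pi / 8, f=0.8, k=-1.0)
print(h1.shape)                       # (20, 20, 20)
cos_theta = float(np.sum(h1 * h2))    # inner product of the two unit vectors
print(abs(cos_theta) <= 1.0)          # True
```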
For the first example, we simulated a network of three linear-nonlinear neurons. Neuron
2 had an excitatory connection onto neuron 1 with a delay of 5–6 units of time (a positive
delay for our sign convention): $\tilde{W}_{21}^5 = \tilde{W}_{21}^6 = 0.6$. Neuron 3 had one excitatory connection onto neuron 1 and a second excitatory connection onto neuron 2 that was delayed by
6–8 units of time (a negative delay): $\tilde{W}_{31}^1 = \tilde{W}_{31}^2 = \tilde{W}_{32}^8 = \tilde{W}_{32}^9 = 1.5$. In this way, the
spike times from neurons 1 and 2 had positive correlations due to both a direct connection
and common input. Fig. 1 shows the results after simulating for 600,000 units of time,
obtaining 16,000–22,000 spikes per neuron.
The covariance C has peaks at both positive and negative delays, corresponding to the direct
connection and common input, respectively, as well as a small peak around zero due to the
shared stimulus (see Ref. [6]). The measure S eliminates the stimulus-induced correlation,
but still cannot distinguish the direct connection from the common input. The proposed
measures W and U, however, do separate the two sources of correlation. W contains a
peak only at the positive delay corresponding to the direct connection from neuron 2 to
neuron 1; U contains a peak only at the negative delay corresponding to the common input
from the (unmeasured) third neuron. This distinction was made at the cost of a dramatic
increase in the noise. On the order of 20,000 spikes were needed to get clean results even in
this idealized simulation, a long experiment given the typically low firing rates in response
to white noise stimuli.
Theoretically, the method should handle inhibitory connections just as well as excitatory
Figure 1: Results from the spike times of two neurons in a simulation of three linear-nonlinear neurons. Delay is in units of time and is the spike time of neuron 1 minus the
spike time of neuron 2. The correlations at a positive delay are due to a direct connection,
while those at a negative delay are due to common input. (a) The covariance C between the
spike times of neuron 1 and neuron 2 reflects both connections. The third peak around zero
delay, due to similarity in the kernels $h_1^i$ and $h_2^i$, is induced by the common stimulus. (b)
The correlation measure S removes the correlation induced by the common stimulus, but
cannot distinguish between the two connectivity-induced correlations. (c–d) The measures
W and U do distinguish the connectivity-induced correlations. W reflects only the direct
connection (c); U reflects only the common input (d). Parameters for $g(\cdot)$: $\tilde{T}_1 = 2.5$,
$\tilde{T}_2 = 3.0$, $\tilde{T}_3 = 2.2$, $\sigma_1 = 0.5$, $\sigma_2 = 1.0$, $\sigma_3 = 0.7$. Parameters for h: $\tau_h = 1$, $\phi_1 = 0$,
$\phi_2 = \pi/8$, $\phi_3 = \pi/4$, $f_1 = 0.5$, $f_2 = 0.8$, $f_3 = 1.0$, $k_1 = 0$, $k_2 = -1$, $k_3 = 1$.
connections. To test the inhibitory case, we modified the connections so that neuron 1
received an inhibitory connection from neuron 2 ($\tilde{W}_{21}^5 = \tilde{W}_{21}^6 = -0.3$), and neuron 1
received an inhibitory connection from neuron 3 ($\tilde{W}_{31}^1 = \tilde{W}_{31}^2 = -1.0$). Neuron 2 continued to receive an excitatory connection from neuron 3 ($\tilde{W}_{32}^8 = \tilde{W}_{32}^9 = 1.0$). The low
firing rates of the neurons, however, make inhibition more difficult to detect via correlations
firing rates of neurons, however, makes inhibition more difficult to detect via correlations
[3]. Similarly, the measures W and U performed less well with inhibition. To demonstrate
that they could, at least theoretically, work for inhibition, we increased the firing rates, used
? s with smaller magnitudes, and increased the simulation length. Fig. 2 shows the results
W
after simulating for 1,200,000 units of time, obtaining 130,000?140,000 spikes per neuron.
With this extraordinarily large number of spikes, W and U successfully distinguish the
direct connection correlations from the common input correlations.
To test the robustness of the method to deviations from the linear-nonlinear model, we
simulated a system of two integrate-and-fire neurons whose input was a threshold-linear
function of the stimulus. The neurons received common input from a threshold-linear unit,
Figure 2: Results from the simulation of the same linear-nonlinear network as in Fig. 1,
except that the connections from both neuron 2 and neuron 3 onto neuron 1 were made
inhibitory. Panels are as in Fig. 1. Again, S eliminates the stimulus-induced peak in C.
W reflects only the direct connection correlations, and U reflects only the common input
correlations. This inhibitory example, however, required a long simulation for accurate
results (see text). Parameters for $g(\cdot)$: $\tilde{T}_1 = 1.2$, $\tilde{T}_2 = 2.0$, $\tilde{T}_3 = 1.5$, $\sigma_1 = 0.5$, $\sigma_2 = 1.0$,
$\sigma_3 = 0.7$. Parameters for h are the same as in Fig. 1.
and neuron 1 received a direct connection from neuron 2 (see Fig. 3).
We let t be given in milliseconds, sampled Eq. (5) on a 20 × 20 × 30 grid in space and time,
using a 2 ms grid in time, and normalized the resulting vector to obtain the unit vector $h_p^i$.
A two millisecond sample rate of discrete white noise is unrealistic in many experiments,
so we departed further from the assumptions of the derivation and let the stimulus be white
noise sampled at 10 ms. We let the stimulus standard deviation be $\sigma = 1/\sqrt{5}$ so that it had
the same power as discrete white noise sampled at 2 ms with $\sigma = 1$.
After one hour of simulated time (360,000 frames), we collected approximately 23,000–25,000 spikes per neuron. Fig. 4 shows that the method still effectively distinguishes direct
connection correlations from common input correlations. The separation isn't perfect, as
W becomes negative where the common input correlation is positive and U becomes negative where the direct input correlation is positive. To determine whether a combination
of positive W and negative U, for example, indicates positive direct connection correlation
or negative common input correlation, one simply needs to look to see if S is positive or
negative.
Fig. 4 dramatically illustrates the increased noise in W and U. For this reason, the measures are useful only when one can run a relatively long experiment to get an acceptable
signal-to-noise ratio. The noise is due to the conditioning of the (non-square) matrix in the
j
T1
h1
X
j
j
1
Tsp,1
j
2
Tsp,1
T3
h3
T2
j
h2
Figure 3: Diagram of two integrate-and-fire neurons (circles) receiving threshold-linear
input from the stimulus. The neurons received common input from threshold-linear unit
3, and neuron 1 received a direct connection from neuron 2. The evolution of the voltage
of neuron p in response to input $g_p(t)$ was given by $\tau_m \frac{dV_p}{dt} + V_p + g_p(t)(V_p - E_s) = 0$.
When $V_p(t)$ reached 1, a spike was recorded, and the voltage was reset to 0 and held
there for an absolute refractory period of length $\tau_{ref}$. We let $g_p(t) = g_p^{ext}(t) + g_p^{int}(t)$,
where the external input was $g_p^{ext}(t) = 0.05 \sum_j G(t - T_p^j) + 0.05 \sum_j G(t - T_3^j - \delta_p)$
with $G(t) = e^4\, (t^2/\tau_s^2)\, e^{-t/\tau_s}$ for t > 0 and $G(t) = 0$ otherwise. The $T_p^j$ were drawn
from a modulated Poisson process with rate given by $\nu_p\, [h_p^i \cdot X]_+$, where $[x]_+ = x$ if
x > 0 and is zero otherwise. The internal input $g_2^{int}(t)$ to neuron 2 was set to zero, and
the internal input to neuron 1 was set to reflect an excitatory connection from neuron 2,
$g_1^{int}(t) = 0.05 \sum_j G(t - T_{sp,2}^j - \delta_{21})$, where the $T_{sp,2}^j$ are the spike times of neuron 2.
least-squares calculation of W and U. The condition numbers in the three examples were
approximately 70, 50, and 110, respectively. Measurement errors or noise could be magnified by as much as these factors. The high condition numbers reflect the subtlety of the
distinction we are making.
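For completeness, the integrate-and-fire dynamics described in the Figure 3 caption can be stepped with simple forward Euler. In this sketch the stimulus-modulated Poisson drive is replaced by a constant-rate one, and a standard alpha function stands in for the synaptic kernel G, so the numbers are illustrative rather than a reproduction of the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, t_end = 0.1, 1000.0                  # time step and duration, ms
tau_m, E_s, tau_ref = 5.0, 6.5, 2.0      # membrane constants from the caption
tau_s, rate = 2.0, 0.25                  # synaptic time constant, input rate (1/ms)

def alpha(t):
    """Alpha-function synaptic kernel standing in for G(t); zero for t <= 0."""
    tt = np.clip(t, 0.0, None)
    return np.where(t > 0, (tt / tau_s) * np.exp(1.0 - tt / tau_s), 0.0)

times = np.arange(0.0, t_end, dt)
input_times = times[rng.random(times.size) < rate * dt]   # Poisson-like input
g = 0.05 * sum(alpha(times - tj) for tj in input_times)   # conductance trace

V, refractory, out_spikes = 0.0, 0.0, []
for t, gt in zip(times, g):
    if refractory > 0.0:
        refractory -= dt
        continue
    # tau_m dV/dt = -V - g(t) (V - E_s), integrated by forward Euler
    V += (dt / tau_m) * (-V - gt * (V - E_s))
    if V >= 1.0:                         # threshold crossing: spike and reset
        out_spikes.append(t)
        V, refractory = 0.0, tau_ref
print(len(input_times), len(out_spikes))
```

The reset-and-hold logic enforces the absolute refractory period, so consecutive output spikes are always at least $\tau_{ref}$ apart.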
Obtaining values of W and U significantly beyond the noise level in real experiments may
prove a formidable challenge. However, the utility of W and U with noisy data greatly
improves when they are used in conjunction with other measures. One can use a less noisy
measure such as S to find significant stimulus-independent correlations and determine their
magnitudes. Then, assuming one can rule out causes like covariation in latency or excitability [7], one simply needs to determine if the correlations were caused by a direct connection
or by common input. One does not need to use W and U to reject the null hypothesis of
no connectivity-induced correlations; they are needed only to make the remaining binary
distinction.
The proposed method should be viewed simply as an example of a new framework for
reconstructing stimulus-driven neural networks. Clearly, extensions beyond the presented
model will be necessary since the linear-nonlinear model can adequately describe the behavior of only a small subset of neurons in primary sensory areas. Furthermore, methods to
validate the assumed model will be required before results of this approach can be trusted.
Though limited in scope and model-dependent, we have demonstrated what appears to
be the first example of a definitive dissociation between direct connection and common
input correlations from spike time data. At least in the case of excitatory connections,
this distinction can be made with a realistic, albeit large, amount of data. With further
refinements, this approach may yield viable tools for reconstructing stimulus-driven neural
networks.
Figure 4: Results from the simulation of two integrate-and-fire neurons, where neuron 2 had
an excitatory connection onto neuron 1 with a delay $\delta_{21} = 50$ ms. Both neurons received
common input, but the common input to neuron 2 was delayed ($\delta_1 = 0$ ms, $\delta_2 = 60$ ms).
Panels are as in Fig. 1. S greatly reduces the central, stimulus-induced correlation from
C. W and U successfully distinguish the direct connection correlations from the common
input correlations, but also negatively reflect each other. Ambiguity in interpretation of W
and U can be eliminated by comparison with S. Integrate-and-fire parameters: $\tau_m = 5$
ms, $E_s = 6.5$, $\tau_s = 2$ ms, $\tau_{ref} = 2$ ms, $\nu_1 = \nu_2 = 0.25$ ms$^{-1}$, and $\nu_3 = 0.1$ ms$^{-1}$.
Parameters for h are the same as in Fig. 1 except that $\tau_h = 10$ ms.
References
[1] D. H. Perkel, G. L. Gerstein, and G. P. Moore. Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys. J., 7:419–440, 1967.
[2] A. M. H. J. Aertsen, G. L. Gerstein, M. K. Habib, and G. Palm. Dynamics of neuronal firing
correlation: Modulation of "effective connectivity". J. Neurophysiol., 61:900–917, 1989.
[3] G. Palm, A. M. H. J. Aertsen, and G. L. Gerstein. On the significance of correlations among
neuronal spike trains. Biol. Cybern., 59:1–11, 1988.
[4] J. R. Rosenberg, A. M. Amjad, P. Breeze, D. R. Brillinger, and D. M. Halliday. The Fourier approach to the identification of functional coupling between neuronal spike trains. Prog. Biophys.
Mol. Biol., 53:1–31, 1989.
[5] D. Q. Nykamp and Dario L. Ringach. Full identification of a linear-nonlinear system via cross-correlation analysis. J. Vision, 2:1–11, 2002.
[6] D. Q. Nykamp. A spike correlation measure that eliminates stimulus effects in response to white
noise. J. Comp. Neurosci., 14:193–209, 2003.
[7] C. D. Brody. Correlations without synchrony. Neural Comput., 11:1537–1551, 1999.
1,360 | 2,238 | Source Separation with a Sensor Array Using Graphical Models and Subband Filtering
Microsoft Research
Redmond, WA 98052
[email protected]
Abstract
Source separation is an important problem at the intersection of several
fields, including machine learning, signal processing, and speech technology. Here we describe new separation algorithms which are based
on probabilistic graphical models with latent variables. In contrast with
existing methods, these algorithms exploit detailed models to describe
source properties. They also use subband filtering ideas to model the
reverberant environment, and employ an explicit model for background
and sensor noise. We leverage variational techniques to keep the computational complexity per EM iteration linear in the number of frames.
1 The Source Separation Problem
Fig. 1 illustrates the problem of source separation with a sensor array. In this problem,
signals from K independent sources are received by each of L ≥ K sensors. The task
is to extract the sources from the sensor signals. It is a difficult task, partly because the
received signals are distorted versions of the originals. There are two types of distortions.
The first type arises from propagation through a medium, and is approximately linear but
also history dependent. This type is usually termed reverberations. The second type arises
from background noise and sensor noise, which are assumed additive. Hence, the actual
task is to obtain an optimal estimate of the sources from data. The task is difficult for another
reason, which is lack of advance knowledge of the properties of the sources, the propagation
medium, and the noises. This difficulty gave rise to adaptive source separation algorithms,
where parameters that are related to those properties are adjusted to optimize a chosen cost
function.
Unfortunately, the intense activity this problem has attracted over the last several years [1–9]
has not yet produced a satisfactory solution. In our opinion, the reason is that existing techniques fail to address three major factors. The first is noise robustness: algorithms typically
ignore background and sensor noise, sometimes assuming they may be treated as additional
sources. It seems plausible that to produce a noise-robust algorithm, noise signals and their
properties must be modeled explicitly, and these models should be exploited to compute
optimal source estimators. The second factor is mixing filters: algorithms typically seek,
and directly optimize, a transformation that would unmix the sources. However, in many
situations, the filters describing medium propagation are non-invertible, or have an unstable
inverse, or have a stable inverse that is extremely long. It may hence be advantageous to
Figure 1: The source separation problem. Signals from K = 2 speakers propagate toward
L = 2 sensors. Each sensor receives a linear mixture of the speaker signals, distorted by
multipath propagation, medium response, and background and sensor noise. The task is to
infer the original signals from sensor data.
estimate the mixing filters themselves, then use them to estimate the sources. The third
factor is source properties: algorithms typically use a very simple source model (e.g., a one
time point histogram). But in many cases one may easily obtain detailed models of the
source signals. This is particularly true for speech sources, where large datasets exist and
much modeling expertise has developed over decades of research. Separation of speakers is
also one of the major potential commercial applications of source separation algorithms. It
seems plausible that incorporating strong source models could improve performance. Such
models may potentially have two more advantages: first, they could help limit the range of
possible mixing filters by constraining the optimization problem. Second, they could help
avoid whitening the extracted signals by effectively limiting their spectral range to the range
characteristic of the source model.
This paper makes several contributions to the problem of real world source separation. In
the following, we present new separation algorithms that are the first to address all three
factors. We work in the framework of probabilistic graphical models. This framework
allows us to construct models for sources and for noise, combine them with the reverberant
mixing transformation in a principled manner, and compute parameter and source estimates
from data which are Bayes optimal. We identify three technical ideas that are key to our
approach: (1) a strong speech model, (2) subband filtering, and (3) variational EM.
2 Frames, Subband Signals, and Subband Filtering
We start with the concept of subband filtering. This is also a good point to define our
notation. Let xm denote a time domain signal, e.g., the value of a sound pressure waveform
at time point m = 0, 1, 2, .... Let Xn [k] denote the corresponding subband signal at time
frame n and subband frequency k. The subband signals are obtained from the time domain
signal by imposing an N-point window $w_m$, $m = 0 : N-1$, on that signal at equally spaced
points nJ, n = 0, 1, 2, ..., and FFT-ing the windowed signal,
$X_n[k] = \sum_{m=0}^{N-1} e^{-i\omega_k m}\, w_m\, x_{nJ+m}$,   (1)
where $\omega_k = 2\pi k/N$ and $k = 0 : N-1$. The subband signals are also termed frames.
Notice the difference in time scale between the time frame index n in $X_n[k]$ and the time point index m in $x_m$.
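As a concrete illustration, the frame computation of Eq. (1) can be sketched in a few lines of NumPy. The window choice and the values of N and J below are illustrative, not taken from the paper:

```python
import numpy as np

def subband_frames(x, N=8, J=4, window=None):
    """Compute the subband signals X_n[k] of Eq. (1): impose an N-point
    window on x at spacing J, then FFT each windowed segment."""
    if window is None:
        window = np.hanning(N)  # illustrative choice; the paper leaves w_m open
    num_frames = (len(x) - N) // J + 1
    X = np.empty((num_frames, N), dtype=complex)
    for n in range(num_frames):
        # np.fft.fft computes sum_m z_m e^{-2 pi i k m / N}, i.e. Eq. (1)
        # with omega_k = 2 pi k / N applied to z_m = w_m x_{nJ+m}
        X[n] = np.fft.fft(window * x[n * J : n * J + N])
    return X

# sanity check on a constant signal: every frame's DC bin equals the window's sum
X = subband_frames(np.ones(40))
```

For a constant input, bin k = 0 of every frame is simply the sum of the window coefficients, which makes the indexing easy to verify by eye.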
The chosen value of the spacing J depends on the window length N. For $J \le N$ the original
signal xm can be synthesized exactly from the subband signals (synthesis formula omitted).
An important consideration for selecting J, as well as the window shape, is behavior under
filtering. Consider a filter hm applied to xm , and denote by ym the filtered signal. In the
simple case $h_m = h\,\delta_{m,0}$ (no filtering), the subband signals keep the same dependence as
the time domain ones, $y_n = h x_n \iff Y_n[k] = h X_n[k]$. For an arbitrary filter $h_m$, we
use the relation

$y_n = \sum_m h_m x_{n-m} \iff Y_n[k] = \sum_m H_m[k]\, X_{n-m}[k]$,   (2)
with complex coefficients Hm [k] for each k. This relation between the subband signals
is termed subband filtering, and the Hm [k] are termed subband filters. Unlike the simple
case of non-filtering, the relation (2) holds approximately, but quite accurately using an
appropriate choice of J and wm ; see [13] for details on accuracy. Throughout this paper,
we will assume that an arbitrary filter hm can be modeled by the subband filters Hm [k] to
a sufficient accuracy for our purposes.
One advantage of subband filtering is that it replaces a long filter hm by a set of short
independent filters Hm [k], one per frequency. This will turn out to decompose the source
separation problem into a set of small (albeit coupled) problems, one per frequency. Another
advantage is that this representation allows using a detailed speech model on the same footing
with the filter model. This is because a speech model is defined on the time scale of a single
frame, whereas the original filter hm , in contrast with Hm [k], is typically as long as 10 or
more frames.
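A quick way to see subband filtering at work is the special case of a delay by exactly one hop J, for which relation (2) holds exactly with $H_m[k] = \delta_{m,1}$. This toy check is ours, not from the paper; the frame parameters are arbitrary:

```python
import numpy as np

N, J = 8, 4
w = np.hanning(N)
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

def frames(sig):
    """Subband frames of Eq. (1) for window w, length N, hop J."""
    num = (len(sig) - N) // J + 1
    return np.array([np.fft.fft(w * sig[n * J : n * J + N]) for n in range(num)])

# delay the signal by exactly one hop: y_m = x_{m-J}
y = np.concatenate([np.zeros(J), x])[: len(x)]

X, Y = frames(x), frames(y)
# relation (2) with subband filters H_m[k] = delta_{m,1}: Y_n[k] = X_{n-1}[k]
print(np.allclose(Y[1:], X[:-1]))  # True
```

For general filters the relation is only approximate, as the text notes, but the delay case shows why one time-domain filter becomes a bank of short per-frequency filters.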
As a final point on notation, we define a Gaussian distribution over a complex number Z
by $p(Z) = \mathcal{N}(Z \mid \mu, \nu) = \frac{\nu}{\pi}\exp(-\nu\,|Z-\mu|^2)$. Notice that this is a joint distribution
over the real and imaginary parts of Z. The mean is $\mu = \langle Z \rangle$ and the precision (inverse
variance) $\nu$ satisfies $\nu^{-1} = \langle |Z|^2 \rangle - |\mu|^2$.
3 A Model for Speech Signals
We assume independent sources, and model the distribution of source j by a mixture model
over its subband signals Xjn ,
$p(X_{jn} \mid S_{jn} = s) = \prod_{k=1}^{N/2-1} \mathcal{N}(X_{jn}[k] \mid 0, A_{js}[k])$,   $p(S_{jn} = s) = \pi_{js}$,

$p(X, S) = \prod_{jn} p(X_{jn} \mid S_{jn})\, p(S_{jn})$,   (3)
where the components are labeled by Sjn . Component s of source j is a zero mean Gaussian
with precision $A_{js}$. The mixing proportions of source j are $\pi_{js}$. The DAG representing
this model is shown in Fig. 2. A similar model was used in [10] for one microphone speech
enhancement for recognition (see also [11]).
Here are several things to note about this model. (1) Each component has a characteristic spectrum, which may describe a particular part of a speech phoneme. This is because
the precision corresponds to the inverse spectrum: the mean energy (w.r.t. the above distribution) of source j at frequency k, conditioned on label s, is $\langle |X_{jn}[k]|^2 \rangle = A_{js}[k]^{-1}$. (2)
A zero mean model is appropriate given the physics of the problem, since the mean of a
sound pressure waveform is zero. (3) k runs from 1 to $N/2-1$, since for $k > N/2$,
$X_{jn}[k] = X_{jn}[N-k]^*$; the subbands $k = 0, N/2$ are real and are omitted from
the model, a common practice in speech recognition engines. (4) Perhaps most importantly, for each source the subband signals are correlated via the component label s, as
$p(X_{jn}) = \sum_s p(X_{jn}, S_{jn}=s) \neq \prod_k p(X_{jn}[k])$. Hence, when the source separation
problem decomposes into one problem per frequency, these problems turn out to be coupled (see below), and independent frequency permutations are avoided. (5) To increase
Figure 2: Graphical model describing speech signals in the subband domain. The model
assumes i.i.d. frames; only the frame at time n is shown. The node Xn represents a complex
$N/2-1$-dimensional vector $X_n[k]$, $k = 1 : N/2-1$.
model accuracy, a state transition matrix $p(S_{jn} = s \mid S_{j,n-1} = s')$ may be added for each
source. The resulting HMM models are straightforward to incorporate without increasing
the algorithm complexity.
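To make the mixture model of Eq. (3) concrete, the component posteriors for a batch of frames can be computed as below. The function name, array shapes, and test values are our illustrative choices; the log-density follows the complex Gaussian defined in the text:

```python
import numpy as np

def responsibilities(X, A, pi):
    """Posterior p(S_n = s | X_n) under the mixture model of Eq. (3).

    X  : (num_frames, K) complex subband frames (bins k = 1 .. N/2 - 1)
    A  : (S, K) component precisions A_s[k] (inverse spectra)
    pi : (S,) mixing proportions
    """
    # log N(X[k] | 0, A_s[k]) = log(A_s[k]/pi) - A_s[k] |X[k]|^2 for the
    # complex Gaussian defined in the text; summed over bins k
    E = np.abs(X) ** 2                                   # frame energies
    loglik = np.log(A / np.pi).sum(axis=1) - E @ A.T     # (num_frames, S)
    logpost = loglik + np.log(pi)
    logpost -= logpost.max(axis=1, keepdims=True)        # stabilised softmax
    g = np.exp(logpost)
    return g / g.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 5)) + 1j * rng.standard_normal((6, 5))
A = rng.uniform(0.5, 2.0, size=(3, 5))
gamma = responsibilities(X, A, np.array([0.2, 0.3, 0.5]))
```

These per-frame responsibilities are exactly what couples the frequency bins of a source, since the label s is shared across all k.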
There are several modes of using the speech model in the algorithms below. In one mode,
the sources are trained online using the sensor data. In a second mode, source models are
trained offline using available data on each source in the problem. A third mode corresponds
to separation of sources known to be speech but whose speakers are unknown. In this case,
all sources have the same model, which is trained offline on a large dataset of speech signals,
including 150 male and female speakers reading sentences from the Wall Street Journal (see
[10] for details). This is the case presented in this paper. The training algorithm used was
standard EM (omitted) using 256 clusters, initialized by vector quantization.
4 Separation of Non-Reverberant Mixtures
We now present a source separation algorithm for the case of non-reverberant (or instantaneous) mixing. Whereas many algorithms exist for this case, our contribution here is an
algorithm that is significantly more robust to noise. Its robustness results, as indicated in the
introduction, from three factors: (1) explicitly modeling the noise in the problem, (2) using
a strong source model, in particular modeling the temporal statistics (over N time points)
of the sources, rather than one time point statistics, and (3) extracting each source signal
from data by a Bayes optimal estimator obtained from p(X | Y ). A more minor point is
handling the case of less sources than sensors in a principled way.
The mixing situation is described by $y_{in} = \sum_j h_{ij}\, x_{jn} + u_{in}$, where $x_{jn}$ is source signal
j at time point n, $y_{in}$ is sensor signal i, $h_{ij}$ is the instantaneous mixing matrix, and $u_{in}$ is
the noise corrupting sensor i's signal. The corresponding subband signals satisfy $Y_{in}[k] = \sum_j h_{ij}\, X_{jn}[k] + U_{in}[k]$.
To turn the last equation into a probabilistic graphical model, we assume that noise i has
precision (inverse spectrum) Bi [k], and that noises at different sensors are independent (the
latter assumption is often inaccurate but can be easily relaxed). This yields
$p(Y_{in} \mid X) = \prod_k \mathcal{N}\Big(Y_{in}[k] \,\Big|\, \sum_j h_{ij}\, X_{jn}[k],\; B_i[k]\Big)$,

$p(Y \mid X) = \prod_{in} p(Y_{in} \mid X)$,   (4)
which together with the speech model (3) forms a complete model p(Y, X, S) for this
problem. The DAG representing this model for the case K = L = 2 is shown in Fig. 3.
Notice that this model generalizes [4] to the subband domain.
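One forward draw from the noisy instantaneous model is easy to write down; the shapes and parameter values below are our illustrative choices, and the noise convention follows the complex Gaussian defined earlier, i.e. $\langle |U|^2 \rangle = 1/B$:

```python
import numpy as np

rng = np.random.default_rng(2)
K, L, num_frames, num_bins = 2, 2, 40, 7

h = rng.standard_normal((L, K))               # instantaneous mixing matrix h_ij
B = np.full((L, num_bins), 50.0)              # noise precisions B_i[k]
X = rng.standard_normal((K, num_frames, num_bins)) \
    + 1j * rng.standard_normal((K, num_frames, num_bins))

# complex noise with <|U|^2> = 1/B: each real/imag part has variance 1/(2B)
std = np.sqrt(1.0 / (2.0 * B))[:, None, :]
U = std * (rng.standard_normal((L, num_frames, num_bins))
           + 1j * rng.standard_normal((L, num_frames, num_bins)))

# Y_in[k] = sum_j h_ij X_jn[k] + U_in[k]
Y = np.einsum('ij,jnk->ink', h, X) + U
```

Sampling the model like this is a convenient way to generate synthetic data for checking the E- and M-step updates.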
Figure 3: Graphical model for noisy, non-reverberant $2 \times 2$ mixing, showing a 3 frame-long
sequence. All nodes $Y_{in}$ and $X_{jn}$ represent complex $N/2-1$-dimensional vectors (see Fig.
2). While Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to
Y2n are omitted for clarity.
The model parameters $\theta = \{h_{ij}, B_i[k], A_{js}[k], \pi_{js}\}$ are estimated from data by an EM
algorithm. However, as the number of speech components M or the number of sources K
increases, the E-step becomes computationally intractable, as it requires summing over all
$O(M^K)$ configurations of $(S_{1n}, ..., S_{Kn})$ at each frame. We approximate the E-step using
a variational technique: focusing on the posterior distribution p(X, S | Y ), we compute an
optimal tractable approximation $q(X, S \mid Y) \approx p(X, S \mid Y)$, which we use to compute the
sufficient statistics (SS). We choose
$q(X, S \mid Y) = \prod_{jn} q(X_{jn} \mid S_{jn}, Y)\, q(S_{jn} \mid Y)$,   (5)
where the hidden variables are factorized over the sources, and also over the frames (the latter
factorization is exact in this model, but is an approximation for reverberant mixing). This
posterior maintains the dependence of X on S, and thus the correlations between different
subbands Xjn [k]. Notice also that this posterior implies a multimodal q(Xjn ) (i.e., a mixture
distribution), which is more accurate than unimodal posteriors often employed in variational
approximations (e.g., [12]), but is also harder to compute. A slightly more general form
which allows inter-frame correlations by employing $q(S \mid Y) = \prod_{jn} q(S_{jn} \mid S_{j,n-1}, Y)$
may also be used, without increasing complexity.
By optimizing in the usual way (see [12,13]) a lower bound on the likelihood w.r.t. q, we
obtain
$q(X_{jn}, S_{jn}=s \mid Y) = \prod_k q(X_{jn}[k] \mid S_{jn}=s, Y)\, q(S_{jn}=s \mid Y)$,   (6)

where $q(X_{jn}[k] \mid S_{jn}=s, Y) = \mathcal{N}(X_{jn}[k] \mid \mu_{jns}[k], \nu_{js}[k])$ and $q(S_{jn}=s \mid Y) = \gamma_{jns}$.
Both the factorization over k of q(Xjn | Sjn ) and its Gaussian functional form fall out from
the optimization under the structural restriction (5) and need not be specified in advance.
The variational parameters $\{\mu_{jns}[k], \nu_{js}[k], \gamma_{jns}\}$, which depend on the data Y, constitute
the SS and are computed in the E-step. The DAG representing this posterior is shown in
Fig. 4.
Figure 4: Graphical model describing the variational posterior distribution applied to the
model of Fig. 3. In the non-reverberant case, the components of this posterior at time frame
n are conditioned only on the data Yin at that frame; in the reverberant case, the components
at frame n are conditioned on the data Yim at all frames m. For clarity and space reasons,
this distinction is not made in the figure.
After learning, the sources are extracted from data by a variational approximation of the
minimum mean squared error estimator,
$\hat{X}_{jn}[k] = E(X_{jn}[k] \mid Y) = \int dX\; q(X \mid Y)\, X_{jn}[k]$,   (7)

i.e., the posterior mean, where $q(X \mid Y) = \sum_S q(X, S \mid Y)$. The time domain waveform
$\hat{x}_{jm}$ is then obtained by appropriately patching together the subband signals.
M-step. The update rule for the mixing matrix $h_{ij}$ is obtained by solving the linear equation

$\sum_k B_i[k]\, \lambda_{ij,0}[k] = \sum_{j'} h_{ij'} \sum_k B_i[k]\, \psi_{j'j,0}[k]$.   (8)

The update rule for the noise precisions $B_i[k]$ is omitted. The quantities $\lambda_{ij,m}[k]$ and
$\psi_{j'j,m}[k]$ are computed from the SS; see [13] for details.
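A sanity check of the M-step update (8): build the sufficient-statistics quantities consistently from a known mixing matrix and verify that the per-sensor linear solve recovers it. The synthetic construction and the Greek names follow our reading of the equation, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(3)
L, K, num_bins = 2, 3, 8

h_true = rng.standard_normal((L, K))
B = rng.uniform(0.5, 2.0, size=(L, num_bins))    # noise precisions B_i[k]

# psi_{j'j,0}[k]: per-bin source correlation matrices (symmetric PSD here)
C = rng.standard_normal((num_bins, K, K))
psi = np.einsum('kpq,krq->kpr', C, C)            # psi[k] = C_k C_k^T

# a lambda_{ij,0}[k] consistent with h_true: lam = sum_{j'} h_{ij'} psi_{j'j,0}[k]
lam = np.einsum('ip,kpj->ijk', h_true, psi)

h_est = np.empty_like(h_true)
for i in range(L):
    M = np.einsum('k,kpj->pj', B[i], psi)        # sum_k B_i[k] psi_{pj,0}[k]
    r = np.einsum('k,jk->j', B[i], lam[i])       # sum_k B_i[k] lam_{ij,0}[k]
    h_est[i] = np.linalg.solve(M.T, r)           # Eq. (8), one row of h at a time
```

Because the synthetic statistics are exactly consistent with h_true, the solve recovers it to machine precision, which is a useful unit test when implementing the real EM loop.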
E-step. The posterior means of the sources (7) are obtained by solving

$\hat{X}_{jn}[k] = \bar{\nu}_{jn}[k]^{-1} \sum_i B_i[k]\, h_{ij} \Big( Y_{in}[k] - \sum_{j' \neq j} h_{ij'}\, \hat{X}_{j'n}[k] \Big)$   (9)

for $\hat{X}_{jn}[k]$, which is a $K \times K$ linear system for each frequency k and frame n. The equations
for the SS are given in [13], which also describes experimental results.
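The coupled fixed point (9) rearranges into one small dense solve per frequency and frame. The sketch below does this for a single (k, n), treating the posterior precisions as given numbers (in the full algorithm they come from the variational update); all shapes are ours:

```python
import numpy as np

rng = np.random.default_rng(4)
K, L = 2, 3

h = rng.standard_normal((L, K))            # mixing matrix at this frequency
B = rng.uniform(0.5, 2.0, size=L)          # noise precisions B_i[k]
nu_bar = rng.uniform(1.0, 3.0, size=K)     # posterior precisions (assumed given)
Y = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# rearrange Eq. (9) into A x = b:
#   A_jj = nu_bar_j,  A_jj' = sum_i B_i h_ij h_ij' (j' != j),  b_j = sum_i B_i h_ij Y_i
G = np.einsum('i,ij,ip->jp', B, h, h)
A = np.where(np.eye(K, dtype=bool), nu_bar, G)
b = np.einsum('i,ij,i->j', B, h, Y)
X_hat = np.linalg.solve(A, b)

# the solution satisfies the fixed point of Eq. (9)
rhs = np.array([(B * h[:, j] * (Y - (h @ X_hat - h[:, j] * X_hat[j]))).sum()
                / nu_bar[j] for j in range(K)])
```

Plugging the solution back into the right-hand side of (9) reproduces it exactly, which confirms the rearrangement.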
5 Separation of Reverberant Mixtures
In this section we extend the algorithm to the case of reverberant mixing. In that case,
due to signal propagation in the medium, each sensor signal at time frame n depends
on the source signals not just at the same time but also at previous times. To describe
this mathematically, the mixing matrix $h_{ij}$ must become a matrix of filters $h_{ij,m}$, and
$y_{in} = \sum_{jm} h_{ij,m}\, x_{j,n-m} + u_{in}$.
It may seem straightforward to extend the algorithm derived above to the present case.
However, this appearance is misleading, because we have a time scale problem. Whereas
our speech model p(X, S) is frame based, the filters $h_{ij,m}$ are generally longer than the
frame length N , typically 10 frames long and sometime longer. It is unclear how one can
work with both Xjn and hij,m on the same footing (and, it is easy to see that straightforward
windowed FFT cannot solve this problem).
This is where the idea of subband filtering becomes very useful. Using (2) we have
$Y_{in}[k] = \sum_{jm} H_{ij,m}[k]\, X_{j,n-m}[k] + U_{in}[k]$, which yields the probabilistic model
$p(Y_{in} \mid X) = \prod_k \mathcal{N}\Big(Y_{in}[k] \,\Big|\, \sum_{jm} H_{ij,m}[k]\, X_{j,n-m}[k],\; B_i[k]\Big)$.   (10)
Hence, both X and Y are now frame based. Combining this equation with the speech model
(3), we now have a complete model p(Y, X, S) for the reverberant mixing problem. The
DAG describing this model is shown in Fig. 5.
Figure 5: Graphical model for noisy, reverberant $2 \times 2$ mixing, showing a 3 frame-long
sequence. Here we assume 2 frame-long filters, i.e., m = 0, 1 in Eq. (10), where the solid
arcs from X to Y correspond to m = 0 (as in Fig. 3) and the dashed arcs to m = 1. While
Y1n and Y2n have the same parents, X1n and X2n , the arcs from the parents to Y2n are
omitted for clarity.
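The noiseless part of Eq. (10) is simply a length-P convolution along the frame index, carried out independently at each frequency bin. A minimal sketch (shapes, filter length, and random values are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
K, L, P, num_frames, num_bins = 2, 2, 3, 10, 5

# subband filters H_ij,m[k] and source frames X_jn[k]
H = rng.standard_normal((L, K, P, num_bins)) \
    + 1j * rng.standard_normal((L, K, P, num_bins))
X = rng.standard_normal((K, num_frames, num_bins)) \
    + 1j * rng.standard_normal((K, num_frames, num_bins))

# noiseless part of Eq. (10): Y_in[k] = sum_{j,m} H_ij,m[k] X_j,n-m[k]
Y = np.zeros((L, num_frames, num_bins), dtype=complex)
for m in range(P):
    Y[:, m:, :] += np.einsum('ijk,jnk->ink', H[:, :, m, :], X[:, :num_frames - m, :])
```

The loop runs over the P filter taps only, so the cost per frequency stays small even when the time-domain filter is many frames long, which is the point of the subband representation.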
The model parameters $\theta = \{H_{ij,m}[k], B_i[k], A_{js}[k], \pi_{js}\}$ are estimated from data by a
variational EM algorithm, whose derivation generally follows the one outlined in the previous section. Notice that the exact E-step here is even more intractable, due to the history
dependence introduced by the filters.
M-step. The update rule for $H_{ij,m}$ is obtained by solving the Toeplitz system

$\sum_{j'm'} H_{ij',m'}[k]\, \psi_{j'j,m-m'}[k] = \lambda_{ij,m}[k]$,   (11)

where the quantities $\psi_{j'j,m}[k]$, $\lambda_{ij,m}[k]$ are computed from the SS (see [12]). The update
rule for the $B_i[k]$ is omitted.
E-step. The posterior means of the sources (7) are obtained by solving

$\hat{X}_{jn}[k] = \bar{\nu}_{jn}[k]^{-1} \sum_{im} B_i[k]\, H_{ij,m-n}[k]^* \Big( Y_{im}[k] - \sum_{j'm' \neq jm} H_{ij',m-m'}[k]\, \hat{X}_{j'm'}[k] \Big)$   (12)

for $\hat{X}_{jn}[k]$. Assuming P-frame-long filters $H_{ij,m}$, $m = 0 : P-1$, this is a $KP \times KP$
linear system for each frequency k. The equations for the SS are given in [13], which also
describes experimental results.
6 Extensions
An alternative technique we have been pursuing for approximating EM in our models is
Sequential Rao-Blackwellized Monte Carlo. There, we sample state sequences S from the
posterior p(S | Y ) and, for a given sequence, perform exact inference on the source signals
X conditioned on that sequence (observe that given S, the posterior p(X | S, Y ) is Gaussian
and can be computed exactly). In addition, we are extending our speech model to include
features such as pitch [7] in order to improve separation performance, especially in cases
with fewer sensors than sources [7-9]. Yet another extension is applying model selection
techniques to infer the number of sources from data in a dynamic manner.
Acknowledgments
I thank Te-Won Lee for extremely valuable discussions.
References
[1] A.J. Bell, T.J. Sejnowski (1995). An information maximisation approach to blind separation and
blind deconvolution. Neural Computation 7, 1129-1159.
[2] B.A. Pearlmutter, L.C. Parra (1997). Maximum likelihood blind source separation: A context-sensitive generalization of ICA. Proc. NIPS-96.
[3] A. Cichocki, S.-I. Amari (2002). Adaptive Blind Signal and Image Processing. Wiley.
[4] H. Attias (1999). Independent Factor Analysis. Neural Computation 11, 803-851.
[5] T.-W. Lee et al. (2001) (Ed.). Proc. ICA 2001.
[6] S. Griebel, M. Brandstein (2001). Microphone array speech dereverberation using coarse channel
modeling. Proc. ICASSP 2001.
[7] J. Hershey, M. Casey (2002). Audiovisual source separation via hidden Markov models. Proc.
NIPS 2001.
[8] S. Roweis (2001). One Microphone Source Separation. Proc. NIPS-00, 793-799.
[9] G.-J. Jang, T.-W. Lee, Y.-H. Oh (2003). A probabilistic approach to single channel blind signal
separation. Proc. NIPS 2002.
[10] H. Attias, L. Deng, A. Acero, J.C. Platt (2001). A new method for speech denoising using
probabilistic models for clean speech and for noise. Proc. Eurospeech 2001.
[11] Ephraim, Y. (1992). Statistical model based speech enhancement systems. Proc. IEEE 80(10),
1526-1555.
[12] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, L.K. Saul (1999). An introduction to variational
methods in graphical models. Machine Learning 37, 183-233.
[13] H. Attias (2003). New EM algorithms for source separation and deconvolution with a microphone
array. Proc. ICASSP 2003.
Artefactual Structure from Least Squares
Multidimensional Scaling
Nicholas P. Hughes
Department of Engineering Science
University of Oxford
Oxford, OX1 3PJ, UK
[email protected]
David Lowe
Neural Computing Research Group
Aston University
Birmingham, B4 7ET, UK
[email protected]
Abstract
We consider the problem of illusory or artefactual structure from the visualisation of high-dimensional structureless data. In particular we examine the role of the distance metric in the use of topographic mappings
based on the statistical field of multidimensional scaling. We show that
the use of a squared Euclidean metric (i.e. the SS TRESS measure) gives
rise to an annular structure when the input data is drawn from a highdimensional isotropic distribution, and we provide a theoretical justification for this observation.
1 Introduction
The discovery of meaningful patterns and relationships from large amounts of multivariate
data is a significant and challenging problem with close ties to the fields of pattern recognition and machine learning, and important applications in the areas of data mining and
knowledge discovery in databases (KDD).
For many real-world high-dimensional data sets (such as collections of images, or multichannel recordings of biomedical signals) there will generally be strong correlations between neighbouring observations, and thus we expect that the data will lie on a lower
dimensional (possibly nonlinear) manifold embedded in the original data space. One approach to the aforementioned problem then is to find a faithful1 representation of the data in
a lower dimensional space. Typically this space is chosen to be two- or three-dimensional,
thus facilitating the visualisation and exploratory analysis of the intrinsic low-dimensional
structure in the data (which would otherwise be masked by the dimensionality of the data
space).
In this context then, an effective dimensionality reduction algorithm should seek to extract
the underlying relationships in the data with minimum loss of information. Conversely, any
interesting patterns which are present in the visualisation space should be representative of
similar patterns in the original data space, and not artefacts of the dimensionality reduction
process.
1
By "faithful" we mean that the underlying geometric structure in the data, which characterises the informative relationships in the data, is preserved in the visualisation space.
Although much effort has been focused on the former problem of optimal structure elucidation (see [7, 10] for recent approaches to dimensionality reduction), comparatively
little work has been undertaken on the latter (and equally important) problem of artefactual
structure. This shortcoming was recently highlighted in a controversial example of the application of visualisation techniques to neuroanatomical connectivity data derived from the
primate visual cortex [12, 9, 13, 3].
In this paper we attempt to redress the balance by considering the visualisation of highdimensional structureless data through the use of topographic mappings based on the statistical field of multidimensional scaling (MDS). This is an important class of mappings
which have recently been brought into the neural network domain [5], and have significant
connections to modern kernel-based algorithms such as kernel PCA [11].
The organisation of the remainder of this paper is as follows: In section 2 we introduce
the technique of multidimensional scaling and relate this to the field of topographic mappings. In section 3 we show how under certain conditions such mappings can give rise to
artefactual structure. A theoretical analysis of this effect is then presented in section 4.
2 Multidimensional Scaling and Topographic Mappings
The visualisation of experimental data which is characterised by pairwise proximity values is a common problem in areas such as psychology, molecular biology and linguistics.
Multidimensional scaling (MDS) is a statistical technique which can be used to construct
a spatial configuration of points in a (typically) two- or three-dimensional space given a
matrix of pairwise proximity values between objects. The proximity matrix provides a
measure of the similarity or dissimilarity between the objects, and the geometric layout of
the resulting MDS configuration reflects the relationships between the objects as defined by
this matrix. In this way the information contained within the proximity matrix can be captured by a more succinct spatial model which aids visualisation of the data and improves
understanding of the processes that generated it.
In many situations, the raw dissimilarities will not be representative of actual inter-point
distances between the objects, and thus will not be suitable for embedding in a low-dimensional space. In this case the dissimilarities can be transformed into a set of values
more suitable for embedding through the use of an appropriate transformation: $\hat{\delta}_{ij} = f(\delta_{ij})$,
where $f$ represents the transformation function, the $\delta_{ij}$ are the raw dissimilarities, and the
$\hat{\delta}_{ij}$ are the resulting transformed dissimilarities (which are termed "disparities"). The aim
of metric MDS then is that the transformed dissimilarities should correspond as closely as
possible to the inter-point distances in the resulting configuration².
Metric MDS can be formulated as a continuous optimisation problem through the definition
of an appropriate error function. In particular, least squares scaling algorithms directly seek
to minimise the sum-of-squares error between the disparities and the inter-point distances.
This error, or S TRESS³ measure, is given by:

S TRESS $= \frac{1}{\sum_{ij} \hat{\delta}_{ij}^2} \sum_{ij} w_{ij}\,(\hat{\delta}_{ij} - d_{ij})^2$,   (1)
² This is in contrast to nonmetric MDS which requires that only the ordering of the disparities
corresponds to the ordering of the inter-point distances (and thus that the disparities are some arbitrary
monotonically increasing function of the distances).
³ S TRESS is an acronym for STandard REsidual Sum of Squares.
where the term $\sum_{ij} \hat{\delta}_{ij}^2$ is a normalising constant which reduces the sensitivity of the
S TRESS measure to the number of points and the scaling of the disparities, and the $w_{ij}$ are the
weighting factors. It is straightforward
to differentiate this S TRESS measure with respect
to the configuration points and minimise the error through the use of standard nonlinear
optimisation techniques.
An alternative and commonly used error function, which is referred to as SS TRESS, is given
by:
SS TRESS $= \frac{1}{\sum_{ij} \hat{\delta}_{ij}^4} \sum_{ij} w_{ij}\,(\hat{\delta}_{ij}^2 - d_{ij}^2)^2$,   (2)
which represents the sum-of-squares error between squared disparities and squared distances. The primary advantage of the SS TRESS measure is that it can be efficiently minimised through the use of an alternating least squares procedure⁴ [1].
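Both measures are a few lines of NumPy given a configuration and a vector of disparities. The unit-weight defaults and the exact form of the normalising constants follow our reading of the text, so treat this as a sketch rather than a reference implementation:

```python
import numpy as np

def _pdist(Y):
    """Upper-triangle vector of pairwise Euclidean distances of configuration Y."""
    diff = Y[:, None, :] - Y[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    return D[np.triu_indices(len(Y), k=1)]

def stress(Y, delta, w=None):
    """S TRESS of Eq. (1): disparities delta vs. the distances of Y."""
    d = _pdist(Y)
    w = np.ones_like(delta) if w is None else w
    return np.sum(w * (delta - d) ** 2) / np.sum(delta ** 2)

def sstress(Y, delta, w=None):
    """SS TRESS of Eq. (2): squared disparities vs. squared distances."""
    d = _pdist(Y)
    w = np.ones_like(delta) if w is None else w
    return np.sum(w * (delta ** 2 - d ** 2) ** 2) / np.sum(delta ** 4)

rng = np.random.default_rng(6)
Y = rng.random((20, 2))
delta = _pdist(Y)   # a perfectly embeddable set of disparities
```

With disparities taken directly from a 2-D configuration, both measures evaluate to zero, which is a handy correctness check before feeding in real dissimilarity data.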
Closely related to the field of metric MDS is Sammon's mapping [8], which takes as its
input a set of high-dimensional vectors and seeks to produce a set of lower dimensional
vectors such that the following error measure is minimised:
$E = \frac{1}{\sum_{i<j} d^*_{ij}} \sum_{i<j} \frac{(d^*_{ij} - d_{ij})^2}{d^*_{ij}}$,   (3)

where the $d^*_{ij}$ are the inter-point Euclidean distances in the data space: $d^*_{ij} = \lVert \mathbf{x}_i - \mathbf{x}_j \rVert$,
and the $d_{ij}$ are the corresponding inter-point Euclidean distances in the feature or map
space: $d_{ij} = \lVert \mathbf{y}_i - \mathbf{y}_j \rVert$.
Ignoring the normalising constant, Sammon's mapping is thus equivalent to least squares
metric MDS with the disparities taken to be the raw inter-point distances in the data space,
$\hat{\delta}_{ij} = d^*_{ij}$, and the weighting factors given by $w_{ij} = 1/d^*_{ij}$. Lowe (1993) termed a
mapping based on the minimisation of an error measure of the form (3) a topographic
mapping, since this constraint "optimally preserves the geometric structure in the data" [5].
Interestingly the choice of the S TRESS or SS TRESS measure in MDS has a more natural
interpretation when viewed within the framework of Sammon's mapping. In particular,
S TRESS corresponds to the use of the standard Euclidean distance metric whereas SS TRESS
corresponds to the use of the squared Euclidean distance metric. In the next section we
show that this choice of metric can lead to markedly different results when the input data
is sampled from a high-dimensional isotropic distribution.
3 Emergence of Artefactual Structure
In order to investigate the problem of artefactual structure we consider the visualisation of
high-dimensional structureless data (where we use the term ?structureless? to indicate that
the data density is equal in all directions from the mean and varies only gradually in any
direction). Such data can be generated by sampling from an isotropic distribution (such as
a spherical Gaussian), which is characterised by a covariance matrix that is proportional to
the identity matrix, and a skewness of zero.
We created four structureless data sets by randomly sampling 1000 i.i.d. points from unit
hypercubes of dimensions
5, 10, 30 and 100. For each data set, we generated a pair
⁴ The SS TRESS measure now forms the basis of the ALSCAL implementation of MDS, which is included as part of the SPSS software package for statistical data analysis.
[Figure 1 scatter plots; panels (a)-(d) correspond to dimensions 5, 10, 30 and 100.]
Figure 1: Final map configurations produced by S TRESS mappings of data uniformly randomly distributed in unit hypercubes of dimension .
of 2-D configurations by minimising STRESS and SSTRESS error measures of the form

$E_{\mathrm{STRESS}} = \sum_{i<j} \left( \hat{d}_{ij} - d_{ij} \right)^2 \qquad \text{and} \qquad E_{\mathrm{SSTRESS}} = \sum_{i<j} \left( \hat{d}_{ij}^2 - d_{ij}^2 \right)^2$

respectively. The process was repeated fifty times (for each individual error function and data set) using different initial configurations of the map points, and the configuration with the lowest final error was retained.
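A minimal version of this experiment can be sketched with plain gradient descent standing in for the conjugate gradients optimiser used here; all names, the problem size, and the step-size schedule are my own assumptions:

```python
import numpy as np

def sstress_and_grad(X, Y):
    """SSTRESS error (each pair counted once) and its gradient w.r.t. the map Y."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # data-space squared distances
    dh2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # map-space squared distances
    err = ((dh2 - d2) ** 2).sum() / 2.0
    grad = 4.0 * ((dh2 - d2)[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
    return err, grad

rng = np.random.default_rng(1)
X = rng.random((80, 30))                  # structureless data: unit hypercube, q = 30
Y = 0.1 * rng.standard_normal((80, 2))    # a random initial 2-D configuration
err0, grad = sstress_and_grad(X, Y)

# crude gradient descent with backtracking on the step size
err, step = err0, 1e-4
for _ in range(200):
    cand = Y - step * grad
    cand_err, cand_grad = sstress_and_grad(X, cand)
    if cand_err < err:
        Y, err, grad = cand, cand_err, cand_grad
        step *= 1.2
    else:
        step *= 0.5

print(err0, err)   # the error decreases from its initial value
```

Plotting the resulting `Y` for several values of $q$ reproduces the qualitative contrast between Figures 1 and 2 below.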
As previously noted, the choice of the STRESS or SSTRESS error measure is best viewed as a choice of distance metric, where STRESS corresponds to the standard Euclidean metric and SSTRESS corresponds to the squared Euclidean metric. Figure 1 shows the resulting
configurations from the S TRESS mappings. It is clear that each configuration has captured
the isotropic nature of the associated data set, and there are no spurious patterns or clusters
evident in the final visualisation plots.
[Figure 2 scatter plots (a)-(d), for $q$ = 5, 10, 30 and 100, omitted.]

Figure 2: Final map configurations produced by SSTRESS mappings of data uniformly randomly distributed in unit hypercubes of dimension $q$.
Figure 2 shows the resulting configurations from the SSTRESS mappings. The configurations exhibit significant artefactual structure, which is characterised by a tendency for the map points to cluster in a circular fashion. Furthermore, the degree of clustering increases with increasing dimensionality of the data space (and is clearly evident for $q$ as low as 10).
Although the tendency for SSTRESS configurations to cluster in a circular fashion has been
noted in the MDS literature [2], the connection between artefactual structure and the choice
of distance metric has not been made. Indeed, in the next section we show analytically that
the use of the squared Euclidean metric leads to a globally optimal solution corresponding
to an annular structure.
To date, the most significant work on this problem is that of Klock and Buhmann [4], who proposed a novel transformation of the dissimilarities (i.e. the squared inter-point distances in the data space) such that "the final disparities are more suitable for Euclidean embedding". However this transformation assumes that the input data are drawn from a spherical Gaussian distribution (in which case the squared inter-point distances will follow a chi-squared distribution), which is inappropriate for most real-world data sets of interest.

5. We used a conjugate gradients optimisation algorithm.
4 Theoretical Analysis of Artefactual Structure

In this section we present a theoretical analysis of the artefactual structure problem. A $D$-dimensional map configuration is considered to be the result of a SSTRESS mapping of a data set of $N$ i.i.d. points drawn from a $q$-dimensional isotropic distribution (where $D < q$). The set of data points is given by the $N \times q$ matrix $X = [x_1, \ldots, x_N]^T$ and similarly the set of map points is given by the $N \times D$ matrix $Y = [y_1, \ldots, y_N]^T$.

We begin by defining the derivative of the SSTRESS error measure with respect to a particular map vector $y_i$:

$\partial E / \partial y_i = 4 \sum_j \left( \hat{d}_{ij}^2 - d_{ij}^2 \right) (y_i - y_j).$   (4)

The inter-point distances $\hat{d}_{ij}$ and $d_{ij}$ are given by:

$\hat{d}_{ij}^2 = y_i^T y_i - 2\, y_i^T y_j + y_j^T y_j, \qquad d_{ij}^2 = x_i^T x_i - 2\, x_i^T x_j + x_j^T x_j.$

Equation (4) can therefore be expanded to:

$\partial E / \partial y_i = 4 \sum_j \left( y_i^T y_i - 2\, y_i^T y_j + y_j^T y_j - x_i^T x_i + 2\, x_i^T x_j - x_j^T x_j \right) (y_i - y_j).$

Thus at a stationary point of the error (i.e. $\partial E / \partial y_i = 0$), we have:

$\sum_j \left( y_i^T y_i - 2\, y_i^T y_j + y_j^T y_j - x_i^T x_i + 2\, x_i^T x_j - x_j^T x_j \right) (y_i - y_j) = 0.$   (5)
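The closed-form gradient in equation (4) can be checked against numerical differentiation (a sketch under my own naming, with the error taken as the sum over each pair once):

```python
import numpy as np

def sstress(X, Y):
    """E = (1/2) * sum_ij (dhat_ij^2 - d_ij^2)^2, i.e. each pair counted once."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dh2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return ((dh2 - d2) ** 2).sum() / 2.0

def sstress_grad(X, Y):
    """Analytic gradient, equation (4): 4 * sum_j (dhat_ij^2 - d_ij^2)(y_i - y_j)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dh2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return 4.0 * ((dh2 - d2)[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)

rng = np.random.default_rng(2)
X, Y = rng.random((8, 5)), rng.random((8, 2))
G = sstress_grad(X, Y)

# central-difference check on one map coordinate
eps = 1e-6
Yp, Ym = Y.copy(), Y.copy()
Yp[3, 1] += eps
Ym[3, 1] -= eps
num = (sstress(X, Yp) - sstress(X, Ym)) / (2 * eps)
print(G[3, 1], num)   # the two values agree closely
```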
Since the error is a function of the inter-point distances only, we can centre both the data points and the map points on the origin without loss of generality. For large $N$ we have:

$\frac{1}{N}\sum_j y_j \approx 0, \qquad \frac{1}{N}\sum_j y_j y_j^T \approx \Sigma_y, \qquad \frac{1}{N}\sum_j y_j x_j^T \approx \Sigma_{yx}, \qquad \frac{1}{N}\sum_j y_j^T y_j \approx \mathrm{tr}(\Sigma_y), \qquad \frac{1}{N}\sum_j x_j^T x_j \approx \mathrm{tr}(\Sigma_x),$

where $0$ is the $D \times 1$ zero matrix, $\Sigma_y$ is the covariance matrix of the map vectors, $\Sigma_{yx}$ is the covariance matrix of the map vectors and the data vectors, and $\mathrm{tr}(\cdot)$ is the matrix trace operator.

Thus equation (5) reduces to:

$\left( y_i^T y_i + \mathrm{tr}(\Sigma_y) - x_i^T x_i - \mathrm{tr}(\Sigma_x) \right) y_i + 2\, \Sigma_y\, y_i - 2\, \Sigma_{yx}\, x_i + \frac{1}{N}\sum_j (x_j^T x_j)\, y_j - \frac{1}{N}\sum_j (y_j^T y_j)\, y_j = 0.$   (6)

This represents a general expression for the value of the map vector at a stationary point of the SSTRESS error, regardless of the nature of the input data distribution. However we are interested in the case where the input data is drawn from a high-dimensional isotropic distribution.
If the data space is isotropic then a stationary point of the error will correspond to a similarly isotropic map space (this holds regardless of the initial distribution of the map points, although a highly non-uniform initial configuration would take significantly longer to reach a local minimum of the error function). Thus, at a stationary point, we have for large $N$:

$\Sigma_y \approx \sigma_y^2\, I, \qquad \mathrm{tr}(\Sigma_y) \approx D\, \sigma_y^2, \qquad \mathrm{tr}(\Sigma_x) \approx q\, \sigma_x^2,$

where $I$ is the $D \times D$ identity matrix, and $\sigma_y^2$ and $\sigma_x^2$ are the variances in the map space and the data space respectively.

Finally, consider the remaining terms of equation (6):

$-\frac{1}{N}\sum_j (y_j^T y_j)\, y_j + \frac{1}{N}\sum_j (x_j^T x_j)\, y_j - 2\, \Sigma_{yx}\, x_i.$

The first term is the third order moment, which is zero for an isotropic distribution [6], and the second term is approximately $q\, \sigma_x^2 \cdot \frac{1}{N}\sum_j y_j \approx 0$ since the squared data norms concentrate around $q\, \sigma_x^2$. For high-dimensional data (i.e. large $q$) the cross term can be simplified using the near-orthogonality of independent isotropic vectors,

$x_j^T x_i \approx q\, \sigma_x^2\, \delta_{ij},$   (7)

so that $\Sigma_{yx}\, x_i = \frac{1}{N}\sum_j y_j\, (x_j^T x_i) \approx \frac{q\, \sigma_x^2}{N}\, y_i$, which is negligible for large $N$.
Thus the equation governing the stationary points of the SSTRESS error is given by:

$\left( y_i^T y_i + (D + 2)\, \sigma_y^2 - x_i^T x_i - q\, \sigma_x^2 \right) y_i = 0.$

At the minimum error configuration, we have:

$y_i^T y_i = x_i^T x_i + q\, \sigma_x^2 - (D + 2)\, \sigma_y^2.$

Summing over all points $i$ and dividing by $N$, so that $\frac{1}{N}\sum_i y_i^T y_i \approx \mathrm{tr}(\Sigma_y) = D\, \sigma_y^2$ and $\frac{1}{N}\sum_i x_i^T x_i \approx \mathrm{tr}(\Sigma_x) = q\, \sigma_x^2$, gives:

$D\, \sigma_y^2 = 2\, q\, \sigma_x^2 - (D + 2)\, \sigma_y^2, \qquad \text{i.e.} \qquad \sigma_y^2 = \frac{q\, \sigma_x^2}{D + 1}.$   (8)

Thus, for large $q$, the variance of the map points is related to the variance of the data points by a factor of $q / (D + 1)$. Table 1 shows the values of the observed and predicted map variances for 1000 data points sampled randomly from uniform distributions in the interval $[0, 1]$ (i.e. $\sigma_x^2 = 1/12$) of dimensions $q$ = 5, 10, 30, and 100. Clearly as the dimension of the data space increases, so too does the accuracy of the approximation given by equation (7), and therefore the accuracy of equation (8).

Dimension $q$ | Number of points | Observed $\sigma_y^2$ | Predicted $\sigma_y^2$ | Percentage error
5   | 1000 | 0.166 | 0.139 | 16.4%
10  | 1000 | 0.303 | 0.278 | 8.1%
30  | 1000 | 0.864 | 0.835 | 3.4%
100 | 1000 | 2.823 | 2.783 | 1.4%

Table 1: A comparison of the predicted and observed map variances.
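As a quick consistency check, the predicted column of Table 1 is reproduced to within about 0.005 by the factor $q \sigma_x^2 / (D + 1)$ with $D = 2$ and $\sigma_x^2 = 1/12$; the small residual discrepancy is presumably because the table's predictions used each sample's empirical variance rather than the exact $1/12$ (an assumption on my part):

```python
# Predicted map variance: sigma_y^2 = q * sigma_x^2 / (D + 1), with D = 2.
# sigma_x^2 = 1/12 is the per-dimension variance of the uniform unit interval.
sigma_x2 = 1.0 / 12.0
table = {5: 0.139, 10: 0.278, 30: 0.835, 100: 2.783}   # predicted column of Table 1
for q, predicted in table.items():
    model = q * sigma_x2 / 3.0
    print(q, round(model, 3), predicted)
```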
We can show that this mismatch in variances in the two spaces results in the map points clustering in a circular fashion by considering the expected squared distance of the map points from the origin (i.e. the expected squared radius $r^2$ of the annulus):

$E[r^2] = E[y^T y] = D\, \sigma_y^2 = \frac{D\, q\, \sigma_x^2}{D + 1}.$   (9)

In addition we can derive an analytic expression for the variance of $r^2$. For simplicity, consider a two-dimensional map space, $y = (y_1, y_2)^T$. Then we have:

$E[r^4] = E[(y_1^2 + y_2^2)^2] = E[y_1^4] + 2\, E[y_1^2]\, E[y_2^2] + E[y_2^4],$   (10)

where the expectation over $y_1^2 y_2^2$ separates since $y_1$ and $y_2$ will be uncorrelated due to the isotropic nature of $Y$. In general for a $D$-dimensional map space we have that $E[r^2] = D\, \sigma_y^2$. Thus the variance of $r^2$ is given by:

$\mathrm{var}(r^2) = E[r^4] - (E[r^2])^2.$

Since the squared norms of high-dimensional isotropic data concentrate around their mean, $\mathrm{var}(r^2)$ grows only linearly in $q$ while $(E[r^2])^2$ grows quadratically, so the relative spread of the radius vanishes as $q$ increases. Hence for large $q$ the optimal configuration will be an annulus or ring shape, as observed in figure 2.
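The ring can also be illustrated by direct simulation of the stationarity relation $y_i^T y_i = x_i^T x_i + q\sigma_x^2 - (D + 2)\sigma_y^2$: the relative spread of the implied radii shrinks as $q$ grows. The relation and $\sigma_x^2 = 1/12$ are taken from the analysis above; the function name and sample sizes are my own:

```python
import numpy as np

def ring_spread(q, n=1000, D=2, seed=3):
    """Relative spread std(r) / mean(r) of the radii implied by stationarity."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, q))
    X -= X.mean(0)                          # centre the data on the origin
    sx2 = 1.0 / 12.0                        # per-dimension variance of U(0, 1)
    sy2 = q * sx2 / (D + 1)                 # equation (8)
    r2 = (X ** 2).sum(1) + q * sx2 - (D + 2) * sy2
    r = np.sqrt(np.clip(r2, 0.0, None))
    return r.std() / r.mean()

print(ring_spread(5), ring_spread(100))     # the spread shrinks as q grows
```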
5 Conclusions
We have investigated the problem of artefactual or illusory structure in topographic mappings based upon least squares scaling algorithms from multidimensional scaling. In particular we have shown that the use of a squared Euclidean distance metric (i.e. the SSTRESS measure) gives rise to an annular structure when the input data is drawn from a high-dimensional isotropic distribution. A theoretical analysis of this problem was presented and a simple relationship between the variance of the map and the data points was derived. Finally we showed that this relationship results in an optimal configuration which is characterised by the map points clustering in a circular fashion.
Acknowledgments
We thank Miguel Carreira-Perpiñán for useful comments on this work.
References
[1] T. F. Cox and M. A. A. Cox. Multidimensional scaling. Chapman and Hall, London, 1994.
[2] J. deLeeuw and B. Bettonvil. An upper bound for sstress. Psychometrika, 51:149-153, 1986.
[3] G. J. Goodhill, M. W. Simmen, and D. J. Willshaw. An evaluation of the use of multidimensional
scaling for understanding brain connectivity. Philosophical Transactions of the Royal Society,
Series B, 348:256-280, 1995.
[4] H. Klock and J. M. Buhmann. Multidimensional scaling by deterministic annealing. In
M. Pelillo and E. R. Hancock, editors, Energy Minimization Methods in Computer Vision
and Pattern Recognition, Proc. Int. Workshop EMMCVPR '97, Venice, Italy, pages 246-260.
Springer Lecture Notes in Computer Science, 1997.
[5] D. Lowe and M. E. Tipping. Neuroscale: Novel topographic feature extraction with radial basis
function networks. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural
Information Processing Systems 9. Cambridge, MA: MIT Press, 1997.
[6] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate analysis. Academic Press, 1997.
[7] S. T. Roweis, L. K. Saul, and G. E. Hinton. Global coordination of local linear models. In T. G.
Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing
Systems 14. Cambridge, MA: MIT Press, 2002.
[8] J. W. Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, C-18(5):401-409, 1969.
[9] M. W. Simmen, G. J. Goodhill, and D. J. Willshaw. Scaling and brain connectivity. Nature,
369:448-450, 1994.
[10] J. B. Tenenbaum. Mapping a manifold of perceptual observations. In M. I. Jordan, M. J. Kearns,
and S. A. Solla, editors, Advances in Neural Information Processing Systems 10. Cambridge,
MA: MIT Press, 1998.
[11] C. K. Williams. On a connection between kernel PCA and metric multidimensional scaling. In
T. K. Leen, T. G. Diettrich, and V. Tresp, editors, Advances in Neural Information Processing
Systems 13. Cambridge, MA: MIT Press, 2001.
[12] M. P. Young. Objective analysis of the topological organization of the primate cortical visual
system. Nature, 358:152-155, 1992.
[13] M. P. Young, J. W. Scannell, M. A. O?Neill, C. C. Hilgetag, G. Burns, and C. Blakemore.
Non-metric multidimensional scaling in the analysis of neuroanatomical connection data and
the organization of the primate cortical visual system. Philosophical Transactions of the Royal
Society, Series B, 348:281-308, 1995.
Simmons
Acoustic-Imaging Computations by Echolocating Bats:
Unification of Diversely-Represented Stimulus
Features into Whole Images.
James A. Simmons
Department of Psychology
and Section of Neurobiology,
Division of Biology and Medicine
Brown University, Providence, RI 02912.
ABSTRACT
The echolocating bat, Eptesicus fuscus, perceives the distance to
sonar targets from the delay of echoes and the shape of targets
from the spectrum of echoes. However, shape is perceived in
terms of the target's range profile. The time separation of echo
components from parts of the target located at different distances
is reconstructed from the echo spectrum and added to the
estimate of absolute delay already derived from the arrival-time
of echoes. The bat thus perceives the distance to targets and
depth within targets along the same psychological range
dimension, which is computed. The image corresponds to the
crosscorrelation function of echoes. Fusion of physiologically
distinct time- and frequency-domain representations into a final, common time-domain image illustrates the binding of within-modality features into a unified, whole image. To support the
structure of images along the dimension of range, bats can
perceive echo delay with a hyperacuity of 10 nanoseconds.
THE SONAR OF BATS
Bats are flying mammals, whose lives are largely nocturnal. They have evolved
the capacity to orient in darkness using a biological sonar called echolocation,
which they use to avoid obstacles to flight and to detect, identify, and track flying
insects for interception (Griffin, 1958). Echolocating bats emit brief, mostly
ultrasonic sonar sounds and perceive objects from echoes that return to their ears.
The bat's auditory system acts as the sonar receiver, processing echoes to
reconstruct images of the objects themselves.
Many bats emit frequency-modulated (FM) signals; the big brown bat, Eptesicus fuscus, transmits sounds with durations of several milliseconds containing frequencies from about 20 to 100 kHz arranged in two or three harmonic sweeps (Fig. 1). The images that
Eptesicus ultimately perceives retain crucial features of the original sonar waveforms, thus revealing how echoes are processed to reconstruct a display of the object itself.

[Figure 1 spectrogram plot (frequency in kHz versus time; 1 msec scale bar) omitted.]

Figure 1: Spectrogram of a sonar sound emitted by the big brown bat, Eptesicus fuscus (Simmons, 1989).

Several important general aspects of perception are embodied in
specific echo-processing operations in the bat's sonar. By recognizing constraints
imposed when echoes are encoded in terms of neural activity in the bat's auditory
system, recent experiments have identified a novel use of time- and frequency-domain techniques as the basis for acoustic imaging in FM echolocation. The
intrinsically reciprocal properties of time- and frequency-domain representations
are exploited in the neural algorithms which the bat uses to unify disparate
features into whole images.
IMAGES OF SINGLE-GLINT TARGETS
A simple sonar target consists of a single reflecting point, or glint, located at a
discrete range and reflecting a single replica of the incident sonar signal. A
complex target consists of several glints at slightly different ranges. It thus reflects
compound echoes composed of individual replicas of the incident sound arriving
at slightly different delays. To determine the distance to a target, or target range,
echolocating bats estimate the delay of echoes (Simmons, 1989). The bat's image
of a single-glint target is constructed around its estimate of echo delay, and the
shape of the image can be measured behaviorally. The performance of bats
trained to discriminate between echoes that jitter in delay and echoes that are
stationary in delay yields a graph of the image itself (Altes, 1989), together with
an indication of the accuracy of the delay estimate that underlies it (Simmons,
1979; Simmons, Ferragamo, Moss, Stevenson, & Altes, in press). Fig. 2 shows
[Figure 2 plots (jitter performance, left; crosscorrelation function, right; time axes in microseconds) omitted.]

Figure 2: Graphs showing the bat's image of a single-glint target from jitter discrimination experiments (left) for comparison with the crosscorrelation function of echoes (right). The zero point on each time axis corresponds to the objective arrival-time of the echoes (about 3 msec in this experiment; Simmons, Ferragamo, et al., in press).
the image of a single-glint target perceived by Eptesicus, expressed in terms of
echo delay (58 µsec/cm of range). From the bat's jitter discrimination performance, the target is perceived at its true range. Also, the image has a fine structure consisting of a central peak corresponding to the location of the target and two prominent side-peaks as ghost images located about 35 µsec or 0.6 cm nearer and farther than the main peak. This image fine structure reflects the
composition of the waveform of the echoes themselves; it approximates the
crosscorrelation function of echoes (Fig. 2).
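The delay estimate underlying such an image can be illustrated by cross-correlating a simulated FM emission with a delayed echo; the chirp parameters below are loosely modeled on Fig. 1, and the sampling rate and noise-free echo are my assumptions:

```python
import numpy as np

fs = 1_000_000                        # sampling rate, Hz (assumed)
T = 0.002                             # 2-msec emission, roughly as in Fig. 1
t = np.arange(0, T, 1 / fs)
f0, f1 = 100_000.0, 25_000.0          # downward FM sweep, 100 kHz -> 25 kHz
emission = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

delay = 0.003                         # echo delay of ~3 msec (target ~0.5 m away)
echo = np.concatenate([np.zeros(int(delay * fs)), emission])

xc = np.correlate(echo, emission, mode="full")
lags = (np.arange(xc.size) - (emission.size - 1)) / fs
print(lags[np.argmax(xc)])            # crosscorrelation peaks at the echo delay
```

The lag of the central peak recovers the echo delay, and the surrounding oscillatory structure of the crosscorrelation function is what the measured image in Fig. 2 resembles.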
The discovery that the bat perceives an image corresponding to the crosscorrelation function of echoes provides a view of the hidden machinery of the
bat's sonar receiver. The bat's estimate of echo delay evidently is based upon a
capacity of the auditory system to represent virtually all of the information
available in echo waveforms that is relevant to determining delay, including the
phase of echoes relative to emissions (Simmons, Ferragamo, et al., in press). The
bat's initial auditory representation of these FM signals resembles spectrograms
that consist of neural impulses marking the time-of-occurrence of successive
frequencies in the FM sweeps of the sounds (Fig. 3).

[Figure 3 spectrogram plots omitted.]

Figure 3: Neural spectrograms representing a sonar emission (left) and an echo from a target located about 1 m away (right). The individual dots are neural impulses conveying the instantaneous frequency of the FM sweeps (see Fig. 1). The 6-msec time separation of the two spectrograms indicates target range in the bat's sonar receiver (Simmons & Kick, 1984).

Each nerve impulse travels in a "channel" that is tuned to a particular excitatory frequency
(Bodenhamer & Pollak, 1981) as a consequence of the frequency-analyzing properties of the cochlea. The cochlear filters are followed by rectification and low-pass filtering, so in a conventional sense the phase of the filtered signals is destroyed in the course of forming the spectrograms. However, Fig. 2 shows that the bat is able to reconstruct the crosscorrelation function of echoes from its
spectrogram-like auditory representation. The individual neural "points" in the
spectrogram signify instantaneous frequency, and the recovery of the fine
structure in the image may exploit properties of instantaneous frequency when
the images are assembled by integrating numerous separate delay measurements
across different frequencies. The fact that the crosscorrelation function emerges
from these neural computations is provocative from theoretical and technological
viewpoints--the bat appears to employ novel real-time algorithms that can
transform echoes into spectrograms and then into the sonar ambiguity function
itself.
The range-axis image of a single-glint target has a fine structure surrounding a
central peak that constitutes the bat's estimate of echo delay (Fig. 2). The width
of this peak corresponds to the limiting accuracy of the bat's delay estimate,
allowing for the ambiguity represented by the side-peaks located about 35 µsec away. In Fig. 2, the data-points are spaced 5 µsec apart along the time axis
(approximately the Nyquist sampling interval for the bat's signals), and the true
width of the central peak is poorly shown. Fig. 4 shows the performance of three
Eptesicus in an experiment to measure this width with smaller delay steps. The
[Figure 4 plot (percentage of correct responses versus jitter time in nanoseconds, for delay-line and cable stimulus conditions) omitted.]

Figure 4: A graph of the performance of Eptesicus discriminating echo-delay jitters that change in small steps. The bats' limiting acuity is about 10 nsec for 75% correct responses (Simmons, Ferragamo, et al., in press).
bats can detect a shift of as little as 10 nsec as a hyperacuity (Altes, 1989) for
echo delay in the jitter task. In estimating echo delay, the bat must integrate
spectrogram delay estimates across separate frequencies in the FM sweeps of
emissions and echoes (see Fig. 3), and it arrives at a very accurate composite
estimate indeed. Timing accuracy in the nanosecond range is a previously
unsuspected capability of the nervous system, and it is likely that more complex
algorithms than just integration of information across frequencies lie behind this
fine acuity (see below on amplitude-latency trading and perceived delay).
IMAGES OF TWO-GLINT TARGETS
Complex targets such as airborne insects reflect echoes composed of several
replicas of the incident sound separated by short intervals of time (Simmons &
Chen, 1989). For insect-sized targets, with dimensions of a few centimeters, this time separation of echo components is unlikely to exceed 100 to 150 µsec. Because the bat's signals are several milliseconds long, the echoes from complex
targets thus will contain echo components that largely overlap. The auditory
system of Eptesicus has an integration-time of about 350 µsec for reception of sonar echoes (Simmons, Freedman, et al., 1989). Two echo components that
arrive together within this integration-time will merge together into a single
compound echo having an arrival-time as a whole that indicates the delay of the
first echo component, and having a series of notches in its spectrum that indicates
the time separation of the first and second components. In the bat's auditory
representation, echo delay corresponds to the time separation of the emission and
echo spectrograms (see Fig. 3), while the notches in the compound echo
spectrum appear as "holes" in the spectrogram--that is, as frequencies that fail to
appear in echoes. The location and spacing of these notches or holes in
frequency is related to the separation of the two echo components in time. The crucial point is that the constraint imposed by the 350-µsec integration-time for
echo reception disperses the information required to reconstruct the detailed range
structure of the complex target into both the time and the frequency dimensions
of the neural spectrograms.
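This spectral dispersal can be sketched numerically: a compound echo $s(t) + s(t - \Delta t)$ carries an interference factor with notches every $1/\Delta t$ in frequency. The 100-µsec separation, chirp parameters, and sampling rate below are my assumptions:

```python
import numpy as np

fs = 1_000_000
T = 0.002
t = np.arange(0, T, 1 / fs)
f0, f1 = 100_000.0, 25_000.0
s = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))   # one glint's reflection

dt = 100e-6                                    # glint separation of 100 usec (assumed)
d = int(dt * fs)
compound = np.concatenate([s, np.zeros(d)]) + np.concatenate([np.zeros(d), s])

spec = np.abs(np.fft.rfft(compound))
freqs = np.fft.rfftfreq(compound.size, 1 / fs)

# the interference factor |1 + exp(-2j*pi*f*dt)| vanishes at f = (k + 1/2)/dt,
# so notches appear every 1/dt = 10 kHz across the sweep band
i_notch = np.argmin(np.abs(freqs - 45_000.0))  # (4 + 1/2)/dt, a destructive frequency
i_peak = np.argmin(np.abs(freqs - 40_000.0))   # 4/dt, a constructive frequency
print(spec[i_notch], spec[i_peak])             # notch amplitude far below the peak
```

The notch spacing alone encodes the glint separation, even though the two echo components overlap completely in time.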
Eptesicus extracts an estimate of the overall delay of the waveform of compound
echoes from two-glint targets. This time estimate leads to a range-axis image of
the closer of the two glints in the target (the target's leading edge). This part of
the image exhibits the same properties as the image of a single-glint target--it is
encoded by the time-of-occurrence of neural discharges in the spectrograms and it
resembles the crosscorrelation function for the first echo component (Simmons, Moss, & Ferragamo, 1990; Simmons, Ferragamo, et al., in press; see Simmons,
1989). The bat also perceives a range-axis image of the farther of the two glints
(the target's trailing edge). This image is located at a perceived distance that
corresponds to the bat's estimate of the time separation of the two echo
components that make up the compound echo. Fig. 5 shows the performance of
Eptesicus in a jitter discrimination experiment in which one of the jittering stimulus echoes contained two replicas of the bat's emitted sound separated by 10 µsec.

[Figure 5 plots omitted.]

Figure 5: A graph comparing the crosscorrelation function of echoes from a two-glint target with a delay separation of 10 µsec (top) with the bat's jitter discrimination performance using this compound echo as a stimulus (bottom). The two glints are indicated as a1 and a1' (Simmons, 1989).

The bat perceives two distinct reflecting points along the
range axis. Both glints appear as events along the range axis in a time-domain
image even though the existence of the second glint could only be inferred from
the frequency domain because the delay separation of 10 µsec is much shorter
than the receiver's integration time. The image of the second glint resembles the
crosscorrelation function of the later of the two echo components. The bat adds
it to the crosscorrelation function for the earlier component when the whole
image is formed.
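One standard way to move such spectral information back into the time domain is to inverse-transform the power spectrum, which yields the echo's autocorrelation and a peak at the glint separation. This is only an illustration of the signal-processing principle, not a claim about the bat's actual algorithm; all parameters are assumed:

```python
import numpy as np

fs = 1_000_000
T = 0.002
t = np.arange(0, T, 1 / fs)
s = np.sin(2 * np.pi * (100_000.0 * t + (25_000.0 - 100_000.0) / (2 * T) * t ** 2))

true_dt = 80e-6                                # glint separation to be recovered
d = int(true_dt * fs)
compound = np.concatenate([s, np.zeros(d)]) + np.concatenate([np.zeros(d), s])

# inverse transform of the power spectrum = autocorrelation of the compound echo;
# the periodicity of the spectral notches shows up as a peak at the glint separation
power = np.abs(np.fft.rfft(compound, n=4096)) ** 2
acorr = np.fft.irfft(power)
lag = np.argmax(acorr[20:2048]) + 20           # skip the dominant zero-lag peak
print(lag / fs)                                # recovered separation, close to 80 usec
```

The recovered separation is a time-domain quantity inferred purely from the echo spectrum, mirroring the transformation the text attributes to the bat.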
ACOUSTIC-IMAGE PROCESSING BY FM BATS
Somehow Eptesicus recovers sufficient information from the timing of neural
discharges across the frequencies in the FM sweeps of emissions and echoes to reconstruct the crosscorrelation function of echoes from the first glint in the
complex target and to estimate delay with nanosecond accuracy.
This
fundamentally time-domain image is derived from the processing of information
initially also represented in the time domain, as demonstrated by the occurrence
of changes in apparent delay as echo amplitude increases or decreases: The
location of the perceived crosscorrelation function for the first glint can be shifted by predictable amounts along the time axis according to the separately-measured amplitude-latency trading relation for Eptesicus (about -17 µsec/dB; Simmons, Moss, & Ferragamo, 1990; Simmons, Ferragamo, et al., in press), indicating that
neural response latency--that is, neural discharge timing--conveys the crucial
information about delay in the bat's auditory system.
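The trading relation implies a predictable apparent-range error. The arithmetic below uses the text's -17 µsec/dB figure; the 344 m/s speed of sound and the ±6 dB example are my assumptions:

```python
# Amplitude-latency trading: discharge latency grows as echo amplitude falls.
trading = -17e-6        # seconds of latency change per dB of amplitude change
speed = 344.0           # speed of sound in air, m/s (assumed)
for db in (-6.0, 6.0):
    latency_shift = trading * db                           # -6 dB -> +102 usec later
    range_shift_cm = 100.0 * speed * latency_shift / 2.0   # halved for two-way travel
    print(db, latency_shift, round(range_shift_cm, 2))
```

A 6 dB attenuation thus shifts the perceived crosscorrelation peak about 102 µsec later, roughly 1.8 cm of apparent range at 58 µsec/cm.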
The second glint in the complex target manifests itself as a crosscorrelation-like
image component, too. However, the bat must transform spectral information
into the time domain to arrive at such a time- or range-axis representation for the
second glint. This transformed time-domain image is added to the time-domain
image for the first glint in such a way that the absolute range of the second glint
is referred to that of the first glint. Shifts in the apparent range of the first glint
caused by neural discharges undergoing amplitude-latency trading will carry the
image of the second glint along with it to a new range value (Simmons, Moss, &
Ferragamo, 1990). Evidently, the psychological dimension of absolute range
supports the image of the target as a whole. This helps to explain the bat's
extraordinary 10-nsec accuracy for perceiving delay. For the psychological range
or delay axis to accept fine-grain range information about the separation of glints
in complex targets, its intrinsic accuracy must be adequate to receive the
information that is transformed from the frequency domain. The bat achieves
fusion of image components by transforming one component into the numerical format for the other and then adding them together.
The experimental
dissociation of the images of the first and second glints from different effects of
latency shifts demonstrates the independence of their initial physiological
representations. Furthermore, the expected latency shift does not occur for
frequencies whose amplitudes are low because they coincide with spectral
notches; the bat's fine nanosecond acuity thus seems to involve removal of
discharges at "untrustworthy" frequencies prior to integration of discharge timing
across frequencies. The delay-tuning of neurons is usually thought to represent
the conversion of a temporal code (timing of neural discharges) into a "place"
code (the location of activity on the neural map). The bat's unusual acuity of 10
nsec suggests that this conversion of a temporal to a "place" code is only partial.
Not only does the site of activity on the neural map convey information about
delay, but the timing of discharges in map neurons may also play a critical role in
the map-reading operation. The bat's fine acuity may emerge in the behavioral
data because initial neural encoding of the stimulus conditions in the jitter task
involves the same parameter of neural responses--timing--that later is intimately
associated with map-reading in the brain. Echolocation may thus fortuitously be
a good system in which to explore this basic perceptual process.
Acknowledgments
Research supported by grants from ONR, NIH, NIMH, ORF, and SOF.
References
R. A. Altes (1989) Ubiquity of hyperacuity, J. Acoust. Soc. Am. 85: 943-952.
R. D. Bodenhamer & G. D. Pollak (1981) Time and frequency domain processing in the inferior colliculus of echolocating bats, Hearing Res. 5: 317-355.
D. R. Griffin (1958) Listening in the Dark, Yale Univ. Press.
J. A. Simmons (1979) Perception of echo phase information in bat sonar, Science, 207: 1336-1338.
J. A. Simmons (1989) A view of the world through the bat's ear: the formation of acoustic images in echolocation, Cognition 33: 155-199.
J. A. Simmons & L. Chen (1989) The acoustic basis for target discrimination by FM echolocating bats, J. Acoust. Soc. Am. 86: 1333-1350.
J. A. Simmons, M. Ferragamo, C. F. Moss, S. B. Stevenson, & R. A. Altes (in press) Discrimination of jittered sonar echoes by the echolocating bat, Eptesicus fuscus: the shape of target images in echolocation, J. Comp. Physiol. A.
J. A. Simmons, E. G. Freedman, S. B. Stevenson, L. Chen, & T. J. Wohlgenant (1989) Clutter interference and the integration time of echoes in the echolocating bat, Eptesicus fuscus, J. Acoust. Soc. Am. 86: 1318-1332.
J. A. Simmons & S. A. Kick (1984) Physiological mechanisms for spatial filtering and image enhancement in the sonar of bats, Ann. Rev. Physiol. 46: 599-614.
J. A. Simmons, C. F. Moss, & M. Ferragamo (1990) Convergence of temporal and spectral information into acoustic images perceived by the echolocating bat, Eptesicus fuscus, J. Comp. Physiol. A 166:
9
The Informative Vector Machine
Neil Lawrence
University of Sheffield
211 Portobello Street
Sheffield, S1 4DP
[email protected]
Matthias Seeger
University of Edinburgh
5 Forrest Hill
Edinburgh, EH1 2QL
[email protected]
Ralf Herbrich
Microsoft Research Ltd
7 J J Thomson Avenue
Cambridge, CB3 0FB
[email protected]
Abstract
We present a framework for sparse Gaussian process (GP) methods
which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our
goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training
points), but also to perform training under strong restrictions on
time and memory requirements. The scaling of our method is at
most O(n·d²), and in large real-world classification experiments
we show that it can match prediction performance of the popular
support vector machine (SVM), yet can be significantly faster in
training. In contrast to the SVM, our approximation produces estimates of predictive probabilities (“error bars”), allows for Bayesian
model selection and is less complex in implementation.
1 Introduction
Gaussian process (GP) models are powerful non-parametric tools for approximate
Bayesian inference and learning. In comparison with other popular nonlinear architectures, such as multi-layer perceptrons, their behavior is conceptually simpler
to understand and model fitting can be achieved without resorting to non-convex
optimization routines. However, their training time scaling of O(n³) and memory
scaling of O(n²), where n is the number of training points, has hindered their more
widespread use. The related, yet non-probabilistic, support vector machine (SVM)
classifier often renders results that are comparable to GP classifiers w.r.t. prediction
error at a fraction of the training cost. This is possible because many tasks can
be solved satisfactorily using sparse representations of the data set. The SVM is
triggered towards finding such representations through the use of a particular loss
function1 that encourages some degree of sparsity, i.e. the final predictor depends
only on a fraction of training points crucial for good discrimination on the task.
Here, we call these utilized points the active set of the sparse predictor. In case of
SVM classification, the active set contains the support vectors, the points closest to
the decision boundary and the misclassified ones. If the active set size d is much
smaller than n, an SVM classifier can be trained in average-case running time between O(n·d²) and O(n²·d), with memory requirements significantly less than n².
Note, however, that without any restrictions on the data distribution, d can rise to
n.
(Footnote 1: An SVM classifier is trained by minimizing a regularized loss functional, a process
which cannot be interpreted as approximation to Bayesian inference.)
In an effort to overcome scaling problems a range of sparse GP approximations have
been proposed [1, 8, 9, 10, 11]. However, none of these has fully achieved the goals of
being a nontrivial approximation to a non-sparse GP model and matching the SVM
w.r.t. both prediction performance and run time. The algorithm proposed here accomplishes these objectives and, as our experiments show, can even be significantly
faster in training than the SVM. Furthermore, time and memory requirements may
be restricted a priori. The potential benefits of retaining the probabilistic characteristics of the method are numerous, since hard problems, e.g. feature and model
selection, can be dealt with using standard techniques from Bayesian learning.
Our approach builds on earlier work of Lawrence and Herbrich [2] which we extend
here by considering randomized greedy selections and focusing on an alternative
representation of the GP model which facilitates generalizations to settings such
as regression and multi-class classification. In the next section we introduce the
GP classification model and a method for approximate inference. Section 3 then
contains the derivation of our fast greedy approximation and a description of the associated algorithm. In Section 4, we present large-scale experiments on the MNIST
database, comparing our method directly against the SVM. Finally we close with a
discussion in Section 5.
We denote vectors g = (g_i)_i and matrices G = (g_{i,j})_{i,j} in bold-face². If I, J
are sets of row and column indices respectively, we denote the corresponding submatrix of G ∈ R^{p,q} by G_{I,J}; furthermore we abbreviate G_{I,·} to G_{I,1...q}, G_{I,j} to
G_{I,{j}}, G_I to G_{I,I}, etc. The density of the Gaussian distribution with mean μ and
covariance matrix Σ is denoted by N(·|μ, Σ). Finally, we use diag(·) to represent an
“overloaded” operator which extracts the diagonal elements of a matrix as a vector
or produces a square matrix with diagonal elements from a given vector, all other
elements 0.
2 Gaussian Process Classification
Assume we are given a sample S := ((x₁, y₁), . . . , (xₙ, yₙ)), x_i ∈ X, y_i ∈ {−1, +1},
drawn independently and identically distributed (i.i.d.) from an unknown data distribution³ P(x, y). Our goal is to estimate P(y|x) for typical x or, less ambitiously,
to learn a predictor x → y with small error on future data. To model this situation,
we introduce a latent variable u ∈ R separating x and y, and some classification
noise model P(y|u) := Φ(y·(u + b)), where Φ is the cumulative distribution function
of the standard Gaussian N(0, 1), and b ∈ R is a bias parameter. From the Bayesian
viewpoint, the relationship x → u is a random process u(·), which, in a Gaussian
process (GP) model, is given a GP prior with mean function 0 and covariance kernel
k(·, ·). This prior encodes the belief that (before observing any data) for any finite
set X̃ = {x̃₁, . . . , x̃_p} ⊂ X, the corresponding latent outputs (u(x̃₁), . . . , u(x̃_p))ᵀ
are jointly Gaussian with mean 0 ∈ Rᵖ and covariance matrix (k(x̃_i, x̃_j))_{i,j} ∈ R^{p,p}.
GP models are non-parametric, that is, there is in general no finite-dimensional
parametric representation for u(·).
(Footnote 2: Whenever we use a bold symbol g or G for a vector or matrix, we denote its components by the corresponding normal symbols g_i and g_{i,j}.)
(Footnote 3: We focus on binary classification, but our framework can be applied straightforwardly
to regression estimation and multi-class classification.)
It is possible to write u(·) as a linear function
in some feature space F associated with k, i.e. u(x) = wᵀφ(x), w ∈ F, in the
sense that a Gaussian prior on w induces a GP distribution on the linear function
u(·). Here, φ is a feature map from X into F, and the covariance function can be
written k(x, x′) = φ(x)ᵀφ(x′). This linear-function view, under which predictors
become separating hyperplanes in F, is frequently used in the SVM community.
However, F is, in general, infinite-dimensional and not uniquely determined by
the kernel function k. We denote the sequence of latent outputs at the training
points by u := (u(x₁), . . . , u(xₙ))ᵀ ∈ Rⁿ and the covariance or kernel matrix by
K := (k(x_i, x_j))_{i,j} ∈ R^{n,n}.
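As a concrete illustration (our own sketch, not part of the paper), the joint Gaussianity of the latent outputs at a finite input set can be demonstrated numerically with an RBF covariance kernel; the parameterization and jitter term below are our own choices:

```python
import numpy as np

def rbf_kernel(X1, X2, C=1.0, w=1.0):
    """RBF covariance k(x, x') = C exp(-(w/2) ||x - x'||^2); the parameter
    names echo the experiments section, but the values here are arbitrary."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return C * np.exp(-0.5 * w * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))       # a finite set of inputs
K = rbf_kernel(X, X)              # covariance matrix (k(x_i, x_j))_{i,j}

# Under the GP prior, u = (u(x_1), ..., u(x_8))^T ~ N(0, K); the small
# jitter guards against numerical rank deficiency (our own addition).
u = rng.multivariate_normal(np.zeros(8), K + 1e-10 * np.eye(8))
```

The drawn vector u is one prior sample of the latent function values at the chosen inputs.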
The Bayesian posterior process for u(·) can be computed in principle using Bayes'
formula. However, if the noise model P(y|u) is non-Gaussian (as is the case for
binary classification), it cannot be handled tractably and is usually approximated
by another Gaussian process, which should ideally preserve mean and covariance
function of the former. It is easy to show that this is equivalent to fitting the moments between the finite-dimensional (marginal) posterior P(u|S) over the training points and a Gaussian approximation Q(u), because the conditional posterior
P(u(x∗)|u, S) for some non-training point x∗ is identical to the conditional prior
P(u(x∗)|u). In general, computing Q is also infeasible, but several authors have
proposed to approximate the global moment matching by iterative schemes which
locally focus on one training pattern at a time [1, 4]. These schemes (at least in
their simplest forms) result in a parametric form for the approximating Gaussian
    Q(u) ∝ P(u) ∏_{i=1}^{n} exp(−(p_i/2)(u_i − m_i)²).    (1)
This Q may be compared with the form of the true posterior P(u|S) ∝
P(u) ∏_{i=1}^{n} P(y_i|u_i) and shows that Q(u) is obtained from P(u|S) by a likelihood
approximation. Borrowing from graphical models vocabulary, the factors in (1) are
called sites. Initially, all pi , mi are 0, thus Q(u) = P (u). In order to update the
parameters for a site i, we replace it in Q(u) by the corresponding true likelihood
factor P (yi |ui ), resulting in a non-Gaussian distribution whose mean and covariance matrix can still be computed. This allows us to approximate it by a Gaussian
Qnew (u) using moment matching. The site update is called the inclusion of i into
the active set I. The factorized form of the likelihood implies that the new and old
Q differ only in the parameters pi , mi of site i. This is a useful locality property of
the scheme which is referred to as assumed density filtering (ADF) (e.g. [4]). The
special case of ADF4 for GP models has been proposed in [5].
3 Sparse Gaussian Process Classification
The simplest way to obtain a sparse Gaussian process classification (GPC) approximation from the ADF scheme is to leave most of the site parameters at 0, i.e.
p_i = 0, m_i = 0 for all i ∉ I, where I ⊂ {1, . . . , n} is the active set, |I| =: d < n. For
this to succeed, it is important to choose I so that the decision boundary between
classes is represented essentially as accurately as if we used the whole training set.
An exhaustive search over all possible subsets I is, of course, intractable. Here, we
follow a greedy approach suggested in [2], including new patterns one at a time into
I. The selection of a pattern to include is made by computing a score function for
(Footnote 4: A generalization of ADF, expectation propagation (EP) [4], allows for several iterations
over the data. In the context of sparse approximations, it allows us to remove points from
I or exchange them against points outside I, although we do not consider such moves here.)
Algorithm 1 Informative vector machine algorithm
Require: A desired sparsity d ≪ n.
  I = ∅, m = 0, Π = diag(0), diag(A) = diag(K), h = 0, J = {1, . . . , n}.
  repeat
    for j ∈ J do
      Compute Δ_j according to (4).
    end for
    i = argmax_{j∈J} Δ_j
    Do updates for p_i and m_i according to (2).
    Update matrices L, M, diag(A) and h according to (3).
    I ← I ∪ {i}, J ← J \ {i}.
  until |I| = d
all points in J = {1, . . . , n} \ I (or a subset thereof) and then picking the winner.
The heuristic we implement has also been considered in the context of active learning (see chapter 5 of [3]): score an example (x_i, y_i) by the decrease in entropy of
Q(·) upon its inclusion. As a result of the locality property of ADF and the fact
that Q is Gaussian, it is easy to see that the entropy difference H[Q_new] − H[Q] is
proportional to the log ratio between the variances of the marginals Q_new(u_i) and
whose inclusion leads to a large reduction in predictive (posterior) variance at the
corresponding site. Whilst other selection heuristics can be argued for and utilized,
it turns out that the differential entropy score together with the simple likelihood
approximation in (1) leads to an extremely efficient and competitive algorithm.
In the remainder of this section, we describe our method and give a schematic
algorithm. A detailed derivation and discussions of some extensions can be found
in [7]. From (1) we have Q(·) = N(·|h, A), A := (K⁻¹ + Π)⁻¹, h := AΠm and
Π := diag(p). If I is the current active set, then all components of p and m not in
I are zero, and some algebra using the Woodbury formula gives

    A = K − MᵀM,    M = L⁻¹ Π_I^{1/2} K_{I,·} ∈ R^{d,n},

where L is the lower-triangular Cholesky factor of

    B = I + Π_I^{1/2} K_I Π_I^{1/2} ∈ R^{d,d}.
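These identities are straightforward to check numerically. The following sketch (our own verification, with an arbitrary SPD matrix standing in for K and random site precisions) confirms that the Woodbury form A = K − MᵀM agrees with the direct inverse (K⁻¹ + Π)⁻¹:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 7, 3
Z = rng.normal(size=(n, n))
K = Z @ Z.T + n * np.eye(n)          # an arbitrary SPD "kernel" matrix
I = [0, 2, 5]                        # active set
p = np.zeros(n)
p[I] = rng.uniform(0.5, 2.0, size=d) # site precisions, zero outside I

# Direct form: A = (K^-1 + Pi)^-1
A_direct = np.linalg.inv(np.linalg.inv(K) + np.diag(p))

# Woodbury form: A = K - M^T M with M = L^-1 Pi_I^{1/2} K_{I,.}
sqrt_pi = np.diag(np.sqrt(p[I]))
B = np.eye(d) + sqrt_pi @ K[np.ix_(I, I)] @ sqrt_pi
L = np.linalg.cholesky(B)
M = np.linalg.solve(L, sqrt_pi @ K[I, :])
A_woodbury = K - M.T @ M

assert np.allclose(A_direct, A_woodbury)
```

The point of the representation is that only the d rows of M and the d×d factor L need to be stored and updated, never the full A.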
In order to compute the differential entropy score for a point j ∉ I, we have to
know a_{j,j} and h_j. Thus, when including i into the active set I, we need to update
diag(A) and h accordingly, which in turn requires the matrices L and M to be
kept up-to-date. The update equations for p_i, m_i are

    p_i = ν_i / (1 − a_{i,i} ν_i),    m_i = h_i + α_i / ν_i,    where    (2)

    z_i = y_i (h_i + b) / √(1 + a_{i,i}),
    α_i = y_i N(z_i|0,1) / (Φ(z_i) √(1 + a_{i,i})),
    ν_i = α_i (α_i + (h_i + b) / (1 + a_{i,i})).
We then update L → L_new by appending the row (lᵀ, l), and M → M_new by
appending the row μᵀ, where

    l = √p_i M_{·,i} (a vector),    l = √(1 + p_i K_{i,i} − lᵀl) (a scalar),    μ = l⁻¹ (√p_i K_{·,i} − Mᵀl).    (3)

Finally, diag(A_new) ← diag(A) − (μ_j²)_j and h_new ← h + α_i l p_i^{−1/2} μ. The differential
entropy score for j ∉ I can be computed based on the variables in (2) (with i → j) as

    Δ_j = −(1/2) log(1 − a_{j,j} ν_j),    (4)

which can be computed in O(1), given h_j and a_{j,j}. In Algorithm 1 we give an
algorithmic version of this scheme.
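The following NumPy sketch is our own reconstruction of Algorithm 1 from equations (2)-(4), not the authors' code. As a consistency check on toy data, it compares the incrementally maintained diag(A) and h against the direct formulas A = (K⁻¹ + Π)⁻¹ and h = AΠm:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

def Phi(z):
    """Standard Gaussian cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ivm(K, y, d, b=0.0):
    """Greedy IVM selection with full greedy scoring over all remaining points."""
    n = K.shape[0]
    diag_A = np.diag(K).copy()      # marginal posterior variances a_ii
    h = np.zeros(n)                 # marginal posterior means h_i
    M = np.zeros((0, n))            # one row appended per inclusion
    p = np.zeros(n)                 # site precisions
    m = np.zeros(n)                 # site means
    I, J = [], list(range(n))
    for _ in range(d):
        best = None
        for j in J:
            s = 1.0 + diag_A[j]
            z = y[j] * (h[j] + b) / sqrt(s)
            alpha = y[j] * (exp(-0.5 * z * z) / sqrt(2.0 * pi)) / (Phi(z) * sqrt(s))
            nu = alpha * (alpha + (h[j] + b) / s)
            score = -0.5 * log(1.0 - diag_A[j] * nu)    # entropy score (4)
            if best is None or score > best[0]:
                best = (score, j, alpha, nu)
        _, i, alpha, nu = best
        p[i] = nu / (1.0 - diag_A[i] * nu)              # site updates (2)
        m[i] = h[i] + alpha / nu
        l_vec = sqrt(p[i]) * M[:, i]                    # row updates (3)
        ell = sqrt(1.0 + p[i] * K[i, i] - l_vec @ l_vec)
        mu = (sqrt(p[i]) * K[:, i] - M.T @ l_vec) / ell
        diag_A -= mu ** 2
        h += alpha * ell / sqrt(p[i]) * mu
        M = np.vstack([M, mu])
        I.append(i)
        J.remove(i)
    return I, p, m, h, diag_A

# Toy demo: two well-separated 1-D clusters, RBF kernel (our own test setup).
rng = np.random.default_rng(0)
X = np.r_[rng.normal(-2.0, 1.0, (20, 1)), rng.normal(2.0, 1.0, (20, 1))]
y = np.r_[-np.ones(20), np.ones(20)]
K = np.exp(-0.5 * (X - X.T) ** 2) + 1e-8 * np.eye(40)
I, p, m, h, diag_A = ivm(K, y, d=10)

# Check the incremental quantities against the direct formulas.
A = np.linalg.inv(np.linalg.inv(K) + np.diag(p))
assert np.allclose(np.diag(A), diag_A, atol=1e-6)
assert np.allclose(A @ (p * m), h, atol=1e-6)
```

The inner loop costs O(n) per inclusion given the cached h and diag(A), matching the complexity analysis that follows.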
Each inclusion costs O(n·d), dominated by the computation of μ, apart from the
computation of the kernel matrix column K_{·,i}. Thus the total time complexity is
O(n·d²). The storage requirement is O(n·d), dominated by the buffer for M. Given
diag(A) and h, the error or the expected log likelihood of the current predictor on
the remaining points J can be computed in O(n). These scores can be used in order
to decide how many points to include into the final I. For kernel functions with
constant diagonal, our selection heuristic is constant over patterns if I = ∅, so the
first (or the first few) inclusion candidate is chosen at random. After training is
complete, we can predict on test points x∗ by evaluating the approximate predictive
distribution Q(u∗|x∗, S) = ∫ P(u∗|u) Q(u) du = N(u∗|μ(x∗), σ²(x∗)), where

    μ(x∗) = βᵀ k(x∗),    σ²(x∗) = k(x∗, x∗) − k(x∗)ᵀ Π_I^{1/2} B⁻¹ Π_I^{1/2} k(x∗),    (5)

with β := Π_I^{1/2} B⁻¹ Π_I^{1/2} m_I and k(x∗) := (k(x_i, x∗))_{i∈I}. We may compute σ²(x∗)
using one back-substitution with the factor L. The approximate predictive distribution over y∗ can be obtained by averaging the noise model over the Gaussian.
The optimal predictor for the approximation is sgn(μ(x∗) + b), which is independent
of the variance σ²(x∗).
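A sketch of the predictor (our own reconstruction): eq. (5) evaluated via linear solves with B, plus the standard Gaussian-probit average P(y∗ = +1) = Φ((μ(x∗) + b)/√(1 + σ²(x∗))) for the predictive class probability; this closed form is not spelled out in the text, so treat it as an assumption:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ivm_predict(K_I, k_star, k_ss, p_I, m_I, b=0.0):
    """Predictive mean/variance of eq. (5) for one test point x*.
    K_I: kernel matrix on the active set; k_star: (k(x_i, x*))_{i in I};
    k_ss = k(x*, x*); p_I, m_I: site precisions/means on the active set."""
    d = len(p_I)
    sqrt_pi = np.sqrt(p_I)
    B = np.eye(d) + (sqrt_pi[:, None] * K_I) * sqrt_pi[None, :]
    beta = sqrt_pi * np.linalg.solve(B, sqrt_pi * m_I)  # Pi^1/2 B^-1 Pi^1/2 m_I
    mean = beta @ k_star
    var = k_ss - (sqrt_pi * k_star) @ np.linalg.solve(B, sqrt_pi * k_star)
    # Averaging the probit noise model over N(mean, var) gives the probability.
    prob_pos = Phi((mean + b) / sqrt(1.0 + var))
    return mean, var, prob_pos

# Toy inputs from an RBF kernel on the line (our own example, not the paper's).
X = np.linspace(-1.0, 1.0, 5)
K_I = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
k_star = np.exp(-0.5 * (X - 0.3) ** 2)
p_I = np.array([0.5, 1.0, 2.0, 1.5, 0.7])
m_I = np.array([0.2, -0.1, 0.4, 0.0, -0.3])
mean, var, prob = ivm_predict(K_I, k_star, 1.0, p_I, m_I)
```

The two solves with B play the role of the back-substitutions with its Cholesky factor L mentioned in the text.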
The simple scheme above employs full greedy selection over all remaining points to
find the inclusion candidate. This is sensible during early inclusions, but computationally wasteful during later ones, and an important extension of the basic scheme
of [2] allows for randomized greedy selections. To this end, we maintain a selection
index J ⊂ {1, . . . , n} with J ∩ I = ∅ at all times. Having included i into I we
modify the selection index J. This means that only the components J of diag(A)
and h have to be updated, which requires only the columns M_{·,J}. Hence, if J
exhibits some inertia while moving over {1, . . . , n} \ I, many of the columns of M
will not have to be kept up-to-date. In our implementation, we employ a simple
delayed updating scheme for the columns of M which avoids double computations
(see [7] for details). After a number of initial inclusions are done using full greedy
selection, we use a J of fixed size m together with the following modification rule:
for a fraction τ ∈ (0, 1), retain the τ·m best-scoring points in J, then fill it up to
size m by drawing at random from {1, . . . , n} \ (I ∪ J).
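The refresh rule can be sketched in a few lines (the function name, interface, and the symbol written tau here are our own; the sketch assumes enough unvisited points remain to refill J):

```python
import numpy as np

def refresh_selection_index(scores, J, I, n, m, tau, rng):
    """Keep the tau*m best-scoring candidates in J and refill J to size m
    at random from {0,...,n-1} \ (I u J)."""
    keep = sorted(J, key=lambda j: scores[j], reverse=True)[: int(tau * m)]
    pool = [j for j in range(n) if j not in I and j not in J]
    fill = rng.choice(pool, size=m - len(keep), replace=False)
    return keep + [int(j) for j in fill]
```

Only the columns M_{·,J} of the retained and newly drawn candidates then need to be brought up to date.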
4 Experiments
We now present results of experiments on the MNIST handwritten digits database⁵,
comparing our method against the SVM algorithm. We considered binary tasks of
the form “c-against-rest”, c ∈ {0, . . . , 9}. c is mapped to +1, all others to −1. We
down-sampled the bitmaps to size 13 × 13 and split the MNIST training set into
a (new) training set of size n = 59000 and a validation set of size 1000; the test
set size is 10000. A run consisted of model selection, training and testing, and
all results are averaged over 10 runs. We employed the RBF kernel k(x, x′) =
C exp(−(w/(2 · 169)) ‖x − x′‖²), x ∈ R¹⁶⁹, with hyper-parameters C > 0 (process
variance) and w > 0 (inverse squared length-scale). Model selection was done by
minimizing validation set error, training on random training set subsets of size
5000.⁶
(Footnote 5: Available online at http://www.research.att.com/~yann/exdb/mnist/index.html.)
(Footnote 6: The model selection training set for a run i is the same across tested methods. The
list of kernel parameters considered for selection has the same size across methods.)
         SVM                          IVM
c    d     gen    time      c    d     gen    time
0    1247  0.22   1281      0    1130  0.18    627
1     798  0.20    864      1     820  0.26    427
2    2240  0.40   2977      2    2150  0.40   1690
3    2610  0.41   3687      3    2500  0.39   2191
4    1826  0.40   2442      4    1740  0.33   1210
5    2306  0.29   2771      5    2200  0.32   1758
6    1331  0.28   1520      6    1270  0.29    765
7    1759  0.54   2251      7    1660  0.51   1110
8    2636  0.50   3909      8    2470  0.53   2024
9    2731  0.58   3469      9    2740  0.55   2444
Table 1: Test error rates (gen, %) and training times (time, s) on binary MNIST
tasks. SVM: Support vector machine (SMO); d: average number of SVs. IVM:
Sparse GPC, randomized greedy selections; d: final active set size. Figures are
means over 10 runs.
Our goal was to compare the methods not only w.r.t. performance, but also running
time. For the SVM, we chose the SMO algorithm [6] together with a fast elaborate
kernel matrix cache (see [7] for details). For the IVM, we employed randomized
greedy selections with fairly conservative settings.⁷ Since each binary digit classification task is very unbalanced, the bias parameter b in the GPC model was chosen
to be non-zero. We simply fixed b = Φ⁻¹(r), where r is the ratio between +1 and
−1 patterns in the training set, and added a constant v_b = 1/10 to the kernel k
to account for the variance of the bias hyper-parameter. Ideally, both b and v_b
should be chosen by model selection, but initial experiments with different values
for (b, vb ) exhibited no significant fluctuations in validation errors. To ensure a fair
comparison, we did initial SVM runs and initialized the active set size d with the
average number (over 10 runs) of SVs found, independently for each c. We then
re-ran the SVM experiments, allowing for O(d·n) cache space. Table 1 shows the
results.
Note that IVM shows comparable performance to the SVM, while achieving significantly lower training times. For less conservative settings of the randomized
selection parameters, further speed-ups might be realizable. We also registered
(not shown here) significant fluctuations in training time for the SVM runs, while
this figure is stable and a-priori predictable for the IVM. Within the IVM, we can
obtain estimates of predictive probabilities for test points, quantifying prediction
uncertainties. In Figure 1, which was produced for the hardest task c = 9, we reject
fractions of test set examples based on the size of |P(y∗ = +1) − 1/2|. For the SVM,
the size of the discriminant output is often used to quantify predictive uncertainty
heuristically. For c = 9, the latter is clearly inferior (although the difference is less
pronounced for the simpler binary tasks).
In the SVM community it is common to combine the “c-against-rest” classifiers to
obtain a multi-class discriminant⁸ as follows: for a test point x∗, decide for the class
whose associated classifier has the highest real-valued output. For the IVM, the
(Footnote 7: First 2 selections at random, then 198 using full greedy, after that a selection index of
size 500 and a retained fraction τ = 1/2.)
(Footnote 8: Although much recent work has looked into more powerful combination schemes, e.g.
based on error-correcting codes.)
Figure 1: Plot of test error rate against increasing rejection rate for the SVM
(dashed) and IVM (solid), for the task c = 9 against the rest. For SVM, we reject
based on “distance” from separating plane, for IVM based on estimates of predictive
probabilities. The IVM line runs below the SVM line, exhibiting lower classification
errors for identical rejection rates.
equivalent would be to compare the estimates log P(y∗ = +1) from each c-predictor
and pick the maximizing c. This is suboptimal, because the different predictors
have not been trained jointly.⁹ However, the estimates of log P(y∗ = +1) do depend
on predictive variances, i.e. a measure of uncertainty about the predictive mean,
which cannot be properly obtained within the SVM framework. This combination
scheme results in test errors of 1.54% (±0.0417%) for IVM and 1.62% (±0.0316%) for
the SVM. When comparing these results to others in the literature, recall that our
experiments were based on images sub-sampled to size 13 × 13 rather than the
usual 28 × 28.
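Both decision rules used in the experiments, the one-vs-rest combination and the uncertainty-based rejection of Figure 1, can be sketched in a few lines (helper names are our own; the inputs are per-class estimates, however obtained):

```python
import numpy as np

def combine_one_vs_rest(log_probs):
    """Multi-class decision: pick the class c whose 'c-against-rest'
    predictor assigns the highest log P(y* = +1)."""
    return int(np.argmax(log_probs))

def reject_most_uncertain(probs, reject_fraction):
    """Rejection rule: reject the test cases whose estimated
    P(y* = +1) is closest to 1/2, i.e. smallest |P - 1/2|."""
    probs = np.asarray(probs, dtype=float)
    n_reject = int(round(reject_fraction * len(probs)))
    order = np.argsort(np.abs(probs - 0.5))   # most uncertain first
    return order[:n_reject]                   # indices to reject
```

The SVM analogue of the rejection rule thresholds the distance from the separating plane instead, which, unlike the IVM probabilities, carries no calibrated notion of predictive variance.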
5 Discussion
We have demonstrated that sparse Gaussian process classifiers can be constructed
efficiently using greedy selection with a simple fast selection criterion. Although we
focused on the change in differential entropy in our experiments here, the simple
likelihood approximation at the basis of our method allows for other equally efficient
criteria such as information gain [3]. Our method retains many of the benefits
of probabilistic GP models (error bars, model combination, interpretability, etc.)
while being much faster and more memory-efficient both in training and prediction.
In comparison with non-probabilistic SVM classification, our method enjoys the
further advantages of being simpler to implement and having strictly predictable
time requirements. Our method can also be significantly faster10 than SVM with the
SMO algorithm. This is due to the fact that SMO?s active set typically fluctuates
heavily across the training set, thus a large fraction of the full kernel matrix must
be evaluated. In contrast, IVM requires only d/n of K.
(Footnote 9: It is straightforward to obtain the IVM for a joint GP classification model; however
the training costs rise by a factor of c². Whether this factor can be reduced to c using
further sensible approximations is an open question.)
(Footnote 10: We would expect SVMs to catch up with IVMs on tasks which require fairly large
active sets, and for which very simple and fast covariance functions are appropriate (e.g.
sparse input patterns).)
Among the many proposed sparse GP approximations [1, 8, 9, 10, 11], our method
is most closely related to [1]. The latter is a sparse Bayesian online scheme which
does not employ greedy selections and uses a more accurate likelihood approximation than we do, at the expense of slightly worse training time scaling, especially
when compared with our randomized version. It also requires the specification of a
rejection threshold and is dependent on the ordering in which the training points
are presented. It incorporates steps to remove points from I, which can also be
done straightforwardly in our scheme, however such moves are likely to create numerical stability problems. Smola and Bartlett [8] use a likelihood approximation
different from both the IVM and the scheme of [1] for GP regression, together with
greedy selections, but in contrast to our work they use a very expensive selection
heuristic (O(n·d) per score computation) and are forced to use randomized greedy
selection over small selection indexes. The differential entropy score has previously
been suggested in the context of active learning (e.g. [3]), but applies more directly
to our problem. In active learning, the label yi is not known at the time xi has to
be scored, and expected rather than actual entropy changes have to be considered.
Furthermore, MacKay [3] applies the selection to multi-layer perceptron (MLP)
models for which Gaussian posterior approximations over the weights can be very
poor.
Acknowledgments
We thank Chris Williams, David MacKay, Manfred Opper and Lehel Csató for
helpful discussions. MS gratefully acknowledges support through a research studentship
from Microsoft Research Ltd.
References
[1] Lehel Csató and Manfred Opper. Sparse online Gaussian processes. N. Comp., 14:641-668, 2002.
[2] Neil D. Lawrence and Ralf Herbrich. A sparse Bayesian compression scheme - the informative vector machine. Presented at NIPS 2001 Workshop on Kernel Methods, 2001.
[3] David MacKay. Bayesian Methods for Adaptive Models. PhD thesis, California Institute of Technology, 1991.
[4] Thomas Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT, January 2001.
[5] Manfred Opper and Ole Winther. Gaussian processes for classification: Mean field algorithms. N. Comp., 12(11):2655-2684, 2000.
[6] John C. Platt. Fast training of support vector machines using sequential minimal optimization. In Schölkopf et al., editor, Advances in Kernel Methods, pages 185-208. 1998.
[7] Matthias Seeger, Neil D. Lawrence, and Ralf Herbrich. Sparse Bayesian learning: The informative vector machine. Technical report, Department of Computer Science, Sheffield, UK, 2002. See www.dcs.shef.ac.uk/~neil/papers/.
[8] Alex Smola and Peter Bartlett. Sparse greedy Gaussian process regression. In Advances in NIPS 13, pages 619-625, 2001.
[9] Michael Tipping. Sparse Bayesian learning and the relevance vector machine. J. M. Learn. Res., 1:211-244, 2001.
[10] Volker Tresp. A Bayesian committee machine. N. Comp., 12(11):2719-2741, 2000.
[11] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in NIPS 13, pages 682-688, 2001.
table:2 learn:3 du:1 complex:1 diag:11 did:1 whole:1 noise:3 scored:1 n2:3 fair:1 x1:2 site:6 referred:2 elaborate:1 sub:1 candidate:2 formula:2 down:1 symbol:2 list:1 svm:29 intractable:1 workshop:1 mnist:5 sequential:1 phd:2 kx:1 rejection:3 locality:2 entropy:9 lt:2 simply:1 likely:1 face2:1 applies:2 ivm:15 succeed:1 conditional:2 goal:4 quantifying:1 rbf:1 towards:1 replace:1 hard:1 change:2 included:1 typical:1 infinite:1 determined:1 averaging:1 conservative:2 called:2 total:1 perceptrons:1 support:6 cholesky:1 latter:2 unbalanced:1 relevance:1 tested:1 lnew:1 |
Approximate Linear Programming for
Average-Cost Dynamic Programming
Daniela Pucci de Farias
IBM Almaden Research Center
650 Harry Road, San Jose, CA 95120
[email protected]
Benjamin Van Roy
Department of Management Science and Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
This paper extends our earlier analysis on approximate linear programming as an approach to approximating the cost-to-go function in a
discounted-cost dynamic program [6]. In this paper, we consider the
average-cost criterion and a version of approximate linear programming
that generates approximations to the optimal average cost and differential
cost function. We demonstrate that a naive version of approximate linear
programming prioritizes approximation of the optimal average cost and
that this may not be well-aligned with the objective of deriving a policy
with low average cost. For that, the algorithm should aim at producing a
good approximation of the differential cost function. We propose a twophase variant of approximate linear programming that allows for external
control of the relative accuracy of the approximation of the differential
cost function over different portions of the state space via state-relevance
weights. Performance bounds suggest that the new algorithm is compatible with the objective of optimizing performance and provide guidance
on appropriate choices for state-relevance weights.
1 Introduction
The curse of dimensionality prevents application of dynamic programming to most problems of practical interest. Approximate linear programming (ALP) aims to alleviate the
curse of dimensionality by approximation of the dynamic programming solution. In [6], we
develop a variant of approximate linear programming for the discounted-cost case which
is shown to scale well with problem size. In this paper, we extend that analysis to the
average-cost criterion.
Originally introduced by Schweitzer and Seidmann [11], approximate linear programming
combines the linear programming approach to exact dynamic programming [9] with approximation of the differential cost function (the cost-to-go function, in the discounted-cost case) by a linear architecture. More specifically, given a collection of basis functions φ_1, ..., φ_K mapping states of the system to be controlled to real numbers, approximate linear programming involves solution of a linear program for generating an approximation to the differential cost function of the form h̃_r = r_1 φ_1 + ... + r_K φ_K.
Extension of approximate linear programming to the average-cost setting requires a different algorithm and additional analytical ideas. Specifically, our contribution can be summarized as follows:
Analysis of the usual formulation of approximate linear programming for average-cost problems. We start with the observation that the most natural formulation of average-cost ALP, which follows immediately from taking limits in the discounted-cost formulation
and can be found, for instance, in [1, 2, 4, 10], can be interpreted as an algorithm for approximating the optimal average cost. However, to obtain a good policy, one needs a
good approximation to the differential cost function. We demonstrate through a counterexample that approximating the average cost and approximating the differential cost function
so that it leads to a good policy are not necessarily aligned objectives. Indeed, the algorithm
may lead to arbitrarily bad policies, even if the approximate average cost is very close to
optimal and the basis functions have the potential to produce an approximate differential
cost function leading to a reasonable policy.
Proposal of a variant of average-cost ALP. A critical limitation of the average-cost ALP
algorithm found in the literature is that it does not allow for external control of how the approximation to the differential cost function should be emphasized over different portions
of the state space. In situations like the one described in the previous paragraph, when the
algorithm produces a bad policy, there is little one can do to improve the approximation
other than selecting new basis functions. To address this issue, we propose a two-phase
variant of average-cost ALP: the first phase is simply the average-cost ALP algorithm already found in the literature, which is used for generating an approximation for the optimal
average cost. This approximation is used in the second phase of the algorithm for generating an approximation to the differential cost function. We show that the second phase
selects an approximate differential cost function minimizing a weighted sum of the distance to the true differential cost function, where the weights (referred to as state-relevance
weights) are algorithm parameters to be specified during implementation of the algorithm,
and can be used to control which states should have more accurate approximations for the
differential cost function.
Development of bounds linking the quality of approximate differential cost functions
to the performance of the policy associated with them. The observation that the usual
formulation of ALP may lead to arbitrarily bad policies raises the question of how to design an algorithm for directly optimizing performance of the policy being obtained. With
this question in mind, we develop bounds that relate the quality of approximate differential
cost functions ? i.e., their proximity to the true differential cost function ? to the expected increase in cost incurred by using a greedy policy associated with them. The bound
suggests using a weighted sum of the distance to the true differential cost function for comparing different approximate differential cost functions. Thus the objective of the second
phase of our ALP algorithm is compatible with the objective of optimizing performance
of the policy being obtained, and we also have some guidance on appropriate choices of
state-relevance weights.
2 Stochastic Control Problems and the Curse of Dimensionality
We consider discrete-time stochastic control problems involving a finite state space S of finite cardinality. For each state x ∈ S, there is a finite set A_x of available actions. When the current state is x and action a ∈ A_x is taken, a cost g_a(x) is incurred. State transition probabilities P_a(x, y) represent, for each pair of states x and y and each action a ∈ A_x, the probability that the next state will be y given that the current state is x and the current action is a ∈ A_x.
A policy is a mapping u from states to actions. Given a policy u, the dynamics of the system follow a Markov chain with transition probabilities P_{u(x)}(x, y). For each policy u, we define a transition matrix P_u whose (x, y)th entry is P_{u(x)}(x, y), and a cost vector g_u whose xth entry is g_{u(x)}(x). We make the following assumption on the transition probabilities:
Assumption 1 (Irreducibility). For each pair of states x and y and each policy u, there is n > 0 such that P_u^n(x, y) > 0.
In stochastic control problems, we want to select a policy optimizing a given criterion. In this paper, we will employ as an optimality criterion the average cost

	λ_u(x) = lim_{T→∞} (1/T) E[ Σ_{t=0}^{T−1} g_u(x_t) | x_0 = x ].

Irreducibility implies that, for each policy u, this limit exists and λ_u(x) = λ_u for all x; the average cost is independent of the initial state in the system.
We denote the minimal average cost by λ* = min_u λ_u. For any policy u, we define the associated dynamic programming operator T_u by T_u h = g_u + P_u h. Note that T_u operates on vectors h corresponding to functions on the state space S. We also define the dynamic programming operator T by T h = min_u T_u h. A policy u is called greedy with respect to h if it attains the minimum in the definition of T h.
An optimal policy minimizing the average cost can be derived from the solution of Bellman's equation λ e + h = T h, where e is the vector of ones. We denote solutions to Bellman's equation by pairs (λ*, h*). The scalar λ* is unique and equal to the optimal average cost. The vector h* is called a differential cost function. The differential cost function is unique up to a constant; if h* solves Bellman's equation, then h* + k e is also a solution for all scalars k, and all other solutions can be shown to be of this form. We can ensure uniqueness by imposing h(x̄) = 0 for an arbitrary state x̄. Any policy that is greedy with respect to the differential cost function is optimal.
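As a concrete (hypothetical) illustration of the operators just defined, the sketch below solves Bellman's equation on a toy three-state MDP with invented transition probabilities and costs by relative value iteration, h ← Th − (Th)(0)e; the subtracted offset converges to λ* and the fixed point satisfies λ* e + h = Th with h(0) = 0. Nothing here is taken from the paper; it is only a baseline solver for problems small enough to enumerate.

```python
# Toy 3-state, 2-action MDP with invented numbers (not from the paper).
# P[a][x][y] is a transition probability; g[a][x] is a stage cost.
P = {0: [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
     1: [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]}
g = {0: [2.0, 1.0, 3.0], 1: [3.0, 0.5, 2.0]}
states, actions = range(3), [0, 1]

def T(h):
    # Dynamic programming operator: (Th)(x) = min_a [ g_a(x) + sum_y P_a(x,y) h(y) ].
    return [min(g[a][x] + sum(P[a][x][y] * h[y] for y in states) for a in actions)
            for x in states]

# Relative value iteration: h <- Th - (Th)(0) e.  The subtracted offset
# converges to the optimal average cost, and h to a differential cost
# function normalized so that h(0) = 0.
h = [0.0, 0.0, 0.0]
for _ in range(2000):
    Th = T(h)
    lam, h = Th[0], [Th[x] - Th[0] for x in states]

# At the fixed point, Bellman's equation lam*e + h = Th holds.
residual = max(abs(lam + h[x] - T(h)[x]) for x in states)
```

This exact approach stores h for every state, which is precisely what the curse of dimensionality rules out on problems of practical interest.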
Solving Bellman's equation involves computing and storing the differential cost function
for all states in the system. This is computationally infeasible in most problems of practical interest due to the explosion in the number of states as the number of state variables
grows. We try to combat the curse of dimensionality by settling for the more modest goal
of finding an approximation to the differential cost function. The underlying assumption is
that, in many problems of practical interest, the differential cost function will exhibit some
regularity, or structure, allowing for reasonable approximations to be stored compactly.
We consider a linear approximation architecture: given a set of functions φ_1, ..., φ_K mapping S to the reals, we generate approximations of the form

	h̃_r(x) = Σ_{k=1}^{K} r_k φ_k(x).   (1)

We define a matrix Φ = [φ_1 ... φ_K], i.e., each of the basis functions φ_k is stored as a column of Φ, and each row of Φ corresponds to the vector of basis functions evaluated at a distinct state x. We represent h̃_r in matrix notation as Φ r.
In the remainder of the paper, we assume that (a manageable number of) basis functions
are prespecified, and address the problem of choosing a suitable parameter vector r. For simplicity, we choose an arbitrary state (henceforth called state "0") for which we set h(0) = 0; accordingly, we assume that the basis functions are such that φ_k(0) = 0 for all k.
3 Approximate Linear Programming
Approximate linear programming [11, 6] is inspired by the traditional linear programming
approach to dynamic programming, introduced by [9]. Bellman's equation can be solved
by the average-cost exact LP (ELP):
	maximize    λ
	subject to  λ e + h ≤ T h.   (2)

Note that the constraints λ e + h ≤ T h can be replaced by λ e + h ≤ T_u h for all policies u, and therefore we can think of problem (2) as an LP.
In approximate linear programming, we reduce the generally intractable dimensions of
the average-cost ELP by constraining h to be of the form Φ r. This yields the first-phase approximate LP (ALP)

	maximize    λ
	subject to  λ e + Φ r ≤ T Φ r.   (3)

Problem (3) can be expressed as an LP by the same argument used for the exact LP. We denote its solution by (λ̂, r̂_1). The following result is immediate.
Lemma 1. The solution λ̂ of the first-phase ALP minimizes |λ* − λ| over the feasible region.
Proof: Maximizing λ in (3) is equivalent to minimizing λ* − λ. Since the first-phase ALP corresponds to the exact LP (2) with the extra constraints h = Φ r, we have λ ≤ λ* for all feasible λ. Hence |λ* − λ| = λ* − λ, and the claim follows.
Lemma 1 implies that the first-phase ALP can be seen as an algorithm for approximating the optimal average cost. Using this algorithm for generating a policy for the average-cost problem is based on the hope that approximation of the optimal average cost should also implicitly imply approximation of the differential cost function. Note that it is not unreasonable to expect that some approximation of the differential cost function should be involved in the minimization of λ* − λ; for instance, λ̂ = λ* whenever the differential cost function h* lies in the range of Φ.
The ALP has as many variables as the number of basis functions plus one, which will
usually amount to a dramatically smaller number of variables than what we had in the ELP.
However, the ALP still has as many constraints as the number of state-action pairs. This
problem is also found in the discounted-cost formulation and there are several approaches
in the literature for dealing with it, including constraint sampling [7] and exploitation of
problem-specific structures for efficient elimination of redundant constraints [8, 10].
Our first step in the analysis of average-cost ALP is to demonstrate through a counterexample that it can produce arbitrarily bad policies, even if the approximation to the average
cost is very accurate.
4 Performance of the first-phase ALP: a counterexample
We consider a Markov process with states {0, 1, ..., N−1}, each representing a possible number of jobs in a queue with buffer of size N−1. The system state x_t evolves as a controlled random walk: from an interior state x, transitions to states x−1 and x+1 occur with probabilities q and p, respectively (and the state remains unchanged otherwise). From state 0, transitions to states 1 and 0 occur with probabilities p and 1−p, respectively. From state N−1, transitions to states N−2 and N−1 occur with probabilities q and 1−q, respectively. The arrival probability p is the same for all states. The action to be chosen in each state is the departure probability or service rate q, which takes values in a finite set. The cost incurred at state x if action q is taken is given by a stage cost g(x, q).
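The queue dynamics just described can be simulated directly. The sketch below uses invented values for the buffer size, arrival probability, and stage cost (the paper's exact constants are not reproduced here), and estimates the long-run average cost of a fixed service rate by simulation.

```python
import random

# Simulation of the controlled queue with invented parameters: buffer size N,
# arrival probability p, a fixed service rate q, and a hypothetical stage cost.
random.seed(0)
N, p = 50, 0.3

def step(x, q):
    u = random.random()
    if u < p:                      # arrival
        return min(x + 1, N - 1)
    if u < p + q:                  # departure (no effect in state 0)
        return max(x - 1, 0)
    return x                       # no event this period

def g(x, q):
    # Hypothetical stage cost: holding cost plus a service-effort charge.
    return x + 10.0 * q ** 2

q_fixed, steps = 0.6, 20000
x, total = 0, 0.0
for _ in range(steps):
    total += g(x, q_fixed)
    x = step(x, q_fixed)
avg_cost = total / steps
```

Since the fixed service rate here exceeds the arrival probability, the simulated chain is stable and the running average settles near the chain's true average cost.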
We use a fixed set of basis functions. For this choice, the first-phase ALP yields an approximation λ̂ for the optimal average cost which is within 2% of the true value λ*. However, the average cost yielded by the greedy policy with respect to Φ r̂_1 is 9842.2 for the buffer size considered, and goes to infinity as we increase the buffer size.
Figure 1 explains this behavior. Note that Φ r̂_1 is a very good approximation for h* over the smallest states, and becomes progressively worse as x increases. Small states correspond to virtually all of the stationary probability under the optimal policy, hence it is not surprising that the first-phase ALP yields a very accurate approximation for λ*, as other states contribute very little to the optimal average cost. However, fitting the optimal average cost and the differential cost function over states visited often under the optimal policy is not sufficient for getting a good policy. Indeed, Φ r̂_1 severely underestimates costs in large states, and the greedy policy drives the system to those states, yielding a very large average cost and ultimately making the system unstable when the buffer size goes to infinity.
It is also troublesome to note that our choice of basis functions actually has the potential to lead to a reasonably good policy; indeed, for a suitable choice of the parameter vector r, the greedy policy associated with Φ r has an average cost only slightly larger than the optimal average cost, regardless of the buffer size. Hence even though the first-phase ALP is being given a relatively good set of basis functions, it is producing a bad approximate differential cost function, which cannot be improved unless different basis functions are selected.
5 Two-phase average-cost ALP
A striking difference between the first-phase average-cost ALP and discounted-cost ALP
is the presence in the latter of state-relevance weights. These are algorithm parameters that
can be used to control the accuracy of the approximation to the cost-to-go function (the
discounted-cost counterpart of the differential cost function) over different portions of the
state space and have been shown in [6] to have a first-order impact on the performance of
the policy being generated. For instance, in the example described in the previous section,
in the discounted-cost formulation one might be able to improve the policy yielded by ALP by choosing state-relevance weights that put more emphasis on larger states. Inspired
by this observation, we propose a two-phase algorithm with the characteristic that state-relevance weights are present and can be used to control the quality of the differential
cost function approximation. The first phase is simply the first-phase ALP introduced in
Section 3, and is used for generating an approximation to the optimal average cost. The
second phase consists of solving the second-phase ALP for finding approximations to the
differential cost function:
	maximize    c^T Φ r
	subject to  λ̄ e + Φ r ≤ T Φ r.   (4)

The state-relevance weights c ≥ 0 and the scalar λ̄ are algorithm parameters to be specified by the user, and c^T denotes the transpose of c. We denote the optimal solution of the second-phase ALP by r̂_2.
We now demonstrate how the state-relevance weights c and the parameter λ̄ can be used for controlling the quality of the approximation to the differential cost function. We first define, for any given λ̄, the function h_λ̄ given by the unique solution to [3]

	h = T h − λ̄ e,   h(0) = 0.   (5)

If λ̄ is our estimate for the optimal average cost, then h_λ̄ can be seen as an estimate of the differential cost function h*. Our first result links the difference between h_λ̄ and h* to the difference between λ̄ and λ*, when λ̄ ≤ λ*. For simplicity of notation, we implicitly drop from all vectors and matrices the rows and columns corresponding to state 0, so that, for instance, h* corresponds to the original vector h* without the row corresponding to state 0, and P_u corresponds to the original matrix P_u without the rows and columns corresponding to state 0.

Lemma 2. For all λ̄, we have

	h_λ̄ − h* ≤ (λ* − λ̄)(I − P_{u*})^{−1} e,

where u* denotes an optimal policy.

Proof: Equation (5), satisfied by h_λ̄, corresponds to Bellman's equation for the problem of finding the stochastic shortest path to state 0, when costs are given by g − λ̄ e [3]. Hence h_λ̄ corresponds to the vector of smallest expected path costs accumulated until state 0 is reached. It follows that

	h_λ̄ ≤ (I − P_{u*})^{−1}(g_{u*} − λ̄ e)
	     = (I − P_{u*})^{−1}(g_{u*} − λ* e) + (λ* − λ̄)(I − P_{u*})^{−1} e
	     = h* + (λ* − λ̄)(I − P_{u*})^{−1} e.

Note that if λ̄ ≤ λ*, we also have h_λ̄ ≥ h*, and therefore 0 ≤ h_λ̄ − h* ≤ (λ* − λ̄)(I − P_{u*})^{−1} e.
In the following theorem, we show that the second-phase ALP minimizes ‖h_λ̄ − Φ r‖_{1,c} over the feasible region. The weighted norm ‖·‖_{1,c}, which will be used in the remainder of the paper, is defined as ‖h‖_{1,c} = Σ_x c(x) |h(x)|, for any c ≥ 0.

Theorem 1. Let r̂_2 be the optimal solution to the second-phase ALP. Then it minimizes ‖h_λ̄ − Φ r‖_{1,c} over the feasible region of the second-phase ALP.

Proof: Maximizing c^T Φ r is equivalent to minimizing c^T (h_λ̄ − Φ r). It is a well-known result that, for all h such that T h − λ̄ e ≥ h, we have h ≤ h_λ̄. It follows that Φ r ≤ h_λ̄ over the feasible region of the second-phase ALP, and c^T (h_λ̄ − Φ r) = ‖h_λ̄ − Φ r‖_{1,c}. Hence r̂_2 minimizes ‖h_λ̄ − Φ r‖_{1,c} over the feasible region of the second-phase ALP.
For any fixed choice of λ̄ satisfying λ̄ ≤ λ*, we have

	‖h* − Φ r‖_{1,c} ≤ ‖h_λ̄ − Φ r‖_{1,c} + (λ* − λ̄) c^T (I − P_{u*})^{−1} e,   (6)

hence the second-phase ALP minimizes an upper bound on the weighted norm ‖h* − Φ r‖_{1,c} of the error in the differential cost function approximation. Note that the state-relevance weights determine how errors over different portions of the state space are weighted in the decision of which approximate differential cost function to select, and can be used for balancing accuracy of the approximation over different states. In the next section, we will provide performance bounds that tie a certain weighted norm of the difference between h* and Φ r to the expected increase in cost incurred by using the greedy policy with respect to Φ r. This demonstrates that the objective optimized by the second-phase ALP is compatible with the objective of optimizing performance of the policy being obtained, and it also provides some insight about appropriate choices of state-relevance weights.
We have not yet specified how to choose λ̄. An obvious choice is λ̄ = λ̂, since λ̂ is the estimate for the optimal average cost yielded by the first-phase ALP and it satisfies λ̂ ≤ λ*, so that bound (6) holds. In practice, it may be advantageous to perform a line search over λ̄ to optimize performance of the ultimate policy being generated. An important issue is whether the second-phase ALP will be feasible for a given choice of λ̄; for λ̄ ≤ λ̂, this will always be the case. It can also be shown that, under certain conditions on the basis functions Φ, the second-phase ALP possesses multiple feasible solutions regardless of the choice of λ̄.
In this section, we present a bound on the performance of greedy policies associated with
approximate differential cost functions. This bound provide some guidance on appropriate
choices for state-relevance weights.
0
Theorem 2. Let Assumption 1 hold. For all h, let λ_h and π_h denote the average cost and stationary state distribution of the greedy policy associated with h. Then, for all h such that h ≥ h*, we have

	λ_h − λ* ≤ π_h^T P_{u*} (h − h*),

where u* denotes an optimal policy.

Proof: We have

	λ_h − λ* = π_h^T (g_u + P_u h − h) − λ* = π_h^T (T h − h) − λ*,

where g_u and P_u denote the costs and transition matrix associated with the greedy policy u with respect to h, and we have used π_h^T P_u = π_h^T and T h = g_u + P_u h in the first equality. Now if h ≥ h*, using T h* = h* + λ* e and T h − T h* ≤ P_{u*}(h − h*), we have

	λ_h − λ* = π_h^T (T h − T h*) + π_h^T (h* − h) ≤ π_h^T P_{u*} (h − h*) + π_h^T (h* − h) ≤ π_h^T P_{u*} (h − h*).
6 7
Theorem 2 suggests that one approach to selecting state-relevance weights may be to run
the second-phase ALP adaptively, using in each iteration weights corresponding to the
stationary state distribution associated with the policy generated by the previous iteration.
Alternatively, in some cases it may suffice to use rough guesses about the stationary state
distribution of the MDP as choices for the state-relevance weights. We revisit the example
from Section 4 to illustrate this idea.
Example 1. Consider applying the second-phase ALP to the controlled queue described in Section 4. We use weights of the form c(x) = γ^x. This is similar to what is done in [6] and is motivated by the fact that, if the system runs under a "stabilizing" policy, there are exponential lower and upper bounds to the stationary state distribution [5]. Hence c(x) = γ^x is a reasonable guess for the shape of the stationary distribution. We also let λ̄ = λ̂. Figure 1 demonstrates the evolution of Φ r̂_2 as we increase γ. Note that there is significant improvement in the shape of Φ r̂_2 relative to Φ r̂_1. The best of these policies incurs an average cost only slightly higher than the optimal average cost, regardless of the buffer size.
7 Conclusions
We have extended the analysis of ALP to the case of minimization of average costs. We
have shown how the ALP version commonly found in the literature may lead to arbitrarily
bad policies even if the choice of basis functions is relatively good; the main problem
is that this version of the algorithm, the first-phase ALP, prioritizes approximation of the optimal average cost, but does not necessarily yield a good approximation for the differential cost function. We propose a variant of approximate linear programming, the two-phase approximate linear programming method, that explicitly approximates the differential cost function. The main attraction of the algorithm is the presence of state-relevance weights, which can be used for controlling the relative accuracy of the differential cost function approximation over different portions of the state space.
Many open issues must still be addressed. Perhaps most important of all is whether there
is an automatic way of choosing state-relevance weights. The performance bound in Theorem 2 suggests an iterative scheme, where the second-phase ALP is run multiple
Figure 1: Controlled queue example: differential cost function approximations as a function of the state x. From top to bottom, differential cost function h*, approximations Φ r̂_2 (with γ = 0.9, 0.8, 0.7), and approximation Φ r̂_1.
times, with state-relevance weights updated in each iteration according to the stationary state distribution obtained with the policy generated by the algorithm in the previous iteration. It remains to be shown whether such a scheme converges. It is also important to note that, in principle, Theorem 2 holds only for h ≥ h*. If h = Φ r̂_2, this condition cannot be verified, and the appropriateness of minimizing ‖h_λ̄ − Φ r‖_{1,c} is only speculative.
References
[1] D. Adelman. A price-directed approach to stochastic inventory/routing. Preprint, 2002.
[2] D. Adelman. Price-directed replenishment of subsets: Methodology and its application to inventory routing. Preprint, 2002.
[3] D. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
[4] D. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[5] D. Bertsimas, D. Gamarnik, and J.N. Tsitsiklis. Performance of multiclass Markovian queueing
networks via piecewise linear Lyapunov functions. Annals of Applied Probability, 11.
[6] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic
programming. To appear in Operations Research, 2001.
[7] D.P. de Farias and B. Van Roy. On constraint sampling in the linear programming approach
to approximate dynamic programming. Conditionally accepted to Mathematics of Operations
Research, 2001.
[8] C. Guestrin, D. Koller, and R. Parr. Efficient solution algorithms for factored MDPs. Submitted
to Journal of Artificial Intelligence Research, 2001.
[9] A.S. Manne. Linear programming and sequential decisions. Management Science, 6(3):259–267, 1960.
[10] J.R. Morrison and P.R. Kumar. New linear program performance bounds for queueing networks.
Journal of Optimization Theory and Applications, 100(3):575–597, 1999.
[11] P. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision
processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985.
Forward-Decoding Kernel-Based
Phone Sequence Recognition
Shantanu Chakrabartty and Gert Cauwenberghs
Center for Language and Speech Processing
Department of Electrical and Computer Engineering
Johns Hopkins University, Baltimore MD 21218
{shantanu,gert}@jhu.edu
Abstract
Forward decoding kernel machines (FDKM) combine large-margin classifiers with hidden Markov models (HMM) for maximum a posteriori
(MAP) adaptive sequence estimation. State transitions in the sequence
are conditioned on observed data using a kernel-based probability model
trained with a recursive scheme that deals effectively with noisy and partially labeled data. Training over very large data sets is accomplished using a sparse probabilistic support vector machine (SVM) model based on
quadratic entropy, and an on-line stochastic steepest descent algorithm.
For speaker-independent continuous phone recognition, FDKM trained
over 177,080 samples of the TIMIT database achieves 80.6% recognition
accuracy over the full test set, without use of a prior phonetic language
model.
1 Introduction
Sequence estimation is at the core of many problems in pattern recognition, most notably
speech and language processing. Recognizing dynamic patterns in sequential data requires
a set of tools very different from classifiers trained to recognize static patterns in data
assumed i.i.d. distributed over time.
The speech recognition community has predominantly relied on hidden Markov models
(HMMs) [1] to produce state-of-the-art results. HMMs are generative models that function
by estimating probability densities and therefore require a large amount of data to estimate
parameters reliably. If the aim is discrimination between classes, then it might be sufficient
to model discrimination boundaries between classes which (in most affine cases) afford
fewer parameters.
Recurrent neural networks have been used to extend the dynamic modeling power of
HMMs with the discriminant nature of neural networks [2], but learning long term dependencies remains a challenging problem [3]. Typically, neural network training algorithms
are prone to local optima, and while they work well in many situations, the quality and
consistency of the converged solution cannot be warranted.
Large margin classifiers, like support vector machines, have been the subject of intensive
research in the neural network and artificial intelligence communities [4]. They are attractive because they generalize well even with relatively few data points in the training set, and
bounds on the generalization error can be directly obtained from the training data. Under
general conditions, the training procedure finds a unique solution (decision or regression
surface) that provides an out-of-sample performance superior to many techniques.
Recently, support vector machines (SVMs) [4] have been used for phoneme (or phone)
recognition [5] and have shown encouraging results. However, use of a standard SVM
Figure 1: (a) Two-state Markovian maximum-likelihood (ML) model with static state transition probabilities and observation vectors x emitted from the states. (b) Two-state Markovian MAP model, where transition probabilities between states are modulated by the observation vector x.
classifier by itself implicitly assumes i.i.d. data, unlike the sequential nature of phones.
To model inter-phonetic dependencies, maximum likelihood (ML) approaches assume a
phonetic language model that is independent of the utterance data [6], as illustrated in Figure 1 (a). In contrast, the maximum a posteriori (MAP) approach assumes transitions between states that are directly modulated by the observed data, as illustrated in Figure 1 (b).
The MAP approach lends itself naturally to hybrid HMM/connectionist approaches with
performance comparable to state-of-the-art HMM systems [7].
FDKM [8] can be seen as a hybrid HMM/SVM MAP approach to sequence estimation. It
thereby augments the ability of large margin classifiers to infer sequential properties of
the data. FDKMs have shown superior performance for channel equalization in digital communication, where the received symbol sequence is contaminated by intersymbol interference [8].
In the present paper, FDKM is applied to speaker-independent continuous phone recognition. To handle the vast amount of data in the TIMIT corpus, we present a sparse probabilistic model and efficient implementation of the associated FDKM training procedure.
2 FDKM formulation
The problem of FDKM recognition is formulated in the framework of MAP (maximum a
posteriori) estimation, combining Markovian dynamics with kernel machines. A Markovian model is assumed with symbols belonging to S classes, as illustrated in Figure 1(a)
for S = 2. Transitions between the classes are modulated in probability by observation
(data) vectors x over time.
2.1 Decoding Formulation
The MAP forward decoder receives the sequence X[n] = {x[n], x[n-1], ..., x[1]} and produces an estimate of the probability of the state variable q[n] over all classes i, α_i[n] = P(q[n] = i | X[n], w), where w denotes the set of parameters for the learning machine. Unlike hidden Markov models, the states directly encode the symbols, and the observations x modulate transition probabilities between states [7]. Estimates of the posterior probability α_i[n] are obtained from estimates of local transition probabilities using the forward-decoding procedure [7]
    α_i[n] = Σ_{j=0}^{S-1} P_ij[n] α_j[n-1]    (1)

where P_ij[n] = P(q[n] = i | q[n-1] = j, x[n], w) denotes the probability of making
a transition from class j at time n - 1 to class i at time n, given the current observation
vector x [n]. The forward decoding (1) embeds sequential dependence of the data wherein
the probability estimate at time instant n depends on all the previous data. An on-line
estimate of the symbol q[n] is thus obtained:
    q_est[n] = arg max_i α_i[n]    (2)
The BCJR forward-backward algorithm [9] produces in principle a better estimate that
accounts for future context, but requires a backward pass through the data, which is impractical in many applications requiring real time decoding.
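In code, the forward recursion (1) and the on-line decision (2) amount to a matrix-vector product followed by an argmax. The sketch below is an illustrative NumPy rendering; the array layout and the initial state distribution are assumptions for the example, not part of the paper:

```python
import numpy as np

def forward_decode(P, alpha0):
    """MAP forward recursion (1) plus the on-line decision (2).

    P:      (N, S, S) array; P[n, i, j] plays the role of
            P(q[n] = i | q[n-1] = j, x[n], w), so each column sums to 1.
    alpha0: length-S initial state distribution (an assumption here;
            the paper does not specify the initialization).
    """
    N, S, _ = P.shape
    alpha = np.zeros((N, S))
    prev = np.asarray(alpha0, dtype=float)
    for n in range(N):
        prev = P[n] @ prev           # alpha_i[n] = sum_j P_ij[n] alpha_j[n-1]
        alpha[n] = prev
    return alpha, alpha.argmax(axis=1)   # q_est[n] = argmax_i alpha_i[n]
```

Because each column of P[n] sums to one, every alpha[n] remains a proper probability distribution.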
Accurate estimation of transition probabilities Pij [n ] in (1) is crucial in decoding (2) to
provide good performance. In [8] we used kernel logistic regression [10], with regularized maximum cross-entropy, to model conditional probabilities. A different probabilistic
model that offers a sparser representation is introduced below.
2.2 Training Formulation
For training the MAP forward decoder, we assume access to a training sequence with labels (class memberships). For instance, the TIMIT speech database comes labeled with
phonemes. Continuous (soft) labels could be assigned rather than binary indicator labels,
to signify uncertainty in the training data over the classes. Like probabilities, label assignments are normalized: Σ_{i=0}^{S-1} y_i[n] = 1, y_i[n] ≥ 0.
The objective of training is to maximize the cross-entropy of the estimated probabilities α_i[n] given by (1) with respect to the labels y_i[n] over all classes i and training data n:

    H = Σ_{n=0}^{N-1} Σ_{i=0}^{S-1} y_i[n] log α_i[n]    (3)
To provide capacity control we introduce a regularizer Ω(w) in the objective function [11].
The parameter space w can be partitioned into disjoint parameter vectors W ij and bij for
each pair of classes i , j = 0, ... , S - 1 such that Pij [n] depends only on W i j and bij .
(The parameter bij corresponds to the bias term in the standard SVM formulation). The
regularizer can then be chosen as the L 2 norm of each disjoint parameter vector, and the
objective function becomes
    H = C Σ_{n=0}^{N-1} Σ_{i=0}^{S-1} y_i[n] log α_i[n] - (1/2) Σ_{j=0}^{S-1} Σ_{i=0}^{S-1} |w_ij|²    (4)
where the regularization parameter C controls complexity versus generalization as a bias-variance trade-off [11]. The objective function (4) is similar to the primal formulation of
a large margin classifier [4]. Unlike the convex (quadratic) cost function of SVMs, the
formulation (4) does not have a unique solution and direct optimization could lead to poor
local optima. However, a lower bound of the objective function can be formulated so that
maximizing this lower bound reduces to a set of convex optimization sub-problems with
an elegant dual formulation in terms of support vectors and kernels. Applying the convex
property of the - log(.) function to the convex sum in the forward estimation (1), we obtain
directly

    H ≥ Σ_{j=0}^{S-1} H_j    (5)

where

    H_j = Σ_{n=0}^{N-1} C_j[n] Σ_{i=0}^{S-1} y_i[n] log P_ij[n] - (1/2) Σ_{i=0}^{S-1} |w_ij|²    (6)
with effective regularization sequence

    C_j[n] = C α_j[n-1]    (7)
Disregarding the intricate dependence of (7) on the results of (6), which we defer to the following section, the formulation (6) is equivalent to regression of conditional probabilities P_ij[n] from labeled data x[n] and y_i[n], for a given outgoing state j.
2.3 Kernel Logistic Probability Regression
Estimation of conditional probabilities Pr(i|x) from training data x[n] and labels y_i[n] can be obtained using a regularized form of kernel logistic regression [10]. For each outgoing state j, one such probabilistic model can be constructed for the incoming state i conditional on x[n]:
    P_ij[n] = exp(f_ij(x[n])) / Σ_{s=0}^{S-1} exp(f_sj(x[n]))    (8)
As with SVMs, dot products in the expression for f_ij(x) in (8) convert into kernel expansions over the training data x[m] by transforming the data to feature space [12]:

    f_ij(x) = w_ij · x + b_ij = Σ_m λ_ij^m x[m] · x + b_ij  →(Φ)→  Σ_m λ_ij^m K(x[m], x) + b_ij    (9)
where K(·, ·) denotes any symmetric positive-definite kernel¹ that satisfies the Mercer condition, such as a Gaussian radial basis function or a polynomial [11].
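As a concrete illustration, the softmax model (8) over the kernel expansion (9) can be sketched as follows; the Gaussian RBF kernel, the array shapes, and the function names here are assumptions made for the example:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian radial basis kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def transition_probs(x, X_train, Lam, b, gamma=1.0):
    """Softmax model (8) over kernel expansions (9) for one outgoing state j.

    X_train: (M, D) training/support vectors; Lam: (S, M) coefficients
    lambda_i^m; b: (S,) biases. Returns P(i | x) over incoming states i.
    """
    k = np.array([rbf_kernel(x, xm, gamma) for xm in X_train])
    f = Lam @ k + b              # f_i(x) = sum_m lambda_i^m K(x[m], x) + b_i
    e = np.exp(f - f.max())      # numerically stabilized softmax
    return e / e.sum()
```

With all coefficients zero the model falls back to a uniform distribution over the S incoming states, as expected from (8).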
Optimization of the lower-bound in (5) requires solving M disjoint but similar suboptimization problems (6). The subscript j is omitted in the remainder of this section for
clarity. The (primal) objective function of kernel logistic regression expresses regularized
cross-entropy (6) of the logistic model (8) in the form [13, 14]
    H = - Σ_i (1/2)|w_i|² + C Σ_{m=1}^{N} [ Σ_i y_i[m] f_i(x[m]) - log( e^{f_1(x[m])} + ... + e^{f_M(x[m])} ) ]    (10)
The parameters λ^m in (9) are determined by minimizing a dual formulation of the objective function (10), obtained through the Legendre transformation, which for logistic regression takes the form of an entropy-based potential function in the parameters [10]:
    H_e = Σ_{i=1}^{M} [ (1/2) Σ_{l=1}^{N} Σ_{m=1}^{N} λ_i^l Q_lm λ_i^m + C Σ_{m=1}^{N} (y_i[m] - λ_i^m/C) log(y_i[m] - λ_i^m/C) ]    (11)
subject to constraints

    Σ_{m} λ_i^m = 0    (12)

    Σ_{i} λ_i^m = 0    (13)

    λ_i^m ≤ C y_i[m]    (14)
There are two disadvantages of using the logistic regression dual directly:
1. The solution is non-sparse and all the training points contribute to the final solution. For tasks involving large data sets like phone recognition this turns out to be
prohibitive due to memory and run-time constraints.
2. Even though the dual optimization problem is convex, it is not quadratic and precludes the use of standard quadratic programming (QP) techniques. One has to
resort to Newton-Raphson or other nonlinear optimization techniques which complicate convergence and require tuning of additional system parameters.
¹K(x, y) = Φ(x)·Φ(y). The map Φ(·) need not be computed explicitly, as it only appears in inner-product form.
2.4 GiniSVM formulation
The GiniSVM probabilistic model [15] provides a sparse alternative to logistic regression. A quadratic ('Gini' [16]) index replaces entropy in the dual formulation of logistic
regression. The 'Gini' index provides a lower bound of the dual logistic functional, and its
quadratic form produces sparse solutions as with support vector machines. The tightness
of the bound provides an elegant trade-off between approximation and sparsity.
Jensen's inequality (log p ≤ p - 1) formulates the lower bound for the entropy term in (11) in the form of the multivariate Gini impurity index [16]:

    1 - Σ_i p_i² ≤ - Σ_i p_i log p_i    (15)

where 0 ≤ p_i ≤ 1 ∀i and Σ_i p_i = 1. Both forms of entropy, - Σ_i^M p_i log p_i and 1 - Σ_i^M p_i², reach their maxima at the same values p_i = 1/M, corresponding to a uniform distribution. As in the binary case, the bound can be tightened by scaling the Gini index with a multiplicative factor γ ≥ 1, of which the particular value depends on M.² The
GiniSVM dual cost function H_g is then given by

    H_g = Σ_{i=1}^{M} [ (1/2) Σ_{l=1}^{N} Σ_{m=1}^{N} λ_i^l Q_lm λ_i^m + γC ( Σ_{m=1}^{N} (y_i[m] - λ_i^m/C)² - 1 ) ]    (16)
The convex quadratic cost function (16) with constraints in (11) can now be minimized
directly using standard quadratic programming techniques. The primary advantage of the
technique is that it yields sparse solutions and yet approximates the logistic regression
solution very well [15].
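The relationship between the quadratic 'Gini' index and the entropy it bounds is easy to verify numerically. The small sketch below (illustrative only, not from the paper) checks inequality (15) and the shared maximum at the uniform distribution:

```python
import numpy as np

def entropy(p):
    """-sum_i p_i log p_i, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def gini(p):
    """Multivariate Gini impurity 1 - sum_i p_i^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)
```

Because -log p ≥ 1 - p, the Gini impurity never exceeds the entropy, and both peak at p_i = 1/M.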
2.5 Online GiniSVM Training
For very large data sets such as TIMIT, using a QP approach to train GiniSVM may still
be prohibitive, even though sparsity in the trained model drastically reduces the number of support vectors. An on-line estimation procedure is presented that computes each coefficient λ_i^n in turn from a single presentation of the data {x[n], y_i[n]}. A line search in the parameter λ_i^n and the bias b_i performs stochastic steepest descent of the dual objective function (16) of the form
    λ_i^n ← [ ... ]_+    (17)

    b_i ← b_i + Σ_i λ_i^n    (18)
where [x]_+ denotes the positive part of x. The normalization factor z_n is determined by equation

    Σ_{i=1}^{M} [ C y_i[n](Q_nn + 2) + f_i[n] + 2 Σ_i λ_i^n - z_n ]_+ = C(Q_nn + 2) + 2γ    (19)
solved in at most M algorithmic iterations.
3 Recursive FDKM Training
The weights (7) in (6) are recursively estimated using an iterative procedure reminiscent
of (but different from) expectation maximization. The procedure involves computing new
estimates of the sequence α_j[n-1] to train (6) based on estimates of P_ij using previous values of the parameters λ_ij. The training proceeds in a series of epochs, each refining the
training
Figure 2: Iterations involved in training FDKM on a trellis based on the Markov model of Figure 1. During the initial epoch, the parameters of the probabilistic model of the state at time n, conditioned on the observed label for the outgoing state at time n-1, are trained from observed labels at time n. During subsequent epochs, probability estimates of the outgoing state at time n-1 over increasing forward decoding depth k = 1, ..., K determine the weights assigned to data n for training each of the probabilistic models conditioned on the outgoing state.
estimate of the sequence α_j[n-1] by increasing the size of the time window (decoding
depth, k) over which it is obtained by the forward algorithm (1).
The training steps are illustrated in Figure 2 and summarized as follows:
1. To bootstrap the iteration for the first training epoch (k = 1), obtain initial values for α_j[n-1] from the labels of the outgoing state, α_j[n-1] = y_j[n-1]. This corresponds to taking the labels y_j[n-1] as true state probabilities, which corresponds to the standard procedure of using fragmented data to estimate transition probabilities.

2. Train logistic kernel machines, one for each outgoing class j, to estimate the parameters in P_ij[n], i, j = 1, ..., S from the training data x[n] and labels y_i[n], weighted by the sequence α_j[n-1].

3. Re-estimate α_j[n-1] using the forward algorithm (1) over increasing decoding depth k, by initializing α_j[n-k] to y_j[n-k].

4. Re-train, increment decoding depth k, and re-estimate α_j[n-1], until the final decoding depth is reached (k = K).
The performance of FDKM training depends on the final decoding depth K, although observed variations in generalization performance for large values of K are relatively small.
A suitable value can be chosen a priori to match the extent of temporal dependency in the
data. For phoneme classification in speech, the decoding depth can be chosen according to
the length of a typical syllable.
An efficient procedure to implement the above algorithm is discussed in [15].
4 Experiments and Results
The performance of FDKM was evaluated on the full TIMIT dataset [17], consisting of
labeled continuous spoken utterances. The 60 phone classes presented in TIMIT were first
collapsed onto 39 classes according to standard folding techniques [6]. The training set
consisted of 6,300 sentences spoken by 630 speakers, resulting in 177,080 phone instances.
The test set consisted of 192 sentences spoken by 24 speakers.
The speech signal was first processed by a pre-emphasis filter with transfer function 1 - 0.97z⁻¹. Subsequently, a 25 ms Hamming window was applied over 10 ms shifts
to extract a sequence of phonetic segments. Cepstral coefficients were extracted from the
sequence, combined with their first and second order time differences into a 39-dimensional
vector. Cepstral mean subtraction and speaker normalization were subsequently applied.
²Unlike the binary case (M = 2), the factor γ for general M cannot be chosen to match the two maxima at p_i = 1/M.
Table 1: Performance Evaluation of FDKM (K = 10) on TIMIT

Machine | Accuracy | Insertion | Substitution | Deletion | Errors
Figure 3: Recognition rate as a function of decoding depth k = 1, ..., K.
Each phone utterance was then subdivided into three segments with relative proportions
4:3:4 [18]. The features in the three segments were individually averaged and concatenated
to obtain a 117-dimensional feature vector.
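The 4:3:4 segment averaging can be sketched as below. The boundary rounding is an assumption, since the paper does not specify how fractional frame counts are split:

```python
import numpy as np

def segment_features(frames):
    """Average frame features over three segments in proportions 4:3:4
    and concatenate them (39-dim frames give the 117-dim vector).

    frames: (T, D) array with T >= 3. The exact rounding of the 4:3:4
    boundaries is an assumption made here.
    """
    T = frames.shape[0]
    assert T >= 3, "need at least one frame per segment"
    b1 = max(1, round(T * 4 / 11))                     # end of first segment
    b2 = min(T - 1, max(b1 + 1, round(T * 7 / 11)))    # end of second segment
    segs = [frames[:b1], frames[b1:b2], frames[b2:]]
    return np.concatenate([s.mean(axis=0) for s in segs])
```

For T = 11 frames this yields exactly 4, 3, and 4 frames per segment.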
Evaluation on the test was performed using thresholding of state probabilities in the MAP
forward decoding (2) [19], with threshold 0.25. The decoded phone sequence was then
compared with the transcribed sequence using Levenshtein's distance to evaluate different sources of errors. Multiple runs of identical phones in the decoded and transcribed
sequences were collapsed to single phone instances to reflect true insertion errors.
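A minimal sketch of this scoring step, run collapsing followed by Levenshtein distance; the implementation details are illustrative, not the authors':

```python
def collapse_runs(seq):
    """Collapse multiple runs of identical phones to single instances."""
    out = []
    for s in seq:
        if not out or out[-1] != s:
            out.append(s)
    return out

def edit_distance(a, b):
    """Levenshtein distance between two phone sequences (single-row DP)."""
    d = list(range(len(b) + 1))
    for i, ai in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, bj in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1,             # deletion
                                   d[j - 1] + 1,          # insertion
                                   prev + (ai != bj))     # substitution
    return d[-1]
```

The insertion, substitution, and deletion counts reported in Table 1 correspond to the three branches of the minimum.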
Table 1 summarizes the results of the experiments with FDKM on TIMIT for different
values of the regularization constant C . The recognition performance is comparable to
the state of the art using HMMs and other approaches, in the upper 70% and lower 80%
range [2, 5, 20]. Figure 3 illustrates the improvement in recognition rate with increasing
decoding depth k. The optimum value k ≈ 10 corresponds to inter-phonetic dependencies
on a time scale of 100 ms.
5 Conclusion
Experiments with FDKM on the TIMIT corpus have demonstrated levels of speaker-independent continuous phone recognition accuracy comparable to or better than other
approaches that use HMMs and their various extensions. FDKM improves decoding and
generalization performance for data with embedded sequential structure, providing an elegant tradeoff between learning temporal versus spatial dependencies. The recursive estimation procedure reduces or masks the effect of noisy or missing labels Yj [n]. Further
improvements can be expected by tuning of hyper-parameters and improved representation
of acoustic features.
Acknowledgement
This work was supported by a grant from the Catalyst Foundation.
References
[1] L. Rabiner and B-H Juang, Fundamentals of Speech Recognition, Englewood Cliffs,
NJ: Prentice-Hall, 1993.
[2] Robinson, AJ., "An application of recurrent nets to phone probability estimation,"
IEEE Transactions on Neural Networks, vol. 5, no. 2, March 1994.
[3] Bengio, Y., "Learning long-term dependencies with gradient descent is difficult," IEEE Trans. Neural Networks, vol. 5, pp. 157-166, 1994.
[4] Vapnik, V. The Nature of Statistical Learning Theory, New York: Springer-Verlag,
1995.
[5] Clark, P. and Moreno, M.J. "On the use of Support Vector Machines for Phonetic
Classification," IEEE Conf. Proc., 1999.
[6] Lee, K.F and Hon, H.W, "Speaker-Independent phone recognition using hidden
markov models," IEEE Transactions on Acoustics, Speech and Signal Processing,
vol. 37, pp. 1641-1648, 1989.
[7] Bourlard, H. and Morgan, N., Connectionist Speech Recognition: A Hybrid Approach,
Kluwer Academic, 1994.
[8] Chakrabartty, S. and Cauwenberghs, G. "Sequence Estimation and Channel Equalization using Forward Decoding Kernel Machines," IEEE Int. Conf. Acoustics and Signal
Proc. (ICASSP'2002), Orlando FL, 2002.
[9] Bahl, L.R., Cocke J., Jelinek F and Raviv J. "Optimal decoding of linear codes for
minimizing symbol error rate," IEEE Transactions on Inform. Theory, vol. IT-20, pp.
284-287,1974.
[10] Jaakkola, T and Haussler, D. "Probabilistic kernel regression models," Proceedings
of Seventh International Workshop on Artificial Intelligence and Statistics , 1999.
[11] Girosi, F, Jones, M. and Poggio, T "Regularization Theory and Neural Networks
Architectures," Neural Computation, vol. 7, pp 219-269, 1995.
[12] SchOlkopf, B., Burges, C. and Smola, A., Eds., Advances in Kernel Methods-Support
Vector Learning, MIT Press, Cambridge, 1998.
[13] Wahba, G. Support Vector Machine, Reproducing Kernel Hilbert Spaces and Randomized GACV, Technical Report 984, Department of Statistics, University of Wisconsin, Madison WI.
[14] Zhu, J. and Hastie, T., "Kernel Logistic Regression and Import Vector Machine," Adv. Neural Information Processing Systems (NIPS'2001), Cambridge, MA: MIT
Press, 2002.
[15] Chakrabartty, S. and Cauwenberghs, G. "Forward Decoding Kernel Machines: A hybrid HMM/SVM Approach to Sequence Recognition," IEEE Int. Conf. of Pattern
Recognition: SVM workshop. (ICPR'2002), Niagara Falls, 2002.
[16] Breiman, L. Friedman, J. H. et al. Classification and Regression Trees, Wadsworth
and Brooks, Pacific Grove, CA, 1984.
[17] Fisher, W., Doddington, G. et al. The DARPA Speech Recognition Research Database: Specifications and Status. Proceedings DARPA speech recognition workshop, pp. 93-99, 1986.
[18] Fosler-Lussier, E. Greenberg, S. Morgan, N., "Incorporating contextual phonetics into
automatic speech recognition," Proc. XIVth Int. Cong. Phon. Sci., 1999.
[19] Wald, A. Sequential Analysis, Wiley, New York, 1947.
[20] Chengalvarayan, R. and Deng, L., "Speech Trajectory Discrimination Using the Minimum Classification Error Training," IEEE Transactions on Speech and Audio Processing, vol. 6, pp. 505-515, Nov. 1998.
A Digital Antennal Lobe for Pattern
Equalization: Analysis and Design
Alex Holub, Gilles Laurent and Pietro Perona
Computation and Neural Systems, California Institute of Technology
[email protected], [email protected], [email protected]
Abstract
Re-mapping patterns in order to equalize their distribution may
greatly simplify both the structure and the training of classifiers.
Here, the properties of one such map obtained by running a few
steps of discrete-time dynamical system are explored. The system
is called 'Digital Antennal Lobe' (DAL) because it is inspired by
recent studies of the antennallobe, a structure in the olfactory system of the grasshopper. The pattern-spreading properties of the
DAL as well as its average behavior as a function of its (few) design parameters are analyzed by extending previous results of Van
Vreeswijk and Sompolinsky. Furthermore, a technique for adapting
the parameters of the initial design in order to obtain opportune
noise-rejection behavior is suggested. Our results are demonstrated
with a number of simulations.
1
Introduction
The complexity of classifiers and the difficulty of learning their parameters is affected
by the distribution of the input patterns. It is easier to obtain simple and accurate
classifiers when the patterns associated with different classes are spaced far apart
and evenly in the input space. Distributions which are lumpy, with classes bunched
up in some regions of space leaving other regions of space empty may be more
difficult to classify. This problem is particularly evident in sensory processing. In
olfaction numerous odors which we wish to discriminate are chemically very similar,
for example the citrus family (orange, lemon, lime ... ), while many odors that are in
principle possible never occur in practice. The uneven chemical spacing for the odors
of interest is expensive: in biological systems there is a premium in the simplicity
of the classifiers that will recognize each individual odor.
When the dimension of the pattern space is large (e.g. D > 100), and the number of
classes to be discriminated is relatively small (e.g. N < 1000), one may transform
an uneven distribution of patterns into an evenly distributed one by means of a map
that 'randomizes' the position of each pattern, i.e. that takes (small) neighborhoods
of the input space and remaps them to random locations. In large-dimensional
spaces it is exceedingly likely that two contiguous regions will be remapped to
locations whose distance is comparable with the diameter of the space, and thus
the distribution of patterns is equalized.
We explore a simple dynamical system which realizes one such map for spreading
patterns in a high-dimensional space. The input space is the analog D-dimensional
hypercube (0, 1)^D and the output space the digital hypercube {0, 1}^D. The map
is implemented by iterating a discrete-time first-order dynamical system consisting
of two steps at each iteration: a first-order linear dynamical system followed by
memoryless thresholding. The interest of the map is that it makes very parsimonious
use of computational hardware (e.g. on the order of D neurons or transistors) and
yet it achieves good equalization in a few time steps. The ideas that we present are
inspired by a computation that may take place in the olfactory system as suggested
in Friedrich and Laurent [1] and Laurent [2, 3]. In insects, the anatomical structure
where this computation is presumed to take place is called the 'Antennal Lobe'.
Because of this we call the map a 'Digital Antennal Lobe' (DAL).
2 The digital antennal lobe
The dynamical system we propose is inspired by the overall architecture of the
antennal lobe and is designed to explore its computational capabilities. We apply
two key simplifications: we discretize time into equally spaced 'epochs', updating
synchronously the state of all the neurons in the network at each epoch, and we discretize the value of the state of each unit to the binary set {0, 1}. The physiological
justification for these simplifications goes beyond the scope of this paper.
Consider a collection of N binary neurons which are randomly connected and updated synchronously. The network is initially quiescent (i.e. all the neurons have
constant state zero). At some time an input is applied causing the network to
take values that are different from zero. The state of the network evolves in time.
The state of the network after a given constant number of time-steps (e.g. 10-20
time-steps) is the desired output of the system. Let us introduce the following
notation:
N_E, N_I, N_U : Number of excitatory, inhibitory, and external input units.
N : Total number of excitatory and inhibitory units (N = N_E + N_I).
i : Neuron index: i ∈ {1, ..., N_E} for excitatory and i ∈ {N_E + 1, ..., N} for inhibitory.
x_i^t ∈ {0, 1} : Value of unit i at time t.
x^t : Vector of values for all excitatory and inhibitory units at time t.
c : Connectivity: cN is the number of inputs to a given neuron.
K_E, K_I, K_U : Excitatory, inhibitory, and external input fan-in (i.e. K_E = cN_E).
A : Matrix of connections. A has cN^2 nonzero entries.
A_ij : Connection weight of unit j to unit i.
a_E, a_I, a_U : Excitatory, inhibitory, and input weights (A_ij ∈ {a_I, 0, a_E}).
T : Activation thresholds for all the neurons.
u^t : Vector of pattern inputs.
B : Matrix of excitatory connections from pattern inputs to units.
y^t : Vector of neuronal input currents, i.e. y^t = A x^{t-1} + B u^t - T.
x^t = 1(y^t) : Update equation for x. 1(.) is the Heaviside function.
m^t : Mean activity in the network at time t, i.e. m^t = Σ_i x_i^t / N.
m_U : Fraction of the external inputs which are active.
A DAL may be generated once the values of 5 parameters are chosen. Assume excitatory connection weights a_E = a_U = 1 (this is a normalization constant). Choose
a value for a_I, c, T, N_I, N_E. Generate random connection matrices A and B with
average connectivity c and connection weights a_E, a_I. Solve the following dynamical
system forward in time from a zero initial condition:
Figure 1: Example of pattern spreading by a DAL. (Left) Response of a DAL to
10 uniformly distributed random olfactory input patterns applied at time epoch t = 3.
Each vertical panel represents the state of excitatory units at a given time epoch (epochs
2,4,8,10 and excitatory units 1-200 are shown) in response to all stimuli. In a given
panel the row index refers to a given excitatory unit and the column index to a given
input pattern (200 of 1024 excitatory units shown and 10 input patterns). A white dot
represents a state of '1' and a dark dot represents a state of '0'. Around 10% of the neurons
are active (i.e. state = '1') by the 8th time-epoch. The salt-and-pepper pattern present
in each panel indicates that excitatory units respond differently to each input pattern.
(Center) Activity of the DAL in response to 10 stimuli that differ only in one out of 1024
input dimensions, i.e. 0.1%. The horizontal streaks in the panels corresponding to early
epochs (t = 4 and t = 6) indicate that the excitatory units respond equally or similarly
to all input patterns. The salt-and-pepper pattern in later epochs indicates that the
time course of each excitatory unit's state becomes increasingly different in time. (Right)
Time-course of the normalized average distance between the patterns corresponding to
different families of input patterns: the red curve corresponds to input patterns that
are very different (average difference 20%), while the green and blue curve correspond
to families of similar input patterns: 0.1% average difference for the green curve and
0.2% average difference for the blue curve. The parameters used in this network were
a_I = 10, c = .05, T = 10, N_E = 1024, N_I = 256.
x^0 = 0                                  (zero initial condition)
y^t = A x^{t-1} + B u^t - T,  t > 0      (neuronal input)
x^t = 1(y^t)                             (state update)

for some (constant) input pattern u^t. The notation 1(.) indicates the Heaviside step
function.
The overall behavior of the DAL in response to different olfactory inputs is illustrated in Figure 1. Notice the main features of the DAL. (1) In response to an
input each unit exhibits a complex temporal pattern of activity. (2) The pattern
is different for different inputs. (3) The average activity rate of the neurons is
approximately independent of the input pattern. (4) When very different input
patterns are applied the average normalized Hamming distance between excitatory
unit states is almost maximal immediately after the onset of the input stimulus. (5)
When very similar input patterns are applied (e.g. 0.1 % average difference), the
average normalized Hamming distance between excitatory unit patterns is initially
very small, i.e. initially the excitatory units respond similarly to similar inputs.
The difference increases with time and reaches almost maximal value within 8-9
time-epochs.
The 'chaotic' properties of sparsely connected networks of neurons were noticed
and studied by Van Vreeswijk and Sompolinsky [5] in the limit of infinitely many neurons. In
this paper we study networks with a small number of neurons, comparable to the
number observed within the antennal lobe. Additionally, we propose a technique
for the design of such networks, and demonstrate the possibility of 'stabilizing' some
trajectories by parameter learning.
2.1 Analytic solution and equilibrium of network
The use of simplified neural elements, namely McCulloch-Pitts units [4], allows us
to represent the system as a simple discrete-time dynamical system. Furthermore,
we are able to create expressions for various network properties. Several distributions can be used to approximate the number of active units in the population of
excitatory, inhibitory, and external units, including: (1) the Binomial distribution,
(2) the Poisson distribution, and (3) the Gaussian distribution. An approximation
common to all three is that the activities of all units are uncorrelated. The Gaussian
approximation will yield Van Vreeswijk and Sompolinsky's analysis [5].
Given the population activity at a time t, m^t, we can calculate the expected value
for the population activity at the next time step, m^{t+1}:
E(m^{t+1}) = Σ_{e=0}^{K_E} Σ_{i=0}^{K_I} Σ_{u=0}^{K_U} p(e) p(i) p(u) 1(a_E e + a_I i + a_U u - T)
where p(e), p(i), and p(u) are the probabilities of e excitatory, i inhibitory, and u
external inputs being active. Both e and i are binomially distributed with mean
activity m = m^t, while the external input is binomially distributed with mean
activity m = m_U:

p(j) = C(K, j) m^j (1 - m)^{K - j},  j = 0, ..., K.
The Poisson distribution can be used to approximate the binomial distribution
for reasonable values of λ, where for instance λ_e = K_E m^t. Using the Poisson
approximation, the probability of j units being active is given by:

p(j) = λ^j e^{-λ} / j!
In the limit as N → ∞, the distributions for the number of excitatory,
inhibitory, and external units active approach normal distributions. Since the sum
of Gaussian random variables is itself a Gaussian random variable, we can model
the net input to a unit as the sum of the excitatory, inhibitory, and external input
shifted by a constant representing the threshold. The mean μ and variance σ^2 of
the Gaussian representing the input to an individual unit are then:
μ = a_E m^t K_E + a_I m^t K_I + a_U m_U K_U - T
σ^2 = N_E [a_E^2 m^t c - a_E^2 c^2 m^t] + N_I [a_I^2 m^t c - a_I^2 c^2 m^t] + N_U [a_U^2 m_U c - a_U^2 c^2 m_U]
The fraction of active input units can be determined by considering the area under
the Gaussian corresponding to positive cumulative input: m^{t+1} = Φ(μ/σ), where
Φ is the standard normal cumulative distribution function. The predicted population
mean activity was calculated by imposing that the system is at equilibrium. The
equilibrium condition is satisfied when m^t = m^{t+1}.
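The mean-field recursion is straightforward to evaluate numerically. The sketch below (hypothetical parameter values; the inhibitory weight is again passed as a negative number) computes E[m^{t+1}] with the binomial approximation, the Gaussian approximation Φ(μ/σ) using the variance expression above, and finds an equilibrium by fixed-point iteration:

```python
from math import comb, erf, sqrt

def binom_pmf(j, n, p):
    """Binomial probability of j active inputs out of n with rate p."""
    return comb(n, j) * p**j * (1.0 - p)**(n - j)

def expected_activity(m, m_u, k_e, k_i, k_u, a_e=1.0, a_i=-10.0, a_u=1.0, thresh=10.0):
    """Binomial mean-field E[m^{t+1}]: triple sum over the numbers of active
    excitatory, inhibitory, and external inputs (assumed independent)."""
    total = 0.0
    for e in range(k_e + 1):
        p_e = binom_pmf(e, k_e, m)
        for i in range(k_i + 1):
            p_i = binom_pmf(i, k_i, m)
            for u in range(k_u + 1):
                if a_e * e + a_i * i + a_u * u - thresh > 0:  # Heaviside 1(.)
                    total += p_e * p_i * binom_pmf(u, k_u, m_u)
    return total

def gaussian_activity(m, m_u, k_e, k_i, k_u, c, a_e=1.0, a_i=-10.0, a_u=1.0, thresh=10.0):
    """Gaussian approximation m^{t+1} = Phi(mu / sigma), with the variance
    rewritten via K = cN as sigma^2 = (1 - c) * sum_k a_k^2 m_k K_k."""
    mu = a_e * m * k_e + a_i * m * k_i + a_u * m_u * k_u - thresh
    var = (1.0 - c) * (a_e**2 * m * k_e + a_i**2 * m * k_i + a_u**2 * m_u * k_u)
    return 0.5 * (1.0 + erf(mu / sqrt(2.0 * var)))

def find_equilibrium(m0, m_u, n_iter=50, **kw):
    """Iterate the mean-field map; at a stable equilibrium m^t = m^{t+1}."""
    m = m0
    for _ in range(n_iter):
        m = expected_activity(m, m_u, **kw)
    return m
```

Scanning the equilibrium over inhibition and threshold values reproduces the kind of design chart described in Figure 2.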
Figure 2: Design of a DAL. (Left) Behavior of the system for a given connectivity value.
Light gray indicates inhibition-threshold values that yield a stable dynamical system. That
is, small perturbations of firing activity do not result in large fluctuations in activity later
in time. The dark blue line indicates equilibria, i.e. inhibition-threshold values for which
the dynamical system rests at a constant mean-firing rate. (Center) The stable portions of
the equilibrium curves for a number of connectivity values. Using this chart one may design
an antennal lobe: for any given connectivity choose inhibition and threshold values that
produce a desired mean firing rate. (Right) The design procedure produces networks that
behave as desired. The arrows indicate parameter sets for which Monte Carlo simulation
were performed in order to test the accuracy of the predictions. The values indexing the
arrows correspond to the absolute difference between the predicted activity (.15) using a binomial
approximation and the mean simulation activity across 10 random inputs to 10 different
networks with the specified parameter sets.
We found the binomial approximation to yield the most accurate predictions in
parameter ranges of interest to us, namely 500-4000 total units and connectivities
ranging from .05-.15 (see Figure 2). The binomial approximation was always within
1 standard deviation of the Monte Carlo means. The Gaussian approximation
yielded slightly less accurate predictions but required a fraction of the time to
compute.
3 Design of the Antennal Lobe
The analysis described above allows us to design well-behaved DALs. Specifically,
we can predict which subsets of parameters in a given parameter range yield good
network behavior. These predictions are made by solving the update equation for
multiple sets of parameters and then determining which parameter ranges yield
networks which are both stable and at equilibrium.
Figure 2 outlines the design technique for a network of 512 excitatory and 512
inhibitory units and a population mean activity of .15. The predicted activity of the
network for different parameter sets corresponds well with that observed in Monte
Carlo simulations. There is an average difference of .0061 between the predicted
mean activity and that found in the simulations (see Figure 2, right plot).
4 Learning for trajectory stabilization
Consider a 'physical' implementation of the DAL, either by means of neurons in
a biological system or by transistors in an electronic circuit. The inevitable presence of noise points to a fatal flaw of the DAL as we have seen it so far. The
key property of the DAL is input decorrelation. In the presence of noise, the same
input applied multiple times to the same network will produce divergent trajectories, hence different final conditions, thus making the use of DALs for pattern
classification problematic.
Consider the possibility that noise is present in the system: as a result of fluctuations
in the level of the input u^t, fluctuations in the biophysical properties of the neurons,
etc. We may represent this noise as an additional term ε^t in the dynamical system:

y^t = A x^{t-1} + B u^t - T
x^t = 1(y^t + ε^t)
Whatever the statistics of the noise, it is clear that it may influence the trajectory x
of the dynamical system. Indeed, if y_i^t, the nominal input to a neuron, is sufficiently
close to zero, then even a small amount of noise may change the state x_i^t of that
neuron. As we saw in earlier sections, this implies that the ensuing trajectory will
diverge from the trajectory of the same system with the same inputs and no noise,
or the same inputs and a different realization of the same noise process. This is
shown in the left panel of Figure 3. On the other hand, if y_i^t is far from zero, then
x_i^t will not change even with large amounts of noise. This raises the possibility
that, if a DAL is appropriately designed, it may exhibit a high degree of robustness
to noise. Ideally, for any given initial condition and input, and for any ε, there
exists a constant y_0 > 0 such that any initial condition and input in a y_0-ball
around the original input and initial condition will produce trajectories that differ
at most by ε. Clearly, if ε = 0 (i.e. the trajectory is required to be identical to
the one of the noiseless system) then all trajectories of the system must coincide,
which is not very useful. Similarly, if ε ≲ y_0 the map will not spread different inputs.
Therefore, this formulation of the problem does not have a satisfactory solution.
One may, however, consider a weaker requirement. If the total number of patterns
to be discriminated is not too large (probably 10-1000 in the case of olfaction) one
could think of requiring noise robustness only for the trajectories x that are specific
to those patterns. We therefore explored whether it was in principle possible to
stabilize trajectories corresponding to different odor presentations rather than all
trajectories.
We wish to change the connection weights A, B and thresholds T so that the network
is robust with respect to noise around a given trajectory x(u). In order to achieve
this we wish to ensure that at no time t does neuron i have an input that is close to the
threshold. If neuron i is not firing at time t (i.e. x_i^t = 0) then its input must be
comfortably less than zero (i.e. for some constant y_0 > 0, y_i^t < -y_0), and vice versa
for x_i^t = 1. We do so by minimizing an appropriate cost function: call g(.) an
appropriate penalty function, e.g. g(y) = exp(y/y_0); then the cost of neuron i at
time t is C_i^t = g(y_i^t) if x_i^t = 0, and C_i^t = g(-y_i^t) if x_i^t = 1. Therefore:
C_i^t = g((1 - 2x_i^t) y_i^t)

C(A, B, T) = Σ_t Σ_i C_i^t

The minimization may proceed by gradient descent. The equations for the gradient
are:

∂C_i^t/∂A_ij = g'((1 - 2x_i^t) y_i^t) (1 - 2x_i^t) ∂y_i^t/∂A_ij,

and similarly for ∂C_i^t/∂B_ij via ∂y_i^t/∂B_ij.
[Figure 3 panel titles: 'Divergence of 22 Trajectories Before Learning' (left) and 'Divergence of Trajectories After Learning 10 Time-Steps' (right).]
Figure 3: Robustness of trajectories to noise resulting from network learning. (Left)
Pattern spreading in a DAL before learning. Each curve corresponds to the divergence
rate between 10 identical trajectories in the presence of 5% gaussian synaptic noise added
to each active presynaptic synapse. All patterns achieve maximum spreading in 9-10
steps as also shown in Figure 1. (Right) The divergence rate of the same trajectories after
learning the first 10 steps of each trajectory. Each trajectory was learned sequentially, with
the trajectory labelled 1 learned first. Note that trajectories learned later, for instance
trajectory 20, diverge more slowly than earlier learned trajectories. Thus, the trajectories
learned earlier are forgotten while more recently acquired trajectories are maintained.
Furthermore, the trajectories maintain their stereotyped ability to decorrelate both after
they are forgotten (e.g. trajectory 8) and after the 10 step learning period is over (e.g.
trajectory 20). Untrained trajectories behave the same as trajectories in the left panel.
In Figure 3 the results of one learning experiment are shown. Before learning all
trajectories are susceptible to synaptic noise. After learning, those trajectories
learned last exhibit robustness to noise, while trajectories learned earlier are slowly
forgotten. We can compare each learned trajectory to a curve in multi-dimensional
space with a 'robustness pipe' surrounding it. Any points lying within this pipe
will be part of trajectories that remain within the pipe. In the case of olfactory
processing, different odors correspond to unique trajectories, while trajectories lying
within a common pipe correspond to the same input odor presentation.
A few details on the experiment: The network contained 2048 neurons, half of
which were excitatory and the other half inhibitory. The values of the constants
were: c = 0.08, a_E = 1, a_I = 1.5, T = 7.2, and the mean firing rate was set at about
.05. The optimization took 60 gradient-descent steps.
5 Discussion and Conclusions
Sparsely connected networks of neurons have 'chaotic' properties which may be
used for equalizing a set of patterns in order to make their classification easier. In
studying the properties of such networks we extend previous results on networks
with infinitely many neurons by van Vreeswijk and Sompolinsky to the case of a small number
of neurons. We also provide techniques for designing networks that have desired
average properties. Moreover, we propose a learning technique to make the network immune to noise around chosen trajectories while preserving the equalization
property elsewhere.
A number of issues are left open. A precise characterization of the effects of the DAL
on the distribution of the input parameters, and the consequent improvement in the
ease of pattern classification is still missing. The geometry of the map implemented
by the DAL is also unclear. Finally, it would be useful to obtain a quantitative
estimate for the 'capacity' of the DAL, i.e. the number of trajectories which can be
learned in any given network before older trajectories are forgotten.
Acknowledgements
We would like to thank Or Neeman for useful suggestions and feedback. This work
was supported in part by the Engineering Research Centers Program of the National
Science Foundation under Award Number EEC-9402726.
References
[1] Friedrich R. & Laurent, G. (2001) Dynamical optimization of odor representations
by slow temporal patterning of mitral cell activity. Science 291:889-894.
[2] Laurent G, Stopfer M, Friedrich RW, Rabinovich MI, Volkovskii A, Abarbanel HD.
(2001) Odor encoding as an active, dynamical process: experiments, computation,
and theory. Ann Rev Neurosci. 24:263-97.
[3] Laurent G. (2002) Olfactory network dynamics and the encoding of multidimensional
signals. Nat Rev Neurosci 3(11):884-95.
[4] McCulloch WS, Pitts W. (1943). A logical calculus of ideas immanent in nervous
activity. Bulletin of Mathematical Biophysics 5: 115-133.
[5] van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical
circuits. Neural Computation. 10(6): 1321-71.
Parametric Mixture Models for Multi-Labeled Text
Naonori Ueda
Kazumi Saito
NTT Communication Science Laboratories
2-4 Hikaridai, Seikacho, Kyoto 619-0237 Japan
{ueda,saito}@cslab.kecl.ntt.co.jp
Abstract
We propose probabilistic generative models, called parametric mixture models (PMMs), for the multi-class, multi-labeled text categorization problem. Conventionally, the binary classification approach
has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of
text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could
significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide
Web pages.
1 Introduction
Recently, as the number of online documents has been rapidly increasing, automatic text categorization is becoming a more important and fundamental task in
information retrieval and text mining. Since a document often belongs to multiple
categories, the task of text categorization is generally defined as assigning one or
more category labels to new text. This problem is more difficult than the traditional
pattern classification problems, in the sense that each sample is not assumed to be
classified into one of a number of predefined exclusive categories. When there are
L categories, the number of possible multi-labeled classes becomes 2L . Hence, this
type of categorization problem has become a challenging research theme in the field
of machine learning.
Conventionally, a binary classification approach has been used, in which the multi-category detection problem is decomposed into independent binary classification
problems. This approach usually employs the state-of-the-art methods such as support vector machines (SVMs) [9][4] and naive Bayes (NB) classifiers [5][7]. However,
since the binary approach does not consider a generative model of multi-labeled text,
we think that it has an important limitation when applied to the multi-labeled text
categorization.
In this paper, using independent word-based representation, known as Bag-of-Words
(BOW) representation [3], we present two types of probabilistic generative models
for multi-labeled text called parametric mixture models (PMM1, PMM2), where
PMM2 is a more flexible version of PMM1. The basic assumption under PMMs is
that multi-labeled text has a mixture of characteristic words appearing in singlelabeled text that belong to each category of the multi-categories. This assumption
leads us to construct quite simple generative models with a good feature: the objective function of PMM1 is convex (i.e., the global optimum solution can be easily
found). We present efficient learning and prediction algorithms for PMMs. We also
show the actual benefits of PMMs through an application of WWW page categorization, focusing on those from the ?yahoo.com? domain.
2 Parametric Mixture Models
2.1 Multi-labeled Text
According to the BOW representation, which ignores the order of word occurrence
in a document, the nth document, d^n, can be represented by a word-frequency
vector, x^n = (x^n_1, ..., x^n_V), where x^n_i denotes the frequency of occurrence of
word w_i in d^n among the vocabulary V = <w_1, ..., w_V>. Here, V is the total number
of words in the vocabulary. Next, let y^n = (y^n_1, ..., y^n_L) be a category vector for
d^n, where y^n_l takes a value of 1 (0) when d^n belongs (does not belong) to the lth
category. L is the total number of categories. Note that L categories are pre-defined
and that a document always belongs to at least one category (i.e., Σ_l y^n_l > 0).
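As a concrete illustration of the BOW representation, a word-frequency vector can be built as follows (the toy vocabulary below is illustrative, not from the paper):

```python
from collections import Counter

def bow_vector(document_words, vocabulary):
    """Word-frequency vector x^n = (x^n_1, ..., x^n_V) for one document."""
    counts = Counter(document_words)
    return [counts[w] for w in vocabulary]

vocab = ["sports", "music", "game", "song"]
x = bow_vector("game song game sports".split(), vocab)  # -> [1, 0, 2, 1]
```

Note that the order of word occurrence is discarded: only the per-word counts over the fixed vocabulary survive.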
In the case of multi-class and single-labeled text, it is natural that x in the lth
category should be generated from a multinomial distribution: P(x|l) ∝ Π_{i=1}^V (θ_{l,i})^{x_i}.
Here, θ_{l,i} ≥ 0 and Σ_{i=1}^V θ_{l,i} = 1. θ_{l,i} is the probability that the ith word w_i appears
in a document belonging to the lth class. We generalize this to multi-class and
multi-labeled text as:
P(x|y) ∝ Π_{i=1}^V (φ_i(y))^{x_i}, where φ_i(y) ≥ 0 and Σ_{i=1}^V φ_i(y) = 1.    (1)
Here, φ_i(y) is a class-dependent probability that the ith word appears in a document
belonging to class y. Clearly, it is impractical to independently set a multinomial
parameter vector for each distinct y, since there are 2^L - 1 possible classes. Thus,
we try to parameterize them efficiently.
2.2 PMM1
In general, words in a document belonging to a multi-category class can be regarded
as a mixture of characteristic words related to each of the categories. For example, a
document that belongs to both 'sports' and 'music' would consist of a mixture of
characteristic words mainly related to both categories. Let θ_l = (θ_{l,1}, ..., θ_{l,V}). The
above assumption indicates that φ(y) (= (φ_1(y), ..., φ_V(y))) can be represented by
the following parametric mixture:

φ(y) = Σ_{l=1}^L h_l(y) θ_l, where h_l(y) = 0 for l such that y_l = 0.    (2)
Here, h_l(y) (> 0) is a mixing proportion (Σ_{l=1}^L h_l(y) = 1). Intuitively, h_l(y) can
also be interpreted as the degree to which x has the lth category. Actually, by
experimental verification using about 3,000 real Web pages, we confirmed that the
above assumption was reasonable.

Based on the parametric mixture assumption, we can construct a simple parametric
mixture model, PMM1, in which the degree is uniform: h_l(y) = y_l / Σ_{l'=1}^L y_{l'}.
For example, in the case of L = 3, φ((1, 1, 0)) = (θ_1 + θ_2)/2 and φ((1, 1, 1)) =
(θ_1 + θ_2 + θ_3)/3.
Substituting Eq. (2) into Eq. (1), PMM1 can be defined by

P(x|y, Θ) ∝ Π_{i=1}^V ( Σ_{l=1}^L y_l θ_{l,i} / Σ_{l'=1}^L y_{l'} )^{x_i}.    (3)

A set of unknown model parameters in PMM1 is Θ = {θ_l}_{l=1}^L.
Of course, multi-category text may sometimes be weighted more toward one category than to the rest of the categories among multiple categories. However, being
averaged over all biases, they could be canceled and therefore PMM1 would be
reasonable. This motivates us to construct PMM1.
PMMs are different from usual distributional mixture models in the sense that the
mixing is performed in the parameter space, while in the latter several distributional
components are mixed. Since the latter models assume that a sample is generated
from one component, they cannot represent 'multiplicity.' On the other hand,
PMM1 can represent the 2^L - 1 multi-category classes with only L parameter vectors.
2.3 PMM2
In PMM1, shown in Eq. (2), φ(y) is approximated by {θ_l}, which can be regarded
as the 'first-order' approximation. We consider the second-order model, PMM2, as
a more flexible model, in which parameter vectors of duplicate categories, θ_{l,m}, are
also used to approximate φ(y):

φ(y) = Σ_{l=1}^L Σ_{m=1}^L h_l(y) h_m(y) θ_{l,m}, where θ_{l,m} = α_{l,m} θ_l + α_{m,l} θ_m.    (4)
Here, α_{l,m} is a non-negative bias parameter satisfying α_{l,m} + α_{m,l} = 1, ∀l, m.
Clearly, α_{l,l} = 0.5. For example, in the case of L = 3, φ((1, 1, 0)) = {(1 + 2α_{1,2})θ_1 +
(1 + 2α_{2,1})θ_2}/4 and φ((1, 1, 1)) = {(1 + 2(α_{1,2} + α_{1,3}))θ_1 + (1 + 2(α_{2,1} + α_{2,3}))θ_2 + (1 +
2(α_{3,1} + α_{3,2}))θ_3}/9. In PMM2, unlike in PMM1, the category biases themselves
can be estimated from given training data.
Based on Eq. (4), PMM2 can be defined by

P(x|y; Θ) ∝ Π_{i=1}^{V} { Σ_{l=1}^{L} Σ_{m=1}^{L} y_l y_m θ_{l,m,i} / (Σ_{l=1}^{L} y_l Σ_{m=1}^{L} y_m) }^{x_i}.   (5)

The set of unknown parameters in PMM2 becomes Θ = {θ_l, α_{l,m}}_{l=1,m=1}^{L,L}.
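To make Eq. (4) concrete, the following sketch (hypothetical names, not from the paper) builds φ(y) under PMM2. Setting all biases to α = 0.5 reduces PMM2 to PMM1, which provides a simple consistency check:

```python
import numpy as np

def pmm2_phi(y, theta, alpha):
    """phi(y) under PMM2 (Eq. 4): second-order parametric mixture.

    y     : (L,) binary category vector (at least one entry is 1)
    theta : (L, V) basis vectors theta_l
    alpha : (L, L) biases with alpha[l, m] + alpha[m, l] == 1
    """
    L, V = theta.shape
    h = y / y.sum()                  # uniform degrees h_l(y); zero where y_l = 0
    phi = np.zeros(V)
    for l in range(L):
        for m in range(L):
            # duplicate-category parameter theta_{l,m}
            theta_lm = alpha[l, m] * theta[l] + alpha[m, l] * theta[m]
            phi += h[l] * h[m] * theta_lm
    return phi

theta = np.array([[0.40, 0.30, 0.20, 0.10],
                  [0.10, 0.20, 0.30, 0.40],
                  [0.25, 0.25, 0.25, 0.25]])
alpha = np.full((3, 3), 0.5)         # symmetric biases: PMM2 collapses to PMM1
phi = pmm2_phi(np.array([1.0, 1.0, 0.0]), theta, alpha)
```

With α = 0.5 everywhere and y = (1, 1, 0), the result equals (θ_1 + θ_2)/2, in agreement with the L = 3 example given for PMM1.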
2.4 Related Model

Very recently, a more general probabilistic model for multi-latent-topic text, called Latent Dirichlet Allocation (LDA), has been proposed [1]. However, LDA is formulated in an "unsupervised" manner. Blei et al. also perform single-labeled text categorization using LDA, in which an individual LDA model is fitted to each class; that is, they do not explain how to model the observed class labels y within LDA. In contrast, our PMMs can efficiently model the class vector y, depending on other classes through the common basis vectors. Moreover, based on the PMM assumption, models much simpler than LDA can be constructed, as mentioned above, and unlike in LDA, the objective functions for PMMs can be computed exactly, as shown below.
3 Learning & Prediction Algorithms

3.1 Objective functions

Let D = {(x_n, y_n)}_{n=1}^{N} denote the given training data (N labeled documents). The unknown parameter Θ is estimated by maximizing the posterior p(Θ|D). Assuming that P(y) is independent of Θ, Θ̂_MAP = arg max_Θ {Σ_{n=1}^{N} log P(x_n|y_n, Θ) + log p(Θ)}. Here, p(Θ) is a prior over the parameters. We used the following conjugate priors (Dirichlet distributions) over θ_l and α_{l,m}: p(Θ) ∝ Π_{l=1}^{L} Π_{i=1}^{V} θ_{l,i}^{ξ−1} for PMM1, and p(Θ) ∝ (Π_{l=1}^{L} Π_{i=1}^{V} θ_{l,i}^{ξ−1})(Π_{l=1}^{L} Π_{m=1}^{L} α_{l,m}^{ζ−1}) for PMM2. Here, ξ and ζ are hyperparameters, and in this paper we set ξ = 2 and ζ = 2, each of which is equivalent to Laplace smoothing for θ_{l,i} and α_{l,m}, respectively.
Consequently, the objective function to find Θ̂_MAP is given by

J(Θ; D) = L(Θ; D) + (ξ − 1) Σ_{l=1}^{L} Σ_{i=1}^{V} log θ_{l,i} + (ζ − 1) Σ_{l=1}^{L} Σ_{m=1}^{L} log α_{l,m}.   (6)

Of course, the third term on the RHS of Eq. (6) is simply omitted for PMM1. The likelihood term, L, is given by

PMM1:  L(Θ; D) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} log Σ_{l=1}^{L} h_l^n θ_{l,i},   (7)

PMM2:  L(Θ; D) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} log Σ_{l=1}^{L} Σ_{m=1}^{L} h_l^n h_m^n θ_{l,m,i},   (8)

where h_l^n denotes h_l(y_n). Note that θ_{l,m,i} = α_{l,m} θ_{l,i} + α_{m,l} θ_{m,i}.
3.2 Update formulae

The optimization problem given by Eq. (6) cannot be solved analytically; therefore some iterative method needs to be applied. Although steepest-ascent algorithms involving Newton's method are available, here we derive an efficient algorithm in a manner similar to the EM algorithm [2]. First, we derive the parameter update formulae for PMM2 because they are more general than those for PMM1; we then explain those for PMM1 as a special case.

Suppose that Θ^(t) is obtained at step t. We then attempt to derive Θ^(t+1) by using Θ^(t). For convenience, we define g^n_{l,m,i} and λ_{l,m,i} as follows:

g^n_{l,m,i}(Θ) = h^n_l h^n_m θ_{l,m,i} / Σ_{l=1}^{L} Σ_{m=1}^{L} h^n_l h^n_m θ_{l,m,i},   (9)

λ_{l,m,i}(θ_{l,m}) = α_{l,m} θ_{l,i} / θ_{l,m,i},  λ_{m,l,i}(θ_{l,m}) = α_{m,l} θ_{m,i} / θ_{l,m,i}.   (10)

Noting that Σ_{l=1}^{L} Σ_{m=1}^{L} g^n_{l,m,i}(Θ) = 1, L for PMM2 can be rewritten as
L(Θ; D) = Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) log{h^n_l h^n_m θ_{l,m,i}} − Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) log g^n_{l,m,i}(Θ),   (11)

where we have used Σ_{l',m'} h^n_{l'} h^n_{m'} θ_{l',m',i} = h^n_l h^n_m θ_{l,m,i} / g^n_{l,m,i}(Θ) for every (l, m). Moreover, noting that λ_{l,m,i}(θ_{l,m}) + λ_{m,l,i}(θ_{l,m}) = 1, we rewrite the first term on the RHS of Eq. (11) as

Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) [ λ_{l,m,i}(θ^(t)_{l,m}) log{ h^n_l h^n_m α_{l,m} θ_{l,i} / λ_{l,m,i}(θ_{l,m}) } + λ_{m,l,i}(θ^(t)_{l,m}) log{ h^n_l h^n_m α_{m,l} θ_{m,i} / λ_{m,l,i}(θ_{l,m}) } ].   (12)

From Eqs. (11) and (12), we obtain the following important equation:

L(Θ; D) = U(Θ|Θ^(t)) − T(Θ|Θ^(t)).   (13)

Here, U and T are defined by

U(Θ|Θ^(t)) = Σ_{n,i,l,m} x_{n,i} g^n_{l,m,i}(Θ^(t)) { λ_{l,m,i}(θ^(t)_{l,m}) log h^n_l h^n_m α_{l,m} θ_{l,i} + λ_{m,l,i}(θ^(t)_{l,m}) log h^n_l h^n_m α_{m,l} θ_{m,i} },   (14)

T(Θ|Θ^(t)) = Σ_{n,i,l,m} x_{n,i} g^n_{l,m,i}(Θ^(t)) { log g^n_{l,m,i}(Θ) + λ_{l,m,i}(θ^(t)_{l,m}) log λ_{l,m,i}(θ_{l,m}) + λ_{m,l,i}(θ^(t)_{l,m}) log λ_{m,l,i}(θ_{l,m}) }.   (15)
From Jensen's inequality, T(Θ|Θ^(t)) ≤ T(Θ^(t)|Θ^(t)) holds. Thus, we just maximize U(Θ|Θ^(t)) + log p(Θ) w.r.t. Θ to derive the parameter update formulae. Noting that θ_{l,m,i} = θ_{m,l,i} and g^n_{l,m,i} = g^n_{m,l,i}, we can derive the following formulae:

θ^(t+1)_{l,i} = [ 2 Σ_{n=1}^{N} x_{n,i} Σ_{m=1}^{L} g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(Θ^(t)) + ξ − 1 ] / [ 2 Σ_{i=1}^{V} Σ_{n=1}^{N} x_{n,i} Σ_{m=1}^{L} g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(Θ^(t)) + V(ξ − 1) ],  ∀l, i,   (16)

α^(t+1)_{l,m} = [ Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(Θ^(t)) + (ζ − 1)/2 ] / [ Σ_{i=1}^{V} Σ_{n=1}^{N} x_{n,i} g^n_{l,m,i}(Θ^(t)) + ζ − 1 ],  ∀l, m ≠ l.   (17)

These parameter updates always converge to a local optimum of J given by Eq. (6).
In PMM1, since the unknown parameter is just {θ_l}, by modifying Eq. (9) as

g^n_{l,i}(Θ) = h^n_l θ_{l,i} / Σ_{l'=1}^{L} h^n_{l'} θ_{l',i},   (18)

and rewriting Eq. (7) in a similar manner, we obtain

L(Θ; D) = Σ_{n,i} x_{n,i} Σ_l g^n_{l,i}(Θ^(t)) log h^n_l θ_{l,i} − Σ_{n,i} x_{n,i} Σ_l g^n_{l,i}(Θ^(t)) log g^n_{l,i}(Θ).   (19)

In this case, U takes the simpler form

U(Θ|Θ^(t)) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} Σ_{l=1}^{L} g^n_{l,i}(Θ^(t)) log h^n_l θ_{l,i}.   (20)

Therefore, maximizing U(Θ|Θ^(t)) + (ξ − 1) Σ_{l=1}^{L} Σ_{i=1}^{V} log θ_{l,i} w.r.t. Θ under the constraint Σ_i θ_{l,i} = 1, ∀l, we obtain the following update formula for PMM1:

θ^(t+1)_{l,i} = [ Σ_{n=1}^{N} x_{n,i} g^n_{l,i}(Θ^(t)) + ξ − 1 ] / [ Σ_{i=1}^{V} Σ_{n=1}^{N} x_{n,i} g^n_{l,i}(Θ^(t)) + V(ξ − 1) ],  ∀l, i.   (21)
Remark: The parameter update given by Eq. (21) of PMM1 always converges to the global optimum solution.

Proof: Consider the Hessian matrix H of the objective function J of PMM1. For an arbitrary nonzero vector ψ in the Θ space,

ψ^T H ψ = ψ^T (∂²J(Θ; D)/∂Θ∂Θ^T) ψ = d²J(Θ + εψ; D)/dε² |_{ε=0}
       = − Σ_{n,i} x_{n,i} ( Σ_l h^n_l ψ_{l,i} / Σ_l h^n_l θ_{l,i} )² − (ξ − 1) Σ_{l,i} (ψ_{l,i}/θ_{l,i})².   (22)

Noting that x_{n,i} ≥ 0, ξ > 1, and ψ ≠ 0, this quantity is negative, so H is negative definite; therefore J is a strictly concave function of Θ. Moreover, since the feasible region defined by the constraints Σ_i θ_{l,i} = 1, ∀l, is a convex set, the maximization problem here becomes a convex programming problem and has a unique global solution. Since Eq. (21) always increases J at each iteration, the learning algorithm given above always converges to the global optimum solution, irrespective of the initial parameter value.
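The PMM1 updates (Eqs. 18 and 21) amount to a standard EM-style loop. A minimal sketch, assuming ξ = 2 (Laplace smoothing) and using our own hypothetical array names:

```python
import numpy as np

def train_pmm1(X, Y, n_iter=50, xi=2.0):
    """MAP estimation of Theta = {theta_l} for PMM1 via the update of Eq. (21).

    X : (N, V) word-count matrix; Y : (N, L) binary label matrix.
    """
    N, V = X.shape
    L = Y.shape[1]
    H = Y / Y.sum(axis=1, keepdims=True)             # h_l(y_n), uniform degrees
    theta = np.full((L, V), 1.0 / V)                 # uniform initialization
    for _ in range(n_iter):
        # E-like step: responsibilities g^n_{l,i} of Eq. (18)
        mix = H @ theta                              # (N, V): sum_l h^n_l theta_{l,i}
        G = H[:, :, None] * theta[None, :, :] / mix[:, None, :]   # (N, L, V)
        # M-like step: Eq. (21); (xi - 1) acts as Laplace smoothing
        num = np.einsum('nv,nlv->lv', X, G) + (xi - 1.0)
        theta = num / num.sum(axis=1, keepdims=True)
    return theta

# Toy data: label 0 goes with word 0, label 1 with word 1, one multi-labeled doc
X = np.array([[5.0, 0.0], [0.0, 5.0], [3.0, 3.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = train_pmm1(X, Y)
```

Because the PMM1 objective is strictly concave over the simplex constraints (the remark above), this iteration is insensitive to the initial value of θ.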
3.3 Prediction

Let Θ̂ denote the estimated parameter. Then, applying Bayes' rule, the optimum category vector y* for a new document x* is defined as y* = arg max_y P(y|x*; Θ̂) under a uniform class prior assumption. Since this maximization problem belongs to the class of zero-one integer problems (i.e., it is NP-hard), an exhaustive search is prohibitive for large L. Therefore, we solve the problem approximately with the help of the following greedy-search algorithm. First, only one element y_{l1} is set to 1 so that P(y|x*; Θ̂) is maximized. Then, among the remaining elements, the single y_{l2} whose activation most increases P(y|x*; Θ̂) is set to 1, with y_{l1} kept fixed. This procedure is repeated until P(y|x*; Θ̂) cannot be increased any further. The algorithm thus successively switches on elements of y until the posterior probability stops improving. It is very efficient because it requires the calculation of the posterior probability at most L(L + 1)/2 times, whereas an exhaustive search needs 2^L − 1 evaluations.
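The greedy search above can be written compactly. In this sketch the posterior score is taken as the PMM1 log-likelihood of Eq. (3) (a uniform class prior drops out of the arg max); the helper names are ours:

```python
import numpy as np

def predict_labels(x, theta):
    """Greedy maximization of P(y|x) for PMM1 (Section 3.3)."""
    L = theta.shape[0]

    def score(y):
        phi = y @ theta / y.sum()        # phi(y) of Eq. (2) with uniform degrees
        return x @ np.log(phi)           # log-likelihood, Eq. (3)

    y = np.zeros(L)
    best = -np.inf
    while True:
        # try switching on each remaining element; keep the best single flip
        gain, pick = -np.inf, None
        for l in np.flatnonzero(y == 0):
            y[l] = 1
            s = score(y)
            y[l] = 0
            if s > gain:
                gain, pick = s, l
        if pick is None or gain <= best:
            break                         # no flip improves the posterior
        y[pick], best = 1, gain
    return y

theta = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05]])
y1 = predict_labels(np.array([5.0, 0.0, 0.0]), theta)   # single-topic document
y2 = predict_labels(np.array([5.0, 5.0, 0.0]), theta)   # mixed document
```

The loop evaluates the score at most L(L + 1)/2 times, matching the complexity bound stated above.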
4 Experiments

4.1 Automatic Web Page Categorization

We tried to categorize real Web pages linked from the "yahoo.com" domain¹. More specifically, Yahoo consists of 14 top-level categories (i.e., "Arts & Humanities," "Business & Economy," "Computers & Internet," and so on), and each category is divided into a number of second-level subcategories. By focusing on the second-level categories, we can make 14 independent text categorization problems. We used 11 of these 14 problems². In those 11 problems, the minimum (maximum) values of L and V were 21 (40) and 21924 (52350), respectively. About 30-45% of the pages are multi-labeled over the 11 problems. To collect a set of related Web pages for each problem, we used a software robot called "GNU Wget" (version 1.5.3). A text multi-label can be obtained by following its hyperlinks in reverse toward the page of origin.
We compared our PMMs with the conventional methods: naive Bayes (NB), SVM, k-nearest neighbor (kNN), and three-layer neural networks (NN). We used linear SVMlight (version 4.0), tuning the C (penalty cost) and J (cost factor for negative and positive samples) parameters for each binary classification to improve the SVM results [6]³. In addition, it is worth mentioning that when running the SVM, each x_n was normalized so that Σ_{i=1}^{V} x_{n,i} = 1, because discrimination is much easier in the (V − 1)-dimensional simplex than in the original V-dimensional space. In other words, classification is generally not determined by the number of words on the page; indeed, this normalization also significantly improved the performance.
¹ This domain is a famous portal site; most related pages linked from the domain are registered by site recommendation, and therefore the category labels should be reliable.
² We could not collect enough pages for three categories due to our communication network security. However, we believe that 11 independent problems are sufficient for evaluating our method.
³ Since the ratio of positive to negative samples per category was quite small in our Web pages, SVM without the J option provided poor results.
Table 1: Performance for 3000 test data using 2000 training data.

No.  NB           SVM          kNN          NN           PMM1         PMM2
1    41.6 (1.9)   47.1 (0.3)   40.0 (1.1)   43.3 (0.2)   50.6 (1.0)   48.6 (1.0)
2    75.0 (0.6)   74.5 (0.8)   78.4 (0.4)   77.4 (0.5)   75.5 (0.9)   72.1 (1.2)
3    56.5 (1.3)   56.2 (1.1)   51.1 (0.8)   53.8 (1.3)   61.0 (0.4)   59.9 (0.6)
4    39.3 (1.0)   47.8 (0.8)   42.9 (0.9)   44.1 (1.0)   51.3 (2.8)   48.3 (0.5)
5    54.5 (0.8)   56.9 (0.5)   47.6 (1.0)   54.9 (0.5)   59.7 (0.4)   58.4 (0.6)
6    66.4 (0.8)   67.1 (0.3)   60.4 (0.5)   66.0 (0.4)   66.2 (0.5)   65.1 (0.3)
7    51.8 (0.8)   52.1 (0.8)   44.4 (1.1)   49.6 (1.3)   55.2 (0.5)   52.4 (0.6)
8    52.6 (1.1)   55.4 (0.6)   53.3 (0.5)   55.0 (1.1)   61.1 (1.4)   60.1 (1.2)
9    42.4 (0.9)   49.2 (0.7)   43.9 (0.6)   45.8 (1.3)   51.4 (0.7)   49.9 (0.8)
10   41.7 (10.7)  65.0 (1.1)   59.5 (0.9)   62.2 (2.3)   62.0 (5.1)   56.4 (6.3)
11   47.2 (0.9)   51.4 (0.6)   46.4 (1.2)   50.5 (0.4)   54.2 (0.2)   52.5 (0.7)
We employed the cosine similarity for the kNN method (see [8] for more details). As for NNs, each NN consists of V input units and L output units for estimating a category vector from each frequency vector; we used 50 hidden units. An NN was trained to maximize the sum of cross-entropy functions for the target and estimated category vectors of the training samples, together with a regularization term consisting of the sum of squared NN weights. Note that we did not perform any feature transformation such as TFIDF (see, e.g., [8]) because we wanted to evaluate the basic performance of each method purely.
We used the F-measure as the performance measure; it is defined as the weighted harmonic average of two well-known statistics, precision P and recall R. Let y_n = (y_{n,1}, ..., y_{n,L}) and ŷ_n = (ŷ_{n,1}, ..., ŷ_{n,L}) be the actual and predicted category vectors for x_n, respectively. Then F_n = 2 P_n R_n / (P_n + R_n), where P_n = Σ_{l=1}^{L} y_{n,l} ŷ_{n,l} / Σ_{l=1}^{L} ŷ_{n,l} and R_n = Σ_{l=1}^{L} y_{n,l} ŷ_{n,l} / Σ_{l=1}^{L} y_{n,l}. We evaluated the performance by F̄ = (1/3000) Σ_{n=1}^{3000} F_n using 3000 test data independent of the training data. Although micro- and macro-averages could also be used, we think that the sample-based F-measure is the most suitable for evaluating the generalization performance, since it is natural to consider the i.i.d. assumption for documents.
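The sample-based F-measure above is straightforward to compute; a sketch follows. The convention of scoring 0 when prediction and truth do not overlap (so that P_n or R_n would be zero or undefined) is our assumption, not stated in the paper:

```python
import numpy as np

def sample_f_measure(Y_true, Y_pred):
    """Mean sample-based F-measure: F_n = 2 P_n R_n / (P_n + R_n)."""
    f_values = []
    for y, y_hat in zip(Y_true, Y_pred):
        tp = np.sum(y * y_hat)           # labels correct in both vectors
        if tp == 0:
            f_values.append(0.0)         # assumed convention for zero overlap
            continue
        p = tp / np.sum(y_hat)           # precision P_n
        r = tp / np.sum(y)               # recall R_n
        f_values.append(2 * p * r / (p + r))
    return np.mean(f_values)

f_partial = sample_f_measure([np.array([1, 1, 0])], [np.array([1, 0, 0])])
f_perfect = sample_f_measure([np.array([1, 0])], [np.array([1, 0])])
```

For a document with true labels {1, 2} and prediction {1}, precision is 1 and recall is 1/2, giving F = 2/3.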
4.2 Results
For each of the 11 problems, we used five pairs of training and test data sets. In Table 1 (Table 2) we compare the means of the F̄ values over five trials using 2000 (500) training documents. Each number in parentheses in the tables denotes the standard deviation over the five trials. PMMs took about five minutes for training (2000 data) and only about one minute for testing (3000 data) on a 2.0-GHz Pentium PC, averaged over the 11 problems. The PMMs were much faster than kNN and NN. In the binary approach, SVMs with optimally tuned parameters produced rather better results than the NB method. The performance of SVMs, however, was inferior to that of PMMs in almost all problems. These experimental results support the importance of considering generative models of multi-category text. When the training sample size was 2000, kNN provided results comparable to the NB method. On the other hand, when the training sample size was 500, the kNN method obtained results similar to or slightly better than those of SVM. However, in both cases, PMMs significantly outperformed kNN. We think that the memory-based approach is limited in its generalization ability for multi-labeled text categorization.
The results of the well-regularized NN were fair, although it took an intolerable amount of training time, indicating that flexible discrimination would not be necessary for discriminating high-dimensional, sparse-text data. The results obtained by PMM1 were better than those by PMM2, which indicates that a model with a fixed α_{l,m} = 0.5 seems sufficient, at least for the WWW pages used in the experiments.

Table 2: Performance for 3000 test data using 500 training data.

No.  NB           SVM          kNN          NN           PMM1         PMM2
1    21.2 (1.0)   32.5 (0.5)   34.7 (0.4)   33.8 (0.4)   43.9 (1.0)   43.2 (0.8)
2    73.9 (0.7)   73.8 (1.2)   75.6 (0.6)   74.8 (0.9)   75.2 (0.4)   69.7 (8.9)
3    46.1 (2.9)   44.9 (1.9)   44.1 (1.2)   45.1 (1.0)   56.4 (0.3)   55.4 (0.5)
4    15.2 (0.9)   33.6 (0.5)   37.1 (1.0)   33.8 (1.1)   41.8 (1.2)   41.9 (0.7)
5    34.1 (1.6)   42.7 (1.3)   43.9 (1.0)   45.3 (0.9)   53.0 (0.3)   53.1 (0.6)
6    50.2 (0.3)   56.0 (1.0)   54.4 (0.9)   57.2 (0.7)   58.9 (0.9)   59.4 (1.0)
7    22.1 (0.8)   32.1 (0.5)   37.4 (1.1)   33.9 (0.8)   46.5 (1.3)   45.5 (0.9)
8    32.7 (4.4)   38.8 (0.6)   48.1 (1.3)   43.1 (1.0)   54.1 (1.5)   53.5 (1.5)
9    17.6 (1.6)   32.5 (1.0)   35.3 (0.4)   31.6 (1.7)   40.3 (0.7)   41.0 (0.5)
10   40.6 (12.3)  55.0 (1.1)   53.7 (0.6)   55.8 (4.0)   57.8 (6.5)   57.9 (5.9)
11   34.2 (2.2)   38.3 (4.7)   40.2 (0.7)   40.9 (1.2)   49.7 (0.9)   49.0 (0.5)
5 Concluding Remarks
We have proposed new types of mixture models (PMMs) for multi-labeled text
categorization, and also efficient algorithms for both learning and prediction. We
have taken some important steps along the path, and we are encouraged by our
current results using real World Wide Web pages. Moreover, we have confirmed
that studying the generative model for multi-labeled text is beneficial in improving
the performance.
References
[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. To appear in Advances in Neural Information Processing Systems 14. MIT Press.
[2] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38, 1977.
[3] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In Proc. of ACM-CIKM'98, 1998.
[4] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proc. of the European Conference on Machine Learning, 137-142, Berlin, 1998.
[5] D. Lewis and M. Ringuette. A comparison of two learning algorithms for text categorization. In Third Annual Symposium on Document Analysis and Information Retrieval, 81-93, 1994.
[6] K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach. A case study in intensive care monitoring. In Proc. of the International Conference on Machine Learning (ICML'99), 1999.
[7] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39:103-134, 2000.
[8] Y. Yang and J. Pederson. A comparative study on feature selection in text categorization. In Proc. of the International Conference on Machine Learning, 412-420, 1997.
[9] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New York, 1998.
| 2244 |@word trial:2 version:3 proportion:1 seems:1 d2:1 tried:1 initial:1 tuned:1 document:14 current:1 com:2 assigning:1 john:1 fn:2 wanted:1 update:6 discrimination:2 generative:6 prohibitive:1 greedy:1 mccallum:1 ith:2 steepest:1 blei:2 simpler:2 five:4 dn:2 constructed:1 along:1 become:1 symposium:1 consists:2 manner:3 ascend:1 themselves:1 multi:23 kazumi:1 decomposed:1 actual:2 considering:1 increasing:1 becomes:5 provided:2 estimating:1 moreover:5 interpreted:1 transformation:1 impractical:1 every:1 exactly:1 classifier:2 qm:1 platt:1 unit:3 appear:1 positive:2 local:1 path:1 becoming:1 approximately:1 collect:2 challenging:1 co:1 mentioning:1 limited:1 averaged:2 unique:1 definite:1 procedure:1 saito:2 significantly:3 word:14 pre:1 cannot:3 convenience:1 unlabeled:1 egory:1 judged:1 nb:6 selection:1 applying:1 www:2 equivalent:1 conventional:1 map:2 maximizing:2 independently:1 convex:4 rule:1 regarded:2 laplace:1 target:1 suppose:1 programming:1 humanity:1 origin:1 element:2 approximated:1 satisfying:1 distributional:2 labeled:18 observed:1 domain1:1 solved:1 parameterize:1 pmm:1 region:1 mentioned:1 dempster:1 trained:1 rewrite:1 purely:1 basis:1 easily:1 mization:1 represented:2 maining:1 distinct:1 exhaustive:2 quite:2 solve:1 ability:1 statistic:1 knn:7 think:3 laird:1 online:1 took:2 propose:1 macro:1 relevant:1 combining:1 rapidly:1 bow:2 mixing:2 optimum:4 categorization:16 comparative:1 converges:2 help:1 derive:6 depending:1 nearest:1 eq:13 predicted:1 pfunction:1 modifying:1 subsequently:1 generalization:2 tfidf:1 memorybased:1 strictly:1 pl:12 hold:1 substituting:1 m0:1 tor:1 proc:4 outperformed:1 bag:1 label:4 qv:2 weighted:2 mit:1 clearly:2 always:5 rather:1 pn:7 l0:3 joachim:2 indicates:2 mainly:1 likelihood:2 contrast:2 pentium:1 detect:1 sense:2 economy:1 dependent:1 nn:9 hidden:1 canceled:1 classification:7 flexible:3 among:2 arg:1 yahoo:3 art:2 smoothing:1 special:1 field:1 construct:3 ng:1 encouraged:1 unsupervised:1 icml:1 simplex:1 
np:1 duplicate:1 employ:1 micro:1 simultaneously:1 individual:1 consisting:1 detection:2 yl0:2 mining:1 mixture:12 pc:1 predefined:1 xni:5 naonori:1 necessary:1 incomplete:1 fitted:1 maximization:1 cost:2 deviation:1 uniform:2 optimally:1 nns:1 dumais:1 fundamental:1 international:2 discriminating:1 probabilistic:3 yl:9 anual:1 ym:2 together:1 w1:1 squared:1 successively:1 hn:1 li:5 japan:1 inc:1 performed:1 try:1 linked:2 bayes:3 option:1 formance:1 characteristic:3 efficiently:2 maximized:1 generalize:1 famous:1 produced:1 monitoring:1 confirmed:2 worth:1 classified:2 explain:2 frequency:3 proof:1 xn1:1 mitchell:1 recall:1 knowledge:1 actually:2 focusing:2 appears:2 evaluated:1 just:3 until:2 hand:2 web:7 lda:7 believe:1 normalized:1 inductive:1 hence:1 analytically:1 regularization:1 laboratory:1 yl2:1 inferior:1 cosine:1 harmonic:1 recently:2 common:1 multinomial:2 empirically:1 jp:1 belong:2 automatic:2 tuning:1 robot:1 similarity:1 posterior:3 belongs:6 reverse:1 inequality:1 wv:1 binary:8 care:1 employed:2 converge:1 maximize:2 multiple:3 kyoto:1 ntt:2 faster:1 calculation:1 cross:1 retrieval:2 parenthesis:1 prediction:5 involving:1 basic:2 iteration:1 represent:2 sometimes:1 normalization:1 addition:1 rest:1 unlike:2 jordan:1 integer:1 intolerable:1 noting:4 svmlight:1 yang:1 enough:1 multiclass:1 intensive:1 whether:1 penalty:1 york:1 hessian:1 remark:2 ignored:1 generally:2 amount:1 svms:3 category:35 outperform:1 estimated:4 cikm:1 per:1 paramters:1 rewriting:1 sum:2 almost:1 reasonable:2 ueda:2 comparable:1 gnu:1 hi:1 internet:1 layer:1 constraint:2 software:1 concluding:1 performing:1 cslab:1 according:1 poor:1 belonging:3 conjugate:1 beneficial:1 slightly:1 em:3 heckerman:1 son:1 wi:2 hl:9 maxy:1 intuitively:1 multiplicity:1 samplebased:1 taken:1 ln:3 equation:1 sahami:1 studying:1 available:1 rewritten:1 appearing:1 occurrence:2 yl1:2 original:1 denotes:2 dirichlet:3 top:1 newton:1 music:1 multicategory:1 hikaridai:1 society:1 objective:5 
parametric:7 exclusive:1 usual:1 traditional:1 berlin:1 thrun:1 topic:1 hnm:8 toward:2 assuming:1 morik:1 ratio:1 difficult:1 ql:7 mostly:1 negative:4 motivates:1 unknown:4 perform:2 yln:3 communication:2 rn:3 arbitrary:1 namely:1 pair:1 security:1 registered:1 usually:1 pattern:1 below:1 hyperlink:1 max:1 reliable:1 royal:1 y1n:3 suitable:1 natural:2 business:1 regularized:1 nth:1 improve:3 xnv:1 irrespective:1 conventionally:2 hm:2 naive:2 text:32 prior:3 subcategories:1 mixed:1 limitation:1 allocation:2 hnl:11 degree:2 verification:1 sufficient:2 rubin:1 kecl:1 course:2 gl:18 bias:3 wide:2 neighbor:1 sparse:1 benefit:1 ghz:1 xn:17 world:2 vocabulary:2 evaluating:2 ignores:1 approximate:1 global:4 assumed:1 xi:5 search:3 latent:3 iterative:1 table:5 nigam:1 improving:1 european:1 domain:3 did:1 rh:2 hyperparameters:1 repeated:1 fair:1 site:2 wiley:1 precision:1 theme:1 pv:6 third:2 formula:5 minute:2 jensen:1 svm:7 consist:1 vapnik:1 importance:1 portal:1 easier:1 entropy:1 sport:1 recommendation:1 determines:1 lewis:1 acm:1 brockhausen:1 lth:4 formulated:1 consequently:1 feasible:2 hard:1 specifically:1 determined:1 called:4 total:2 experimental:2 indicating:1 support:3 latter:2 categorize:1 evaluate:1 |
Learning a Forward Model of a Reflex

Bernd Porr and Florentin Wörgötter
Computational Neuroscience
Psychology
University of Stirling
FK9 4LR Stirling, UK
{bp1,faw1}@cn.stir.ac.uk
Abstract
We develop a systems theoretical treatment of a behavioural system that
interacts with its environment in a closed loop situation such that its motor actions influence its sensor inputs. The simplest form of a feedback
is a reflex. Reflexes always occur "too late", i.e., only after an (unpleasant, painful, dangerous) reflex-eliciting sensor event has occurred. This
defines an objective problem which can be solved if another sensor input
exists which can predict the primary reflex and can generate an earlier
reaction. In contrast to previous approaches, our linear learning algorithm allows for an analytical proof that this system learns to apply feedforward control with the result that slow feedback loops are replaced by
their equivalent feed-forward controller creating a forward model. In
other words, learning turns the reactive system into a pro-active system.
By means of a robot implementation we demonstrate the applicability of
the theoretical results which can be used in a variety of different areas in
physics and engineering.
1 Introduction
Feedback loops are prevalent in animal behaviour, where they are normally called a "reflex". However, the reflex has the disadvantage of always being too late. Thus, an objective
goal is to avoid a reflex (feedback) reaction. This can be done by an anticipatory (feedforward) action; for example when retracting a limb in response to heat radiation without
actually having to touch the hot surface, which would elicit a pain-induced reflex. While
this has been interpreted as successful forward control [1] the question arises how such a
behavioural system can be robustly generated.
In this article we introduce a linear algorithm for temporal sequence learning between two
sensor events and provide an analytical proof that this process turns a pre-wired reflex loop
into its equivalent feed-forward controller. After learning the system will respond with an
anticipatory action thereby avoiding the reflex.
Figure 1: Diagram of the system in its environment (in Laplace notation). The input signal is the disturbance D, which reaches both sensor inputs x_0 and x_1 at different times, as indicated by the temporal delay T. The environmental transfer functions are denoted P_0, P_1. The h_k are linear transfer functions, u_k the filtered inputs, which converge with weights ρ_k onto the output neuron v.
2 The learning rule and its environment

Fig. 1 shows the general situation which arises when temporal sequence learning takes place in a system which interacts with its environment [2]. We distinguish two loops: The inner loop represents the reflex, which has fixed, unchanging properties. The outer loop represents the to-be-learned anticipatory action. Sequence learning requires causally related input events at both sensors x_0, x_1 (e.g. heat radiation and pain), where T denotes the time delay between both inputs. The outer loop receives the earlier (anticipatory) input. The delayed and un-delayed signals x_0, x_1 are processed by a linear transform (e.g. a low- or band-pass filter); subsequently their sum is taken with weights on a single neuron. Note that all input signals are filtered. The system is therefore completely isotropic. Line x_1 is fanned out in order to adjust to the a priori unknown delay T by the combination of different transforms h_k (see below). The output of the neuron is in the Laplace domain given by:

v = ρ_0 u_0 + Σ_{k=1}^{N} ρ_k u_k,  with  u_0 = h_0 x_0 and u_k = h_k x_1 (k ≥ 1),   (1)

where the ρ_k are the synaptic weights. In the following we will drop the function argument s for the sake of brevity wherever possible. The transfer functions P_0, P_1 in Fig. 1 denote how the environment influences the different signals. The goal of sequence learning is that the outer loop should, after learning, functionally replace the inner loop such that the reflex ceases to be triggered. In this case we obtain x_0 = 0, which we call the "desired state" of the system. This allows calculating the general requirements for the outer loop without having to specify the actual learning process. The reflex pathway is described by

x_0 = P_0 (D e^{−sT} + v),   (2)

where e^{−sT} represents the delay T in Laplace notation. The signal on the anticipatory (outer) pathway has the representation

v_1 = F x_1 = F P_1 (D + v),   (3)

where F is the learned transfer function which generates the anticipatory response triggered by the input x_1. We want to express F by the environmental transfer functions P_0 and P_1. F is solved for the condition x_0 = 0, where the reflex is no longer triggered. Eliminating v and D we get:

F = − e^{−sT} P_1^{−1} / (1 − e^{−sT}).   (4)

Eq. 4 can be further simplified. Following standard control theory [3] we neglect the denominator, because it does not add additional poles to the transfer function F. Such a pole would only arise in connection with a term e^{+sT}. A transfer function e^{+sT}, however, is meaningless because it violates temporal causality. Thus, the denominator can at most add phase shifts to the system's behaviour. As a consequence, we may set the denominator to one, and the behaviour of F is determined by:

F = − P_1^{−1} e^{−sT}.   (5)

The interpretation of the last equation is straightforward. The learning goal of F requires compensating the disturbance D. The disturbance, however, enters the system only after having been filtered by the environmental transfer function P_1. Thus, compensation of D requires reversing this filtering by the term P_1^{−1}, which is the inverse environmental transfer function (hence "inverse controller"). The second term e^{−sT} in Eq. 5 compensates for the delay between the two sensor signals originating from the disturbance D.
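A quick numerical check of Eq. 5, using an illustrative first-order low-pass for P_1 (our assumption, not from the paper): with F = −e^{−sT} P_1^{−1}, the anticipatory output exactly cancels the delayed disturbance at every frequency, i.e. e^{−sT} D + F P_1 D = 0:

```python
import numpy as np

T = 0.2                                  # illustrative delay (seconds)
w = np.linspace(0.1, 50.0, 200)          # angular frequencies (rad/s)
s = 1j * w                               # evaluate on the imaginary axis
P1 = 1.0 / (1.0 + 0.05 * s)              # assumed environmental low-pass filter
F = -np.exp(-s * T) / P1                 # inverse controller, Eq. 5

# residual reflex drive for a unit disturbance D = 1:
# the delayed term e^{-sT} D plus the anticipatory term F P1 D
residual = np.exp(-s * T) + F * P1
```

The cancellation holds up to floating-point error, which is the sense in which the learned outer loop replaces the inner reflex loop.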
Having outlined the general setup in terms of our linear approach and system-theoretic notation, we devote the remaining three sections to the following topics: 2.1 The learning rule, and convergence to a given solution F under this rule. 2.2 The construction of (approximate) solutions F. 3 Implementation of the system in a (real-world) robot experiment.
2.1 The learning rule and convergence.

Here, we assume that a set of functions h_k exists (as will be specified below) for which a solution can be approximated by F ≈ Σ_{k=1}^{N} ρ_k h_k. We will now specify the learning rule by which the development of the weight values is controlled, and show that any deviation from the given solution F is eliminated due to learning. In terms of the time-domain functions u_k, v corresponding to U_k, V, our learning rule is given by:

dρ_k/dt = μ u_k dv/dt.   (6)

Thus, the weight change depends on the correlation between u_k and the time derivative of v. Since the structure of the system is completely isotropic (see Fig. 1) and learning can take place at any synapse, we shall call our learning algorithm isotropic sequence order learning ("ISO learning"). The positive constant μ is taken small enough such that all weight changes occur on a much longer time scale (i.e., very slowly) as compared to the decay of the responses u_k. This rule is related to the one used in "temporal difference" learning [4]. The total weight change can be calculated by [5]:

Δρ_k = μ ∫_{−∞}^{∞} u_k(t) v̇(t) dt = (μ/2πj) ∫_{−j∞}^{j∞} U_k(−s) sV(s) ds,   (7)

where sV represents the derivative of v in the Laplace domain.
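A discrete-time sketch of the rule in Eq. 6 (Euler integration, open loop, with first-order leaky integrators standing in for the h_k; all constants are illustrative assumptions, not taken from the paper). Because the early input u_1 is still active when the late reflex input drives v upward, the correlation u_1 · v̇ is net positive and the anticipatory weight grows:

```python
import numpy as np

def iso_learning(T_delay=5, n_steps=400, mu=0.002, tau=10.0):
    """Simulate d rho_1/dt = mu * u_1 * dv/dt with a delayed reflex input x_0."""
    dt = 1.0
    x1 = np.zeros(n_steps); x1[50] = 1.0               # early (anticipatory) pulse
    x0 = np.zeros(n_steps); x0[50 + T_delay] = 1.0     # delayed reflex pulse
    u0 = u1 = 0.0
    rho0, rho1 = 1.0, 0.0                              # fixed reflex weight; plastic weight
    v_prev = 0.0
    for t in range(n_steps):
        # leaky integrators as the linear transforms h_0, h_1
        u0 += dt * (-u0 / tau + x0[t])
        u1 += dt * (-u1 / tau + x1[t])
        v = rho0 * u0 + rho1 * u1
        rho1 += mu * u1 * (v - v_prev) / dt            # Eq. 6, discretized
        v_prev = v
    return rho1

rho1 = iso_learning()
```

The positive net change arises because u_1 overlaps strongly with the rising edge of the reflex response but only weakly with its slow decay.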
We assume that the reflex pathway is unchanging with a fixed weight ρ_0 = −1 (negative feedback). Note that its open-loop transfer characteristic, given by h_0 P_0, must carry a low-pass component; otherwise the reflex loop would be unstable. We keep P_1 = 1 as before. Furthermore, assume that for a given set of h_k we have found a set of weights ρ_k* which solves Eq. 5. We will show that a perturbation of the weights will be compensated by applying the learning procedure. Since we do not make any assumption as to the size of the perturbation, this is indicative of convergence in general. To this end, we substitute ρ_k = ρ_k* + ε_k. Stability of the solution is expected if the weight change opposes the perturbation, thus if Δε_k ε_k < 0. Here, we however assume an "adiabatic" environment in which the system internally relaxes on a time scale much shorter than the time scale on which the disturbances occur. To be specific, a disturbance/perturbation may occur near t = 0. In calculating the weight change (7) due to this disturbance signal we disregard any subsequent disturbances as well as perturbations (ε) following the steady-state condition. We use the relations for x_0 and x_1 and insert them into Eq. 7. For x_1 we have:

x_1 = P_1 (D + v) = D + v.   (8)

Inserting Eqs. 2 and 8 into Eq. 1 and solving for the output yields

v = D [ρ_0 h_0 P_0 e^{−sT} + Σ_k (ρ_k* + ε_k) h_k] / [1 − ρ_0 h_0 P_0 − Σ_k (ρ_k* + ε_k) h_k].   (9)

Substituting the solution Σ_k ρ_k* h_k = −e^{−sT} of Eq. 5, this splits, to first order in the perturbation, into the unperturbed equilibrium output v* and a perturbation-induced part:

v = v* + D Σ_k ε_k h_k / (1 + h_0 P_0 + e^{−sT}).   (10)

We use the superscripts − and + to denote the arguments −s and +s, respectively, and calculate the weight change using Eq. 7, integrating along the imaginary axis:

Δε_j = (μ/2πj) ∫ H_j^− (D^− + v^−) s V^+ ds.   (11)

We realize that the first part of this integral describes the unperturbed equilibrium state and can be dropped; together with ∫ U_j^− s U_j^+ ds = 0, which holds because h_j is a transfer function, we get:

Δε_j = (μ/2πj) ∫ H_j^− |D|² s Σ_k ε_k H_k^+ / (1 + H_0^+ P_0^+ + e^{−sT}) ds.   (12)

Furthermore we assume orthogonality (see also below),

(μ/2πj) ∫ H_j^− |D|² s H_k^+ / (1 + H_0^+ P_0^+ + e^{−sT}) ds = 0  for j ≠ k,   (13)

and get accordingly:

Δε_j = ε_j (μ/2πj) ∫ H_j^− |D|² s H_j^+ / (1 + H_0^+ P_0^+ + e^{−sT}) ds   (14)

     = ε_j (μ/2πj) ∫ [H_j^− H_j^+ |D|²] · [s / (1 + H_0^+ P_0^+ + e^{−sT})] ds.   (15)

We now apply Plancherel's theorem [5] in order to transfer the integral into the time domain and prove that it is negative. This assures stability and, hence, convergence, because we know that μ is small, preventing oscillatory behaviour. We have:

Δε_j = μ ε_j ∫_{−∞}^{∞} a_j(t) ċ(t) dt,   (16)

where we call a_j the autocorrelation function of u_j, a_j = u_j ⋆ u_j, which is the inverse transform of H_j^− H_j^+ |D|², and ċ is the temporal derivative of the impulse response of the inverse transform of the remaining second term in Eq. 15. Since we know that h_0 P_0 must carry a low-pass component, we can in general state that this fraction represents a (non-standard) high-pass. Its derivative has a very high negative value at t = 0 (ideally −∞) and vanishes soon thereafter. The autocorrelation a_j is positive around t = 0. Thus, the integral in question will remain negative for almost all realistic choices of h_k. As an important special case we find that this holds especially if we assume a delta-pulse disturbance at t = 0, corresponding to D = 1.
2.2 Construction of solutions.

Here, we use a set of well-known functions (band-pass filters) and show explicitly that a solution which approximates the inverse controller (Eq. 5) can be constructed for N = 1, and discuss how the approximation is improved for higher values of N.

The transfer functions of the band-pass filters h_k which we use are specified in the Laplace domain as h_k(s) = 1/[(s − p_k)(s − p_k*)], where p_k* represents the complex conjugate of the pole p_k. Real and imaginary parts of the poles are given by Re(p_k) = −π f_k / Q_k and Im(p_k) = ±√[(2π f_k)² − Re(p_k)²], where f_k is the frequency of the oscillation. The damping characteristic of the resonator is reflected by Q_k. Concerning convergence, one finds in Eq. 16 that with such a set of functions the autocorrelation a_j is positive at t = 0 and converges fast to zero for t → ∞. Band-pass functions are not orthogonal to each other, but numerically we found that they can be treated as approximately orthogonal. In fact, only a small drift of the weights is observed, which could be compensated if required. In practice, however, this becomes unimportant, as discussed below. The use of resonators is also motivated by biology [6], and band-pass filtered response characteristics, which are prevalent in neuronal systems, have also been used in other neuro-theoretical approaches [7].
>
5
1
.
/.
.
% =
> 2 46587
=
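As a quick sanity check of this parameterisation, the sketch below (our own; the sign convention $H(s) = 1/((s-p)(s-p^{*}))$ and the test values of $f$, $Q$ and $s$ are assumptions) confirms numerically that the damped sine $e^{at}\sin(bt)/b$ is the impulse response belonging to such a complex-conjugate pole pair:

```python
import numpy as np

f, Q = 1.0, 0.8                              # illustrative resonator parameters
a = -np.pi * f / Q                           # real part of the pole (damping)
b = np.sqrt((2 * np.pi * f) ** 2 - a ** 2)   # imaginary part (oscillation)
p = complex(a, b)

def H(s):
    """Band-pass transfer function with complex-conjugate pole pair p, p*."""
    return 1.0 / ((s - p) * (s - p.conjugate()))

# Partial fractions give the impulse response h(t) = exp(a t) sin(b t) / b.
dt = 1e-4
t = np.arange(0.0, 10.0, dt)
h = np.exp(a * t) * np.sin(b * t) / b

# The numerical Laplace transform of h at a test point must match H(s).
s0 = 2.0
H_num = np.sum(h * np.exp(-s0 * t)) * dt
assert abs(H_num - H(s0)) < 1e-3
```

Because the pole has negative real part, the impulse response decays, which is what makes the numerical transform converge on a finite time window.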
We return to Eq. 5. Let us first assume that the environment does not filter the disturbance, thus $P_0 = 1$. Then, for the case $n = 1$, an approximative solution of Eq. 5 can be easily constructed by developing the resonator transfer function into a Taylor series and obtaining the parameters through comparing coefficients in Eq. (17). Accordingly, we get the parameters of $H_1$ by matching terms up to second order.

For un-filtered throughput ($P_0 = 1$), this result shows that for every reflex transfer function there exists a resonator $H_1$ with a weight $\rho_1$ which approximates the inverse controller to the second order. The approximation continues to improve for higher orders of $n$, which we pursued up to $n = 2$ (fourth-order Taylor), but the set of equations becomes rather cluttered. In general, $P_0$ represents an environmental transfer function which is passive and "well-behaved". Thus, in most cases it can be represented by just another passive low- or band-pass filter (sum of complex-conjugated poles). Under this assumption a solution can also be constructed for the complete term including $P_0$ by a combination of resonators.
As mentioned above, constructing solutions becomes impractical for larger $n$, and it would require knowing $H_0$ and $P_0$ a priori. Note that if one already knew $H_0$, the goal of designing the inverse controller would already be reached, and learning would be obsolete. Thus, normally a set of resonators must be predefined in a somewhat arbitrary way and their weights shall be learned. The uniqueness of the solution assured by orthogonality becomes secondary in practise, because, without prior knowledge of $H_0$ and $P_0$, one has to use an over-complete set of filters in order to make sure that a solution can be found. In practise, this means that a large enough set of filters must be used, which normally leads to a manifold of solutions. Now the question obviously arises whether satisfactory solutions exist under these relaxed conditions and whether they remain stable.
Figure 2: Robot experiment: (a) The robot has 2 output neurons for speed and steering angle. The retraction mechanism is implemented by 3 resonators which connect the collision sensors (CS) to the speed and steering-angle neurons with fixed weights (reflex). Each range finder (RF) is fed into a filter bank of 10 resonators with logarithmically spaced frequencies, where its output converges with variable weights on both output neurons. A more detailed technical description together with a set of movies can be found at: http://www.cn.stir.ac.uk/predictor/real (movie 1). (b,d) Parts of the motion trajectory for one trial in an arena with three obstacles (shaded). Circles denote collisions. (c) Development of the weights from the left range finder sensor to one of the output neurons.
3 Implementation in a robot experiment.
In this section, we show a robot experiment where we apply a conventional filter bank approach, using rather few filters with constant $Q$ and logarithmically spaced frequencies, and demonstrate that the algorithm still produces the desired behaviour.
The task in this robot experiment is collision avoidance [8]. The built-in reflex-behaviour
is a retraction reaction after the robot has hit an obstacle which represents the inner loop
feedback mechanism.1 The robot has three collision sensors (CS) and two range finders (RF), which produce the predictive signals. When driving around there is always a causal
relation between the earlier occurring range finder signals and the later occurring collision,
which drives the learning process. Fig. 2b shows that early during learning many collisions
(circles) occur. After a collision a fast reflex-like retraction&turning reaction is elicited. On
the other hand, the robot movement trace is now free of collisions after successful learning
of the temporal correlation between range finder and collision signals (Fig. 2d) and the
1
In fact it is also possible to construct an attraction-case if the reflex performs an initial attractionreaction.
trajectory is maximally smooth. The robot always found a stable solution, but those were, as expected, not unique. This is partly due to the different initial conditions but also due
to the over-complete set of . Possible solutions, which we have observed, are that the
robot after learning simply stops in front of an obstacle and that it slightly oscillates back
and forth. The more common solution of the robot is that it continuously drives around
and uses mainly his steering to avoid obstacles. Note that this rather complex behaviour is
established by only two neurons. Fig. 2c shows that the weight change slows down after the
last collision has happened (dotted line in c). The still existing smaller weight change is due
to the fact that after functional silencing of the reflex input (no more collisions), temporally correlated inputs still exist, namely between the left and right range finders. Thus, learning is now
governed by these correlations instead and is driven by the earliest response of one of them
which finally leads to the desired stabilisation.
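The closed-loop learning dynamics described above can be illustrated with a toy simulation. This is our own sketch, not the robot code: a single predictive input precedes a "reflex" input, the weight changes in proportion to the filtered input times the derivative of the output (the ISO rule discussed with Fig. 3), and the avoidance threshold and pulse shapes are simplifying assumptions. The weight grows while the reflex keeps firing and freezes once the learned response silences it:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
mu = 0.05                  # small learning rate, as the stability proof assumes
rho0, rho1 = 1.0, 0.0      # fixed reflex weight, learned predictive weight
theta = 0.5                # hypothetical avoidance threshold (our assumption)

def trace(t0, tau=1.0):
    """Low-pass trace of an event at time t0 (a crude stand-in for the
    band-pass resonator responses used in the paper)."""
    return np.where(t >= t0, (t - t0) * np.exp(-(t - t0) / tau), 0.0)

history = []
for trial in range(250):
    u1 = trace(1.0)                                          # early predictor
    u0 = trace(3.0) if rho1 <= theta else np.zeros_like(t)   # later reflex
    v = rho0 * u0 + rho1 * u1                                # output neuron
    rho1 += mu * np.sum(u1 * np.gradient(v, t)) * dt         # ISO-style rule
    history.append(rho1)
history = np.array(history)

assert history[0] < theta                        # learning starts from scratch
assert history[-1] > theta                       # reflex eventually avoided
assert abs(history[-1] - history[-20]) < 1e-3    # weight frozen afterwards
```

Once the reflex input is silent, the remaining weight change is proportional to the integral of $u_1 \dot{u}_1$, which vanishes for a pulse that starts and ends at zero; this is the "learning stops when the reflex is avoided" property discussed in Sec. 4.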
4 Discussion
Replacing a feedback loop with its equivalent feed-forward controller is of central relevance
for efficient control particularly in slow feedback systems, where long loop-delays exist. So
far, feed-forward control is in general model-based and, thus, often not robust [9]. On the
other hand, it has been suggested earlier by studies of limb movement control that temporal
sequence learning could be used to solve the inverse controller problem [1].
Figure 3: Differences between the Sutton and Barto models (a,c) and ISO-learning (b) in the case of $n = 1$. (a) shows the drive-reinforcement model by Sutton and Barto [4] and (c) the temporal difference (TD) learning by Sutton and Barto [10]. Note that the obsolete summation point in (a) allows adding the reward signal in (c). (b) shows ISO-learning as in Fig. 1 with $n = 1$. Additionally, the circuit for the weight change (learning) is shown. The input filters in the Sutton and Barto models (a,c) are first-order low-pass filters (eligibility trace). $\oplus$ and $\otimes$ represent addition and multiplication, respectively; $d/dt$ is the derivative.
Widely used models of derivative-based temporal sequence learning are those by Sutton and Barto, which aim to model experiments of classical conditioning [4, 11, 10]. Fig. 3 shows their models in comparison to ISO-learning. All models strengthen the weight if the predictive input precedes the reflex input (or the reward, respectively). All models use filters at the inputs. However, in the Sutton and Barto models these filtered input signals are only used as an input for the learning circuit (Fig. 3a,c), whereas the output is a superposition of the original input signals. Learning is therefore achieved by correlating the filtered input with the derivative of the (un-filtered) output signal. Thus, filtered signals are correlated with un-filtered signals. In contrast to the Sutton and Barto models, our model is completely isotropic and uses the filtered signals for both the learning circuit and the output, since the filtered signals are also responsible for an appropriate behaviour of the organism. These different wirings reflect the different learning goals: in our model the weight stabilises when the reflex input has become silent (the reflex has been avoided). In the Sutton and Barto models the weight stabilises if the output has reached a specific condition. In the drive-reinforcement model this is the case if the output signal caused by the predictive input has a strength similar to the output triggered by the reflex input. This reflects the Rescorla/Wagner rule [12]. In the case of TD-learning, learning stops if the prediction error between reward and output is zero, thus if the predictor optimally predicts the reward. In general our model is closely related to any correlation-based sequence learning [4, 13] and is not related to any form of reinforcement learning [10, 14], as it does not need a special reward or punishment signal.
The current study demonstrates analytically the convergence of ISO-learning in a closed
loop paradigm in conjunction with some rather general assumptions concerning the structure of such a system. Thus, this type of learning is able to generate a model-free inverse controller of a reflex, which improves the performance of conventional feedback control, while the feedback still serves as a fall-back. Apart from biological implications
this promises a broad field of applications in physics and engineering.
References
[1] Daniel M. Wolpert and Zoubin Ghahramani. Computational principles of movement neuroscience. Nature Neuroscience supplement, 3:1212-1217, 2000.
[2] P. Read Montague, Peter Dayan, and Terrence J. Sejnowski. Bee foraging in uncertain environments using predictive hebbian learning. Nature, 377:725-728, 1995.
[3] W.E. Sollecito and S.G. Reque. Stability. In Jerry Fitzgerald, editor, Fundamentals of System Analysis, chapter 21. Wiley, New York, 1981.
[4] R.S. Sutton and A.G. Barto. Toward a modern theory of adaptive networks: expectation and prediction. Psychological Review, 88:135-170, 1981.
[5] John L. Stewart. Fundamentals of Signal Theory. McGraw-Hill, New York, 1960.
[6] Gordon M. Shepherd, editor. The Synaptic Organization of the Brain. Oxford University Press, New York, 1990.
[7] Steven Grossberg. A spectral network model of pitch perception. J. Acoust. Soc. Am., 98(2):862-879, 1995.
[8] P.F.M.J. Verschure and T. Voegtlin. A bottom-up approach towards the acquisition, retention, and expression of sequential representations: Distributed adaptive control III. Neural Networks, 11:1531-1549, 1998.
[9] William J. Palm. Modeling, Analysis and Control of Dynamic Systems. Wiley, New York, 2000.
[10] R.S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
[11] R.S. Sutton and A.G. Barto. Simulation of anticipatory responses in classical conditioning by a neuron-like adaptive element. Behav. Brain Res., 4(3):221-235, 1982.
[12] R.A. Rescorla and A.R. Wagner. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A.H. Black and W.F. Prokasy, editors, Classical Conditioning II: Current Research and Theory, pages 64-99. Appleton-Century-Crofts, New York, 1972.
[13] A. Harry Klopf. A drive-reinforcement model of single neuron function. In John S. Denker, editor, Neural Networks for Computing, volume 151 of AIP Conference Proceedings, New York, 1986. American Institute of Physics.
[14] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8:279-292, 1992.
Expected and Unexpected Uncertainty:
ACh and NE in the Neocortex
Angela Yu
Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London WC1N 3AR, United Kingdom.
[email protected]
[email protected]
Abstract
Inference and adaptation in noisy and changing, rich sensory environments are rife with a variety of specific sorts of variability. Experimental
and theoretical studies suggest that these different forms of variability
play different behavioral, neural and computational roles, and may be
reported by different (notably neuromodulatory) systems. Here, we refine our previous theory of acetylcholine?s role in cortical inference in
the (oxymoronic) terms of expected uncertainty, and advocate a theory
for norepinephrine in terms of unexpected uncertainty. We suggest that
norepinephrine reports the radical divergence of bottom-up inputs from
prevailing top-down interpretations, to influence inference and plasticity.
We illustrate this proposal using an adaptive factor analysis model.
1 Introduction
Animals negotiating rich environments are faced with a set of hugely complex inference
and learning problems, involving many forms of variability. They can be unsure which context presently pertains, cues can be systematically more or less reliable, and relationships
amongst cues can change smoothly or abruptly. Computationally, such different forms of
variability need to be represented, manipulated, and wielded in different ways. There is
ample behavioral evidence that can be interpreted as suggesting that animals do make and
respect these distinctions,5 and there is even some anatomical, physiological and pharmacological evidence as to which neural systems are engaged.29
Perhaps best delineated is the involvement of neocortical acetylcholine (ACh) in uncertainty. Following seminal earlier work,11, 14 we suggested6, 35 that ACh reports on the uncertainty associated with a top-down model, and thus controls the integration of bottom-up
and top-down information during inference. A corollary is that ACh should also control the
way that bottom-up information influences the learning of top-down models. Intuitively,
this cholinergic signal reports on expected uncertainty, such that ACh levels are high when
top-down information is not expected to support good predictions about bottom-up data
and should be modified according to the incoming data.
We6, 35 formally demonstrated the inference aspects of this idea using a hidden Markov
model (HMM), in which top-down uncertainty derives from slow contextual changes. In
extending this quantitative model to learning, we found, surprisingly, that it violated our
qualitative theory of ACh. That is, in the HMM model, greater uncertainty in the topdown model (ie a lower posterior responsibility for the predominant context), reported by
higher ACh levels, leads to comparatively slower learning about that context. By contrast,
we had expected that higher ACh should lead to faster learning, since it would indicate
that the top-down model is potentially inadequate. In resolving this conflict, we realized
that, at least in this particular HMM framework, we had incorrectly fused different sorts
of uncertainty. As a further consequence, by thinking more generally about contextual
change, we also realized the formal need for a signal reporting on unexpected uncertainty,
that is, on strong violation of top-down predictions that are expected to be correct. There is
suggestive empirical evidence that one of many roles for neocortical norepinephrine (NE)
is reporting this;29 it is also consonant with various existing theories associated with NE.
In sum, we suggest that expected and unexpected uncertainty play complementary but distinct roles in representational inference and learning. Both forms of uncertainties are postulated to decrease the influence of top-down information on representational inference and
increase the rate of learning. However, unexpected uncertainty rises whenever there is a
global change in the world, such as a context change, while expected uncertainty is a more
subtle quantity dependent on internal representations of properties of the world. Here, we
start by outlining some of the evidence for the individual and joint roles of ACh and NE in
uncertainty. In section 3, we describe a simple, adaptive, factor analysis model that clarifies the uncertainty notions. Differential effects induced by disrupting ACh and NE are
discussed in Section 4, accompanied by a comparison to impairments found in animals.
2 ACh and NE
ACh and NE are delivered to the cortex from a small number of subcortical nuclei: NE
originates solely in the locus coeruleus, while the primary sources of ACh are nuclei in the
basal forebrain (nucleus basalis magnocellularis, mainly targeting the neocortex, and medial septum, mainly targeting the hippocampus). Cortical innervations of these modulators
are extensive, targeting all cortical regions and layers.9, 30
As is typical for neuromodulators, physiological studies indicate that the effects of direct
application of ACh or NE are confusingly diverse. Within a small cortical area, iontophoresis or perfusion of ACh or NE (or their agonists) may cause synatic facilitation or suppression, depending on the cell and depending on whether the firing is spontaneous or stimulusevoked; it may also induce direct hyperpolarization or depolarization. 9, 10, 17 Direct application of either neuromodulator or its agonist, paired with sensory stimulation, results in
a general enhancement of stimulus-evoked responses, as well as an increased propensity
for experience-dependent reorganization of cortical maps (in contrast, depletion of either
substance attenuates cortical plasticity).9 More interestingly, ACh and NE both seem to
selectively suppress intracortical and feedback synaptic transmission while enhancing thalamocortical processing.8, 12, 13, 15, 17, 18, 20 Based on these roughly similar anatomical and
physiological properties, cholinergic and noradrenergic systems have been attributed correspondingly similar general computational roles, such as modulating the signal-to-noise
ratio in sensory processing.9, 10
However, the effects of ACh and NE depletion in animal behavioral studies, as well as
microdialysis of the neuromodulators during different conditions, point to more specific
and distinct computational roles for ACh and NE. In our previous work on ACh, 6, 35 we
suggested that it reports on expected uncertainty, ie uncertainty associated with estimated
parameters in an internal model of the external world. This is consistent with results from
animal conditioning experiments, in which animals learn faster about stimuli with variable
predictive consequences.24 A series of lesion studies indicates cortical ACh innervation is
essential for this sort of faster learning.14
In contrast to ACh, a large body of experimental data associates NE with the specific ability
to learn new underlying relationships in the world, especially those contradicting existent
knowledge. Locus coeruleus (LC) neurons fire phasically and robustly to novel objects
encountered during free exploration,34 novel sensory stimuli,25, 28 unpredicted changes in
stimulus properties such as presentation time,2 introduction of association of a stimulus
with reinforcement,19, 28, 32 and extinction or reversal of that association.19, 28 Moreover, this
activation of NE neurons habituates rapidly when there is no predictive value or contingent
response associated with the stimuli, and also disappears when conditioning is expressed
at a behavioral level.28
There are few sophisticated behavioral studies into the interactions between ACh and NE.
However, it is known that NE and ACh both rise when contingencies in an operant conditioning task are changed, but while NE level rapidly habituates, ACh level is elevated in a
more sustained fashion.3, 28 In a task designed to tax sustained attention, lesions of the basal
forebrain cholinergic neurons induced persistent impairments, 22 while deafferentation of
cortical adrenergic inputs did not result in significant impairment compared to controls. 21
One of the best worked-out computational theories of the drive and function of NE is that
of Aston-Jones, Cohen and their colleagues.1, 33 They studied NE in the context of vigilance and attention in well-learned tasks, showing how NE neurons are driven by selective
task-relevant stimuli, and that, influenced by increased electrotonic coupling in the locus
coeruleus, a transition from a high tonic, low phasic activity mode to a low tonic, high
phasic activity mode is associated with increased behavioral performance through NE?s
suggested effect of increasing the signal to noise ratio of target cortical cells. This is a
very impressive theory, with neural and computational support. However, its focus on
well-learned tasks, means that other drives of NE activity (particularly novelty) and effects
(particularly plasticity) are downplayed, and a link to ACh is only a secondary concern. We
focus on these latter aspects, proposing that NE reports unexpected uncertainty, ie uncertainty induced by a mismatch between prediction and observation, such as when there is a
dramatic change in the external environment. We do not claim that this is the only role of
NE; but do see it as an important complement to other suggestions.
3 Inference and Learning in Adaptive Factor Analysis
Our previous model of the role of ACh in cortical inference involved a generative scheme with a discrete contextual variable $z_t$, evolving over time with slow Markov dynamics $P(z_t \mid z_{t-1})$, a discrete representational variable $y_t$ that was stochastically determined by $z_t$, and a noisy observed variable $x_t \sim \mathcal{N}(y_t, \sigma^2)$ (normal distribution). The inferential task was to determine $P(y_t \mid x_1 \ldots x_t)$; the HMM structure makes this interesting because top-down ($z_t$) and bottom-up ($x_t$) information have to be integrated. Top-down information can be uncertain, in which case mainly bottom-up information should be used to infer $y_t$. We suggested that ACh reports the uncertainty in the top-down context, namely $1 - \hat{P}(\tilde{z}_t \mid x_1 \ldots x_t)$, where $\tilde{z}_t$ is the most likely value of the context and the hat indicates the use of an approximation. ACh thereby reports expected uncertainty, as in the qualitative picture above, and appropriately controls cortical inference. However, if one also considers learning, for instance if the mapping from $z_t$ to $y_t$ is unknown, then the less certain the animal is that $\tilde{z}_t$ is the true contextual state, the less learning is accorded to $\tilde{z}_t$. This is exactly the opposite of what we should expect according to our empirically-supported arguments above.
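The contextual-uncertainty signal can be made concrete with a miniature version of such an HMM. The sketch below is ours: two sticky contexts, each favouring a different observation mean, with the proposed ACh-like level taken as one minus the posterior probability of the most likely context; all numbers are illustrative, not those of the original model.

```python
import numpy as np

# Two slowly switching contexts, each favouring a different observation mean.
T = np.array([[0.98, 0.02],
              [0.02, 0.98]])        # sticky Markov dynamics P(z_t | z_{t-1})
means = np.array([-1.0, 1.0])       # context-dependent observation means
sigma = 1.0

def lik(x):
    """Unnormalised Gaussian likelihood of x under each context."""
    return np.exp(-0.5 * ((x - means) / sigma) ** 2)

# Noise-free observations: 100 steps in context 0, then 100 in context 1.
xs = np.array([-1.0] * 100 + [1.0] * 100)

post = np.array([0.5, 0.5])
ach = []
for x in xs:
    post = lik(x) * (T.T @ post)    # forward algorithm: predict, then weigh
    post /= post.sum()
    ach.append(1.0 - post.max())    # expected (contextual) uncertainty
ach = np.array(ach)

assert ach[99] < 0.01               # confident just before the switch
assert ach[100:105].max() > 0.3     # uncertainty spikes at the context switch
assert ach[110] < 0.01              # and subsides once the new context settles
```

The transient spike at the switch followed by rapid re-convergence illustrates why a state-uncertainty signal of this kind, used to gate learning, would slow rather than speed learning about the newly dominant context.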
In fact, this way of viewing ACh is also not consistent with a more systematic reading5, 16 of Holland & Gallagher's cholinergic results,14 which imply that ACh is better seen as a report of uncertainty in parameters rather than uncertainty in states. In order to model this more fitting picture of ACh, we need an explicit model of parameter uncertainty. We constrain the problem to a single, implicit, context $z = 1$. It is easiest (and perhaps more realistic) to develop the new picture in a continuous space, in which the parameter governing the relationship between $y_t$ and $x_t$ is $w$ (scalar for convenience), which is imperfectly known (hence the parameter uncertainty, reported by ACh), and indeed can change. Again, $y_t$ stochastically specifies $x_t$ through a normal distribution.

Specifying how $w$ can change over time requires making an assumption about the nature of the context. In particular, novelty plays a critical role in model evolution. In general,
Figure 1: Adaptive factor analysis model. (a) The two-layer adaptive factor analysis model, as specified by Eqs. 1 & 2. (b) A sample sequence of data points generated by the model. Four major shifts in $w$ occurred (including the initial $w_0$); their projections into $x$ space are marked by large circles, and the small points denote the $y_t$ projected into $x$ space, which fall along the line $w y$. (c) The same sequence viewed in $y$ space, showing the major shifts in $w$, the $y_t$, and the observations $x_t$ optimally projected back into $y$ space under flat priors. (d) Scatter plot of the error in inferring $y_t$ from the top-down estimate of $w$ against the error in inferring it from the observation $x_t$ alone, for three different levels of the reported uncertainty; the dashed line denotes parity. Larger uncertainty corresponds to greater reliance on $x_t$ rather than the top-down estimate in inferring $y_t$, while an intermediate value exactly balances top-down uncertainty with bottom-up uncertainty in the inference of $y_t$.
we might expect small amounts of novelty, as models continually readjust, and we can
allow for this by modeling continual small changes in $w$. However, in order to allow for
the possibility of macroscopic changes implied by substantial novelty (as reported by NE),
which are of evident importance in many experiments, we must add a specific component
to the model. The interaction between microscopic and macroscopic novelty is essentially
the interaction between ACh and NE. In all, assume that
$$\gamma_t \sim \text{Bernoulli}(\pi), \qquad w_t = (1 - \gamma_t)\,(w_{t-1} + \epsilon_t) + \gamma_t\, \nu_t, \qquad \epsilon_t \sim \mathcal{N}(0, q^2), \;\; \nu_t \sim \mathcal{N}(0, \sigma_0^2) \qquad (1)$$

$$y_t \sim \mathcal{N}(0, 1), \qquad x_t \mid y_t, w_t \sim \mathcal{N}(w_t\, y_t, \sigma_x^2) \qquad (2)$$

with the initial value $w_0 \sim \mathcal{N}(0, \sigma_0^2)$ (see Figure 1). We will see later that the binary variable $\gamma_t$ is the key to the model of NE; it comes from an assumption that there can occasionally (with probability $\pi$) be dramatic changes in a model that force its radical revision. The drift rate $q$ is another parameter; we assume it is known and fixed. Figure 1(b) & (c) shows a sample sequence of a particular setting of the model: the output can be quite noisy, although there are clear underlying regularities in $x_t$.
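One way to draw samples from this two-layer generative scheme is sketched below. The specific variances and jump probability are our own illustrative choices, and the precise noise distributions are assumptions consistent with the surrounding text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Slow drift in w punctuated by rare macroscopic jumps (the gamma_t = 1
# events that NE is proposed to report). Parameter values are illustrative.
q, sigma_x, sigma_0, pi_jump = 0.05, 0.5, 3.0, 0.005
T = 2000

gamma = rng.random(T) < pi_jump
w = np.empty(T)
w[0] = rng.normal(0.0, sigma_0)
for t in range(1, T):
    if gamma[t]:                                  # radical revision of w
        w[t] = rng.normal(0.0, sigma_0)
    else:                                         # continual small change
        w[t] = w[t - 1] + rng.normal(0.0, q)

y = rng.normal(0.0, 1.0, T)                       # representational variable
x = w * y + rng.normal(0.0, sigma_x, T)           # noisy observation

drifts = np.abs(np.diff(w))[~gamma[1:]]           # step sizes between jumps
n_jumps = int(gamma[1:].sum())
```

Between jumps the parameter moves on the scale of $q$ per step, while a jump redraws it on the much larger scale $\sigma_0$; this separation of scales is what makes "expected" and "unexpected" change genuinely different quantities.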
At time t, consider the case that we can make the approximation p(μt | D1..Dt−1) ≈
N(μ̂t°, (σt°)²), where μ̂t° is the estimate of μt and (σt°)² is its variance (uncertainty),
which is reported by ACh. Here, the open circles indicate that this estimate is made
before Dt is observed. We first consider how the ACh term influences inference about xt;
then go on to study learning.

For inference, it can easily be shown that p(xt | D1..Dt) = N(x̂t, Σt), where

    x̂t = νt x̂t(Dt) + (1 − νt) μ̂t°,   νt = ((σt°)² + τ²) / ((σt°)² + τ² + σo²)        (3)

and x̂t(Dt) denotes the bottom-up estimate of xt based on Dt alone, whence the effect
of ACh is exactly as in our qualitative picture. The more uncertainty (ie the larger σt°),
the smaller the role of the top-down expectation μ̂t° in determining x̂t.
Examples of just such effects can be found in Figure 1(d).
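To make the balance in equation 3 concrete, here is a minimal Python sketch of the precision-weighted fusion. The function and variable names are ours, purely illustrative; var_top stands for the combined top-down variance (the posterior uncertainty in μ plus τ²).

```python
def infer_x(d_proj, mu_top, var_top, var_bottom):
    """Combine a bottom-up estimate (d_proj, variance var_bottom) and a
    top-down expectation (mu_top, variance var_top) into the posterior
    mean of x, as in equation 3. Returns the estimate and the weight nu."""
    nu = var_top / (var_top + var_bottom)   # mixing weight
    return nu * d_proj + (1.0 - nu) * mu_top, nu

# equal uncertainties -> nu = 0.5, the estimate sits halfway between the two
x_hat, nu = infer_x(d_proj=2.0, mu_top=0.0, var_top=1.0, var_bottom=1.0)
```

When var_top grows (high ACh), nu approaches 1 and the observation dominates; when the top-down estimate is trusted, nu shrinks toward 0 and inference leans on the expectation.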
For learning, start with the distribution of μt given D1..Dt and assume γt = 0. In this
case, writing p(μt | D1..Dt−1) = N(μ̂t°, (σt°)²) with (σt°)² = σt−1² + σμ², we get

    p(μt | D1..Dt) ∝ N(x̂t(Dt); μt, τ² + σo²) · N(μt; μ̂t°, (σt°)²)

with the obvious semantics for the product of two Gaussian distributions. This is almost exactly the standard form for a Kalman filter update for μ, and leads to standard results, such
as the variance of the estimate going initially like 1/t, but ultimately reaching an asymptote
which balances the rate of change from σμ² against the rate of new information from the Dt.
Importantly, in this simple model, the uncertainty in μ does not depend on the prediction
errors x̂t(Dt) − μ̂t°, but rather changes as a function only of time.
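The initial 1/t decay and the eventual asymptote are easy to check numerically. A small sketch (our own illustration; q is the drift variance σμ², r the effective observation variance τ² + σo²):

```python
def kalman_variances(q, r, var0, steps):
    """Posterior variance of mu under the standard (gamma = 0) update:
    predict (add drift q), then correct against an observation of variance r.
    Note the recursion never looks at the observed values themselves."""
    out, var = [], var0
    for _ in range(steps):
        prior = var + q                 # prediction: uncertainty grows
        var = prior * r / (prior + r)   # correction: observation shrinks it
        out.append(var)
    return out
```

With q = 0 the variance decays like 1/t; with q > 0 it settles at the fixed point v solving v = (v + q) r / (v + q + r), independently of the data.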
However, if one takes into account the possibility that γt = 1, then the posterior distribution
for μt is the two-component mixture

    p(μt | D1..Dt) = P(γt = 0 | D1..Dt) p(μt | γt = 0, D1..Dt)
                   + P(γt = 1 | D1..Dt) p(μt | γt = 1, D1..Dt)                        (4)

As t increases, the number of mixture components in the posterior distribution increases
exponentially as 2^t, since each setting of the length-t binary string γ1 γ2 ... γt is, barring
probability-zero accidents, associated with a different component in the mixture. Thus, just
as for switching state-space models,7 exact inference is impractical.
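The combinatorial explosion is easy to see by enumeration: each binary history of jumps indexes its own Gaussian component (a toy illustration, not part of the model itself):

```python
from itertools import product

def jump_histories(t):
    """All binary strings gamma_1..gamma_t; barring measure-zero accidents,
    each one labels a distinct component of the exact posterior over mu_t."""
    return list(product((0, 1), repeat=t))

components = len(jump_histories(12))   # 2**12 components after only 12 steps
```

After a few dozen observations the exact posterior is hopelessly large, which is what motivates the single-Gaussian approximation below.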
One possibility would be to use variational approximations.7, 23 From the neural perspective of the involvement of neuromodulators, we propose an approximate learning algorithm
in which signals reporting uncertainty, corresponding to our conceptual roles for ACh and
NE, control the interactions between the (approximate) distribution at t − 1,
p(μt−1 | D1..Dt−1), and the bottom-up information relayed by the new observation Dt.
To control the exponential expansion in the hidden space, we approximate the posterior
p(μt | D1..Dt) as a single Gaussian N(μ̂t, σt²): μ̂t is our best estimate of μt after
observing D1..Dt, and σt, corresponding to the ACh level, is the uncertainty in our
estimate of μt. In general, we might consider the NE level as reporting the posterior
responsibility of the γt = 1 component of the equivalent mixture of equation 4. Even more
straightforwardly, we can measure a Z-score, namely prediction error scaled by the
uncertainty in our estimates:

    zt = | x̂t(Dt) − μ̂t−1 | / ρt,   ρt² = σt−1² + σμ² + τ² + σo²,

assuming that γt = 0. Whenever zt exceeds a threshold value Φ, ie Dt is unlikely to have
come from an unmodified version of the current component, we assume γt = 1. Otherwise,
γt = 0. Now the learning problem reduces to a modified version of the Kalman filter:

    πt = σt−1² + σμ² + γt σγ²                 prediction variance about μt            (5)
    Kt = πt / (πt + τ² + σo²)                 Kalman gain                              (6)
    σt² = (1 − Kt) πt                         correction variance                     (7)
    μ̂t = μ̂t−1 + Kt ( x̂t(Dt) − μ̂t−1 )       estimated mean                          (8)

The difference from the conventional Kalman filter is the additional component of the
transition noise variance, which depends on γt: it is σγ² if γt = 1, and 0 if γt = 0.
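The update equations and the Z-score test fit in a few lines of Python. The sketch below is our own rendering with illustrative parameter names (q_drift for the drift variance, q_jump for the jump variance, r for the effective observation variance, phi for the threshold); the observation is summarized by its projection into x space:

```python
import math

def approx_learn(obs, q_drift, q_jump, r, phi, mu0=0.0, var0=1.0):
    """Modified Kalman filter with ACh/NE-style uncertainty signals.
    Returns per-step (mean estimate, ACh = posterior std, NE = Z-score)."""
    mu, var = mu0, var0
    means, ach, ne = [], [], []
    for d in obs:
        z = abs(d - mu) / math.sqrt(var + q_drift + r)   # NE: scaled error
        gamma = 1 if z > phi else 0                      # jump detected?
        prior = var + q_drift + gamma * q_jump           # prediction variance
        gain = prior / (prior + r)                       # Kalman gain
        var = (1.0 - gain) * prior                       # correction variance
        mu = mu + gain * (d - mu)                        # estimated mean
        means.append(mu); ach.append(math.sqrt(var)); ne.append(z)
    return means, ach, ne
```

Feeding a sequence whose mean jumps abruptly, the filter re-locks within a step or two of the jump, because the detected jump inflates the prediction variance and hence the gain.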
Closer examination indicates that the ACh (σt) and NE (zt) signals have the desired
semantics. In the learning algorithm, large uncertainty about the mean estimate, μ̂t,
results in a large Kalman gain, Kt, which causes a large shift in μ̂t. Large σt also
weakens the influence of top-down information in inference, as in equation 3. High NE
levels also lead to faster learning: a large zt means γt = 1, which causes πt =
σt−1² + σμ² + σγ² (rather than the much smaller value had γt been 0), ultimately
resulting in a large Kalman gain and thus fast shifting of μ̂t. High NE levels also
enhance the dominance of bottom-up information in inference via their interactions with
ACh: a large πt promotes a large σt. Note that this system predicts interesting
reciprocal relationships between ACh and NE: higher ACh leads to smaller normalized prediction errors and therefore less active NE signalling, whereas greater NE would generally
increase estimator uncertainty and thus the ACh level.

Figure 2(a) shows an example sequence of μ1, μ2, ... generated from the model (same parameters as in Figure 1), and the estimated means using our approximate learning algorithm. The learning algorithm is clearly able to adjust to major changes in μ, although
Figure 2: Approximate learning algorithm. (a) Circles: Dt projected into x space; solid:
actual μt; dashed: estimated means μ̂t. General patterns of μt are captured by μ̂t,
though details may differ. (b) Upper trace: ACh (σt); lower trace: NE (zt), with the
threshold Φ. The ACh level rises whenever γt is detected to be 1 (the NE level exceeds Φ)
and then smoothly falls. The NE level is a constant monitor of prediction error. (c) Mean
summed square error over sequences, Σt (μ̂t − μt)², as a function of Φ. Error bars show
standard errors of the means over trials. The mean square error for the optimal Φ is
compared with the exact learning error (lower line). Model parameters were the same as
in Figure 1.
more subtle changes in μ can be missed, such as the third large shift in μ. Figure 2(b)
shows that higher ACh (σt) and NE (zt) levels both correspond to fast learning, ie fast shifting
of μ̂. However, whereas NE is a constant monitor of prediction errors and fluctuates accordingly with every data point, ACh falls smoothly and predictably, and only depends on
the observations when global changes in the environment have been detected. Figure 2(c)
shows a ladle-shaped dependence of the estimation error, |μ̂t − μt|, on the threshold value Φ. For
the particular setting of model parameters used here, learning is optimal for an
intermediate value of Φ.
4 Differential Effects of Disrupting ACh and NE Signalling
The different roles of the NE (zt) and ACh (σt) signals can be teased apart by disrupting each
and observing the subsequent effects on learning in our model. We will examine several
different manipulations of σt and zt that disrupt normal learning, and relate the results to
impairments observed in experimental manipulations of ACh or NE levels in animals. Of
course, the complete experimental circumstances are far more complicated; we consider
the general nature of the effects.

First, we simulate depletion of cortical NE by forcing γt = 0 throughout. An example is shown in
Figure 3(a). By ruling out the possibility of γt = 1, the system is unable to cope with
abrupt, global changes in the world, ie when μ shifts. The mean error over trials (same
setting as in Figure 2(c)) without NE is more than an order of magnitude larger than
for full approximate learning and for exact learning. This is consistent with the large
errors of similar magnitude in Figure 2(c) for very large Φ, which effectively blocks the
NE system from reporting global changes. However, as long as the underlying parameters
remain the same, ie μ does not change greatly, the inference process functions normally, as
we can see in the first steps in Figure 3(a). These results are consistent with experimental observations: NE-lesioned animals are impaired in learning changes in reinforcement
contingencies,26, 28 but have little difficulty performing previously learned discrimination tasks.21
We can also simulate depletion of cortical ACh by setting σt to a small constant value.
Figure 3(b) shows that severe damage is caused to the learning algorithm, but the symptoms are distinct from NE depletion. A permanently small σt corresponds to over-confidence
in estimates of μ, thus making adaptation of that estimate slow, similar to NE depletion.
However, because the NE system is still intact, the system is able to detect when Dt dramatically differs from the prediction (which is often, since μ̂ is slow to adapt and leaves little
room for variance), and thus to base inference of xt directly on the bottom-up information
Dt. Thus, inference is less impaired than learning, which has also been observed in
Figure 3: Disrupting NE and ACh signals. (a) NE signal set to 0. (b) ACh signal set to a
small constant value. Circles: Dt projected into x space; solid: actual μt; dashed:
estimated μ̂t. Learning of μt is poor in both manipulations, but inference under ACh
depletion is less impaired.
ACh-lesioned animals.31 Moreover, the system exhibits a peculiar hesitancy in inference,
ie it constantly switches back and forth between relying on the top-down estimate of xt,
based on μ̂t, and the bottom-up estimate, based on Dt. This tendency is particularly
severe when the new μ is similar to the previous one, which can be thought of as a form of
interference. Interestingly, hippocampal cholinergic deafferentation in animals also brings
about a stronger susceptibility to interference compared with controls.10
Saturation of ACh and NE is also easy to model, by setting σt and zt very high all the
time. The effects of these two manipulations are similar: both cause the estimation of μ
and the inference of xt to rely strongly on the observation Dt (data not shown). The performance
decrements in the estimation of μ and inference about xt are functions of the output
noise, σo², in our model, and do not worsen when there are global changes in contingencies. Unfortunately, directly relevant experimental data are scarce. Administration of
cholinergic agonists in the cortex has failed to induce impairments in tasks with changing
contingencies, consistent with our predictions. However, to our knowledge, cholinergic
and noradrenergic agonists have not yet been administered in combination with systematic
manipulation of variability in the predictive consequences of stimuli, and so the validity of
our predictions remains to be tested.
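These manipulations can be reproduced in a toy version of the modified Kalman filter. The sketch below is our own illustration (all parameter values arbitrary), clamping either the jump decision (NE lesion) or the posterior variance (ACh lesion):

```python
import math

def run(obs, q_drift=1e-4, q_jump=100.0, r=1.0, phi=3.0,
        ne_lesion=False, ach_clamp=None):
    """ne_lesion=True forces gamma = 0 (no jump detection);
    ach_clamp fixes the posterior variance at a small value,
    mimicking over-confident (ACh-depleted) top-down estimates."""
    mu, var = 0.0, 1.0
    for d in obs:
        z = abs(d - mu) / math.sqrt(var + q_drift + r)
        gamma = 0 if ne_lesion else (1 if z > phi else 0)
        prior = var + q_drift + gamma * q_jump
        gain = prior / (prior + r)
        var = (1.0 - gain) * prior
        if ach_clamp is not None:
            var = ach_clamp
        mu += gain * (d - mu)
    return mu

obs = [0.0] * 30 + [10.0] * 30          # one macroscopic change in mu
intact, no_ne = run(obs), run(obs, ne_lesion=True)
no_ach = run(obs, ach_clamp=1e-4)       # intact NE still rescues the jump
```

In this toy version the NE-lesioned filter lags far behind after the change, while the ACh-clamped filter, though over-confident, is rescued by the NE mechanism when the prediction error becomes huge, mirroring the dissociation described above.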
5 Discussion
We have suggested that ACh and NE report expected and unexpected uncertainty in representational learning and inference. As such, high levels of ACh and NE should both
correspond to faster learning about the environment and enhancement of bottom-up processing in inference. However, whereas NE reports on dramatic changes, ACh has the
subtler role of reporting on uncertainties in internal estimates.
We formalized these ideas in an adaptive factor analysis model. The model is adaptive in
that the mean of the hidden variable is allowed to alter greatly from time to time, capturing
the idea of a generally stable context which occasionally undergoes large changes, leading
to substantial novelty in inputs. As exact learning is intractable, we proposed an approximate learning algorithm in which the roles for ACh and NE are clear, and demonstrated
that it performs learning and inference competently. Moreover, by disrupting one or both
of ACh and NE signalling systems, we showed that the two systems have interacting but
distinct patterns of malfunctioning that qualitatively resemble experimental results in animal studies. There is no single collection of definitive experimental studies, and teasing
apart the effects of NE and ACh is tricky, since they appear to share many properties. Our
model helps understand why, and should also help with the design of experiments to clarify
the relationship.
Of course, the adaptive factor analysis model is overly simple in many ways. In particular,
it only considers one particular context; and so refers all the uncertainty to the parameters
of that context. This is exactly the complement of our previous model, 6, 35 which referred
all the uncertainty to the choice of context rather than the parameters within each context.
The main conceptual difference is that the idea that ACh reports on the latter form of contextual uncertainty sits ill with the data on how uncertainty boosts learning; this fits better
within the present model. Given multiple contexts, which could formally be handled within
the framework of a mixture model, the tricky issue is to decide whether the parameters of
the current context have changed, or a new (or pre-existing) context has taken over. Exploring this is important work for the future. More generally, a thoroughly hierarchical and
non-linear model is clearly required, at a minimum, as a way of addressing some of the
complexities of cortical inference.
Acknowledgement
We are very grateful to Zoubin Ghahramani and Maneesh Sahani for helpful discussions.
Funding was from the Gatsby Charitable Foundation and the NSF.
References
[1] Aston-Jones, G, Rajkowski, J, & Cohen, J (1999) Biol Psychiatry 46:1309-1320.
[2] Carli, M, Robbins, TW, Evenden, JL, & Everitt, BJ (1983) Behav Brain Res 9:361-80.
[3] Dalley, JW et al. (2001) J Neurosci 21:4908-4914.
[4] Daw, ND, Kakade, S, & Dayan, P (2001) Neural Networks 15:603-616.
[5] Dayan, P, Kakade, S, & Montague, PR (2000) In NIPS 2000:451-457.
[6] Dayan, P & Yu, A (2002) In NIPS 2002.
[7] Ghahramani, Z & Hinton, G (2000) Neural Computation 12:831-64.
[8] Gil, Z, Conners, BW, & Amitai, Y (1997) Neuron 19:679-86.
[9] Gu, Q (2002) Neuroscience 111:815-835.
[10] Hasselmo, ME (1995) Behavioural Brain Research 67:1-27.
[11] Hasselmo, ME, Wyble, BP, & Wallenstein, GV (1996) Hippocampus 6:693-708.
[12] Hasselmo, ME & Cekic, M (1996) Behavioural Brain Research 79:153-161.
[13] Hasselmo, ME et al. (1997) J Neurophysiology 78:393-408.
[14] Holland, PC & Gallagher, M (1999) Trends in Cognitive Sciences 3:65-73.
[15] Hsieh, CY, Cruikshank, SJ, & Metherate, R (2000) Brain Research 880:51-64.
[16] Kakade, S & Dayan, P (2002) Psychological Review 109:533-544.
[17] Kimura, F, Fukuada, M, & Tsumoto, T (1999) Eur. Jour. of Neurosci. 11:3597-3609.
[18] Kobayashi, M et al. (1999) European Journal of Neuroscience 12:264-272.
[19] Mason, ST & Iversen, SD (1978) Brain Res 150:135-48.
[20] McCormick, DA (1989) Trends Neurosci 12:215-221.
[21] McGaughy, J, Sandstrom, M, et al. (1997) Behav Neurosci 111:646-52.
[22] McGaughy, J & Sarter, M (1998) Behav Neurosci 112:1519-25.
[23] Minka, TP (2001) A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT.
[24] Pearce, JM & Hall, G (1980) Psychological Review 87:532-552.
[25] Rajkowski, J, Kubiak, P, & Aston-Jones, G (1994) Brain Res Bull 35:607-16.
[26] Robbins, TW (1984) Psychological Medicine 14:13-21.
[27] Robbins, TW, Everitt, BJ, & Cole, BJ (1985) Physiological Psychology 13:127-150.
[28] Sara, SJ, Vankov, A, & Herve, A (1994) Brain Res Bull 35:457-65.
[29] Sara, SJ (1998) Comptes Rendus de l'Academie des Sciences Serie III 321:193-198.
[30] Sarter, M & Bruno, JP (1997) Brain Research Reviews 23:28-46.
[31] Sarter, M, Holley, LA, & Matell, M (2000) In SFN 2000 abstracts.
[32] Sullivan, RM (2001) Integrative Physiological and Behavioral Science 36:293-307.
[33] Usher, M, et al. (1999) Science 5401:549-554.
[34] Vankov, A, Herve-Minvielle, A, & Sara, SJ (1995) Eur J Neurosci 109:903-911.
[35] Yu, A & Dayan, P (2002) Neural Networks 15:719-730.
Linear Combinations of Optic Flow Vectors for
Estimating Self-Motion - a Real-World Test of a
Neural Model
Matthias O. Franz
MPI für biologische Kybernetik
Spemannstr. 38
D-72076 Tübingen, Germany
[email protected]
Javaan S. Chahl
Center of Visual Sciences, RSBS
Australian National University
Canberra, ACT, Australia
[email protected]
Abstract
The tangential neurons in the fly brain are sensitive to the typical optic
flow patterns generated during self-motion. In this study, we examine
whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow
vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics
of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach
leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable.
1
Introduction
The tangential neurons in the fly brain are known to respond in a directionally selective
manner to wide-field motion stimuli. A detailed mapping of their local motion sensitivities
and preferred motion directions shows a striking similarity to certain self-motion-induced
flow fields (an example is shown in Fig. 1). This suggests a possible involvement of these
neurons in the extraction of self-motion parameters from the optic flow, which might be
useful, for instance, for stabilizing the fly's head during flight manoeuvres.
A recent study [2] has shown that a simplified computational model of the tangential neurons as a weighted sum of flow measurements was able to reproduce the observed response
fields. The weights were chosen according to an optimality principle which minimizes
the output variance of the model caused by noise and distance variability between different scenes. The question on how the output of such processing units could be used for
self-motion estimation was left open, however.
In this paper, we want to fill a part of this gap by presenting a classical linear estimation
approach that extends a special case of the previous model to the complete self-motion
problem. We again use linear combinations of local flow measurements but, instead of
prescribing a fixed motion axis and minimizing the output variance, we require that the
quadratic error in the estimated self-motion parameters be as small as possible. From this
Figure 1: Mercator map of the response field of the neuron VS7. The orientation of each
arrow gives the local preferred direction (LPD), and its length denotes the relative local
motion sensitivity (LMS). VS7 responds maximally to rotation around an axis at an azimuth
of about 30° and an elevation of about −15° (after [1]).
optimization principle, we derive weight sets that lead to motion sensitivities similar to
those observed in tangential neurons. In contrast to the previous model, this approach also
yields the preferred motion directions and the motion axes to which the neural models are
tuned. We subject the obtained linear estimator to a rigorous real-world test on a gantry
carrying an omnidirectional vision sensor.
2 Modeling fly tangential neurons as optimal linear estimators for self-motion

2.1 Sensor and neuron model
In order to simplify the mathematical treatment, we assume that the N elementary motion
detectors (EMDs) of our model eye are arranged on the unit sphere. The viewing direction
of a particular EMD with index i is denoted by the radial unit vector di . At each viewing
direction, we define a local two-dimensional coordinate system on the sphere consisting of
two orthogonal tangential unit vectors ui and vi (Fig. 2a). We assume that we measure
the local flow component along both unit vectors subject to additive noise. Formally, this
means that we obtain at each viewing direction two measurements xi and yi along ui and
vi , respectively, given by
    xi = pi · ui + nx,i    and    yi = pi · vi + ny,i ,                    (1)
where nx,i and ny,i denote additive noise components and pi the local optic flow vector.
When the spherical sensor translates with T while rotating with R about an axis through
the origin, the self-motion-induced image flow pi at di is [3]
    pi = −μi (T − (T · di) di) − R × di .                                  (2)

μi is the inverse distance between the origin and the object seen in direction di, the so-called "nearness". The entire collection of flow measurements xi and yi comprises the
Figure 2: a. Sensor model: At each viewing direction di , there are two measurements xi
and yi of the optic flow pi along two directions ui and vi on the unit sphere. b. Simplified
model of a tangential neuron: The optic flow and the local noise signal are projected onto
a unit vector field. The weighted projections are linearly integrated to give the estimator
output.
input to the simplified neural model of a tangential neuron which consists of a weighted
sum of all local measurements (Fig. 2b)
    θ̂ = Σi wx,i xi + Σi wy,i yi                                           (3)

with local weights wx,i and wy,i. In this model, the local motion sensitivity (LMS) is
defined as wi = ||(wx,i, wy,i)||, and the local preferred motion direction (LPD) is parallel to the
vector (1/wi)(wx,i, wy,i). The resulting LMSs and LPDs can be compared to measurements
on real tangential neurons.
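The geometry of equations (2) and (3) is compact enough to sketch directly. The code below is our own illustration using NumPy; the direction vectors, weights and nearness values in the usage are made up:

```python
import numpy as np

def flow_field(d, nearness, T, R):
    """Self-motion flow at unit viewing directions d (N x 3), eq. (2):
    p_i = -nearness_i * (T - (T.d_i) d_i) - R x d_i."""
    trans = -nearness[:, None] * (T - (d @ T)[:, None] * d)
    return trans - np.cross(R, d)

def neuron_output(p, u, v, wx, wy):
    """Model tangential neuron, eq. (3): weighted sum of the local flow
    projections onto the two tangential unit vectors u_i and v_i."""
    return float(np.sum(wx * np.einsum('ij,ij->i', p, u) +
                        wy * np.einsum('ij,ij->i', p, v)))
```

For d = (1, 0, 0) and pure rotation R = (0, 0, 1), the flow is −R × d = (0, −1, 0); a neuron whose LPD at that point lies along −y would respond maximally to this pattern.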
As our basic hypothesis, we assume that the output of such model neurons is used to estimate the self-motion of the sensor. Since the output is a scalar, we need in the simplest
case an ensemble of six neurons to encode all six rotational and translational degrees of
freedom. The local weights of each neuron are chosen to yield an optimal linear estimator
for the respective self-motion component.
2.2
Prior knowledge
An estimator for self-motion consisting of a linear combination of flow measurements necessarily has to neglect the dependence of the optic flow on the object distances. As a
consequence, the estimator output will be different from scene to scene, depending on the
current distance and noise characteristics. The best the estimator can do is to add up as
many flow measurements as possible hoping that the individual distance deviations of the
current scene from the average will cancel each other. Clearly, viewing directions with low
distance variability and small noise content should receive a higher weight in this process.
In this way, prior knowledge about the distance and noise statistics of the sensor and its
environment can improve the reliability of the estimate.
If the current nearness at viewing direction di differs from the average nearness μ̄i over
all scenes by Δμi, the measurement xi can be written as (see Eqns. (1) and (2))

    xi = ( −μ̄i ui^T , (ui × di)^T ) (T; R) + nx,i − Δμi ui^T T ,          (4)
where the last two terms vary from scene to scene, even when the sensor undergoes exactly
the same self-motion.
To simplify the notation, we stack all 2N measurements over the entire EMD array in
the vector x = (x1, y1, x2, y2, ..., xN, yN)^T. Similarly, the self-motion components along
the x-, y- and z-directions of the global coordinate system are combined in the vector
θ = (Tx, Ty, Tz, Rx, Ry, Rz)^T, the scene-dependent terms of Eq. (4) in the 2N-vector
n = (nx,1 − Δμ1 u1^T T, ny,1 − Δμ1 v1^T T, ....)^T, and the scene-independent terms in the
2N×6 matrix F = ((−μ̄1 u1^T, (u1 × d1)^T), (−μ̄1 v1^T, (v1 × d1)^T), ....)^T. The entire
ensemble of measurements over the sensor can thus be written as

    x = F θ + n.                                                           (5)
Assuming that T, nx,i, ny,i and Δμi are uncorrelated, the covariance matrix C of the scene-dependent measurement component n is given by

    Cij = Cn,ij + CΔμ,ij ui^T CT uj                                        (6)

with Cn being the covariance of the sensor noise, CΔμ the covariance of the nearness
deviations Δμ, and CT the covariance of T. These three covariance matrices, together
with the average nearness μ̄i, constitute the prior knowledge required for deriving
the optimal estimator.
2.3 Optimized neural model
Using the notation of Eq. (5), we write the linear estimator as

    θ̂ = W x.                                                              (7)

W denotes a 6×2N weight matrix where each of the six rows corresponds to one model
neuron (see Eq. (3)) tuned to a different component of θ. The optimal weight matrix is
chosen to minimize the mean square error e of the estimator, given by

    e = E(||θ̂ − θ||²) = tr[W C W^T]                                       (8)

where E denotes the expectation. We additionally impose the constraint that the estimator
should be unbiased for n = 0, i.e., θ̂ = θ. From Eqns. (5) and (7) we obtain the constraint
equation

    W F = 1_{6×6} .                                                        (9)
The solution minimizing the associated Euler-Lagrange functional (Λ is a 6×6 matrix of
Lagrange multipliers)

    J = tr[W C Wᵀ] + tr[Λᵀ(1_{6×6} − W F)]    (10)

can be found analytically and is given by

    W = (1/2) Λ Fᵀ C⁻¹    (11)

with Λ = 2(Fᵀ C⁻¹ F)⁻¹. When computed for the typical inter-scene covariances of a
flying animal, the resulting weight sets are able to reproduce the characteristics of the LMS
and LPD distribution of the tangential neurons [2]. Having shown the good correspondence
between model neurons and measurement, the question remains whether the output of such
an ensemble of neurons can be used for some real-world task. This is by no means evident
given the fact that, in contrast to most approaches in computer vision, the distance
distribution of the current scene is completely ignored by the linear estimator.
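Substituting Λ into Eq. (11) gives W = (FᵀC⁻¹F)⁻¹FᵀC⁻¹, the standard generalized least-squares estimator, which automatically satisfies the constraint WF = 1. A minimal sketch (our own illustration, not the paper's code), using a diagonal C and a 2-dimensional θ so the 2×2 inverse can be written out directly:

```python
# Sketch of the optimal linear estimator of Eq. (11):
#   W = (F^T C^-1 F)^-1 F^T C^-1  (= 1/2 * Lambda F^T C^-1).
# Toy case: diagonal C, 2-dimensional parameter vector theta.

def estimator_weights(F, c_diag):
    """F: m x 2 design matrix (nested lists); c_diag: diagonal of C."""
    m = len(F)
    # A = F^T C^-1 F (2x2 normal matrix)
    A = [[sum(F[i][r] * F[i][s] / c_diag[i] for i in range(m))
          for s in range(2)] for r in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    # W = Ainv F^T C^-1 (2 x m weight matrix, one row per model "neuron")
    return [[sum(Ainv[r][s] * F[i][s] for s in range(2)) / c_diag[i]
             for i in range(m)] for r in range(2)]

F = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = estimator_weights(F, [1.0, 1.0, 1.0])
```

The unbiasedness constraint can be checked by verifying that W F is the identity, so that W(Fθ) = θ for noise-free measurements.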
3 Experiments

3.1 Linear estimator for an office robot
As our test scenario, we consider the situation of a mobile robot in an office environment.
This scenario allows for measuring the typical motion patterns and the associated distance
statistics which otherwise would be difficult to obtain for a flying agent.
Figure 3: Distance statistics of an indoor robot (0 azimuth corresponds to forward direction): a. Average distances from the origin in the visual field (N = 26). Darker areas
represent larger distances. b. Distance standard deviation in the visual field (N = 26).
Darker areas represent stronger deviations.
The distance statistics were recorded using a rotating laser scanner. The 26 measurement
points were chosen along typical trajectories of a mobile robot while wandering around
and avoiding obstacles in an office environment. The recorded distance statistics therefore
reflect properties both of the environment and of the specific movement patterns of the
robot. From these measurements, the average nearness μ̄_i and its covariance C_μ were
computed (cf. Fig. 3; we used distance instead of nearness for easier interpretation).
The distance statistics show a pronounced anisotropy which can be attributed to three main
causes: (1) Since the robot tries to turn away from the obstacles, the distance in front and
behind the robot tends to be larger than on its sides (Fig. 3a). (2) The camera on the robot
usually moves at a fixed height above ground on a flat surface. As a consequence, distance
variation is particularly small at very low elevations (Fig. 3b). (3) The office environment
also contains corridors. When the robot follows the corridor while avoiding obstacles,
distance variations in the frontal region of the visual field are very large (Fig. 3b).
The estimation of the translation covariance C_T is straightforward since our robot can only
translate in the forward direction, i.e. along the z-axis. C_T is therefore 0 everywhere except
the lower right diagonal entry, which is the square of the average forward speed of the robot
(here: 0.3 m/s). The EMD noise was assumed to be zero-mean, uncorrelated and uniform
over the image, which results in a diagonal C_n with identical entries. The noise standard
Figure 4: Model neurons computed as part of the linear estimator. Notation is identical
to Fig. 1. The depicted region of the visual field extends from −15° to 180° azimuth and
from −75° to 75° elevation. The model neurons are tuned to a. forward translation, and b.
to rotations about the vertical axis.

deviation of 0.34 deg./s was determined by presenting a series of natural images moving at
1.1 deg./s to the flow algorithm used in the implementation of the estimator (see Sect. 3.2).
μ̄, C_μ, C_T and C_n constitute the prior knowledge necessary for computing the estimator
(Eqns. (6) and (11)).
Examples of the optimal weight sets for the model neurons (corresponding to a row of
W ) are shown in Fig. 4. The resulting model neurons show very similar characteristics to
those observed in real tangential neurons, however, with specific adaptations to the indoor
robot scenario. All model neurons have in common that image regions near the rotation or
translation axis receive less weight. In these regions, the self-motion components to be estimated generate only small flow vectors which are easily corrupted by noise. Equation (11)
predicts that the estimator will preferably sample in image regions with smaller distance
variations. In our measurements, this is mainly the case at the ground around the robot
(Fig. 3). The rotation-selective model neurons weight image regions with larger distances
more highly, since distance variations at large distances have a smaller effect. In our example, distances are largest in front and behind the robot so that the rotation-selective neurons
assign the highest weights to these regions (Fig. 3b).
3.2 Gantry experiments
The self-motion estimates from the model neuron ensemble were tested on a gantry with
three translational and one rotational (yaw) degree of freedom. Since the gantry had a
position accuracy below 1mm, the programmed position values were taken as ground truth
for evaluating the estimator's accuracy.
As vision sensor, we used a camera mounted above a mirror with a circularly symmetric
hyperbolic profile. This setup allowed for a 360° horizontal field of view extending from
90° below to 45° above the horizon. Such a large field of view considerably improves
the estimator's performance since the individual distance deviations in the scene are more
likely to be averaged out. More details about the omnidirectional camera can be found in
[4]. In each experiment, the camera was moved to 10 different start positions in the lab
with largely varying distance distributions. After recording an image of the scene at the
start position, the gantry translated and rotated at various prescribed speeds and directions
and took a second image. After the recorded image pairs (10 for each type of movement)
were unwarped, we computed the optic flow input for the model neurons using a standard
gradient-based scheme [5].
Figure 5: Gantry experiments. Results are given in arbitrary units; true rotation values
are denoted by a dashed line, translation by a dash-dot line. Grey bars denote translation
estimates, white bars rotation estimates. a. Estimated vs. real self-motion; b. Estimates of
the same self-motion at different locations; c. Estimates for constant rotation and varying
translation; d. Estimates for constant translation and varying rotation.
The average error of the rotation rate estimates over all trials (N=450) was 0.7°/s (5.7%
rel. error, Fig. 5a), the error in the estimated translation speeds (N=420) was 8.5 mm/s
(7.5% rel. error). The estimated rotation axis had an average error of magnitude 1.7°,
the estimated translation direction 4.5°. The larger error of the translation estimates is
mainly caused by the direct dependence of the translational flow on distance (see Eq. (2))
whereas the rotation estimates are only indirectly affected by distance errors via the current
translational flow component which is largely filtered out by the LPD arrangement. The
larger sensitivity of the translation estimates can be seen by moving the sensor at the same
translation and rotation speeds in various locations. The rotation estimates remain consistent over all locations whereas the translation estimates show a higher variance and also a
location-dependent bias, e.g., very close to laboratory walls (Fig. 5b). A second problem
for translation estimation comes from the different properties of rotational and translational
flow fields: Due to its distance dependence, the translational flow field shows a much wider
range of values than a rotational flow field. The smaller translational flow vectors are often
swamped by simultaneous rotation or noise, and the larger ones tend to be in the upper
saturation range of the used optic flow algorithm. This can be demonstrated by simultaneously translating and rotating the semsor. Again, rotation estimates remain consistent while
translation estimates are strongly affected by rotation (Fig. 5c and d).
4 Conclusion
Our experiments show that it is indeed possible to obtain useful self-motion estimates from
an ensemble of linear model neurons. Although a linear approach necessarily has to ignore
the distances of the currently perceived scene, an appropriate choice of local weights and
a large field of view are capable of reducing the influence of noise and the particular scene
distances on the estimates. In particular, rotation estimates were highly accurate - in a range
comparable to gyroscopic estimates - and consistent across different scenes and different
simultaneous translations. Translation estimates, however, turned out to be less accurate
and less robust against changing scenes and simultaneous rotation.
The components of the estimator are simplified model neurons which have been shown to
reproduce the essential receptive field properties of the fly's tangential neurons [2]. Our
study suggests that the output of such neurons could be directly used for self-motion estimation by simply combining them linearly at a later integration stage. As our experiments
have shown, the achievable accuracy would probably be more than enough for head stabilization under closed loop conditions.
Finally, we have to point out a basic limitation of the proposed theory: It assumes linear
EMDs as input to the neurons (see Eq. (1)). The output of fly EMDs, however, is only
linear for very small image motions. It quickly saturates at a plateau value at higher image
velocities. In this range, the tangential neuron can only indicate the presence and the sign of
a particular self-motion component, not the current rotation or translation velocity. A linear
combination of output signals, as in our model, is no more feasible but would require some
form of population coding. In addition, a detailed comparison between the linear model
and real neurons shows characteristic differences indicating that tangential neurons usually
operate in the plateau range rather than in the linear range of the EMDs [2]. As a consequence, our study can only give a hint on what might happen at small image velocities. The
case of higher image velocities has to await further research.
Acknowledgments
The gantry experiments were done at the Center of Visual Sciences in Canberra. The
authors wish to thank J. Hill, M. Hofmann and M. V. Srinivasan for their help. Financial support was provided by the Human Frontier Science Program and the Max-Planck-Gesellschaft.
References
[1] Krapp, H. G., Hengstenberg, B., & Hengstenberg, R. (1998). Dendritic structure and receptive
field organization of optic flow processing interneurons in the fly. J. of Neurophysiology, 79, 1902 -
1917.
[2] Franz, M. O. & Krapp, H. C. (2000). Wide-field, motion-sensitive neurons and matched filters for
optic flow fields. Biol. Cybern., 83, 185 - 197.
[3] Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biol. Cybern., 56, 247 - 254.
[4] Chahl, J. S, & Srinivasan, M. V. (1997). Reflective surfaces for panoramic imaging. Applied
Optics, 36(31), 8275 - 8285.
[5] Srinivasan, M. V. (1994). An image-interpolation technique for the computation of optic flow and
egomotion. Biol. Cybern., 71, 401 - 415.
Concentration Inequalities for the Missing Mass
and for Histogram Rule Error
David McAllester
Toyota Technological Institute at Chicago
[email protected]
Luis Ortiz
University of Pennsylvania
[email protected]
Abstract
This paper gives distribution-free concentration inequalities for the missing mass and the error rate of histogram rules. Negative association methods can be used to reduce these concentration problems to concentration
questions about independent sums. Although the sums are independent,
they are highly heterogeneous. Such highly heterogeneous independent
sums cannot be analyzed using standard concentration inequalities such
as Hoeffding's inequality, the Angluin-Valiant bound, Bernstein's inequality, Bennett's inequality, or McDiarmid's theorem.
1 Introduction
The Good-Turing missing mass estimator was developed in the 1940s to estimate the probability that the next item drawn from a fixed distribution will be an item not seen before.
Since the publication of the Good-Turing missing mass estimator in 1953 [9], this estimator has been used extensively in language modeling applications [4, 6, 12]. Recently
a large deviation accuracy guarantee was proved for the missing mass estimator [15, 14].
The main technical result is that the missing mass itself concentrates: [15] proves that
the probability that the missing mass deviates from its expectation by more than ε is
exponentially small in nε², independent of the underlying distribution. Here we give a
simpler proof of a stronger bound on the deviation probability (Theorem 9).
A histogram rule is defined by two things: a given clustering of objects into classes and a
given training sample. In a classification setting the histogram rule defined by a given clustering and sample assigns to each cluster the label that occurred most frequently for that
cluster in the sample. In a decision-theoretic setting, such as that studied by Ortiz and Kaelbling [16], the rule associates each cluster with the action choice of highest performance
on the training data for that cluster. We show that the performance of a histogram rule (for
a fixed clustering) concentrates near its expectation: the probability that the performance
deviates from its expectation by more than ε is exponentially small in the sample size,
independent of the clustering or the underlying data distribution.
2 The Exponential Moment Method

All of the results in this paper are based on the exponential moment method of proving
concentration inequalities. The exponential moment was perhaps first used by Bernstein
but was popularized by Chernoff. Let X be any real-valued random variable with finite
mean. The following lemma is the central topic of Chernoff's classic paper [5].

Lemma 1 (Chernoff) For any real-valued variable X with finite mean E[X] we have the
following for any ε ≥ 0, where the "entropy" S(ε) is defined as below.

    P(X ≥ E[X] + ε) ≤ e^{−S(ε)}    (1)

    S(ε) = sup_{β ≥ 0} (βε − ln Z(β))    (2)

    Z(β) = E[e^{β(X − E[X])}]    (3)

Lemma 1 follows, essentially, from the observation that for β ≥ 0 we have the following.

    P(X ≥ E[X] + ε) = P(e^{β(X − E[X])} ≥ e^{βε}) ≤ E[e^{β(X − E[X])}] e^{−βε} = e^{−(βε − ln Z(β))}    (4)

Lemma 1 is called the exponential moment method because of the first inequality in (4).
The following two observations provide a simple general tool.

Observation 2 Let c be any positive constant satisfying ln Z(β) ≤ cβ² for all β ≥ 0.
Formula (2) implies that for ε ≥ 0 we have S(ε) ≥ ε²/(4c), and hence
P(X ≥ E[X] + ε) ≤ e^{−ε²/(4c)}.

Observation 3 If X_1, ..., X_n are independent then ln Z_{Σ_i X_i}(β) = Σ_i ln Z_{X_i}(β).

For a discrete distribution P the Gibbs distribution P_β at inverse temperature β can be
defined by P_β(x) = (1/Z(β)) e^{β(x − E[X])} P(X = x). There exists a unique largest open
interval (β_min, β_max) (possibly with infinite endpoints) such that for β ∈ (β_min, β_max)
we have that Z(β) is finite. For β ∈ (β_min, β_max) we define the expectation of f(X) at
inverse temperature β as follows.

    E_β[f(X)] = (1/Z(β)) E[e^{β(X − E[X])} f(X)]    (5)

Equation (5) can be taken as the definition of E_β for continuous distributions. For
β ∈ (β_min, β_max) let σ²(β) be E_β[(X − E_β[X])²]. The quantity σ²(β) is the Gibbs
variance at inverse temperature β. For β ∈ (β_min, β_max) we let D(P_β ‖ P) denote the
KL-divergence from P_β to P, which can be written as follows.

    D(P_β ‖ P) = β(E_β[X] − E[X]) − ln Z(β)    (6)

Let (ε_min, ε_max) be the smallest open interval containing all values of the form
E_β[X] − E[X] for β ∈ (β_min, β_max). If the open interval (ε_min, ε_max) is not empty
then E_β[X] is a monotonically increasing function of β ∈ (β_min, β_max). For
ε ∈ (ε_min, ε_max) define β(ε) to be the unique value β satisfying E_β[X] − E[X] = ε.
For any continuous function f we now define the double integral ∫∫ f(ε) d²ε to be the
function g satisfying g(0) = 0, g′(0) = 0, and g′′(ε) = f(ε), where g′ and g′′ are the
first and second derivatives of g respectively. We now have the following general theorem.

Theorem 4 For any real-valued variable X, any ε ∈ (ε_min, ε_max) with ε ≥ 0, and
β(ε) ∈ (β_min, β_max), we have the following.

    S(ε) = β(ε)ε − ln Z(β(ε))    (7)

    S(ε) = D(P_{β(ε)} ‖ P)    (8)

    S(ε) = ∫∫ (1/σ²(β(ε))) d²ε    (9)

    S(ε) ≥ ε² / (2 sup_{0 ≤ β ≤ β(ε)} σ²(β))    (10)

Formula (7) is proved by showing that β(ε) is the optimal β in (2). Up to sign conventions
(7) is the equation for physical entropy in statistical mechanics. Formula (9) can be
clarified by noting that for |ε′ − ε| small we have the following.

    S(ε′) ≈ S(ε) + β(ε)(ε′ − ε) + (ε′ − ε)² / (2σ²(β(ε)))

Equation (8) follows from (7) and (6). Equations (9) and (10) then follow from well known
equations of statistical mechanics. An implicit derivation of (9) and (10) can be found in
section six of Chernoff's original paper [5].

As a simple example of the use of (9), we derive Hoeffding's inequality. Consider a sum
S = Σ_i c_i X_i where the X_i are independent and X_i is bounded to an interval of width
w_i. Note that each X_i remains bounded to this interval at all values of β. Hence
σ²_β(X_i) ≤ w_i²/4. We then have that σ²_β(S) ≤ (1/4) Σ_i c_i² w_i². Hoeffding's
inequality now follows from (1) and (9).
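As a numerical illustration of Lemma 1 (our own sketch, not part of the paper), the entropy S(ε) can be evaluated on a grid of β for a sum of fair coin flips; e^{−S(ε)} then upper-bounds the exact binomial tail probability:

```python
import math

# Numerical sketch of the exponential moment method (Lemma 1) for
# X = number of heads in n fair coin flips. For a single centered
# Bernoulli(1/2), ln Z(beta) = ln cosh(beta/2), so ln Z for the sum is
# n * ln cosh(beta/2) by Observation 3.

def entropy_bound(n, eps):
    """S(eps) approximated by a grid search over beta >= 0."""
    best = 0.0
    for i in range(1, 500):
        b = i / 100.0
        ln_z = n * math.log(math.cosh(b / 2.0))
        best = max(best, b * eps - ln_z)
    return best

def binomial_upper_tail(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, 1/2)."""
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2.0 ** n
```

For n = 100 and ε = 10 (i.e., 60 or more heads), exp(−S(10)) is roughly e^{−2}, comfortably above the exact tail, consistent with inequality (1).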
3 Negative Association

The analysis of the missing mass and histogram rule error involves sums of variables that
are not independent. However, these variables are negatively associated: an increase in one
variable is associated with decreases in the other variables. Formally, a set of real-valued
random variables X_1, ..., X_n is negatively associated if for any two disjoint subsets A
and B of the integers {1, ..., n}, and any two non-decreasing, or any two non-increasing,
functions f from R^{|A|} to R and g from R^{|B|} to R, we have the following.

    E[f(X_i : i ∈ A) g(X_j : j ∈ B)] ≤ E[f(X_i : i ∈ A)] E[g(X_j : j ∈ B)]

Dubhashi and Ranjan [8] give a survey of methods for establishing and using negative association. This section states some basic facts about negative association.

Lemma 5 Let X_1, ..., X_n be any set of negatively associated variables. Let X′_1, ..., X′_n
be independent shadow variables, i.e., independent variables such that X′_i is distributed
identically to X_i. Let S = Σ_i X_i and S′ = Σ_i X′_i. For any set of negatively associated
variables we have Z_S(β) ≤ Z_{S′}(β).

Lemma 6 Let S be any sample of n items (ball throws) drawn IID from a fixed distribution
on the integers 1, ..., k (bins). Let C_i be the number of times integer i occurs in the
sample. The variables C_1, ..., C_k are negatively associated.

Lemma 7 For any negatively associated variables X_1, ..., X_n, and any non-decreasing
functions f_1, ..., f_n, we have that the quantities f_1(X_1), ..., f_n(X_n) are negatively
associated. This also holds if the functions are non-increasing.

Lemma 8 Let X_1, ..., X_n be a negatively associated set of variables. Let Y_1, ..., Y_n be
0-1 (Bernoulli) variables such that Y_i is a stochastic function of X_i, i.e.,
P(Y_1, ..., Y_n | X_1, ..., X_n) = Π_i P(Y_i | X_i). If P(Y_i = 1 | X_i) is a non-decreasing
function of X_i then Y_1, ..., Y_n are negatively associated. This also holds if
P(Y_i = 1 | X_i) is non-increasing.
` , l | e
4 The Missing Mass
Suppose that we draw words (or any objects) independently from a fixed distribution over
a countable (but possibly infinite) set of words. We let the probability of drawing word
be denoted as . For a sample of 2 draws the missing mass of , denoted , is the
total probability mass of the items not occurring in the sample, i.e.
.
Theorem 9 For the missing mass
ing.
k, c !
tE , we have the follow-
as defined above, and for
^
} #5
} _T
V
2
(11)
V
2
(12)
To prove theorem 9 let
be a Bernoulli variable which is 1 if word does not occur in
the sample and 0 otherwise. The missing mass can now be written as
.
The variables are monotonic functions of the word counts so by lemmas 6 and 7 we
have that the
are negatively associated. By lemma 5 we can then assume that the
variables are independent. The analysis of this independent sum uses the following
general concentration inequalities for independent sums of bounded variables.
, c
, c e ` e e
e
` aKaSa
Lemma 10 Let
where
,
, are independent random variables
with
and each is a non-negative constant. Let
be
. For
we
have the following.
e rJ E# l
+ #5
e
+ _T
e
?E
V
c e ` e Ve
V
c e ` N O
(13)
(14)
Before proving (13) and (14) we first show how (13) and (14) imply (11) and (12) respectively. For the missing mass
we have the following.
m, c ?
,J+ , l },? l 5~
To prove (11) we note that formula (13) implies the following where we use the fact that
for
AE we have (
l \ .
#5 '
c VV
c V \ 2 , 2 V
To prove (12) we note that formula (14) implies the following.
_T F
c
V
V \ 6%8 `
c
V
\2
,
2
V
We now compare (13) and (14) to other well known bounds. Hoeffding?s inequality [11]
yields the following.
(15)
V
_T '
c e ` Ve
In the missing mass application we have that c e ` Ve can be which fails to yield (12).
l on the Angluin-Valiant
The Srivistav-Stangier bound [17], which itself an improvement
where Snp
q is e e .
bound [1, 10], yields the following for E
V
C [T '
(16)
npqWc e ` e e
It is possible to show that in the missing mass application npq c e ` e e can be
l
so this bound does not handle the missing mass. A weaker version of the lower-deviation
inequality (13) can be derived from Bernstein?s inequality [3] (see [7]). However, neither
Bernstein?s inequality nor Bennett?s inequality [2] can handle the upward deviation of the
missing mass.
To prove (13) and (14) we first note the following lemma.
rD E1 l and let D? r *E1 l be a Bernoulli
and ~? and any 3 and constant ?
6%8;:>+? 3='G698;:<M? ? 3=
This lemma follows from the observation that for any convex function s on the interval
E1 l we have that sW+ is less than l 57
sWME QTy sW l and so we have the following.
2
2
2
2
A@ ? $B Zi@x l 57~ TX ? B ,? l 5~ xQTy ? ,itI ? $xL
Lemma 11 and equation (2) now imply the following which implies that for the proof of
(13) and (14) we can assume without loss of generality that the variables e are Bernoulli.
Lemma 12 Let
, c e e e with e r E1 with the variables e independent. Let
?}, c e e ?e where e r *E# l with ?e l = e . For any such , 7? , and we
have the following.
+
H
Z+ ?
Lemma 11 Let be a random variable with
variable with
. For any such variables
we have the following.
7? ,t
.
,?c e ` e e
To prove (13) let
we have the following.
3DZE
where the
e
are independent Bernoulli variables. For
v V e 3=HA 2 + e , l ' e
So we have v V +
3?Fcie Ve e . Formula (13) now follows from (9).
Formula (14)
follows from observations 2 and 3 and the following lemma of Kearns and Saul [13].
Lemma 13 (Kearns&Saul) For a Bernoulli variable
.
, l
698;:< g3?
we have the following where
5
g K3UT l ^}698 ` < V 3 V
g K3UT ^}69 8 V ` 3 V
is
(17)
(18)
5 Histogram Rule Error
Now we consider the problem of learning a histogram rule from an IID sample of pairs
on such pairs. The problem is to find
drawn from a fixed distribution
a rule mapping to the two-element set
so as to minimize the expectation of the
where is a given loss function from
to the interval
loss
. In the
. In the decision-theoretic setting
classification setting one typically takes to be
is the hidden state and can be arbitrarily complex and
is the cost of taking action
in the presence of hidden state . In the general case (covering both settings) we
and
assume only
.
= >r
Q+ 1
=
]E1 l
*E#
*E1 l l
= [
j = 1HrD E1 l
= 'r ]E1 l
E1 l
We are interested in histogram rules with respect to a fixed clustering. We assume a given
cluster function mapping to the integers from to . We consider a sample of 2
pairs drawn IID from a fixed distribution on . For any cluster index . , we define ,
. . We define 5 .
to be the subset of the sample consisting of pairs such that
to be , . For any cluster index . and
we define , and , as follows.
Q
l
g,
j
| |
r *E# l
j ;
;
, ug,
l
<1
, ;g,t
Q ! "%(*) ,
>1
5 ._
Q !
If 5j._},JE then we define , ; to be 1. We now define the rule and "! from class index
to labels as follows.
# )% ('
& &98`* ,j ;
$
+#)% (,
&&98` * , ;
. },
Q
! ._g,
Ties are broken stochastically with each outcome equally likely so that the rule -! is a
random variable only partially determined by the sample . We are interested in the generalization loss of the empirical rule .
4 g,i =- I Q
1 L
.
Theorem 14 For
defined as above we have the following for positive .
0/1
I
# L 5
0/1
I
# L T
2
2
2
2
V
(19)
V
(20)
3
4
l 5 f _
?5
To prove this we need some additional terminology. For each class label . define , to be
. . Define , to be ,
5"! .
the probability over selecting a pair that
. In other words, ), is the additional loss on class . when assigns the wrong
, "! .
label to this class. Define the random variable , to be 1 if . 7
6 8! . and 0 otherwise.
The variable , represents the statement that the empirical rule is ?wrong? (non-optimal)
on class . . We can now express the generalization loss of as follows.
j _
Q
{
g,
{
= _ , j _
},:
! T
9
e{ e e
e
(21)
j _
The variable , is a monotone stochastic function of the count 5 . ? the probability
of error declines monotonically in the count of the class. By lemma 8 we then have that
the variables
are negatively associated so we can treat them as independent. To prove
theorem 14 we start with an analysis of
.
,
e
Lemma 15

(22)

(23)

(24)

Proof: To prove this lemma we consider a threshold on the count m_j and show (22)-(24). Formula (23) follows by the Angluin-Valiant bound [1, 7]. To prove (24) we note that if z_j = 1 then one of two empirical frequency estimates must deviate substantially from its expectation. By a combination of Hoeffding's inequality and the union bound we have that the probability that one of these two conditions holds is bounded by the left-hand side of (24). Lemma 15 now follows by combining these bounds.
We now prove (19) using lemma 15 and (10). For one range of ε the claim is immediate; for the remaining range we have the following chain of inequalities.

(25)

(26)

Since Δ_j is bounded to the interval [0, 1], the corresponding sum is bounded as well. By (10) we then have the following, where in deriving (27) we use the fact that the bound of lemma 15 is a monotonically decreasing function of the count.

(27)

(28)

(29)
Formula (19) now follows from (29) and a downward variant of observation 2. The proof of (20) is similar but uses (18), giving the following.

(30)

(31)

The downward-deviation Angluin-Valiant bound used here follows from (9) and the observation that for a Bernoulli variable the corresponding lower-tail inequality holds. Formula (20) now follows from (31) and observation 2.
References
[1] D. Angluin and L. Valiant. Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences, 18:155–193, 1979.
[2] G. Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57:33–45, 1962.
[3] S. Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
[4] Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, August 1998.
[5] H. Chernoff. A measure of the asymptotic efficiency of tests of a hypothesis based on the sum of observations. Annals of Mathematical Statistics, 23:493–507, 1952.
[6] Kenneth W. Church and William A. Gale. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19–54, 1991.
[7] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[8] Devdatt P. Dubhashi and Desh Ranjan. Balls and bins: A study in negative dependence. Random Structures and Algorithms, 13(2):99–124, 1998.
[9] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40:237–264, December 1953.
[10] T. Hagerup and C. Rüb. A guided tour of Chernoff bounds. Information Processing Letters, 33:305–309, 1989.
[11] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[12] Slava M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400–401, March 1987.
[13] Michael Kearns and Lawrence Saul. Large deviation methods for approximate probabilistic inference, with rates of convergence. In UAI-98, pages 311–319. Morgan Kaufmann, 1998.
[14] Samuel Kutin. Algorithmic Stability and Ensemble-Based Learning. PhD thesis, University of Chicago, 2002.
[15] David McAllester and Robert Schapire. On the convergence rate of Good-Turing estimators. In COLT 2000, 2000.
[16] Luis E. Ortiz and Leslie Pack Kaelbling. Sampling methods for action selection in influence diagrams. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 378–385, 2000.
[17] Anand Srivastav and Peter Stangier. Integer multicommodity flows with reduced demands. In European Symposium on Algorithms, pages 360–371, 1993.
(30)
(31)
| 2248 |@word version:1 bigram:1 stronger:1 bf:1 open:3 tr:1 multicommodity:1 moment:4 selecting:1 ka:1 written:2 luis:2 chicago:2 intelligence:1 item:4 hamiltonian:1 desh:1 bvu:1 clarified:1 org:1 mcdiarmid:1 simpler:1 symposium:1 prove:11 upenn:1 frequently:1 mechanic:2 nor:1 decreasing:4 increasing:4 estimating:1 underlying:2 bounded:9 circuit:1 mass:18 developed:1 zrz:2 guarantee:1 ti:1 tie:1 biometrika:1 wrong:2 before:2 positive:2 treat:1 establishing:1 lugosi:1 studied:1 seventeenth:1 unique:2 union:1 empirical:3 orfi:1 word:6 cannot:1 selection:1 influence:1 missing:17 ranjan:2 independently:1 convex:1 survey:1 assigns:2 rule:15 estimator:5 deriving:1 proving:2 classic:1 handle:2 population:2 stability:1 annals:1 enhanced:1 suppose:1 us:2 hypothesis:1 associate:1 element:1 harvard:1 satisfying:3 ze:3 recognition:1 aszl:1 decrease:1 technological:1 highest:1 devdatt:1 broken:1 asa:3 negatively:11 efficiency:1 tx:1 leo:1 derivation:1 fast:1 buv:1 artificial:1 outcome:1 valued:5 drawing:1 otherwise:2 statistic:1 gi:3 itself:2 convergence:2 cluster:7 empty:1 double:1 tti:1 object:2 tions:1 derive:1 h3:1 sa:2 throw:1 shadow:1 implies:4 concentrate:2 guided:1 stochastic:2 mcallester:3 bin:2 generalization:2 hold:3 cb:1 k3:1 mapping:2 lawrence:1 algorithmic:1 u3:3 smallest:1 recognizer:1 estimation:3 hgi:1 label:4 largest:1 tool:1 publication:1 derived:1 improvement:1 bernoulli:7 aka:1 inference:1 typically:1 abor:1 hidden:2 interested:2 upward:1 classification:2 denoted:2 smoothing:1 sampling:1 chernoff:6 represents:1 report:1 x_:1 divergence:1 ve:10 national:1 consisting:1 william:1 ortiz:3 highly:2 analyzed:1 integral:1 jyl:1 modeling:2 leslie:1 cost:1 kaelbling:1 deviation:6 subset:2 tour:1 ie:1 probabilistic:2 michael:1 thesis:1 central:1 containing:1 possibly:2 hoeffding:6 gale:1 stochastically:1 american:2 derivative:1 gy:1 start:1 trh:1 minimize:1 accuracy:1 variance:1 kaufmann:1 ensemble:1 yield:3 iid:3 definition:1 ty:1 frequency:1 npo:2 proof:4 
associated:11 proved:2 stanley:1 follow:2 qty:2 generality:1 xa:1 implicit:1 hand:1 perhaps:1 ye:2 hence:1 slava:1 width:1 covering:1 samuel:1 theoretic:2 temperature:2 ef:2 recently:1 ji:1 physical:1 endpoint:1 jl:1 association:6 occurred:1 katz:1 gibbs:2 rd:1 uv:1 language:4 inequality:19 arbitrarily:1 joshua:1 seen:1 morgan:1 additional:2 mr:1 monotonically:3 signal:1 rj:1 asso:1 technical:2 ing:1 e1:10 equally:1 variant:1 basic:1 heterogeneous:2 essentially:1 expectation:5 ae:2 histogram:9 interval:8 diagram:1 goodman:1 thing:1 december:1 anand:1 flow:1 integer:5 near:1 noting:2 presence:1 bernstein:5 identically:1 hb:1 w3:2 pennsylvania:1 reduce:1 decline:1 qj:3 six:1 peter:1 speech:3 action:3 useful:1 se:1 involve:1 extensively:1 reduced:1 angluin:4 schapire:1 s3:1 sign:1 disjoint:1 discrete:1 express:1 terminology:1 threshold:1 deleted:1 drawn:4 neither:1 kenneth:1 vqv:1 monotone:1 sum:10 turing:4 inverse:2 letter:1 draw:2 gastehizdat:1 decision:2 bound:8 kutin:1 occur:1 popularized:1 ball:2 combination:1 march:1 g3:1 taken:1 equation:6 remains:1 count:3 original:1 moscow:1 clustering:5 publishing:1 sw:9 prof:1 ciated:1 dubhashi:1 question:1 quantity:2 occurs:1 concentration:7 dependence:1 me:1 topic:1 devroye:1 index:3 robert:1 statement:1 negative:6 ba:1 countable:1 observation:11 finite:3 arbitrary:1 august:1 david:2 pair:5 kl:1 acoustic:1 bound3:1 below:1 pattern:1 ev:1 imply:2 church:1 sn:1 deviate:2 asymptotic:1 loss:6 free:1 english:1 side:1 weaker:1 vv:1 institute:1 saul:3 taking:1 sparse:1 distributed:1 transaction:1 approximate:1 ml:1 uai:1 continuous:2 pack:1 probabalistic:1 complex:1 european:1 main:1 je:1 fails:1 exponential:4 xl:1 house:1 toyota:1 theorem:7 formula:9 showing:1 exists:1 valiant:5 ci:1 phd:1 te:1 downward:2 occurring:1 demand:1 chen:1 entropy:2 likely:1 kaebling:1 partially:1 monotonic:1 springer:1 luc:1 bennett:2 infinite:2 determined:1 lemma:23 kearns:3 called:1 total:1 specie:1 formally:1 ub:1 c9:1 |
1,372 | 2,249 | Combining Dimensions and Features in
Similarity-Based Representations
Daniel J. Navarro
Department of Psychology
Ohio State University
[email protected]
Michael D. Lee
Department of Psychology
University of Adelaide
[email protected]
Abstract
This paper develops a new representational model of similarity data
that combines continuous dimensions with discrete features. An algorithm capable of learning these representations is described, and
a Bayesian model selection approach for choosing the appropriate
number of dimensions and features is developed. The approach is
demonstrated on a classic data set that considers the similarities
between the numbers 0 through 9.
1 Introduction
A central problem for cognitive science is to understand the way people mentally
represent stimuli. One widely used approach for deriving representations from data
is to base them on measures of stimulus similarity (see Shepard 1974). Similarity
is naturally understood as a measure of the degree to which the consequences of
one stimulus generalize to another, and may be measured using a number of experimental methodologies, including ratings scales, confusion probabilities, or grouping
or sorting tasks. For a domain with n stimuli, similarity data take the form of an
n × n matrix, S = [sij], where sij is the similarity of the ith and jth stimuli. The goal of similarity-based representation is then to find structured and interpretable descriptions of the stimuli that capture the pattern of similarities.
Modeling the similarities between stimuli requires making assumptions about both
the representational structures used to describe stimuli, and the processes used to
assess the similarities across these structures. The two best developed representational approaches in cognitive modeling are the "dimensional" and "featural" approaches (Goldstone, 1999). In the dimensional approach, stimuli are represented by
continuous values along a number of dimensions, so that each stimulus corresponds
to a point in a multi-dimensional space, and the similarity between two stimuli is
measured according to the distance between their representative points. In the featural approach, stimuli are represented in terms of the presence or absence of a set
of discrete (usually binary) features or properties, and the similarity between two
stimuli is measured according to their common and distinctive features.
The dimensional and featural approaches have different strengths and weaknesses. Dimensional representations are constrained by the metric axioms, such as the triangle inequality, that are violated by some empirical data. Featural representations are inefficient when representing inherently continuous aspects of the variation between stimuli. It has been argued that spatial representations are most appropriate for low-level perceptual stimuli, whereas featural representations are better suited to high-level conceptual domains (e.g., Carroll 1976, Tenenbaum 1996, Tversky 1977).
In general, though, stimuli convey both perceptual and conceptual information. As
Carroll (1976) concludes: "Since what is going on inside the head is likely to be complex, and is equally likely to have both discrete and continuous aspects, I believe the models we pursue must also be complex, and have both discrete and continuous components" (p. 462).
This paper develops a new model of similarity that combines dimensions with features in the obvious way, allowing a stimulus to take continuous values on a number
of dimensions, as well as potentially having a number of discrete features. We describe an algorithm capable of learning these representations from similarity data,
and develop a Bayesian model selection approach for choosing the appropriate number of dimensions and features. Finally, we demonstrate the approach on a classic
data set that considers the similarities between the numbers 0 through 9.
2 Dimensional, Featural and Combined Representations

2.1 Dimensional Representation
In a dimensional representation, the ith stimulus is represented by a point pi =
(pi1 , . . . , piv ) in a v-dimensional coordinate space. The dissimilarity between the
ith and jth stimuli is then usually modeled as the distance between their points
according to one of the family of Minkowskian metrics
$\hat{d}_{ij} = \Big( \sum_{k=1}^{v} | p_{ik} - p_{jk} |^{r} \Big)^{1/r} + c, \qquad (1)$
where c is a non-negative constant. Dimensional representations can be learned using a variety of multidimensional scaling algorithms (e.g., Cox & Cox, 1994), which have placed particular emphasis on the r = 1 (City-Block) and r = 2 (Euclidean) cases because of their relationship, respectively, to so-called "separable" and "integral" stimulus dimensions (Garner 1974). Pairs of separable dimensions are those, like shape and size, that can be attended to separately. Integral dimensions, in contrast, are those rarer cases like hue and saturation that are not easily separated.
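For concreteness, Eq. (1) with the two emphasized exponents can be computed as follows (an illustrative sketch; the function name is ours, not the paper's):

```python
def minkowski_distance(p, q, r=2.0, c=0.0):
    """Eq. (1): Minkowski distance between coordinate vectors, plus constant c."""
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1.0 / r) + c

p, q = (0.0, 0.0), (3.0, 4.0)
print(minkowski_distance(p, q, r=1.0))  # City-Block distance: 7.0
print(minkowski_distance(p, q, r=2.0))  # Euclidean distance: 5.0
```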
2.2
Featural Representation
In a featural representation, the ith stimulus is represented by a vector of m binary variables fi = (fi1 , . . . , fim ), where fik = 1 if the ith stimulus possesses the
kth feature, and fik = 0 if it does not. Each feature is also usually associated
with a positive weight, wk , denoting its importance or salience. No constraints are
placed on the way features may be assigned to stimuli. Rather than requiring that features partition stimuli, as in many clustering methods, or that features nest within one another, as in many tree-fitting methods, the flexible nature of human mental representation demands that features are allowed to overlap in arbitrary ways.
Although a number of models have been proposed for measuring the similarity
between featurally represented stimuli (Navarro & Lee, 2002), the most widely used
is the Contrast Model (Tversky, 1977). The Contrast Model assumes the similarity
between two stimuli increases according to the weights of the (common) features
they share, decreases according to the weights of the (distinctive) features that one
has but the other does not, and these common and distinctive sources of information
are themselves weighted in arriving at a ?nal similarity value. Particular emphasis
(e.g., Shepard & Arabie, 1979; Tenenbaum, 1996) has been given to the special case
of the Contrast Model where only common features are used, and feature weights
are additive, so that the similarity of the ith and jth stimuli is given by
$\hat{s}_{ij} = \sum_{k=1}^{m} w_k\, f_{ik}\, f_{jk} + c. \qquad (2)$
Although learning common feature representations is a difficult combinatorial optimization problem, several successful additive clustering algorithms have been developed (e.g., Lee, 2002; Ruml, 2001; Tenenbaum, 1996).
2.3
Combined Representation
The obvious generalization of dimensional and featural approaches is to represent
stimuli in terms of continuous values along a set of dimensions and the presence or
absence of a number of discrete features. If there are v dimensions and m features,
the ith stimulus is defined by a point pi, a feature vector fi, and the feature weights
w = (w1 , . . . , wm ).
With this representational structure in place, we assume the similarity between
the ith and jth stimuli is then simply the sum of the similarity arising from their
common features (Eq. 2), minus the dissimilarity arising from their dimensional differences (Eq. 1), as follows
$\hat{s}_{ij} = \sum_{k=1}^{m} w_k\, f_{ik}\, f_{jk} \;-\; \Big( \sum_{k=1}^{v} | p_{ik} - p_{jk} |^{r} \Big)^{1/r} + c.$
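Read as a procedure, the combined similarity measure amounts to the following sketch (our own illustration, not code from the paper):

```python
def combined_similarity(p_i, p_j, f_i, f_j, w, c, r=2.0):
    """Common-feature similarity minus Minkowski distance, plus constant c."""
    common = sum(wk for wk, fik, fjk in zip(w, f_i, f_j) if fik and fjk)
    distance = sum(abs(a - b) ** r for a, b in zip(p_i, p_j)) ** (1.0 / r)
    return common - distance + c

# Two stimuli that share the first of two features, at Euclidean distance 5.
s = combined_similarity((0.0, 0.0), (3.0, 4.0), (1, 1), (1, 0),
                        w=(0.5, 0.8), c=1.0)
print(s)  # 0.5 - 5.0 + 1.0 = -3.5
```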
3 Model Fitting and Selection
Proposing the combined representational approach immediately presents two challenges. The first model fitting problem is to develop a method for learning representations that fit the similarity data well using a given number of dimensions and features. The second model selection problem is to choose between alternative combined representations of the same data that use different numbers of features and dimensions.

Formally, we conceive of the representational model as specifying the number of dimensions and features and the nature of the distance metric, and being parameterized by the feature variables and weights, coordinate locations and the additive constant. This means a particular representation is given by R_λ(θ), where λ = (v, m, r) and θ = (p_1, . . . , p_n, f_1, . . . , f_n, w, c).
Following Tenenbaum (1996), we assume that the observed similarities come from independent Gaussian distributions with means ŝ_ij and common variance σ². The variance corresponds to the precision of the data which, for empirical similarity data averaged across information sources (such as individual participants), is easily estimated (Lee 2001), and otherwise must be specified by assumption.
Under these assumptions, the likelihood of a similarity matrix given a particular representation is

$p(S \mid R_\lambda, \theta) = \prod_{i<j} \frac{1}{\sigma\sqrt{2\pi}} \exp\Big( -\frac{1}{2\sigma^2} (s_{ij} - \hat{s}_{ij})^2 \Big) = \Big( \frac{1}{\sigma\sqrt{2\pi}} \Big)^{n(n-1)/2} \exp\Big( -\frac{1}{2\sigma^2} \sum_{i<j} (s_{ij} - \hat{s}_{ij})^2 \Big),$

giving the log-likelihood function

$\ln p(S \mid R_\lambda, \theta) = -\frac{1}{2\sigma^2} \sum_{i<j} (s_{ij} - \hat{s}_{ij})^2 - \frac{n(n-1)}{2} \ln\big(\sigma\sqrt{2\pi}\big).$
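Under these Gaussian assumptions the log-likelihood is straightforward to compute. The sketch below (our own naming, not the authors' code) sums squared errors over the upper triangle of the similarity matrix:

```python
import math

def log_likelihood(S, S_hat, sigma):
    """ln p(S | representation), assuming independent Gaussian errors with
    common standard deviation sigma on the n(n-1)/2 upper-triangle entries."""
    n = len(S)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    sse = sum((S[i][j] - S_hat[i][j]) ** 2 for i, j in pairs)
    return (-sse / (2.0 * sigma ** 2)
            - len(pairs) * math.log(sigma * math.sqrt(2.0 * math.pi)))

S     = [[0.0, 0.9], [0.9, 0.0]]
S_hat = [[0.0, 0.8], [0.8, 0.0]]
print(log_likelihood(S, S_hat, sigma=0.15))
```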
Within this framework, we solve the model fitting problem by finding the maximum likelihood parameter values θ*. Measures of data-fit like maximum likelihood, however, are clearly not appropriate for choosing between representations with different numbers of dimensions and features, because of differences in model complexity. For this reason, we tackle the model selection problem using a Bayesian approach.
3.1 Fitting Algorithm
Our learning algorithm for the combined model relies on the observation (Tenenbaum, 1996) that it is relatively easy to find the maximum likelihood values of the continuous parameters (the coordinate locations, feature weights, and additive constant) given values for the discrete feature assignments.

If θ is partitioned into θ_C = (p_1, . . . , p_n, w, c) and a fixed θ_D = (f_1, . . . , f_n), then we solve the optimization problem

$\arg\max_{\theta_C} \ln p(S \mid R_\lambda, \theta_D, \theta_C) \quad \text{where } w, c \ge 0, \qquad (3)$
using the Levenberg-Marquardt approach (More, 1977). Since distances are preserved under translation for the Minkowskian family of metrics, we assume without
loss of generality that p1 is the origin.
With this optimization capability in place, our learning algorithm may be described
by the following ?ve stage process:
Step 1: Choose a maximum number of dimensions vmax and features mmax . Start
with v = 1 and m = 1, making the lone feature the current feature to be optimized.
Step 2: Find a starting (seed) value for the current feature by considering all possibilities that have exactly one pair of stimuli with the feature, choosing the possibility with the best data-fit using Eq. 3.
Step 3: Consider all possible representations arising from changing the assignment of one stimulus in relation to the current feature. If any of these changes improves the fit of the representation as a whole, update the representation to be the one with the best fit. Repeat this process until no change is found that improves the representation. The current representation at this point is recorded as the best-fitting representation with v dimensions and m features.
Step 4: If there are fewer than mmax features, then add a new feature, make it the
current feature, and return to Step 2.
Step 5: If there are fewer than vmax dimensions, then add a new dimension, reset
the number of features to m = 1, and again make the lone feature the current
feature to be optimized. Return to Step 2.
The output of this algorithm is a grid of v_max × m_max representations, one for each possible combination of number of dimensions and number of features.
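Step 3 is a greedy hill-climb over single-stimulus membership changes. The sketch below (our illustration; the paper refits the continuous parameters at each change, which we replace with an arbitrary score function for brevity) shows the control flow:

```python
def improve_feature(feature, n_stimuli, score):
    """Toggle single stimuli in/out of `feature` (a set of stimulus indices)
    while any one change improves score(feature); return the local optimum."""
    best = score(feature)
    improved = True
    while improved:
        improved = False
        for i in range(n_stimuli):
            candidate = feature ^ {i}      # flip membership of stimulus i
            s = score(candidate)
            if s > best:
                feature, best, improved = candidate, s, True
                break
    return feature, best

# Toy score: negative distance to the "even numbers" feature {0, 2, 4, 6, 8}.
target = {0, 2, 4, 6, 8}
feature, best = improve_feature({0, 1}, 10, lambda f: -len(f ^ target))
print(sorted(feature), best)  # climbs to the target set
```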
3.2 Model Selection
Given representational models with different numbers of dimensions and features, the Bayesian approach is to select the one with the maximum posterior probability

$p(R_\lambda \mid S) = \frac{p(R_\lambda)}{p(S)} \int p(S \mid R_\lambda, \theta)\, p(\theta \mid R_\lambda)\, d\theta.$
Since all models relate to the same similarity data, p(S) is a constant. If we assume that all representations are a priori equally likely, the posterior becomes

$p(R_\lambda \mid S) \propto \sum_{\theta_D} \int p(S \mid R_\lambda, \theta)\, p(\theta \mid R_\lambda)\, d\theta_C. \qquad (4)$
This Bayesian approach embodies an automatic form of Ockham's razor, balancing data-fit against model complexity, because it considers the model at all of its parameterizations. Complicated models that use many parameters (i.e., have high parametric complexity), or parameters that interact in complicated ways (i.e., have high functional form complexity) to achieve good levels of data-fit at their optimal values will typically fit data poorly at other parameter values, and so will have smaller posteriors.
For the combined model, the posterior in Eq. 4 is not well approximated by simple measures such as the Bayesian Information Criterion (BIC: Schwarz, 1978) that have previously been applied to dimensional and featural representations (Lee & Navarro, 2002). This is because the BIC measures only parametric complexity, and treats each additional parameter as having an equal effect on model complexity. Binary feature membership parameters and continuous coordinate location parameters, however, will clearly have different effects on model complexity. In addition, because the BIC does not measure functional form complexity, it is not sensitive to the change in representational model complexity arising from different distance metrics. There are also difficulties approximating the posterior by a multivariate Gaussian with θ* as the mode, as in the Laplacian approximation (see Kass & Raftery, 1995, p. 778), because the featural component of the combined model makes the posterior multimodal.
For these reasons, we employed Monte Carlo methods with importance sampling (e.g., Oh & Berger, 1993), in which the posterior is numerically approximated by

$p(R_\lambda \mid S) \approx \frac{1}{N} \sum_{i=1}^{N} \frac{p(S \mid R_\lambda, \theta_i)\, p(\theta_i \mid R_\lambda)}{g(\theta_i \mid R_\lambda)},$

where each of the N θ_i values is independently sampled from g(·). In the following evaluation, we assumed that p(θ | R_λ) is uniform over θ, and specified an importance distribution g(·) that was Gaussian over θ_C and multinomial over θ_D. As the posterior may be multimodal and non-standard, g(·) was heavy tailed, and we sampled extensively (N = 5 × 10^6) to ensure convergence.
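The estimator above is generic importance sampling. As a minimal self-contained sketch (our own toy target, not the paper's representational posterior), the same form can be checked on a one-dimensional integral whose answer is known:

```python
import math, random

def importance_estimate(log_joint, sample_g, g_pdf, n, rng):
    """Monte Carlo estimate of the integral of exp(log_joint) using
    draws from a proposal distribution g (here chosen heavy-tailed)."""
    total = 0.0
    for _ in range(n):
        theta = sample_g(rng)
        total += math.exp(log_joint(theta)) / g_pdf(theta)
    return total / n

# Toy check: the standard normal density integrates to 1; use a wider
# uniform proposal on [-8, 8] as the importance distribution.
rng = random.Random(0)
log_norm = lambda t: -0.5 * t * t - 0.5 * math.log(2.0 * math.pi)
estimate = importance_estimate(log_norm,
                               sample_g=lambda r: r.uniform(-8.0, 8.0),
                               g_pdf=lambda t: 1.0 / 16.0,
                               n=20000, rng=rng)
print(estimate)  # close to 1.0
```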
[Figure 1 appears about here: panel (a) plots the digits 0-9 as points in a two-dimensional space; panel (b) lists the eight features over the digits together with their weights, ranging from 0.444 down to 0.148, plus an additive constant.]
Figure 1: Representations of the numbers similarity data using the (a) dimensional
and (b) featural approaches.
4 An Illustrative Example
Shepard, Kilpatric and Cunningham (1975) collected data measuring the "abstract conceptual similarity" of the numbers 0 through 9. Figure 1(a) displays a two-dimensional representation of the numbers, using the City-Block metric. This representation explains only 78.6% of the variance, and fails to capture important regularities evident in the raw data, such as the fact that the number 7 is more similar to 8 than it is to 9, or that 3 is much more similar to 0 than it is to 8, and so on.
Figure 1(b) shows an eight-feature representation of the numbers using the same data, as reported by Tenenbaum (1996). This representation explains 90.9% of the variance, with features corresponding to arithmetic concepts (e.g., {2, 4, 8} and {3, 6, 9}) and to numerical magnitude (e.g., {1, 2, 3, 4} and {6, 7, 8, 9}). We note in passing that the representations displayed in Figure 1 are also recovered when our algorithm is restricted to purely dimensional or purely featural representations.
Figure 1 suggests that the numbers data is a candidate for combined representation. Features are appropriate for representing the arithmetic concepts, but a "magnitude" dimension seems to offer a more efficient and meaningful representation of this regularity than the five features used in Figure 1(b).
We fitted combined models with between one and three dimensions and one and eight features to the same similarity data, and calculated the log posterior for each. Because the raw data needed to estimate the precision of these averaged data are unavailable, we followed the arguments presented in Lee (2002) to make a conservative choice of σ = 0.15. The results are shown in Figure 2. All of the representations using one dimension are more likely than those using two or three dimensions. Of the one-dimensional representations, the four-feature version is preferred, although the likelihoods of representations with other numbers of features are close enough to warrant consideration in choosing a "best" representation, particularly given the assumptions made about data precision.
For the sake of concreteness, however, Figure 3 describes the representation with one
dimension and four features, which explains 90.0% of the variance. The one dimension almost orders the numbers according to their magnitude, with the violations
being very small. The four features all capture meaningful arithmetic concepts, corresponding to ?powers of two?, ?multiples of three?, ?multiples of two? (or ?even
[Figure 2 appears about here: log posterior (roughly +10 down to -20) plotted against the number of features, 1-8, for the 1D, 2D, and 3D models.]
Figure 2: Log posteriors for combined representations with between one and three
dimensions, and one and eight features.
[Figure 3 appears about here: on the left, the digits 0-9 arranged along the single dimension; on the right, the four features with their weights: {2, 4, 8} (0.286), {3, 6, 9} (0.282), {2, 4, 6, 8} (0.224), {1, 3, 9} (0.157), and an additive constant of 0.568.]
Figure 3: Representation of the numbers similarity data using one dimension (shown
on the left) and four features (shown on the right).
numbers?) and ?powers of three?. Encouragingly, these features are close to those
in Figure 1(b) that do not deal with numerical magnitude.
5 Conclusion
Future work will examine the use of other featural similarity models besides the
purely common features approach, and will also look to develop learning algorithms that do not rely on maximum likelihood estimation, but instead consider
the posterior probability of a representation. Reliable analytic approximations to
the posterior will be required for this purpose.
Most importantly, however, the combined representation of a wide range of similarity data needs to be examined. Although the numbers data is a promising start,
it is just a first test of the combined approach to similarity-based representation.
Demonstrating the generality and usefulness of the ability to represent stimuli in
terms of both dimensions and features remains a challenge for future research.
Acknowledgments
This research was supported by Australian Research Council Grant DP0211406.
We thank Tom Griffiths and two anonymous reviewers for helpful comments and discussions.
References
[1] Carroll, J. D. (1976). Spatial, non-spatial and hybrid models for scaling. Psychometrika, 41, 439–463.
[2] Cox, T. F. & Cox, M. A. A. (1994). Multidimensional Scaling. London: Chapman and Hall.
[3] Garner, W. R. (1974). The Processing of Information and Structure. Potomac, MD: Erlbaum.
[4] Goldstone, R. L. (1999). Similarity. In R. A. Wilson and F. C. Keil (eds.), MIT Encyclopedia of the Cognitive Sciences, pp. 763–765. Cambridge, MA: MIT Press.
[5] Lee, M. D. (2001). Determining the dimensionality of multidimensional scaling representations for cognitive modeling. Journal of Mathematical Psychology, 45(1), 149–166.
[6] Lee, M. D. (2002). Generating additive clustering models with limited stochastic complexity. Journal of Classification, 19(1), 69–85.
[7] Lee, M. D. & Navarro, D. J. (2002). Extending the ALCOVE model of category learning to featural stimulus domains. Psychonomic Bulletin & Review, 9(1), 43–58.
[8] Kass, R. E. & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795.
[9] More, J. J. (1977). The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson (ed.), Lecture Notes in Mathematics, 630, pp. 105–116. New York: Springer-Verlag.
[10] Navarro, D. J. & Lee, M. D. (2002). Commonalities and distinctions in featural stimulus representations. In W. G. Gray and C. D. Schunn (eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, pp. 685–690. Mahwah, NJ: Lawrence Erlbaum.
[11] Oh, M. & Berger, J. O. (1993). Integration of multimodal functions by Monte Carlo importance sampling. Journal of the American Statistical Association, 88, 450–456.
[12] Ruml, W. (2001). Constructing distributed representations using additive clustering. In T. G. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press.
[13] Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461–464.
[14] Shepard, R. N. (1974). Representation of structure in similarity data: Problems and prospects. Psychometrika, 39(4), 373–422.
[15] Shepard, R. N. & Arabie, P. (1979). Additive clustering representations of similarities as combinations of discrete overlapping properties. Psychological Review, 86(2), 87–123.
[16] Shepard, R. N., Kilpatric, D. W. & Cunningham, J. P. (1975). The internal representation of numbers. Cognitive Psychology, 7, 82–138.
[17] Tenenbaum, J. B. (1996). Learning the structure of similarity. In D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (eds.), Advances in Neural Information Processing Systems, pp. 3–9. Cambridge, MA: MIT Press.
[18] Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.
1,373 | 225 | 574
Nowlan
Maximum Likelihood Competitive Learning
Steven J. Nowlan¹
Department of Computer Science
University of Toronto
Toronto, Canada
M5S 1A4
ABSTRACT
One popular class of unsupervised algorithms are competitive algorithms. In the traditional view of competition, only one competitor,
the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability
generators (such as gaussians) to a set of data-points. The maximum likelihood fit of a model of this type suggests a "softer" form
of competition, in which all competitors adapt in proportion to
the relative probability that the input came from each competitor.
I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and
show that the soft model can give better performance with little
additional computational cost.
1 INTRODUCTION
Interest in unsupervised learning has increased recently due to the application of
more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988;
Sanger, 1989) and the success of several elegant simulations of large scale selforganization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised
algorithms are competitive algorithms, which have appeared as components in a
variety of systems (Von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978).
Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive
system consists of a collection of modules which are structurally identical except,
possibly, for random initial parameter variation. A set of rules is defined which
allow the modules to compete in some way for the right to respond to some subset
¹The author is visiting the University of Toronto while completing a PhD at Carnegie Mellon
University.
of the inputs. Typically a module is a single unit, but this need not be the case.
Often, parameter restrictions are used to prevent "uninteresting" representations in
which the entire set of input patterns are represented by one module.
Most of the work on competitive systems, especially within the neural network literature, has focused on a fairly extreme form of competition in which only the winner
of the competition for a particular case is updated. Variants on this theme are
the schemes in which, in addition to the winner, all of the losers are updated in
some uniform fashion². Within the statistical pattern recognition literature (Duda
and Hart, 1973; McLachlan and Basford, 1988) a rather different form of competition is frequently encountered. In this form, which will be referred to as "soft"
competition, all competitors are updated but the amount of update is proportional
to how well each competitor did in the competition for the current case. Under a
statistical model, this "soft" form of competition performs exact gradient descent
in likelihood, while the more traditional winner-take-all, or "hard" competition, is
an approximation to gradient descent in likelihood.
In this paper I demonstrate the superiority of "soft" competitive learning by comparing "hard" and "soft" algorithms in a classification application. The classification network consists of a layer of Radial Basis Functions (RBF's) followed by a
layer of linear units which attempt to find a least mean square (LMS) fit to the
desired output function (Broomhead and Lowe, 1988; Lee and Kil, 1988; Niranjan
and Fallside, 1988). A network of this type can form a smooth approximation to
an arbitrary function, with the RBF centers serving as control points for fitting
the function (Keeler and Kowalski, 1989; Poggio and Girosi, 1989). A competitive
learning component adjusts the centers of the RBF's in an unsupervised fashion,
before the weights to the output units are adapted. Comparisons of hard and soft
algorithms for placing the RBF's on a hand-drawn digit recognition problem and
a subset of a speaker independant vowel recognition problem suggest that the soft
algorithm is superior. Comparisons are also made with more traditional classifiers
on the same problems.
2 COMPETITIVE PLACEMENT OF RBF'S
Radial Basis Function networks have been shown to be quite effective for some tasks,
however a major limitation is that a very large number of RBF's may be required
in high dimensional spaces. One method for using RBF's places the centers of the
RBF's at the interstices of some coarse lattice defined over the input space (Broomhead and Lowe, 1988). If we assume the lattice is uniform with k divisions along
each dimension, and the dimensionality of the input space is d, a uniform lattice
would require k^d RBF's. This exponential growth makes the use of such a uniform
lattice impractical for any high dimensional space. Another choice is to center the
RBF's on the first n training samples, but this method is subject to sampling error,
²The feature maps of Kohonen (1982) are actually a special case in which a few units are
adapted at once, however the units which are adapted in addition to the winner are selected by a
neighbourhood function rather than by how well they represent the current data.
and a very large number of samples can be required to adequately represent the
distribution of inputs. This is particularly true in high dimensional spaces where it
is extremely difficult to visualize the input distribution and determine whether the
training examples adequately represent this distribution.
Moody and Darken (1988) have suggested a method in which a much smaller number
of RBF's are used, however the centers of these RBF's are allowed to adapt to the
input samples, so they learn to represent only the part of the input space actually
represented by the data. The adaptive strategy also allows the center of each RBF
to be determined by a large number of training samples, greatly reducing sampling
error. In their method, an unsupervised algorithm (a version of k-means) is used
to select the centers of the RBF's and some ad hoc heuristics are suggested for
adjusting the size of the RBF's to get a smooth interpolator. The weights from the
hidden to the output layer are adapted to minimize a Least Mean Square (LMS)
criterion. Moody and Darken were able to attain performance levels equivalent to a
multi-layer Back Propagation network on a chaotic time series prediction task and
a vowel discrimination task. Significant savings in training time were also reported.
The k-means algorithm used by Moody and Darken can be easily reformulated as a
form of competitive adaptation. In the basic k-means algorithm (Duda and Hart,
1973) the training samples are first assigned to the class of the closest mean. The
means are then recomputed as the average of the samples in their class. This two
step process is repeated until the means stop changing. This is simply the "batch"
version of a competitive learning scheme in which the activity of each competing
unit is proportional to the distance between its weight vector and the current input
vector, and the winning unit on each case adapts by adding a portion of the current
input to its weight vector (with appropriate normalization).
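The batch k-means step just described can be sketched in a few lines (an illustrative toy implementation, not code from the paper; the two-dimensional data and initial means below are invented):

```python
# Batch k-means viewed as "hard" competitive learning: each sample is
# assigned to the closest mean (the winner), then each mean is recomputed
# as the average of the samples in its class, until the means stop changing.

def hard_competitive_step(samples, means):
    """One batch update: assign each case to its winner, then recompute means."""
    wins = {j: [] for j in range(len(means))}
    for x in samples:
        # distance-based competition: the closest mean wins the case
        j_win = min(range(len(means)),
                    key=lambda j: sum((xi - mi) ** 2
                                      for xi, mi in zip(x, means[j])))
        wins[j_win].append(x)
    new_means = []
    for j, mu in enumerate(means):
        if wins[j]:  # average of the cases this unit won
            n = len(wins[j])
            new_means.append(tuple(sum(c) / n for c in zip(*wins[j])))
        else:        # a unit that wins no cases keeps its old mean
            new_means.append(tuple(mu))
    return new_means

samples = [(0.0, 0.0), (0.2, 0.0), (4.0, 4.0), (4.2, 4.0)]
means = [(1.0, 1.0), (3.0, 3.0)]
for _ in range(5):  # repeat until the means stop changing
    means = hard_competitive_step(samples, means)
print(means)  # the two means settle on the two clusters
```

Only the winner of each case adapts here; this all-or-nothing assignment is exactly what the statistical formalization below softens.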
We will now consider a statistical formalization of a competitive process for placing
the centers of RBF's. Let each competing unit represent a radially symmetric
(spherical) gaussian probability distribution, with the weight vector of the unit μ_j
representing the center or mean of the gaussian. The probability that the gaussian
associated with unit j generated an input vector x_k is

    P_j(x_k) = (1 / (K σ_j)) exp( -‖x_k - μ_j‖² / (2 σ_j²) )          (1)

where K is a normalization constant, and the covariance matrix is σ_j² I.
A collection of M such units is a model of the input distribution. The parameters
of these M gaussians can be adjusted so that the overall average likelihood of generating the training examples is maximized. The likelihood of generating a set of
observations {x_1, x_2, ..., x_n} from the current model is

    L = ∏_k P(x_k)                                                    (2)

where P(x_k) is the probability of generating observation x_k under the current model.
(For mathematical convenience we usually work with log L.) If gaussian i is selected
with probability π_i and a sample is drawn from the selected gaussian, the probability
of observing x_k is

    P(x_k) = Σ_{i=1}^{N} π_i P_i(x_k)                                 (3)

where P_i(x_k) is the probability of observing x_k under gaussian distribution i. The
summation in (3) is awkward to work with, and frequently one of the P_i(x_k) is much
larger than any of the others. Therefore, a convenient approximation for (3) is

    P(x_k) ≈ max_i π_i P_i(x_k)                                       (4)
This is equivalent to assigning all of the responsibility for an observation to the
gaussian with the highest probability of generating that observation. This approximation is frequently referred to as the "winner-take-all" assumption. It may also be
regarded as a "hard" competitive decision among the gaussians. When we use (3)
directly, all of the gaussians share responsibility for each observation in proportion
to their probability of generating the observation. This sharing of responsibility can
be regarded as a "soft" competitive decision among the gaussians.
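The contrast between the two rules can be made concrete. The sketch below is hypothetical code (it assumes the simple case of equal priors π_i and a shared σ, so the shared constant of eq. (1) cancels) computing the responsibility each gaussian takes for a sample:

```python
import math

def gauss(x, mu, sigma):
    # spherical gaussian of eq. (1), up to the shared normalization
    # constant, which cancels when responsibilities are normalized
    sq = sum((a - b) ** 2 for a, b in zip(x, mu))
    return math.exp(-sq / (2.0 * sigma ** 2))

def responsibilities(x, means, sigma, hard=False):
    p = [gauss(x, mu, sigma) for mu in means]
    if hard:  # winner-take-all approximation, eq. (4)
        r = [0.0] * len(p)
        r[max(range(len(p)), key=p.__getitem__)] = 1.0
        return r
    z = sum(p)  # exact sharing in proportion to P_i(x), eq. (3)
    return [pi / z for pi in p]

means = [(0.0, 0.0), (2.0, 0.0)]
x = (0.9, 0.0)  # nearly midway between the two means
print(responsibilities(x, means, 1.0, hard=True))   # all credit to the winner
print(responsibilities(x, means, 1.0, hard=False))  # credit shared roughly 55/45
```

For a sample near the decision boundary the hard rule discards almost half of the evidence; the soft rule keeps it.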
The maximum likelihood estimate for the mean of each gaussian in our model can
be found by evaluating ∂ log L / ∂μ_j = 0. We will consider a simple model in which
we assume that π_j and σ_j are the same for all of the gaussians, and compare the
hard and soft estimates for μ_j.
With the hard approximation, substituting (4) in (2), the maximum likelihood
estimate of μ_j has the simple form

    μ̂_j = ( Σ_{k ∈ C_j} x_k ) / N_j                                   (5)
where C_j is the set of cases closest to gaussian j, and N_j is the size of this set. This
is identical to the expression for μ_j in the k-means algorithm.
Rather than using the approximation in (4) we can find the exact maximum likelihood estimates for μ_j by substituting (3) in (2). The estimate for the mean is
now

    μ̂_j = ( Σ_k P(j|x_k) x_k ) / ( Σ_k P(j|x_k) )                     (6)

where P(j|x_k) is the probability, given that we have observed x_k, of gaussian j
having generated x_k. For the simple model used here

    P(j|x_k) = P_j(x_k) / Σ_i P_i(x_k)
Comparing (6) and (5), the hard competitive model uses the average of the cases
unit j is closest to in recomputing its mean, while the soft competitive model uses
the average of all the cases weighted by P(j|x_k).
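A toy batch re-estimation makes the difference between (5) and (6) visible (illustrative code, not the paper's; the one-dimensional data below are invented):

```python
import math

def p_given_x(x, means, sigma):
    # P(j|x_k) for equal priors and a shared sigma
    w = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, mu)) / (2 * sigma ** 2))
         for mu in means]
    z = sum(w)
    return [wi / z for wi in w]

def update_means(samples, means, sigma, soft=True):
    d = len(means[0])
    num = [[0.0] * d for _ in means]
    den = [0.0] * len(means)
    for x in samples:
        r = p_given_x(x, means, sigma)
        if not soft:  # eq. (5): all responsibility to the winner
            jwin = max(range(len(r)), key=r.__getitem__)
            r = [1.0 if j == jwin else 0.0 for j in range(len(r))]
        for j, rj in enumerate(r):  # eq. (6): responsibility-weighted average
            den[j] += rj
            for c in range(d):
                num[j][c] += rj * x[c]
    return [tuple(num[j][c] / den[j] for c in range(d))
            for j in range(len(means))]

samples = [(0.0,), (1.0,), (3.0,), (4.0,)]
means = [(0.5,), (3.5,)]
print(update_means(samples, means, 1.0, soft=False))  # hard: cluster averages
print(update_means(samples, means, 1.0, soft=True))   # soft: pulled by far cases
```

Under the hard rule each mean is the plain average of its own cluster; under the soft rule every case contributes a little to every mean.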
We can use either the approximate or exact likelihood algorithm to position the
RBF's in an interpolation network. If x_k is the current input, each RBF unit
computes P_j(x_k) as its output activation a_j. For the hard competitive model, a
winner-take-all operation then sets a_j = 1 for the most active unit and a_i = 0
for all other units. Only the winning unit will update its mean vector, and for
this update we use the iterative version of (5). In the soft competitive model we
normalize each a_j by dividing it by the sum of a_j over all RBF's. In this case the
mean vectors of all of the hidden units are updated according to the iterative version
of (6). The computational cost difference between the winner-take-all operation in
the hard model and the normalization in the soft model is negligible; however, if the
algorithms are implemented sequentially, the soft model requires more computation
because all of the means, rather than just the mean of the winner, are updated for
each case.
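An online step of this kind might look as follows (a sketch under the simple equal-π, equal-σ model; the learning rate eps is an invented stand-in for the running-average bookkeeping behind the iterative versions of (5) and (6)):

```python
import math

def activations(x, means, sigma, hard=False):
    a = [math.exp(-sum((u - v) ** 2 for u, v in zip(x, mu)) / (2 * sigma ** 2))
         for mu in means]
    if hard:  # winner-take-all: a_j = 1 for the most active unit, else 0
        win = max(range(len(a)), key=a.__getitem__)
        return [1.0 if j == win else 0.0 for j in range(len(a))]
    z = sum(a)  # soft: normalize each a_j by the sum over all RBF's
    return [ai / z for ai in a]

def online_step(x, means, sigma, eps=0.1, hard=False):
    a = activations(x, means, sigma, hard)
    # every unit moves toward x in proportion to its (normalized) activation;
    # under hard competition only the winner moves
    return [tuple(m + eps * aj * (xc - m) for m, xc in zip(mu, x))
            for mu, aj in zip(means, a)]

means = [(0.0,), (2.0,)]
means_h = online_step((0.5,), means, sigma=1.0, hard=True)
means_s = online_step((0.5,), means, sigma=1.0, hard=False)
print(means_h)  # only the winner has moved
print(means_s)  # both units have moved, the winner most
```

The extra cost of the soft step is one normalization and a weight update per unit, matching the observation that the difference is negligible unless the updates are done sequentially.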
The two models described in this section are easily extended to allow each spherical gaussian to have a different variance σ_j². The activation of each RBF unit is
now a function of (x_k - μ_j)/σ_j, but the expressions for the maximum likelihood
estimates of μ_j are the same. Expressions for updating σ_j can be found by solving ∂ log L / ∂σ_j = 0. Some simulations have also been performed with a network
in which each RBF had a diagonal covariance matrix, and each of the d variance
components was estimated separately (Nowlan, 1990).
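For reference, solving ∂ log L / ∂σ_j = 0 in the spherical case gives the responsibility-weighted update sketched below. This closed form is the standard EM-style result and is stated here as an assumption; the text above only cites the condition, not the resulting formula.

```python
import math

def sigma_update(samples, means, sigmas):
    """sigma_j^2 = (sum_k P(j|x_k) ||x_k - mu_j||^2) / (d * sum_k P(j|x_k))."""
    d = len(samples[0])
    num = [0.0] * len(means)
    den = [0.0] * len(means)
    for x in samples:
        # unnormalized P_j(x) for spherical gaussians with unequal sigmas
        w = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, mu)) / (2 * s ** 2))
             / (s ** d)
             for mu, s in zip(means, sigmas)]
        z = sum(w)
        for j, wj in enumerate(w):
            r = wj / z  # responsibility P(j|x_k)
            sq = sum((a - b) ** 2 for a, b in zip(x, means[j]))
            num[j] += r * sq
            den[j] += r
    return [math.sqrt(num[j] / (d * den[j])) for j in range(len(means))]

# a single unit centred at 0 seeing points at +/-2 recovers sigma = 2
print(sigma_update([(-2.0,), (2.0,)], [(0.0,)], [1.0]))
```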
3 APPLICATION TO TWO CLASSIFICATION TASKS
The architecture described above was used for a digit classification and a vowel
discrimination task. The networks were trained by first using the soft or hard
competitive algorithm to determine the means and variances of the RBF's, and,
once these were learned, then training the output layer of weights. The weights
from the RBF's to the output layer were trained using a recursive least squares
algorithm, allowing an exact LMS solution to be found with one pass through the
training set. (A target of +1 was used for the correct output category and -1
for all of the other categories.) For the hard competitive model the unnormalized
probabilities P_j(x_k) were used as the RBF unit outputs, while the soft competitive
model used the normalized probabilities P(j|x_k).
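The whole hybrid pipeline can be sketched end to end (hypothetical code: the tiny data set is invented, the centers are taken as already placed by the competitive stage, and a plain normal-equations solve stands in for the recursive least squares pass):

```python
import math

def rbf_features(x, centers, sigma):
    # unnormalized P_j(x) outputs of the RBF layer
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centers]

def lms_weights(X, t):
    """Exact LMS fit: solve (X^T X) w = X^T t by naive elimination (tiny sizes)."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * t[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):  # forward elimination, no pivoting
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [ajc - f * aic for ajc, aic in zip(A[j], A[i])]
            b[j] -= f * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# centers assumed already placed by the unsupervised competitive stage
centers = [(0.0,), (4.0,)]
train = [(0.0,), (0.5,), (3.5,), (4.0,)]
targets = [+1, +1, -1, -1]  # +1 for the correct category, -1 otherwise
X = [rbf_features(x, centers, 1.0) for x in train]
w = lms_weights(X, targets)
score = sum(wi * fi for wi, fi in zip(w, rbf_features((0.3,), centers, 1.0)))
print(score > 0)  # a point near the first cluster scores positive
```

Because the output layer is linear in the RBF features, the LMS solution is exact and found in a single pass over the training set, as in the text.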
The first task required the classification of a set of hand drawn digits from 12
subjects. There were 480 input patterns, divided into 320 training patterns and
160 testing patterns, with examples from all subjects in both groups. Each pattern
was digitized on a 16 by 16 grid. These 256 dimensional binary vectors were used
as input to the classification network, and there were 10 output units.
Networks with 40 and 150 spherical gaussians were simulated. Both hard and soft
algorithms were used with all configurations. The performance of these networks
on the testing set is summarized in Table 1. This table also contains performance
results for a multi-layer back propagation network, a two layer linear network, and
a nearest neighbour classifier on the same task. The nearest neighbour classifier
used all 320 labeled training samples and based its decision on the class of the
Type of Classifier            % Correct on Test Set
40 Sph. Gauss. - Hard         87.6%
40 Sph. Gauss. - Soft         91.8%
150 Sph. Gauss. - Hard        90.1%
150 Sph. Gauss. - Soft        94.0%
Layered BP Net                94.5%
Linear Net                    60.0%
Nearest Neighbour             83.1%

Table 1: Summary of Performance for Digit Classification
nearest neighbour only³. The relatively poor performance of the nearest neighbour
classifier is one indication of the difficulty of this task. The two layer linear network
was trained with a recursive least squares algorithm⁴. The back propagation network was developed specifically for this task (le Cun, 1987), and used a specialized
architecture with three layers of hidden units, localized receptive fields, and weight
sharing to reduce the number of free parameters in the system.
Table 1 reveals that the networks were trained using the soft competitive algorithm
to determine means and variances of the RBF's were superior in performance to
identical networks trained with the hard competitive algorithm. The RBF network
using 150 spherical gaussians was able to equal the performance level of the sophisticated back propagation network, and a network with 40 spherical RBF's performed
considerably better than the nearest neighbour classifier.
The second task was a speaker independent vowel recognition task. The data consisted of a digitized version of the first and second formant frequencies of 10 vowels
for multiple male and female speakers (Peterson and Barney, 1952). Moody and
Darken (1988) have previously applied to this data an architecture which is very
similar to the one suggested here, and Huang and Lippmann (1988) have compared
the performance of a number of different classifiers on this same data. More recently, Bridle (1989) has applied a supervised algorithm which uses a "softmax"
output function to this data. This softmax function is very similar to the equation for P(j|x_k) used in the soft competitive model. The results from these studies
are included in Table 2 along with the results for RBF networks using both hard
and soft competition to determine the RBF parameters. All of the classifiers were
trained on a set of 338 examples and tested on a separate set of 333 examples.
As with the digit classification task, the RBF networks trained using the soft adaptive procedure show uniformly better performance than equivalent networks trained
using the hard adaptive procedure.

³Two, three, and five nearest neighbour classifiers were also tried, but they all performed worse
than nearest neighbour.
⁴This network was included to show that the linear layer is not doing all of the work in the
hybrid RBF networks.

Type of Classifier                        % Correct on Test Set
20 Sph. Gauss. - Hard                     75.1%
20 Sph. Gauss. - Soft                     82.6%
100 Sph. Gauss. - Hard                    82.6%
100 Sph. Gauss. - Soft                    87.1%
20 RBF's (Moody et al)                    73.3%
100 RBF's (Moody et al)                   82.0%
K Nearest Neighbours (Lippmann et al)     82.0%
Gaussian Classifier (Lippmann et al)      79.7%
2 Layer BP Net (Lippmann et al)           80.2%
Feature Map (Lippmann et al)              77.2%
2 Layer Softmax (Bridle)                  78.0%

Table 2: Summary of Performance for Vowel Classification

The results obtained for the hard adaptive procedure with 20 and 100 spherical gaussians are very close to Moody and Darken's
results, which is expected since the procedures are identical except for the manner
in which the variances are obtained. Table 2 also reveals that the RBF network
with 100 spherical gaussians, trained with the soft adaptive procedure, performed
better than any of the other classifiers that have been applied to this data.
4 DISCUSSION
The simulations reported in the previous section provide strong evidence that the
exact maximum likelihood (or soft) approach to determining the centers and sizes of
RBF's leads to better classification performance than the winner-take-all approximation. In both tasks, for a variety of numbers of RBF's, the exact maximum
likelihood approach outperformed the approximate method. Comparing (5) and
(6) reveals that this improved performance can be obtained with little additional
computational burden.
The performance of the RBF networks on these two classification tasks also shows
that hybrid approaches which combine unsupervised and supervised procedures are
capable of competent levels of performance on difficult problems. In the digit classification task the hybrid RBF network was able to equal the performance level of
a sophisticated multi-layer supervised network, while in the vowel recognition task
the hybrid network obtained the best performance level of any of the classification
networks. One reason why the hybrid model is interesting is that since the hidden unit representation is independent of the classification task, it may be used
for many different tasks without interference between the tasks. (This is actually
demonstrated in the simulations described, since each category in the two tasks can
be regarded as a separate classification problem.) Even if we are only interested in
using the network for one task, there are still advantages to the hybrid approach.
In many domains, such as speech, unlabeled samples can be obtained much more
cheaply than labeled samples. To avoid over-fitting, the amount of training data
must generally be considerably greater than the number of free parameters in the
model. In the hybrid models, especially in high dimensional input spaces, most of
the parameters are in the unsupervised part of the model⁵. The unsupervised stage
may be trained with a large body of unlabeled samples, and a much smaller body
of labeled samples can be used to train the output layer.
The performance on the digit classification task also shows that RBF networks can
deal effectively with tasks with high (256) dimensional input spaces and highly
non-gaussian input distributions. The competitive network was able to succeed on
this task with a relatively small number of RBF's because the data was actually
distributed over a much lower dimensional subspace of the input space. The soft
competitive network automatically concentrates its representation on this subspace,
and in this fashion performs a type of implicit dimensionality reduction. Moody
(1989) has also mentioned this type of dimensionality reduction as a factor in the
success of some of the models he has worked with.
The success of the soft adaptive strategy in these interpolation networks encourages
one to extend the soft interpretation in other directions. The feature maps of
Kohonen (1982) incorporate a hard competitive process, and a soft version of the
feature map algorithm could be developed. In addition, there is a class of decisiondirected, or "bootstrap" , learning algorithms which use their own outputs to provide
a training signal. These algorithms can be regarded as hard competitive processes,
and new algorithms which use the soft assumption may be developed from the
bootstrap procedure (Nowlan and Hinton, 1989). Bridle (1989) has suggested a
different type of output unit for supervised networks, which incorporates the idea
of a "soft max" type of competition. Finally, the maximum likelihood approach is
easily extended to non-gaussian models, and one model of particular interest would
be the Boltzmann machine.
Acknowledgements
I would like to thank Richard Lippmann of Lincoln Laboratories and John Moody of Yale University for making the vowel formant data available to me. I would also like to thank Geoff Hinton,
and the members of the Connectionist Research Group of the University of Toronto, for many
helpful comments and suggestions while conducting this research and preparing this paper.
References
Bridle, J. (1989). Probabilistic interpretation of feedforward classification network outputs, with
relationships to statistical pattern recognition. In Fogelman-Soulie, F. and Herault, J.,
editors, Neuro-computing: algorithms, architectures and applications. Springer-Verlag.
Broomhead, D. and Lowe, D. (1988). Multivariable functional interpolation and adaptive networks.
Complex Systems, 2:321-355.
Duda, R. and Hart, P. (1973). Pattern Classification and Scene Analysis. Wiley and Sons.
Fukushima, K. (1975). Cognitron: A self-organizing multilayered neural network. Biological
Cybernetics, 20:121-136.
⁵In the digit task, there are over 25 times as many parameters in the unsupervised part of the
network as there are in the supervised part.
Grossberg, S. (1978). A theory of visual coding, memory, and development. In Formal theories of
visual perception. John Wiley and Sons, New York.
Huang, W. and Lippmann, R. (1988). Neural net and traditional classifiers. In Anderson, D.,
editor, Neural Information Processing Systems. American Institute of Physics.
Keeler, E. H. J. and Kowalski, J. (1989). Layered neural networks with gaussian hidden units as
universal approximators. MCC Technical Report ACT-ST-272-89, MCC.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological
Cybernetics, 43:59-69.
le Cun, Y. (1987). Modèles Connexionnistes de l'Apprentissage. PhD thesis, Université Pierre et
Marie Curie, Paris, France.
Lee, S. and Kil, R. (1988). Multilayer feedforward potential function networks. In Proceedings
IEEE Second International Conference on Neural Networks, page I:161, San Diego, California.
Linsker, R. (1986). From basic network principles to neural architecture: Emergence of spatial
opponent cells. Proceedings of the National Academy of Sciences USA, 83:7508-7512.
Linsker, R. (1988). Self-organization in a perceptual network. IEEE Computer Society, pages
105-117.
McLachlan, G. and Basford, K. (1988). Mixture Models: Inference and Applications to Clustering.
Marcel Dekker, New York.
Moody, J. (1989). Fast learning in multi-resolution hierarchies. Technical Report
YALEU/DCS/RR-681, Yale University.
Moody, J. and Darken, C. (1988). Learning with localized receptive fields. In Touretzky, D.,
Hinton, G., and Sejnowski, T., editors, Proceedings of the 1988 Connectionist Models Summer
School, pages 133-143. Morgan Kaufmann.
Niranjan, M. and Fallside, F. (1988). Neural networks and radial basis functions in classifying static
speech patterns. Technical Report CUED/F-INFENG/TR22, Engineering Dept., Cambridge
University. To appear in Computers Speech and Language.
Nowlan, S. (1990). Maximum likelihood competition in RBF networks. Technical Report CRG-TR-90-2, University of Toronto Connectionist Research Group.
Nowlan, S. and Hinton, G. (1989). Maximum likelihood decision-directed adaptive equalization.
Technical Report CRG-TR-89-8, University of Toronto Connectionist Research Group.
Peterson, G. and Barney, H. (1952). Control methods used in a study of vowels. The Journal of
the Acoustical Society of America, 24:175-184.
Plumbley, M. and Fallside, F. (1988). An information theoretic approach to unsupervised connectionist models. In Touretzky, D., Hinton, G., and Sejnowski, T., editors, Proceedings of the 1988
Connectionist Models Summer School, pages 239-245. Morgan Kaufmann.
Poggio, G. and Girosi, F. (1989). A theory of networks for approximation and learning. A.I. Memo
1140, MIT.
Rumelhart, D. E. and Zipser, D. (1986). Feature discovery by competitive learning. In Parallel
distributed processing: Explorations in the microstructure of cognition, volume I. Bradford
Books, Cambridge, MA.
Sanger, T. (1989). An optimality principle for unsupervised learning. In Touretzky, D., editor,
Advances in Neural Information Processing Systems 1, pages 11-19. Morgan Kaufmann.
Von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in striate cortex.
Kybernetik, 14:85-100.
Mean-Field Approach to a Probabilistic Model
in Information Retrieval
Bin Wu, K. Y. Michael Wong
Department of Physics
Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
[email protected] [email protected]
David Bodoff
Department of ISMT
Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
[email protected]
Abstract
We study an explicit parametric model of documents, queries, and relevancy assessment for Information Retrieval (IR). Mean-field methods
are applied to analyze the model and derive efficient practical algorithms
to estimate the parameters in the problem. The hyperparameters are estimated by a fast approximate leave-one-out cross-validation procedure
based on the cavity method. The algorithm is further evaluated on several
benchmark databases by comparing with standard algorithms in IR.
1 Introduction
The area of information retrieval (IR) studies the representation, organization and access of
information in an information repository. With the advent and boom of the Internet, especially the World Wide Web (WWW), more and more information is available to be shared
online. Search on the Internet becomes increasingly popular. In this respect, probabilistic
models have become very useful in empowering information searches [1, 2].
In fact, information searches themselves contain rich information, which can be recorded
and fruitfully used to improve the performance of subsequent retrievals. This is an extension of the process of relevance feedback [3], which incorporates the relevance assessments
supplied by the user to construct new representations for queries, during the procedure of
the user's interactive document retrieval. In the process, the feedback information helps
to refine the queries continuously, but the effects pertain only to the particular retrieval
session. On the other hand, our objective is to refine the representations of documents
and queries with the help of relevancy data, so that subsequent retrieval sessions can be
benefited.
Based on Fuhr and Buckley's meta-structure [4] relating documents, queries and relevancy
assessments, one of us recently proposed a probabilistic model [5] in which these objects
are described by explicit parametric distribution functions, facilitating the construction of
a likelihood function, whose maximum can be used to characterize the documents and
queries. Rather than relying on heuristics as in many previous work, the proposed model
provides a unified formal framework for the following two tasks: (a) ad hoc information
retrieval, in which a query is given and the goal is to return a list of ranked documents
according to their similarities with the query; (b) document routing, in which a document
is given and the goal is to categorize it using a list of ranked queries according to their similarities with the document. (Here we assume a model in which categories are represented
by queries.)
In this paper, we report our recent progress in putting this new theoretical approach to
empirical tests. Since documents and queries are represented by high-dimensional vectors in a vector space model, a mean-field approach will be adopted. Mean-field methods
were commonly used to study magnetic systems in statistical physics, but thanks to their
ability to deal with high dimensional systems, they are increasingly applied to many areas of information processing recently [6]. In the present context, a mean-field treatment
implies that when a particular component of a document or query vector is analyzed, all
other components of the same and other vectors can be considered as background fields
satisfying appropriate average properties, and correlations of statistical fluctuations with
the background vectors can be neglected.
After introducing the parametric model in Section 2, the mean-field approach will be used
in two steps. First, in Section 3, the true representations of documents and queries will be
estimated by maximizing the total probability of observation. This results in a set of mean-field equations, which can be solved by a fast iterative algorithm. The estimated true documents and queries will then be used for ad hoc information retrieval and
document routing.
Secondly, the model depends on a few hyperparameters which are conventionally determined by the cross-validation method. Here, as described in Section 4, the mean-field approach can be used again to accelerate the otherwise tedious leave-one-out cross-validation
procedure. For a given set of hyperparameter values, it enables us to carry out the systemwide iteration only once (rather than repeating once for each left-out document or
query), and the leave-one-out estimations of the document and query representations can
be obtained by a version of mean-field theory called the cavity method [7].
In Section 5, we compare the model with the standard tf-idf [8] and latent semantic indexing (LSI) [9] on benchmark test collections. As we shall see, the validity of our model is well supported by its superior performance. The paper is concluded in Section 6.
2 A Unified Probabilistic Model
Our work is motivated by Fuhr and Buckley's conceptual model. Assume that a set of documents and queries is available to us. In the vector space model, each document and query is represented by a vector of fixed dimension. The vectors are denoted by D (Q), which are referred to as the true meaning of the document (query). Our model consists of the following 3 components:
(a) The document D0 we really observe is distributed around the true document vector D according to the probability distribution f_D(D0|D), the difference resulting from the document containing terms that do not ideally represent the meaning of the document. In other words, the document D0 is generated from its true meaning D.

(b) Similarly, the query Q0 that the user actually submits is also distributed around the true query vector Q according to the probability distribution f_Q(Q0|Q).

(c) There is some relation between the document and query, called the relevancy assessment. We denote this relation with a binary variable B for each pair of document and query. If B = 1, we say the document is relevant to the query, that is, the document is what the user wants. Otherwise, B = 0 and the document is irrelevant to the query. Suppose we have some relevancy relations between documents and queries (through historical records, from experts, etc.). Then we hypothesize that the true documents and queries are distributed according to the distribution f_B(D,Q|B), that is, the true representations of documents and queries should satisfy their relevancy relations.
We summarize the idea through a probabilistic meta-structure shown in Figure 1.
Figure 1: Probabilistic meta-structure. The true query Q generates the observed query Q0 through f_Q(Q0|Q), the true document D generates the observed document D0 through f_D(D0|D), and the relevancy assessment B couples D and Q through f_B(D,Q|B); D0, Q0 and B are data, while D and Q are unknown parameters.
In order to complete the model, we need to hypothesize the form of the distribution functions. In this paper, we restrict the documents and queries to a hypersphere, since usually only the cosines of the angles between documents and queries are used to determine the similarity between documents and queries. Hence, we assume the following distribution functions:

(a) The distribution of each observed document $D^0_d$ given its true location $D_d$:

$$f_D(D^0|D) = \frac{1}{Z_D}\exp\Big(\beta_D \sum_d D^0_d \cdot D_d\Big) \qquad (1)$$

(b) The distribution of each observed query $Q^0_q$ given its true location $Q_q$:

$$f_Q(Q^0|Q) = \frac{1}{Z_Q}\exp\Big(\beta_Q \sum_q Q^0_q \cdot Q_q\Big) \qquad (2)$$

(c) The prior distribution of the documents and queries, given the relevance relation between them:

$$f_B(D,Q|B) = \frac{1}{Z_B}\exp\Big(\beta_B \sum_{dq} B_{dq}\, D_d \cdot Q_q\Big) \prod_d \delta(|D_d|^2 - 1) \prod_q \delta(|Q_q|^2 - 1) \qquad (3)$$

where $\delta(\cdot)$ is the Dirac $\delta$-function, and $Z_D$, $Z_Q$ and $Z_B$ are normalization constants of $f_D$, $f_Q$ and $f_B$ respectively, and are hence independent of $D$ and $Q$.
If we further assume that the observations of documents and queries are independent of each other, we can obtain the total probability of observing all documents and queries, given the relevancy relation between them:

$$P(\{D^0\},\{Q^0\}|B) = \frac{\tilde Z}{Z_D Z_Q Z_B}, \qquad (4)$$

where

$$\tilde Z = \int \prod_d dD_d \prod_q dQ_q \; \prod_d \delta(|D_d|^2-1) \prod_q \delta(|Q_q|^2-1)\; e^{-E(D,Q)}, \qquad (5)$$

$$E(D,Q) = -\beta_D \sum_d D^0_d \cdot D_d \;-\; \beta_Q \sum_q Q^0_q \cdot Q_q \;-\; \beta_B \sum_{dq} B_{dq}\, D_d \cdot Q_q, \qquad (6)$$

and $\beta$ denotes all hyperparameters $\{\beta_D, \beta_Q, \beta_B\}$. There is now an appealing correspondence between the present model and spin models in statistical physics: it is observed that $\tilde Z$ is just the familiar partition function and $E$ is the energy function.
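To make the spin-model correspondence concrete, the energy of a candidate configuration of true vectors can be evaluated directly. The form below is a sketch reconstructed from the surrounding text (dot-product couplings weighted by the three beta hyperparameters), not the paper's verbatim equation; all names are ours.

```python
def energy(d, q, d0, q0, b, beta_d, beta_q, beta_b):
    """Energy of a configuration of true document vectors d and query vectors q.

    Lower energy means the true vectors stay close to their observed
    counterparts (d0, q0) and relevant document-query pairs (b[i][j] == 1)
    are well aligned. The exact functional form is an assumption.
    """
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    e = -beta_d * sum(dot(di, di0) for di, di0 in zip(d, d0))
    e -= beta_q * sum(dot(qj, qj0) for qj, qj0 in zip(q, q0))
    e -= beta_b * sum(b[i][j] * dot(d[i], q[j])
                      for i in range(len(d)) for j in range(len(q)))
    return e
```

Maximizing the total probability then amounts to minimizing this energy subject to the unit-norm constraints.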
By maximizing the probability in Eq. (4), we can obtain an estimation of the true documents D, which can be used in ad hoc retrieval: we define the similarity function between two vectors as the cosine of the angle between them, and rank the similarities between D (instead of D0) and a new query to determine whether the documents should be retrieved or not. As a byproduct, we can also obtain the estimation of the true queries Q, which in turn can be used in document routing: new documents are compared with Q to determine whether they belong to the corresponding category or not. So our model gives a unifying procedure for both ad hoc retrieval and routing.
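The retrieval step itself reduces to cosine ranking against the estimated true vectors. A minimal sketch (function and variable names are ours, not the paper's):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors (the model's similarity)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_documents(true_docs, query):
    """Return document indices sorted by decreasing cosine similarity.

    `true_docs` stands for the estimated true document vectors D, which are
    ranked against a new query instead of the observed vectors D0.
    """
    scored = [(cosine(d, query), i) for i, d in enumerate(true_docs)]
    scored.sort(reverse=True)
    return [i for _, i in scored]
```

Document routing is the mirror image: a new document is ranked against the estimated true queries Q.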
3 Parameter Estimation
In this section, we derive a fast iterative algorithm for parameter estimation. First, we replace the $\delta$-functions by their Fourier transforms. Then $\tilde Z$ can be written as

$$\tilde Z = \int \prod_d dD_d\, d\lambda_d \prod_q dQ_q\, d\mu_q \; \exp\Big[-E(D,Q) - \sum_d \lambda_d(|D_d|^2-1) - \sum_q \mu_q(|Q_q|^2-1)\Big]. \qquad (7)$$

In writing this formula, we have changed the integration to the imaginary axis. Mean-field theory works in the limit of large numbers of documents, queries and dimensions, when the integration can be well approximated by taking the saddle point of the exponent. This is obtained by equating the partial derivatives of the exponent with respect to $D_d$, $Q_q$, $\lambda_d$ and $\mu_q$ to zero, yielding

$$2\lambda_d D_d = \beta_D D^0_d + \beta_B \sum_q B_{dq} Q_q, \qquad (8)$$
$$2\mu_q Q_q = \beta_Q Q^0_q + \beta_B \sum_d B_{dq} D_d, \qquad (9)$$
$$|D_d|^2 = 1, \qquad (10)$$
$$|Q_q|^2 = 1. \qquad (11)$$

This set of equations is referred to as the mean-field equations, since fluctuations around the mean values of the parameters have been neglected. Due to its simple form, it can be solved by an iterative scheme. Though we have not studied the theoretical convergence of the iterative scheme, its effectiveness can be seen from the following arguments. If we replace $\lambda_d$ in Eq. (8) and $\mu_q$ in Eq. (9) by their respective values at the saddle point, then the iteration process becomes a linear one. Now, Eqs. (8) and (9) differ from this linear iteration problem only by scale factors of $2\lambda_d$ and $2\mu_q$ respectively. Hence, after using Eqs. (10) and (11), the problem is equivalent to rescaling the lengths of the iterated vectors back to the hypersphere defined by $|D_d| = 1$ and $|Q_q| = 1$. This alternate operation of linear iteration and rescaling back to the hypersphere makes it a very stable algorithm. The complexity of the algorithm is linear in the number of documents and queries. Empirically, it converges in just a few tens of steps. Alternatively, one may use the Augmented Lagrangian method to find the saddle point, whose convergence is guaranteed, but is computationally more complex [10].
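The alternating "linear update, then rescale to the unit sphere" iteration can be sketched as follows. The update coefficients mirror the reconstructed mean-field equations and are assumptions; convergence is handled here by a fixed iteration count rather than a tolerance check.

```python
import math

def normalize(v):
    """Rescale a vector back to the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def mean_field_fit(d0, q0, b, beta_d, beta_q, beta_b, iters=50):
    """Iterate the mean-field equations for true documents and queries.

    d0, q0: observed document/query vectors; b[i][j] in {0, 1} is the
    relevancy assessment between document i and query j.
    """
    d = [normalize(v) for v in d0]
    q = [normalize(v) for v in q0]
    dim = len(d0[0])
    for _ in range(iters):
        # Linear update for documents: observation plus relevant queries.
        new_d = []
        for i, di0 in enumerate(d0):
            v = [beta_d * di0[k]
                 + beta_b * sum(b[i][j] * q[j][k] for j in range(len(q)))
                 for k in range(dim)]
            new_d.append(normalize(v))
        # Linear update for queries: observation plus relevant documents.
        new_q = []
        for j, qj0 in enumerate(q0):
            v = [beta_q * qj0[k]
                 + beta_b * sum(b[i][j] * new_d[i][k] for i in range(len(new_d)))
                 for k in range(dim)]
            new_q.append(normalize(v))
        d, q = new_d, new_q
    return d, q
```

Each sweep costs time linear in the number of nonzero relevancy entries, matching the stated linear complexity.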
4 Hyperparameter Estimation
In our model, the parameters $\beta_D$, $\beta_Q$ and $\beta_B$ determine the shape of the distributions $f_D$, $f_Q$ and $f_B$, and influence the parameter estimation described in Section 3. We refer to them as hyperparameters. They have to be chosen so that the model performs optimally when new queries are raised to retrieve documents, or when new documents are routed.
A standard method for hyperparameter estimation in machine learning is leave-one-out cross-validation [11]. Suppose we have N examples for training the model. Each time we pick one example as the validation set and train the model with the remaining N-1 examples. The hyperparameters are chosen as the ones that give the optimal performance averaged over the test examples.
The exact leave-one-out cross-validation is very tedious, especially for multiple hyperparameters, because of the need to train the model N times for each combination of hyperparameters. For this model, we propose an approximate leave-one-out procedure based on the cavity method [7]. Suppose we have trained the model with all data, and obtain the estimation $\{D_d, Q_q\}$, which satisfies the steady state equation

$$2\lambda_d D_d = \beta_D D^0_d + \beta_B \sum_q B_{dq} Q_q. \qquad (12)$$

If the query $Q^0_q$ were left out from the training set of queries, the cavity estimation $D_d^{\backslash q}$ should satisfy the equation

$$2\lambda_d^{\backslash q} D_d^{\backslash q} = \beta_D D^0_d + \beta_B \sum_{q' \neq q} B_{dq'} Q_{q'}^{\backslash q}. \qquad (13)$$

By subtracting (13) from (12), and assuming that $\lambda_d^{\backslash q}$ is approximately the same as $\lambda_d$, we can get the difference,

$$2\lambda_d \big(D_d - D_d^{\backslash q}\big) \approx \beta_B B_{dq} Q_q + \beta_B \sum_{q' \neq q} B_{dq'}\big(Q_{q'} - Q_{q'}^{\backslash q}\big). \qquad (14)$$

For ad hoc retrieval, we eliminate the query differences to obtain a set of linear equations for $D_d - D_d^{\backslash q}$. The solution can be further simplified by using the mean-field argument that the changes induced by removing the query $Q^0_q$ on different documents can be decoupled. Hence we can neglect the off-diagonal terms, yielding

$$D_d^{\backslash q} \approx D_d - \frac{\beta_B B_{dq} Q_q}{2\lambda_d}. \qquad (15)$$

Note that $\lambda_d$ and $Q_q$ have already been determined in the systemwide training, so $D_d^{\backslash q}$ can be estimated in one step. The similarities between $D_d^{\backslash q}$ and $Q^0_q$ are then used to predict the leave-one-out ad hoc retrieval performance of the model. Equations for document routing can be derived analogously.
Note that we need to train the model only once, and the leave-one-out estimation of documents and queries can be obtained in one step. So the algorithm is extremely fast. Amazingly, it also gives reasonable estimations of hyperparameters, as shown in the following
experiments.
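The one-shot character of the cavity estimate can be illustrated as follows: instead of retraining once per left-out query, each leave-one-out document estimate is obtained by subtracting that query's decoupled contribution. The `stiffness` array stands in for the saddle-point multipliers; all names and the exact correction formula are illustrative reconstructions, not the paper's notation.

```python
def loo_estimates(d, q, b, beta_b, stiffness):
    """One-shot leave-one-out document estimates via the cavity idea.

    d, q: fitted true document/query vectors from the single systemwide
    training run; b[i][j] is the relevancy assessment; stiffness[i] plays
    the role of the local saddle-point multiplier of document i.
    Returns, for each left-out query j, the approximate documents that
    would have been learned without query j.
    """
    out = {}
    for j in range(len(q)):
        docs = []
        for i in range(len(d)):
            # Remove query j's (diagonal) contribution to document i.
            correction = [beta_b * b[i][j] * q[j][k] / stiffness[i]
                          for k in range(len(d[i]))]
            docs.append([d[i][k] - correction[k] for k in range(len(d[i]))])
        out[j] = docs
    return out
```

The whole leave-one-out sweep costs a single pass over the relevancy matrix, against one full retraining per left-out item for the exact procedure.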
We remark that the mean-field technique can be applied to distributions of documents,
queries and relevance feedbacks other than those described by Eqs. (1-3). In the present
case specified by Eqs. (1-3), our model is similar to the Gaussian model, if the spherical constraints on the D's and Q's are replaced by a spherical Gaussian prior. Though leave-one-out cross-validation can be done exactly in the Gaussian model, it involves the inversion of
a large matrix. On the other hand, the mean-field estimation greatly simplifies the process
by neglecting the off-diagonal elements.
5 Experimental Results
We have applied the proposed method to ad hoc retrieval and routing for the test collections
of Cranfield and CISI. Because we treat both tasks identically, we use the same evaluation
criterion: the recall precision curve and the average retrieval precision. We have run two
versions of our algorithm: (a) in the original dimension, where the observed documents D0 and queries Q0 are represented by the original tf-idf weights; (b) in the reduced dimension of 100, in which the original vectors are reduced by singular value decomposition (SVD) as in LSI.
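For reference, the tf-idf weighting used for the observed vectors can be sketched as below. This is a plain term-frequency times log-inverse-document-frequency scheme with whitespace tokenization; the paper's exact weighting variant is not spelled out here.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build tf-idf vectors over a shared, sorted vocabulary.

    Returns (vectors, vocab), where vectors[i][k] is the weight of
    vocab[k] in document i: term count times log(N / document frequency).
    """
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({t for toks in tokenized for t in toks})
    n = len(tokenized)
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = [[toks.count(t) * idf[t] for t in vocab] for toks in tokenized]
    return vectors, vocab
```

A term appearing in every document gets idf zero, so only discriminative terms carry weight; the reduced-dimension variant would then project these vectors onto the top 100 singular directions.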
In Figs. 2 (a-b), we show the recall precision curves at the optimal hyperparameters. The
mean-field estimates are compared with the baseline results of LSI. It is clear that our
method gives significant gains in retrieval precision. Comparisons using the original dimension or the Cranfield collection, not shown here due to space limitations, yield equally
satisfactory results.
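The average retrieval precision reported in the comparisons can be computed as below. This is one common convention (mean of the precision values at the ranks of the relevant items); the paper does not state its exact averaging convention, so treat this as a sketch.

```python
def average_precision(ranked, relevant):
    """Average precision of a ranked list against a set of relevant items."""
    hits = 0
    total = 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            total += hits / rank  # precision at this recall point
    return total / len(relevant) if relevant else 0.0
```

Sweeping the ranked list and recording (recall, precision) pairs at each relevant hit yields the recall-precision curves of Figure 2.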
Figure 2: The recall precision curves of the mean-field estimation (MF) and the baseline
(LSI) for (a) ad hoc retrieval (b) document routing for CISI in reduced dimension
For hyperparameter estimation, we can compare the mean-field results and those for exact
leave-one-out cross-validation in reduced dimension, since the computation of the exact
ones is still feasible. In Fig. 3, we have plotted the average precision versus the two hyperparameters, as computed by the two methods. They have very similar contours, although
there is a uniform displacement between their values. This demonstrates the usefulness of
the mean-field approximation in hyperparameter estimation.
In Table 1, we obtain the values of the optimal hyperparameters from the mean-field leave-
one-out method, and the average precisions of the exact leave-one-out are then computed
using these optimal hyperparameters. These are compared with the results of the exact
leave-one-out and listed in Table 1. For the hyperparameter estimation in the original
dimension, the exact leave-one-out is not available since it is too tedious. Instead, we
compare the hyperparameters with the ones from k-fold cross-validation. Whether we compare the mean-field with the exact leave-one-out or k-fold cross-validation, the optimal hyperparameters are comparable in most cases, and when there are discrepancies, one can
observe that the average precisions are essentially the same.
Figure 3: Average retrieval precision versus the hyperparameters for ad hoc retrieval in reduced dimension for CISI: (a) mean-field leave-one-out, peaked at (0.3, 12.0); (b) exact leave-one-out, peaked at (0.3, 10.1).
Table 1: The average retrieval precision for leave-one-out cross-validation in reduced dimension: mean-field versus exact.
                          CISI                            Cranfield
                  opt. hyperparams  Avg. prec.    opt. hyperparams  Avg. prec.
Ad hoc retrieval
  LSI                 -      -        0.079            -      -       0.178
  Mean-Field         0.3   12.0       0.142           0.4    1.1      0.248
  Exact              0.3   10.1       0.142           0.6    1.5      0.250
Document routing
  LSI                 -      -        0.104            -      -       0.240
  Mean-Field        28.9    1.6       0.192           2.5    1.1      0.351
  Exact             23.0    2.5       0.193           0.9    0.7      0.356
6 Conclusion
We have considered a probabilistic model of documents, queries and relevancy assessments. Fast algorithms are derived for parameter and hyperparameter estimations. Significant improvement is achieved for both ad hoc retrieval and routing compared with tf-idf
and LSI. In another paper [12], we have compared the model with other heuristic methods such as Rocchio heuristics [3] and Bartell's Multidimensional Scaling [13], and the
mean-field method still outperforms them. These successes illustrate the potentials of the
mean-field approach, which is especially suitable for systems with high dimensions and
numerous mutually interacting components, such as those in IR. Hence we anticipate that
mean-field methods will have increasing applications in many other probabilistic models
in IR.
Acknowledgments
We thank R. Jin for interesting discussions. This work was supported by the grant
HKUST6157/99P of the Research Grant Council of Hong Kong.
References
[1] Cohn, D. and T. Hofmann (2001). The Missing Link ? A Probabilistic Model of Document Content and Hypertext Connectivity. Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich and V. Tresp, eds., MIT Press, Cambridge,
MA, 430-436.
[2] Jaakola, T. and H. Siegelmann (2002). Active Information Retrieval. Advances in
Neural Information Processing Systems 14, T. G. Dietterich, S. Becker and Z. Ghahramani, eds., MIT Press, Cambridge, MA, 777-784.
[3] Rocchio, J. J. (1971). Relevance Feedback in Information Retrieval. SMART Retrieval
System: Experiments in Automatic Document Processing, G. Salton ed., Prentice-Hall, Englewood Cliffs, NJ, Chapter 14.
[4] Fuhr, N. and C. Buckley (1991). A Probabilistic Learning Approach for Document
Indexing. ACM Transactions on Information Systems 9(3): 223-248.
[5] Bodoff, D., D. Enabe, A. Kanbil, G. Simon and A. Yukhimets (2001). A Unified
Maximumn Likelihood Approach to Document Retrieval. Journal of the American
Society for Information Science and Technology 52(10): 785-796.
[6] Opper, M. and D. Saad, eds. (2001). Advanced Mean Field Methods, MIT Press,
Cambridge, MA.
[7] Wong, K. Y. M. and F. Li (2002). Fast Parameter Estimation Using Green's Functions.
Advances in Neural Information Processing System 14: 535-542, T.G. Dietterich, S.
Becker and Z. Ghahramani, eds., MIT Press, Cambridge, MA.
[8] Salton, G. and M. J. McGill (1983). Introduction to Modern Information Retrieval,
McGraw-Hill, New York, 63-66.
[9] Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer and R. Harshman (1990).
Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(16): 391-407.
[10] Nocedal, J. and S. J. Wright (1999). Numerical Optimization, Springer, Berlin, Ch.
17.
[11] Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Clarendon Press,
Oxford, 372-375.
[12] Bodoff, D., B. Wu and K. Y. M. Wong (2002). Relevance Feedback meets Maximum
Likelihood, preprint.
[13] Bartell, B. T., G. W. Cottrell and R. K. Belew (1992). Latent Semantic Indexing Is
an Optimal Special Case of Multidimensional Scaling. Proceedings of the 15th International ACM SIGIR Conference on Research and Development in Information
Retrieval, 161-167.
Replay, Repair and Consolidation
Szabolcs Káli
Institute of Experimental Medicine
Hungarian Academy of Sciences
Budapest 1450, Hungary
[email protected]
Peter Dayan
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, U.K.
[email protected]
Abstract
A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through
replay. Various recent experimental challenges to the idea of transfer,
particularly for human memory, are forcing its re-evaluation. However,
although there is independent neurophysiological evidence for replay,
short of transfer, there are few theoretical ideas for what it might be
doing. We suggest and demonstrate two important computational roles
associated with neocortical indices.
1 Introduction
Particularly since the analysis of subject HM,1 the suggestion that human memories would
consolidate,2 has gripped experimental and theoretical communities. The idea is that storage of some sorts of knowledge (notably declarative information) involves a two-stage
process, with memories moving from an initial, temporary, home (usually taken to be the
hippocampus), which offers fast acting, but short-lived, plasticity, into a final, permanent
resting place (usually the neocortex), whose learning and forgetting are much slower.
Various sources of evidence have been adduced in favor of this proposition. First, it has
been suggested that for patients (or animal subjects) who have suffered insults to the hippocampus, recent memories are more compromised than older ones, suggesting that they
have yet to be consolidated to cortex.3, 4 Second, the same patients suffer from anterograde
amnesia (that is, they cannot lay down new memories), even though many neocortical areas
are palpably functioning, and procedural storage (including aversive conditioning and skill
learning) works (more) normally.5 Third, starting with the seminal work of Marr,6 who
(possibly by a mis-calculation7) suggested that the hippocampus was just large enough a
dynamic RAM as to store one day?s events, a variety of theoretical treatments has suggested
the possible characteristics and advantages of two-stage procedures. 8?10 This is widely regarded as reaching its apogee in the work of McClelland et al,11 who performed a careful
computational analysis of fast and slow learning in connectionist networks. Fourth, and
perhaps most compelling, an obvious substrate for replay to cortex is provided by the neurophysiologically observed12?14 reactivation during slow wave and REM sleep of patterns
of (rat) hippocampal neuronal firing observed during times when the subject is awake and
behaving, together with evidence of at least some coordination between hippocampal and
neocortical states during this reactivation.15
The first and third of these evidentiary foundations are currently under active debate, especially for episodic memories (i.e., autobiographical memories for happenings). Solid evidence that hippocampal damage really spares memories for distant events compared with
those for recent ones is extremely sparse, and the relevance of infra-human studies is put
into question by the orders-of-magnitude differences in the memory time-scales shown between humans and animals.16 The modeling studies are also more ambiguous than they
might seem, since their most convincing focus is on the tribulations of catastrophic interference.17 That is, slow learning is necessary in systems with rich distributed or population
coding because changes in synaptic efficacies occasioned by incorporating new information can easily overwrite the neural substrate for the storage of old information (the hoary
stability-plasticity dilemma18 ). This catastrophic interference can be avoided by re-storing
old patterns (or something equivalent10, 19) at the same time as storing new information.
Thus, according to these schemes, patterns are stored wholesale in the hippocampus when
they first appear, and are continually read back to cortex to cause plasticity along with the
new information. However, if the hippocampus is permanently required to prevent a catastrophe, then, first, there is no true consolidation: if neocortical plasticity is not inhibited
by hippocampal damage,20 then its integrity is permanently required to prevent degradation; and, second, what is the point of consolidation: couldn't the hippocampus suffice by
itself? This is particularly compelling in the case of episodes, since they are intrinsically
isolated events. We came to a realization of this through development of our own model for
consolidation,21 whose behavior convinced us of a flaw in our thinking. This second point
lies exactly at the heart of the perspective espoused by Nadel and Moscovitch, 16 amongst
others. They regard the hippocampus as the final point of storage for all episodic memory,
and permanently required for its recall. Of course, this idea equally well accounts for the
second strand of evidence above about anterograde amnesia.
If the hippocampus stores patterns permanently, what could the point be of replay? Here,
we consider two roles, both associated with concerns about the pattern matching process at
the heart of retrieval from the hippocampus. One is a new take on catastrophic interference,
arguing that replay is necessary to keep the patterns stored in the hippocampus in register
with the evolving cortical representation, so that they can still be recalled (and interpreted)
correctly even though the cortical code may have changed since they were stored. The other
computational role for replay is a new take on indexing, arguing that the cortical patterns
that should lead to retrieval of a hippocampal memory are not only close syntactic relatives
of the pattern that was originally stored, ie patterns whose actual neural code is similar,
but also patterns that are close semantic relatives, ie patterns that are closely related via the
network of semantic relationships that is stored in neocortex. In this scheme, the role of
replay is building an index to the memory, effectively a form of recognition model. 22
We first discuss briefly our existing model of consolidation,21 and its failings. Section 3
treats the repair of hippocampal indexing in the light of the vicissitudes of semantic change.
Section 4 sketches our account of the semantic elaboration of the index.
2 Semantic and Episodic Memory
Figure 1 shows our existing account of the interaction between the neocortex and the hippocampus in semantic and episodic memory.21 The neocortex is separated into 'lower' areas (A, B and C, with activities x_A, x_B and x_C) which are connected via bi-directional, variable weights (W_A, W_B and W_C) with an entorhinal/parahippocampal (EP) area (with activity y), and collectively act as a restricted Boltzmann machine (RBM), trained in an unsupervised manner, using contrastive divergence.23 It learns a model of the statistical relationships amongst the inputs, so that it can produce samples from conditional probability distributions such as P(x_C | x_A, x_B). The conventional interpretation for this is as a model of semantic memory: the generic facts of
the world, stripped of information about the time and place and other circumstances under
which they were learnt. However, the individual patterns on which the semantic learning
is based are treated as episodic patterns, which should be recalled wholesale. One main
contribution of that work was to put episodic and semantic information into such particular
correspondence.
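The contrastive-divergence training alluded to here can be sketched in a few lines. This is a minimal, hypothetical illustration of one CD-1 step for a binary RBM with a single weight matrix; the paper's network additionally restricts visible-visible structure to within-area blocks and uses its own sizes and parameters, none of which are reproduced here.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    W is a visible-by-hidden weight matrix (list of lists) and v0 a
    binary visible vector.  This is a sketch only: biases and the
    within-area connectivity restriction of the model are omitted.
    """
    nv, nh = len(W), len(W[0])
    # up: sample the hidden units from the data
    ph0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    h0 = [1.0 if random.random() < p else 0.0 for p in ph0]
    # down, then up again: one Gibbs step gives the "reconstruction"
    pv1 = [sigmoid(sum(h0[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    v1 = [1.0 if random.random() < p else 0.0 for p in pv1]
    ph1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # CD-1 weight change: data correlations minus reconstruction correlations
    return [[W[i][j] + lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
             for j in range(nh)] for i in range(nv)]
```

Repeating `cd1_step` over many sampled input patterns drives the weights toward a model of the input statistics, which is the sense in which the network acquires "semantic" knowledge.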
[Figure 1 graphic: panel A shows the model architecture (input areas A, B, C with activities xA, xB, xC, the E/P area with activity y, weights WA, WC, and the hippocampus HC); panels B and C plot percent recalled against time in thousands of presentations, with 'one-shot' and 'consolidated' curves in C.]
Figure 1: (A) Model architecture. All units in neocortical areas A, B, and C are connected to all
units in area E/P through bidirectional, symmetric weights, but connections between units in the
input layer are restricted to the same cortical area. Each neocortical area contains 100 binary units.
The hippocampus (HC) is not directly implemented, but it can influence and store the patterns in
EP. All communication between the HC and the input areas is via area EP. (B) The consolidation of
episodic memories. Recall performance on specific (episodic) patterns as a function of time between
the initial presentation of the episodic pattern and testing (or, equivalently, time between training and
lesion in hippocampally lesioned subjects) in the simulations. (C) Extinction of an episode due to semantic training,
in the isolated neocortical network trained to asymptotic performance on the episodic pattern (thin
line), and directly after the removal of the hippocampus from the full network, for a pattern which
has been hippocampally 'consolidated' for 250,000 presentations (thick line).
In this previous model, the hippocampus acts as a fast-learning repository for the EP representation of patterns that have been (relatively recently) experienced, and plays two roles:
aiding recall and training the neocortex. The hippocampus improves recall by performing
pattern completion on the EP representations induced by partial or noisy inputs x, thus finding the nearest matching stored pattern y. In turn, this, through neocortical semantic knowledge,
engenders recall of an appropriate x. The hippocampus trains the neocortex in an off-line
(sleep) mode, reporting the patterns that it has stored to the neocortex to give the latter's
incremental plasticity the opportunity to absorb the new information. Given hippocampal damage, patterns that have been repeatedly replayed to cortex by the hippocampus (ie
older patterns) have a greater chance of being recalled correctly through neocortical inference than patterns that were learned more recently, and are therefore still dependent for
their recall on the integrity of the hippocampus.
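The hippocampal pattern-matching step just described amounts to nearest-neighbour lookup with a rejection threshold. A toy sketch follows; the function names are illustrative, and the Hamming-distance threshold of 20 is taken from the recall-test procedure in the Figure 2 caption:

```python
def hamming(p, q):
    """Number of positions at which two binary patterns differ."""
    return sum(a != b for a, b in zip(p, q))

def pattern_complete(cue, stored, threshold=20):
    """Hippocampal pattern completion as assumed in the model: return
    the stored EP pattern nearest to the cue, or None when even the
    best match is farther than `threshold` (recall then fails)."""
    best = min(stored, key=lambda s: hamming(cue, s), default=None)
    if best is None or hamming(cue, best) > threshold:
        return None
    return best
```

Completion to the nearest stored trace is exactly what makes the scheme vulnerable to neocortical drift: once the current EP code of an episode has moved more than `threshold` away from the stored version, the cue no longer activates its trace.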
Figure 1B shows the basic consolidation phenomenon in this model. The upper (thin) curve
shows how well on average the full model can recall whole items from a partial cue as a
function of time since the item was stored; the lower (thick) curve shows the same in the
case that the hippocampal contribution is eliminated immediately before testing. This is
the standard inverted U-shaped curve of graded retrograde amnesia, with distant memories spared compared with recent ones. However, figure 1C reveals what is really going
on. Both curves show how the neocortical network forgets particular episodic patterns as
a function of continued semantic training. Thick/thin lines are with/without prior consolidation using the hippocampus. Consolidation clearly does not help the longevity of the
memory; if anything, it actually impedes it. This is essentially because the cortical code
changes slowly over presentations. Thus, first, the hippocampus is mandatorily required if
memories are to be preserved: the forgetting curve for the normals in figure 1B is actually
dominated by hippocampal forgetting. Second, the inverted U-shaped curve in figure 1B
arises because testing happens immediately after hippocampal removal. The same curves
plotted for successive times after removal would show catastrophic memory failure.
Memories might turn out to be stabilized in the face of hippocampal damage in other
ways.21 For instance, cortical plasticity might be suppressed, if the hippocampus reports
unfamiliarity as a plasticizing signal. This is somewhat unlikely, since various forms of
continued plasticity remain active.3, 20 Alternatively, there might be synaptic stabilizing
mechanisms in the cortex such that synapses come never to change. This is certainly possible, but does not explain how recall can survive changes in the cortical code.
In sum, the model turns out to illustrate the key problem with standard theory of memory
transfer for episodes. We are thus forced to start from the possibility that the hippocampus
might indeed be a permanent repository, and reconsider the issue of replay and consolidation in the resulting light. In this new scheme, there is still a critical role for replay,
but one that is focused on the indexing relationship between neocortical and hippocampal
representations rather than on writing into cortex the contents of the hippocampus.
3 Maintaining Access to Episodes
Consider the fate of an episode that is stored in the hippocampus. In a hierarchical network
where the hippocampus is directly connected only to the topmost areas, successful recall of
such an episode depends on the correspondence between low- and high-level cortical areas
embodied by the neocortical network. This dependence actually has two related components. First, the high-level neocortical representation of the recall cue needs to be effective
in activating the correct hippocampal memory trace; second, the high-level representation
activated by hippocampal recall should effect the recall of the appropriate components of
the corresponding episode in lower level areas as well. These are both aspects of indexing.
The neocortical network is the substrate of neocortical learning, reflecting, for instance,
refinement of the existing semantic representation, changes in input statistics, or acquisition of a new semantic domain. Such plasticity may disrupt the recall of stored episodic
patterns by changing the correspondence between the input areas and EP. Thus, if the brain
is still to be able to recall hippocampally stored episodes, it either needs to maintain the
correspondence between the low-level and EP representations of the episodes by restricting
neocortical learning (achieved in the previous model by having the hippocampus replay its
old episodic patterns along with the new semantic patterns governing continued neocortical
plasticity), or it needs to update the connections between the hippocampus and EP such that
the hippocampally stored pattern continues to match the EP representation of the input pattern corresponding to the episode. The first of these possibilities may restrict the learning
abilities of the neocortical network. However, replay can be used to allow the connections
into and out of the hippocampus to track the changing neocortical representational code.
In order to assess the effect of neocortical learning on the recall of previously stored
episodes, either in the presence or absence of replay, the following paradigm was employed. We started training the neocortical network by presenting to the input areas random combinations of valid patterns (20 independently generated random binary patterns
for each area). After a moderate amount of such general training (10,000 pattern presentations total), the EP representations of particular input patterns were associated with
corresponding stored hippocampal traces, forming a set of stored episodes. The quality of
recall for these episodes was then monitored while general training continued. Figure 2A
shows as a function of the length of general semantic training the percentage of correct recall for the episodes stored after 10,000 presentations. The main plot is an average over all
episodes; the smaller plots show some individual episodes. Clearly, neocortical learning
comes to erase the route to recall, even though the episode remains perfectly stored in the
hippocampus throughout.
[Figure 2 graphic: four panels plotted against time in thousands of presentations; panels A, C, and D show percent recalled, and panel B shows distance from the stored pattern.]
Figure 2: How semantic training affects episodic recall for patterns stored after the first 10,000
presentations (A) without replay and (D) with the correspondence between hippocampal and neocortical representations updated during off-line replay. The larger graphs are averages over all stored
episodes, while the smaller graphs are for individual episodes. Recall was assessed by presenting partial episodic patterns (the original activations replaced by random patterns in one of the input areas),
performing hippocampal pattern completion in EP if the distance from a stored EP representation
was less than 20, and then performing 20 full iterations of Gibbs sampling in the neocortical network
with the cue areas clamped. A resulting distance of less than 5 from the target pattern was considered
a match. (B) and (C) analyze the reasons why episodic recall breaks down in (A). (B) shows how the
EP representation of stored episodes drifts away from the original stored patterns. (C) shows how
well recall works if it starts from the stored EP representation of the episode.
Figure 2B,C indicate the reasons for this behavior. Figure 2B shows that semantic learning
after the storage of the episode causes the EP representation of the episode to move away
from the version with which the stored hippocampal trace is associated. The magnitude
of this change is such that, eventually, even the full original episode may fail to activate
the corresponding hippocampal memory trace. The effect of representational change on
hippocampally directed recall in the input areas is milder in our case, as seen in Figure 2C;
provided that the correct hippocampal trace does get activated, the full episode can be
successfully recalled most of the time. However, this component accounts for the relatively
slower initial rise of episodic recall in Figure 2A (compare with Figure 2D), as well as
some of the variability between patterns in Figure 2A (data not shown).
In the 'replay' condition, the general training was interleaved with epochs of hippocampally initiated replay, assumed to take place during sleep. Within these epochs, the memory
traces stored in the hippocampus get activated at random, which leads to the reactivation of
the associated EP pattern, which in turn reactivates the input areas according to the existing
semantic mapping. The resulting pattern may be different from the one that initially gave
rise to the stored episode, due to subsequent changes in the neocortical connections. However, assuming that the neocortical semantic representation has not changed fundamentally
since the last time that particular episode was replayed (or when it was established), the
input representation resulting from replay should be close to the current low level representation of that particular episode. Indeed, maintaining this representational proximity
exactly sets the requirement for the frequency of replay of the episodes.
As in our previous model, we assume that the local connections within each neocortical
area implement a local attractor structure, which, in the absence of feedforward activation,
restricts activity patterns within that area to those that correspond to valid input patterns.
These local attractors turn feedback activation which is close to a valid pattern (namely, the
original episode) into an exact version of that pattern. Such an off-line reconstruction of
the low-level representation of stored episodes may then support a wide variety of memory
processes (including the previous model?s focus on gradually incorporating the information
carried by that episode into the neocortical knowledge base 11, 21 ). Here we focus on its
use for maintenance of the episodic index. To this end, starting from the reconstructed
episode, the semantic correspondence between the different levels is employed in the feedforward direction in order to determine the up-to-date EP representation of the episode.
This EP pattern is then associated with the stored hippocampal episode which initiated the
replay, so that the hippocampal and input level representations of the episode are again in
register. Figure 2B demonstrates the efficacy of replay: the hippocampally stored episode
now remains tied to the (shifting) EP representation of the episode, and episodic replay
stays at high levels despite substantial changes in the neocortical network.
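The maintenance procedure described in this section can be summarised as a loop over stored traces. All four arguments below are hypothetical callables standing in for the model's components (top-down and bottom-up passes through the current semantic network, and the local attractor dynamics); they are named for exposition only:

```python
def replay_epoch(hippocampus, ep_to_input, input_to_ep, attractor_cleanup):
    """One off-line replay epoch for index maintenance (Section 3).

    hippocampus       -- dict mapping trace id -> stored EP pattern
    ep_to_input       -- top-down pass through the semantic network
    input_to_ep       -- bottom-up pass through the same network
    attractor_cleanup -- local attractors snapping each input area to
                         the nearest valid pattern
    """
    for trace_id, old_ep in hippocampus.items():
        raw_input = ep_to_input(old_ep)          # reactivate the input areas
        episode = attractor_cleanup(raw_input)   # recover the exact episode
        new_ep = input_to_ep(episode)            # up-to-date EP code for it
        hippocampus[trace_id] = new_ep           # re-associate the trace
    return hippocampus
```

Running such an epoch often enough that the semantic mapping never drifts too far between replays is precisely the frequency requirement stated in the text.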
4 Index Extension
Another important potential role for replay is extending the semantic aspects of the indexing scheme. It should be possible to retrieve episodic memories on the basis of all
input patterns to which they are closely related through the network of cortical semantic
knowledge. At present, this can happen only if the cortex produces similar EP codes for
all those input patterns that are semantically related. However, requiring that all semantic
proximity be coded by syntactic proximity in essentially one single layer is far too stringent a requirement. Rather, we should expect that the bulk of semantic information lives
in synapses that are invisible to this layer, ie connections within and between lower layers,
and this information must also influence indexing.
One way to extend semantic indexing involves on-line sampling. That is, probabilistic
updating in the cortical semantic network starting from a given input pattern is the canonical
way of exploring the semantic neighborhood of an input. One can imagine doing this in an
on-line manner, spurred by an input. As sampling proceeds, the cortical pattern and its EP code
change together, providing the opportunity for a match to be made between the EP activity
and the contents of episodic memory. These sampling dynamics would allow the recall of
semantically relevant episodes, even if their explicit code is rather distant.
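The on-line sampling idea can be phrased as a simple search loop. Here `gibbs_step` and `match` are hypothetical callables standing in for the cortical sampler and the hippocampal matching process, respectively:

```python
def sample_neighborhood(x0, gibbs_step, match, n_steps=2000):
    """Explore the semantic neighbourhood of input x0 (Section 4).

    Run (unclamped) sampling from x0 and stop as soon as the EP code
    of some visited state matches a stored episodic trace; return that
    trace, or None if no match is found within n_steps.
    """
    x = x0
    for _ in range(n_steps):
        x = gibbs_step(x)      # one step of cortical Gibbs sampling
        hit = match(x)         # hippocampal pattern matching on EP code
        if hit is not None:
            return hit
    return None
```

The off-line variant discussed next runs the same dynamics, but starting from the replayed episodes rather than from a presented input.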
The role for replay in this process is to allow the semantic index to be extended through
off-line rather than on-line sampling starting from the episodic patterns stored in the hippocampus. It is thus analogous to Sutton's24 use of replay in his DYNA architecture, in
which an internal model of a Markov decision process is used to erase inconsistencies
in a learned value function, and also to the wake-sleep algorithm's22 use of sleep sampling to learn a recognition model. For the latter, off-line sampling ensures that inputs can
be mapped using a feedforward network, into codes associated with a generative model,
rather than relying on sluggish statistical or dynamical methods for inverting the generative
model, such as Gibbs sampling or its mean-field approximations. The main requirement
is for a further plastic layer between EP and CA3 (presumably the perforant path) so that
when replay based on an episode leads to a semantically, but not syntactically, related pattern, then the EP code for that pattern can induce hippocampal recall of the episode.
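The further plastic layer can be trained with the delta rule, as the simulations below do. A minimal sketch follows; the linear form of the mapping and the learning rate are illustrative assumptions, not the paper's exact implementation:

```python
def delta_rule_update(W, ep_pattern, target_trace, lr=0.05):
    """One delta-rule step for the plastic layer between EP and the
    hippocampus (hypothesised in the text to be the perforant path).

    Nudges the linear mapping W so that the EP code of a semantically
    related pattern comes to activate the replayed episode's trace.
    """
    # current output of the mapping for this EP pattern
    out = [sum(w * x for w, x in zip(row, ep_pattern)) for row in W]
    # move each weight in proportion to the error times the input
    return [[w + lr * (t - o) * x for w, x in zip(row, ep_pattern)]
            for row, o, t in zip(W, out, target_trace)]
```

Applying this update to the EP codes produced during off-line sampling is what builds the broadened index evaluated in Figure 3D.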
Figure 3 illustrates this use of replay in a highly simplified case (subject to the limitations of the RBM). Here, there are 3 modules of units, each with 4 possible patterns (64 input patterns in all), and a semantic structure relating neighbouring pattern values (with wrap-around, so that, eg, value 4 neighbours value 1), independent of the choices in the other modules. Figure 3A shows the covariance matrix of the activities of the
EP units for the possible input patterns (arranged lexicographically). The relatedness of the EP
representations of related patterns is clear in the rich structure of this matrix; this shows the
extent of the explicit code learnt by the RBM. However, this code does not make indexing
perfect. Imagine that patterns 121 and 344 have been stored as
episodic patterns. That is, their EP representations are stored in the hippocampus and are
available for recall and replay. We may expect to retrieve 121 from its semantic relatives,
such as 111. Figure 3B shows the explicit proximity (inverse square distance, see
caption) of the EP representations of the input patterns to the EP representation of 121.
Although 111 is close, so are many other patterns that are not nearly so closely semantically
related; indeed, some semantically unrelated patterns are closer still.
[Figure 3 graphic: panel A, the covariance matrix of EP activities; panel B, linear proximities of input patterns (111, 121, 131, 141, 221, ...) to the stored pattern; panel C, histograms of retrieved patterns after 100, 500, and 2000 samples of Gibbs sampling, with left and right columns for the two stored episodes and a 'failures' entry; panel D, log-scaled proximities.]
Figure 3: Index expansion. Plots relate to the 3-module network. Conventions: triples such as 111, 121, etc. denote the
possible input patterns or their EP representations.
In (C), the 'failures' entry shows patterns that are not within a given Hamming distance of any input
pattern. For this simulation, for reasons of simulation time, the input patterns were chosen to be
orthogonal; the hidden unit representations were nevertheless highly non-orthogonal; iterations of
Gibbs sampling were used during RBM learning. The weights associated with the network are not
over-trained. A) The covariance matrix of the EP representation of the possible patterns. The
banding shows the semantic structure (see text), but, as seen in (B), only weakly. B) The proximities
(inverse square distances) of the EP representations for all the patterns to that for 121 (the entry
for 121 itself is blank; see boxed marker). The numbers refer to the patterns as in the convention described. Despite
the covariance structure in (A), the syntactic representation of semantic closeness is weak; some close
semantic relatives are not syntactically close. Thus, episodic recall would be imperfect. Ratio of max-min
proximity is 4. C) Three stages of (unclamped) Gibbs sampling (after 100, 500, and 2000 samples) starting
from the hippocampally replayed EP representations of 121 (left column) and 344 (right column).
Here, we determine to which (if any; hence the 'failures' entry) of the possible input patterns
the sampled activities of the visible units are closest, and plot histograms of the resulting frequencies.
After only a few iterations, 121 and 344 still dominate; after more, their semantically close patterns
come to dominate. D) Logarithmically scaled proximities following
delta-rule learning for the mapping from EP representations of the patterns in (C) to 121 and 344,
respectively. Now, the remapped EP representations of semantically relevant inputs are vastly closer
to their associated episodic memories. Ratios of max-min proximities are 14000 (121) and 7000 (344).
Figure 3C shows the course of replay. The two columns show histograms of the patterns
retrieved in the visible layer after successive rounds of Gibbs sampling starting from the
hippocampal representations of 121 (left) and 344 (right). The network has
learnt much about the semantic relationships, although it is far from perfect (over-training
seems to make it worse, for reasons we do not understand), and equally likely patterns are
not generated exactly equally often.21 The 'failures' columns of these histograms show how many
sampled visible patterns are not close to one of the valid inputs; this happens only rarely.
During replay, the EP representation of these semantically-related patterns is then available
so that a model mapping EP to an appropriate input to the hippocampal pattern matching
process can be learnt. Figure 3D shows how this affects the proximities for a model trained
using the delta rule. Again, left and right columns are for 121 and 344; now the semantic
associates of these patterns are mapped into inputs to the hippocampal pattern matching
process that are far nearer (note the logarithmic scale) to the stored representations of 121
and 344, and so the episodes can be appropriately retrieved from their semantic cousins.
5 Discussion
The important, but narrow, issue of whether episodic memories can ever be recalled without the hippocampus has polarized theoretical ratiocination about memory replay, a phenomenon for which there is increasing neurophysiological evidence. This polarization has
hindered the field from studying the wider computational context of replay. In this paper,
we have considered two particular aspects of the consolidation of the indexing relationship
between semantic memory (in the neocortex) and episodic memory (in the hippocampus).
We showed how replay could be used to maintain the index in the face of on-going neocortical plasticity, and to broaden it in the light of neocortical semantic knowledge that is not
directly accessible through the explicit code in the upper layers of cortex. Unlike memory
consolidation, neither of these involves neocortical plasticity during replay. There may yet
be many other computations that can be accomplished through replay.
Broadening the index poses an interesting, only incompletely answered, theoretical question about the metrics of memory. The semantic model can be seen as a sort of manifold
in the space of all inputs; the episodes as particular points on the manifold; and retrieval
as finding the closest episodes to a presented cue, according to a distance function that
involves mapping the cue to the manifold, and mapping between points on the manifold.
Despite some theoretical suggestions,25 it is not clear how the semantic model specifies
these distances. Our pragmatic solution was to replay the episodes and rely on the transience of the Markov chain induced by Gibbs sampling to produce semantic cousins with
which it should be related. It would be desirable to consider more systematic approaches.
Our model involves interaction between a hippocampal store for episodes and a neocortical
store for semantics. However, the computational issues about indexing apply with the
same force if the episodes are actually stored separately elsewhere, such as in more frontal
structures (McClelland, personal communication). There are equal opportunities for these
areas to induce replay, and thus improve the index. What now seems unlikely, despite our
best earlier efforts, is that the problems of indexing can be circumvented by storing the
episodes wholly within the semantic network. By itself, this solves nothing.
Acknowledgements
We are very grateful to Jay McClelland for helpful discussions. Funding was from the
Hungarian Academy of Sciences and the Gatsby Charitable Foundation.
References
[1] W. Scoville and B. Milner, J Neurol Neurosurg Psychiatry 20, 11 (1957).
[2] T. Ribot, Les maladies de la memoire, Appleton-Century-Crofts, New York, 1881.
[3] L. R. Squire, Psychol Rev 99, 195 (1992).
[4] L. R. Squire, R. E. Clark, and B. J. Knowlton, Hippocampus 11, 50 (2001).
[5] A. R. Mayes and J. J. Downes, Memory 5, 3 (1997).
[6] D. Marr, Philos Trans R Soc Lond B Biol Sci 262, 23 (1971).
[7] D. J. Willshaw and J. T. Buckingham, Philos Trans R Soc Lond B Biol Sci 329, 205 (1990).
[8] P. Alvarez and L. R. Squire, Proc Natl Acad Sci U S A 91, 7041 (1994).
[9] J. M. Murre, Memory 5, 213 (1997).
[10] R. M. French, Connection Science 9, 353 (1997).
[11] J. L. McClelland, B. L. McNaughton, and R. C. O'Reilly, Psychol Rev 102, 419 (1995).
[12] M. A. Wilson and B. L. McNaughton, Science 265, 676 (1994).
[13] W. E. Skaggs and B. L. McNaughton, Science 271, 1870 (1996).
[14] K. Louie and M. A. Wilson, Neuron 29, 145 (2001).
[15] A. G. Siapas and M. A. Wilson, Neuron 21, 1123 (1998).
[16] L. Nadel and M. Moscovitch, Curr Opin Neurobiol 7, 217 (1997).
[17] M. McCloskey and N. J. Cohen, in The psychology of learning and motivation, vol 24, edited by G. Bower, 109-165, Academic Press, New York, 1989.
[18] G. A. Carpenter and S. Grossberg, Trends Neurosci 16, 131 (1993).
[19] A. Robins, Connection Science 8, 259 (1996).
[20] F. Vargha-Khadem et al., Science 277, 376 (1997).
[21] S. Káli and P. Dayan, in NIPS 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 24-30, MIT Press, Cambridge, 2001.
[22] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal, Science 268, 1158 (1995).
[23] G. E. Hinton, Neural Computation, 14 (2002).
[24] R. S. Sutton, in Machine Learning: Proceedings of the Seventh International Conference, 216-224, 1990.
[25] L. K. Saul, in NIPS 9, edited by M. C. Mozer, M. I. Jordan, and T. Petsche, 267-273, MIT Press, London, UK, 1997.
1,376 | 2,252 | Categorization Under Complexity: A Unified
MDL Account of Human Learning of Regular
and Irregular Categories
Jacob Feldman*
Department of Psychology
Center for Cognitive Science
Rutgers University
Piscataway, NJ 08854
[email protected]
David Fass
Department of Psychology
Center for Cognitive Science
Rutgers University
Piscataway, NJ 08854
[email protected]
Abstract
We present an account of human concept learning (that is, learning of
categories from examples) based on the principle of minimum description length (MDL). In support of this theory, we tested a wide range
of two-dimensional concept types, including both regular (simple) and
highly irregular (complex) structures, and found the MDL theory to give
a good account of subjects' performance. This suggests that the intrinsic complexity of a concept (that is, its description length) systematically
influences its learnability.
1 The Structure of Categories
A number of different principles have been advanced to explain the manner in which humans learn to categorize objects. It has been variously suggested that the underlying principle might be the similarity structure of objects [1], the manipulability of decision boundaries [2], or Bayesian inference [3][4]. While many of these theories are mathematically
well-grounded and have been successful in explaining a range of experimental findings,
they have commonly only been tested on a narrow collection of concept types similar to
the simple unimodal categories of Figure 1(a-e).
(a)
(b)
(c)
(d)
(e)
Figure 1: Categories similar to those previously studied. Lines represent contours of equal
probability. All except (e) are unimodal.
*http://ruccs.rutgers.edu/~jacob/feldman.html
Moreover, in the scarce research that has ventured to look beyond simple category types,
the goal has largely been to investigate categorization performance for isolated irregular
distributions, rather than to present a survey of performance across a range of interesting
distributions. For example, Nosofsky has previously examined the "criss-cross" category
of Figure 1(d) and a diagonal category similar to Concept 3 of Figure 2, as well as some
other multimodal categories [5][6]. While these individual category structures are no doubt
theoretically important, they in no way exhaust the range of possible concept structures.
Indeed, if we view n-dimensional Cartesian space as the canvas upon which a category
may be represented, then any set of manifolds in that space may be considered as a potential category [7]. It is therefore natural to ask whether one such category-manifold is in
principle easier or more difficult to learn than another. Since previous investigations have
never considered any reasonably broad range of category structures, they have never been
in a position to answer this question.
In this paper we present a theory for human categorization, based on the MDL principle, that is much better equipped to answer questions about the intrinsic learnability of
both structurally regular and structurally irregular categories. In support of this theory we
briefly present an experiment testing human subjects' learning of a range of concept types
defined over a continuous two-dimensional feature space, including both highly regular
and highly irregular structures. We find that our MDL-based theory gives a good account
of human learning for these concepts, and that descriptive complexity accurately predicts
the subjective difficulty of the various concept types tested.
2 Previous Investigations of Category Structure
The role of category structure in determining learnability has not been overlooked entirely
in the literature; in fact, the intrinsic structure of binary-featured categories has been investigated quite thoroughly. The classic work by Shepard et al. [8] showed that human
performance in learning such Boolean categories varies greatly depending on the intrinsic
logical structure of the concept. More recently, we have shown that this performance is
well-predicted by the intrinsic Boolean complexity of each concept, given by the length
of the shortest Boolean formula that describes the objects in the category [9]. This result suggests that a principle of simplicity or parsimony, manifested as a minimization of
complexity, might play an important role in human category learning.
The details of Boolean complexity analysis do not generalize easily to the type of continuous feature spaces we wish to investigate here. Thus a new approach is required, similar
in general spirit but differing in the mathematics. Our goals are therefore (1) to deploy
a complexity minimization technique such as MDL to quantify the complexity of categories defined over continuous features, and (2) to investigate the influence of complexity
on human category learning by testing a range of concept types differing widely in intrinsic
complexity.
3 Experiment
While the MDL principle that we plan to employ is applicable to concepts of any dimension, for reasons of convenience this experiment is limited to category structures that can
be formed within a two-dimensional feature space. This feature space is discretized into a
4 x 4 grid from which a legitimate category can be specified by the selection of any four grid
squares. Our motivation for discretizing the feature space is to place a constraint on possible category structure that will facilitate the computation of a complexity measure; this
does not restrict the range of possible feature values that can be adopted by stimuli. In principle, feature values are limited only by machine precision, but as a matter of convenience
we restrict features to adopting one of 1000 possible values in the range [0,1].
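The sampling and labeling procedure described above can be sketched in code. This is an illustrative reconstruction, not the authors' experiment software; the particular concept layout and the row/column convention for grid cells are our own assumptions.

```python
import random

def make_stimulus(concept_cells, rng=random):
    """Sample one labeled stimulus for a concept given as a set of (row, col) cells.

    Features are drawn uniformly from [0, 1)^2 and labeled by the 4x4 grid
    square they fall into, exactly four of which form the positive category.
    """
    x, y = rng.random(), rng.random()      # two continuous feature values
    cell = (int(y * 4), int(x * 4))        # which of the 16 grid squares
    label = "positive" if cell in concept_cells else "negative"
    return (x, y), label

# Hypothetical example: a 1x4 horizontal bar occupying the top row.
concept = {(0, 0), (0, 1), (0, 2), (0, 3)}
sample, label = make_stimulus(concept)
```

With four positive squares out of sixteen, the base rate of positives is 4/16 = 1/4 regardless of which squares are chosen, matching the design described in the text.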
[Twelve panels, labeled Concept 1 through Concept 12, each showing a 4x4 grid with four shaded squares.]
Figure 2: Abstract concepts used in experiment.
The particular 12 abstract category structures ("concepts") examined in the experiment are
shown in Figure 2. These concepts were considered to be individually interesting (from
a cross-theoretical perspective) and jointly representative of the broader range of available
concepts. The two categories in each concept are referred to as "positive" and "negative."
The positive category is represented by the dark-shaded regions, and the corresponding
negative category is its complement. Note that in many cases the categories are "disconnected" or multimodal. Nevertheless, these categories are not in any sense "probabilistic"
or "ill-defined"; a given point in feature space is always either positive or negative.
During the experiment, each stimulus is drawn randomly from the feature space and is
labeled "positive" or "negative" based on the region from which it was drawn. Uniform
sampling is used, so all 12 categories of Figure 2 have the same base rate for positives,
P(positive) = 4/16 = 1/4.
The experiment itself was clothed as a video game that required subjects to discriminate
between two classes of spaceships, "Ally" and "Enemy," by destroying Enemy ships and
quick-landing Allied ships. Each subject (14 total) played 12 five-minute games in which
the distribution of Allies and Enemies corresponded (in random order) to the 12 concepts of
Figure 2. The physical features of the spaceships in all cases were the height of the "tube"
and the radius of the "pod." As shown in Figure 3, these physical features are mapped
randomly onto the abstract feature space such that the experimental concepts may be any
rigid rotation or reflection of the abstract concepts in Figure 2.
4 Derivation of the MDL Principle
The MDL principle is largely due to Rissanen [10] and is easily shown to be a consequence
of optimal Bayesian inference [11]. While several Bayesian algorithms have previously
been proposed as models of human concept learning [3][4], the implications of the MDL
principle for human learning have only recently come under scrutiny [12][13]. We briefly
review the relevant theory.
According to Bayes rule, a learner ought to select the category hypothesis H that maximizes
[Figure 3 panels (a)-(d); the axis label visible in (b)-(d) is Pod Radius.]
Figure 3: (a) A spaceship. (b-d) Three possible instantiations of Concept 6 from Figure 2.
the posterior P(H | D), where D is the data, and

P(H | D) = P(D | H) P(H) / P(D)    (1)
Taking negative logarithms of both sides, we obtain

-log P(H | D) = -log P(D | H) - log P(H) + log P(D)    (2)
The problem of maximizing P(H | D) is thus identical to the problem of minimizing
-log P(H | D). Since log P(D) is constant for all hypotheses, its value does not enter
into the minimization problem, and we can state that the hypothesis of choice ought to be
such as to minimize the quantity

-log P(D | H) - log P(H)    (3)
If we follow Rissanen and regard the quantity -log P(x) as the description length of x,
DL(x), then Equation 3 instructs us to select the hypothesis that minimizes the total description length

DL(D | H) + DL(H)    (4)
What this means is that the hypothesis that is optimal from the standpoint of the Bayesian
decision maker is the same hypothesis that yields the most compact two-part code in Equation 4. Thus, besides the merits of brevity for its own sake, we see that maximal descriptive
compactness also corresponds to maximal inferential power. It is this equivalence between
description length and inference that leads us to investigate the role of descriptive complexity in the domain of concept learning.
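The equivalence between the Bayesian and MDL criteria can be demonstrated with a small numerical sketch. The priors and likelihoods below are made-up illustration values, not quantities from the paper.

```python
import math

def codelength(p):
    """Shannon codelength, in bits, of an event with probability p."""
    return -math.log2(p)

# Hypothetical hypotheses, each with (P(H), P(D|H)).
hypotheses = {
    "H1": (0.5, 0.01),
    "H2": (0.1, 0.20),
}

# Bayesian choice: maximize the (unnormalized) posterior P(D|H) * P(H).
best_bayes = max(hypotheses, key=lambda h: hypotheses[h][0] * hypotheses[h][1])

# MDL choice: minimize the two-part codelength DL(D|H) + DL(H).
best_mdl = min(
    hypotheses,
    key=lambda h: codelength(hypotheses[h][1]) + codelength(hypotheses[h][0]),
)

assert best_bayes == best_mdl  # the two criteria always agree
```

Because -log is monotonically decreasing, maximizing the posterior and minimizing the total codelength necessarily pick the same hypothesis, which is the content of Equations 2-4.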
5 Theory
In order to investigate the complexity of the 12 concepts of Figure 2, Equation 4 indicates
that we need to analyze (1) the description length of a hypothesis for each concept, DL(H),
and (2) the description length of the concept given the hypothesis, DL(D | H). We discuss
these in sequence.
5.1 The Hypothesis Description Length, DL(H)
In order to compute DL(H), we first fix a language¹ within which hypotheses about the
category structure can be expressed. We choose to use the "rectangle language" whose
alphabet (Table 1) consists of 10 classes of symbols representing the 10 different sizes of
rectangle that can be composited within a 4x4 grid: 1x1, 1x2, 1x3, 1x4, 2x2, 2x3,
2x4, 3x3, 3x4, and 4x4.² Each member of the class "m x n" is an m x n or n x m
rectangle situated at a particular position in the 4x4 grid. We allow a given hypothesis to
be represented by up to four distinct rectangles (i.e., four symbols).
Having specified a language, the issue is now the length of the hypothesis code. The derivation above suggests that a codelength of -log P(x) be assigned to each symbol x, which
corresponds to the so-called Shannon code. We therefore proceed to compute the Shannon
codelengths for the rectangle alphabet of Table 1.³
¹Equivalently, a model class. The particular choice of language (model class) is obviously an important determinant of the ultimate hypothesis description length. We mention that the MDL analysis
in this paper might be replaced by another theoretical approach, such as a Bayesian framework,
although we have not pursued this possibility. We adopt the MDL formulation partly because its
emphasis on representation (i.e., description) seems apt for a study of complexity.
²The class "m x n" contains all rectangles of dimension m x n and n x m.
³We use the noninteger value -log P(x) rather than the integer ⌈-log P(x)⌉. Logs are base-2.
Table 1: Rectangle alphabet. The third and fourth columns show the probability that the
source generates a given member of the class "m x n" and the corresponding codelength.

Rectangle Class   Possible Locations   Probability     Codelength
1x1               16                   1/10 · 1/16     -log(1/160)
1x2               24                   1/10 · 1/24     -log(1/240)
1x3               16                   1/10 · 1/16     -log(1/160)
1x4               8                    1/10 · 1/8      -log(1/80)
2x2               9                    1/10 · 1/9      -log(1/90)
2x3               12                   1/10 · 1/12     -log(1/120)
2x4               6                    1/10 · 1/6      -log(1/60)
3x3               4                    1/10 · 1/4      -log(1/40)
3x4               4                    1/10 · 1/4      -log(1/40)
4x4               1                    1/10 · 1/1      -log(1/10)
Computing these codelengths requires that we specify the probability mass function of a
source, P(x). It is convenient for this purpose (and compatible with the subject's perspective) to imagine that the concepts in Figure 2 are produced by a "concept generator," an
information source whose parameters are essentially unknown. A reasonable assumption
is that the source randomly selects a rectangle class with uniform probability, and then selects an individual member of the chosen class also with uniform probability. Since there
are 10 classes, the assumption regarding class selection places a prior on each rectangle
class of P(m x n) = 1/10.
Moreover, the assumption of uniform within-class sampling means that in order to encode
any individual rectangle, we need only consider the cardinality of the class to which it
belongs. We now recall that the individual rectangles of the class "m x n" differ only in
their positions within the 4 x 4 grid. Therefore, the cardinality of the class "m x n" is equal
to the number of unique ways N_{m x n} in which an m x n or n x m rectangle can be selected
from a 4 x 4 grid, where
N_{m x n} = (5 - m)(5 - n)      if m = n
N_{m x n} = 2(5 - m)(5 - n)     if m ≠ n        (5)
Thus, the probability associated with an individual rectangle of class "m x n" is P(m x n)/N_{m x n}.
The corresponding Shannon codelengths are shown next to these probabilities in Table 1.
The description length of a particular hypothesis is the summed codeword lengths for all
the rectangles (up to four) that are comprised by the hypothesis.
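The computation behind Table 1 and Equation 5 can be summarized in a few lines. This is our own minimal sketch of the scheme described above (function and variable names are invented, not the paper's).

```python
import math

# The ten rectangle classes of the 4x4 "rectangle language".
CLASSES = [(1, 1), (1, 2), (1, 3), (1, 4), (2, 2),
           (2, 3), (2, 4), (3, 3), (3, 4), (4, 4)]

def n_positions(m, n):
    """Number of placements of an m x n (or n x m) rectangle in a 4x4 grid (Eq. 5)."""
    return (5 - m) * (5 - n) if m == n else 2 * (5 - m) * (5 - n)

def rect_codelength(m, n):
    """Shannon codelength of one rectangle: class prior 1/10, uniform within class."""
    p = (1 / len(CLASSES)) * (1 / n_positions(m, n))
    return -math.log2(p)

def dl_hypothesis(rects):
    """DL(H): summed codelengths of a hypothesis's rectangles (up to four)."""
    return sum(rect_codelength(m, n) for m, n in rects)

# e.g. a hypothesis consisting of a single 1x4 bar costs -log2(1/80) ≈ 6.32 bits
```

Note that the position of each rectangle is encoded implicitly: the codelength already accounts for the N_{m x n} possible placements within the class.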
5.2 The Likelihood Description Length, DL(D | H)
The second part of the two-part MDL code is the description of the concept with respect to
the selected hypothesis, corresponding to the Bayes likelihood. There are several possible
approaches to computing DL(D | H); we discuss one that is particularly straightforward.
We recall that a hypothesis H is composed of up to four rectangular regions. Computing
DL(D | H) therefore involves describing that portion of the positive category that falls
within each rectangular hypothesis region. This is conceptually the same problem that we
faced in computing DL(H) above, except that the region of interest for DL(H) was fixed
Table 2: Minimum description lengths for the 12 abstract concepts. (A third column of the
original table depicts each MDL hypothesis graphically.)

Concept   MDL Codelength
1         8.0768 bits
2         8.3219 bits
3         27.3236 bits
4         17.8138 bits
5         16.5216 bits
6         14.4919 bits
7         17.1357 bits
8         22.5687 bits
9         14.4919 bits
10        15.0768 bits
11        27.1946 bits
12        28.1536 bits
at 4x4, while the regions for DL(D | H) can be of any dimension 4x4 and smaller.
Guided by this analogy, we follow the procedure of the previous section to compute an
appropriate probability mass function. Since DL(D | H) must capture just the positive
squares in the hypothesis region (a maximum of four squares), the only rectangle classes
needed in the alphabet are those of size four: 1x1, 1x2, 1x3, 1x4, and 2x2.
6 Minimum Description Lengths for Experimental Concepts
Applying the MDL analysis above to the concepts in Figure 2 requires that we compute
the total description length DL(D | H) + DL(H) corresponding to all viable hypotheses for each concept. The hypothesis H corresponding to the shortest total codelength
DL(D | H) + DL(H) for each concept is the MDL hypothesis.⁴ The MDL hypotheses for
all 12 concepts are shown in Table 2 along with the corresponding minimum codelengths.
It can be observed that while for some concepts the MDL hypothesis precisely conforms
to the true positive category (meaning that almost all of the concept information is carried
in the hypothesis code), for the majority of concepts the MDL hypothesis is broader than
the true category region (meaning that the concept information is distributed between the
hypothesis and likelihood codes).
⁴Note that the MDL hypothesis is not in general the most compact hypothesis, i.e., the hypothesis for which DL(H) is a minimum. Rather, the MDL hypothesis is the one for which the sum
DL(D | H) + DL(H) is minimum.
7 Results
For each game played by the subject (i.e., each concept in Figure 2), an overall measure
of performance (d') is computed.⁵ Figure 4 shows performance for all subjects and all
concepts as a function of the concept complexities (MDL codelengths) in Table 2. There is
an evident decrease in performance with increasing complexity, which a regression analysis
shows to be highly significant (R² = .384, F(1,166) = 103.375, p < .000001), meaning
that the linear trend in the plot is very unlikely to be a statistical accident. Thus, the MDL
complexity predicts the subjective difficulty of learning across a broad range of concepts.
[Scatter plot: performance d' (vertical axis, -1 to 3.5) against complexity, DL(H) + DL(D | H) (horizontal axis, 5 to 30).]
Figure 4: Performance vs. complexity for all 14 subjects. The d' performance for each
concept is indicated by a '+' and the mean d' for each concept is indicated by an 'o'.
We mention that the MDL approach described here can be further modified to make "realtime" predictions of how subjects will categorize each new stimulus. In the most simplistic
approach, the prediction for each new stimulus x is made based on the MDL hypothesis
prevailing at the time that stimulus is observed. Correlation between this MDL prediction
and the subject's actual decision is found to be highly significant (p ≤ .002) for each of the
12 concept types. The Pearson r statistics are given below:
Concept #:  1    2    3    4    5    6    7    8    9    10   11   12
Pearson r:  .46  .47  .19  .18  .20  .51  .18  .14  .34  .32  .32  .05
Figure 5 illustrates the behavior of the real-time MDL algorithm. Simulations for a variety
of data sets can be found at http://ruccs.rutgers.edu/~dfass/mdlmovies.html.
[Figure 5 panels show the evolving hypothesis at steps 7, 9, 19, 59, 113, 169, and 190.]
Figure 5: Real-time MDL hypothesis evolution for actual Concept 11 data. As the size
of the data set grows beyond 150, there is oscillation between the one-rectangle (2x4)
hypothesis shown in Step 169 and the two-rectangle (1x3) hypothesis shown in Step 190.
⁵d' (discriminability) gives a measure of subjects' intrinsic capacity to discriminate categories,
i.e., one that is independent of their criterion for responding "positive" [14].
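The standard signal-detection computation of d' referenced in the footnote subtracts the z-transformed false-alarm rate from the z-transformed hit rate [14]. The sketch below uses invented counts for illustration; it is not the authors' analysis code.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with z the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# e.g. 40/50 hits and 10/50 false alarms give d' ≈ 1.68
```

Because d' depends on the separation of the two rates rather than on their sum, it is unaffected by a subject's overall bias toward responding "positive", which is the criterion-independence property noted above.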
8 Conclusions
As discussed above, MDL bears a tight relationship with Bayesian inference, and hence
serves as a reasonable basis for a theory of learning. The data presented above suggest
that human learners are indeed guided by something very much like Rissanen's principle
when classifying objects. While it is premature to conclude that humans construct anything precisely corresponding to the two-part code of Equation 4, it seems likely that they
employ some closely related complexity-minimization principle, and an associated "cognitive code" still to be discovered. This finding is consistent with many earlier observations
of minimum principles guiding human inference, especially in perception (e.g., the Gestalt
principle of Pragnanz). Moreover, our findings suggest a principled approach to predicting
the subjective difficulty of concepts defined over continuous features. As we had previously found with Boolean concepts, subjective difficulty correlates with intrinsic complexity: That which is incompressible is, in turn, incomprehensible. The MDL approach is an
elegant framework in which to make this observation rigorous and concrete, and one which
apparently accords well with human performance.
Acknowledgments
This research was supported by NSF SBR-9875175.
References
[1] Nosofsky, R. M., "Exemplar-based accounts of relations between classification, recognition,
and typicality," Journal of Experimental Psychology: Learning, Memory, and Cognition,
Vol. 14, No. 4, 1988, pp. 700-708.
[2] Ashby, F. G. and Alfonso-Reese, L. A., "Categorization as probability density estimation,"
Journal of Mathematical Psychology, Vol. 39, 1995, pp. 216-233.
[3] Anderson, J. R., "The adaptive nature of human categorization," Psychological Review, Vol. 98,
No. 3, 1991, pp. 409-429.
[4] Tenenbaum, J. B., "Bayesian modeling of human concept learning," Advances in Neural Information Processing Systems, edited by M. S. Kearns, S. A. Solla, and D. A. Cohn, Vol. 11, MIT
Press, Cambridge, MA, 1999.
[5] Nosofsky, R. M., "Optimal performance and exemplar models of classification," Rational Models of Cognition, edited by M. Oaksford and N. Chater, chap. 11, Oxford University Press,
Oxford, 1998, pp. 218-247.
[6] Nosofsky, R. M., "Further tests of an exemplar-similarity approach to relating identification and
categorization," Perception and Psychophysics, Vol. 45, 1989, pp. 279-290.
[7] Feldman, J., "The structure of perceptual categories," Journal of Mathematical Psychology,
Vol. 41, No. 2, 1997, pp. 145-170.
[8] Shepard, R. N., Hovland, C. I., and Jenkins, H. M., "Learning and memorization of classifications," Psychological Monographs: General and Applied, Vol. 75, No. 13, 1961, pp. 1-42.
[9] Feldman, J., "Minimization of Boolean complexity in human concept learning," Nature,
Vol. 407, 2000, pp. 630-632.
[10] Rissanen, J., "Modeling by shortest data description," Automatica, Vol. 14, 1978, pp. 465-471.
[11] Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications,
Springer, New York, 2nd ed., 1997.
[12] Pothos, E. M. and Chater, N., "Categorization by simplicity: A minimum description length
approach to unsupervised clustering," Similarity and Categorization, edited by U. Hahn and
M. Ramscar, chap. 4, Oxford University Press, Oxford, 2001, pp. 51-72.
[13] Myung, I. J., "Maximum entropy interpretation of decision bound and context models of categorization," Journal of Mathematical Psychology, Vol. 38, 1994, pp. 335-365.
[14] Wickens, T. D., Elementary Signal Detection Theory, Oxford University Press, Oxford, 2002.
Automatic Acquisition and Efficient
Representation of Syntactic Structures
Zach Solan, Eytan Ruppin, David Horn
Faculty of Exact Sciences
Tel Aviv University
Tel Aviv, Israel 69978
{rsolan,ruppin,horn}@post.tau.ac.il
Shimon Edelman
Department of Psychology
Cornell University
Ithaca, NY 14853, USA
[email protected]
Abstract
The distributional principle according to which morphemes that occur in
identical contexts belong, in some sense, to the same category [1] has
been advanced as a means for extracting syntactic structures from corpus
data. We extend this principle by applying it recursively, and by using mutual information for estimating category coherence. The resulting
model learns, in an unsupervised fashion, highly structured, distributed
representations of syntactic knowledge from corpora. It also exhibits
promising behavior in tasks usually thought to require representations
anchored in a grammar, such as systematicity.
1 Motivation
Models dealing with the acquisition of syntactic knowledge are sharply divided into two
classes, depending on whether they subscribe to some variant of the classical generative
theory of syntax, or operate within the framework of "general-purpose" statistical or distributional learning. An example of the former is the model of [2], which attempts to
learn syntactic structures such as Functional Category, as stipulated by the Government
and Binding theory. An example of the latter model is Elman's widely used Simple Recurrent Network (SRN) [3].
We believe that polarization between statistical and classical (generative, rule-based) approaches to syntax is counterproductive, because it hampers the integration of the stronger
aspects of each method into a common powerful framework. Indeed, on the one hand, the
statistical approach is geared to take advantage of the considerable progress made to date in
the areas of distributed representation, probabilistic learning, and "connectionist" modeling. Yet, generic connectionist architectures are ill-suited to the abstraction and processing
of symbolic information. On the other hand, classical rule-based systems excel in just those
tasks, yet are brittle and difficult to train.
We present a scheme that acquires "raw" syntactic information construed in a distributional
sense, yet also supports the distillation of rule-like regularities out of the accrued statistical knowledge. Our research is motivated by linguistic theories that postulate syntactic
structures (and transformations) rooted in distributional data, as exemplified by the work
of Zellig Harris [1].
2 The ADIOS model
The ADIOS (Automatic DIstillation Of Structure) model constructs syntactic representations of a sample of language from unlabeled corpus data. The model consists of two
elements: (1) a Representational Data Structure (RDS) graph, and (2) a Pattern Acquisition
(PA) algorithm that learns the RDS in an unsupervised fashion. The PA algorithm aims to
detect patterns: repetitive sequences of "significant" strings of primitives occurring in
the corpus (Figure 1). In that, it is related to prior work on alignment-based learning [4]
and regular expression ("local grammar") extraction [5] from corpora. We stress, however,
that our algorithm requires no pre-judging either of the scope of the primitives or of their
classification, say, into syntactic categories: all the information needed for its operation is
extracted from the corpus in an unsupervised fashion.
In the initial phase of the PA algorithm the text is segmented down to the smallest possible
morphological constituents (e.g., ed is split off both walked and bed; the algorithm later
discovers that bed should be left whole, on statistical grounds).1 This initial set of unique
constituents is the vertex set of the newly formed RDS (multi-)graph. A directed edge is
inserted between two vertices whenever the corresponding transition exists in the corpus
(Figure 2(a)); the edge is labeled by the sentence number and by its within-sentence index.
Thus, corpus sentences initially correspond to paths in the graph, a path being a sequence
of edges that share the same sentence number.
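To make the construction concrete, the initial graph-building step can be sketched as follows. This is a minimal sketch in Python: we substitute whitespace tokenization for the morphological segmentation described above, and the function and variable names are ours, not from the ADIOS implementation.

```python
from collections import defaultdict

def build_rds(sentences):
    """Build an initial RDS multigraph from tokenized sentences.

    Vertices are unique tokens; each directed edge is labeled with
    (sentence_number, within_sentence_index), so every sentence
    corresponds to a path of edges sharing one sentence number.
    """
    vertices = set()
    edges = defaultdict(list)  # (u, v) -> list of (sentence_no, index) labels
    for sent_no, sentence in enumerate(sentences, start=101):
        tokens = ["BEGIN"] + sentence.split() + ["END"]
        vertices.update(tokens)
        for idx in range(len(tokens) - 1):
            edges[(tokens[idx], tokens[idx + 1])].append((sent_no, idx + 1))
    return vertices, edges

vertices, edges = build_rds(["the cat is eating", "do you see the cat ?"])
```

Because `edges` is a multigraph, the transition (the, cat) carries two labels here, one per sentence in which it occurs, exactly as in Figure 2(a).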
Figure 1: (a) Two sequences mi , mj , ml and mi , mk , ml form a pattern ci{j,k}l =
mi , {mj , mk }, ml , which allows mj and mk to be attributed to the same equivalence class,
following the principle of complementary distributions [1]. Both the length of the shared
context and the cohesiveness of the equivalence class need to be taken into account in
estimating the goodness of the candidate pattern (see eq. 1). (b) Patterns can serve as
constituents in their own right; recursively abstracting patterns from a corpus allows us
to capture the syntactic regularities concisely, yet expressively. Abstraction also supports
generalization: in this schematic illustration, two new paths (dashed lines) emerge from the
formation of equivalence classes associated with cu and cv .
In the second phase, the PA algorithm repeatedly scans the RDS graph for Significant Patterns (sequences of constituents) (SPs), which are then used to modify the graph (Algorithm 1). For each path pi, the algorithm constructs a list of candidate constituents,
ci1 , . . . , cik . Each of these consists of a "prefix" (sequence of graph edges), an equivalence
class of vertices, and a "suffix" (another sequence of edges; cf. Figure 2(b)).
The criterion $I'$ for judging pattern significance combines a syntagmatic consideration (the
pattern must be long enough) with a paradigmatic one (its constituents $c_1, \ldots, c_k$ must have
high mutual information):
$$ I'(c_1, c_2, \ldots, c_k) \;=\; e^{-(L/k)^2}\, P(c_1, c_2, \ldots, c_k)\, \log \frac{P(c_1, c_2, \ldots, c_k)}{\prod_{j=1}^{k} P(c_j)} \qquad (1) $$
where $L$ is the typical context length and $k$ is the length of the candidate pattern; the probabilities associated with a $c_j$ are estimated from frequencies that are immediately available
1 We remark that the algorithm can work in any language, with any set of tokens, including individual characters (or phonemes, if applied to speech).
Algorithm 1 PA (pattern acquisition), phase 2
1:  while patterns exist do
2:    for all path ∈ graph do  {path = sentence; graph = corpus}
3:      for all source node ∈ path do
4:        for all sink node ∈ path do  {source and sink can be equivalence classes}
5:          degree of separation ← path index(sink) - path index(source);
6:          pattern table ← detect patterns(source, sink, degree of separation, equivalence table);
7:        end for
8:      end for
9:      winner ← get most significant pattern(pattern table);
10:     equivalence table ← detect equivalences(graph, winner);
11:     graph ← rewire graph(graph, winner);
12:   end for
13: end while
in the graph (e.g., the out-degree of a node is related to the marginal probability of the corresponding $c_j$). Equation 1 balances two opposing "forces" in pattern formation: (1) the
length of the pattern, and (2) the number and the cohesiveness of the set of examples that
support it. On the one hand, shorter patterns are likely to be supported by more examples;
on the other hand, they are also more likely to lead to over-generalization, because shorter
patterns mean less context.
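A minimal sketch of the significance computation in eq. 1, assuming the marginal probabilities $P(c_j)$ and the joint probability of the candidate sequence have already been estimated from corpus frequencies (the function name and calling convention are ours):

```python
import math

def significance(probs, joint, L):
    """Significance I'(c1..ck) of a candidate pattern (cf. eq. 1).

    probs: marginal probabilities P(c_j) of the k constituents
    joint: joint probability P(c_1, ..., c_k) of the full sequence
    L:     typical context length
    The factor exp(-(L/k)^2) penalizes short patterns (small k), while the
    mutual-information term rewards constituents that co-occur cohesively.
    """
    k = len(probs)
    marginal_product = math.prod(probs)
    return math.exp(-((L / k) ** 2)) * joint * math.log(joint / marginal_product)
```

When the constituents are independent (joint equals the product of marginals), the log term vanishes and the score is zero; the score grows as the sequence becomes more cohesive than chance.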
A pattern tagged as significant is added as a new vertex to the RDS graph, replacing the
constituents and edges it subsumes (Figure 2). Note that only those edges of the multigraph that belong to the detected pattern are rewired; edges that belong to sequences not
subsumed by the pattern are untouched. This highly context-sensitive approach to pattern
abstraction, which is unique to our model, allows ADIOS to achieve a high degree of
representational parsimony without sacrificing generalization power.
During the pass over the corpus the list of equivalence sets is updated continuously; the
identification of new significant patterns is done using the current equivalence sets (Figure 3(d)). Thus, as the algorithm processes more and more text, it "bootstraps" itself and
enriches the RDS graph structure with new SPs and their accompanying equivalence sets.
The recursive nature of this process enables the algorithm to form more and more complex patterns, in a hierarchical manner. The relationships among these can be visualized
recursively in a tree format, with tree depth corresponding to the level of recursion (e.g.,
Figure 3(c)). The PA algorithm halts if it processes a given amount of text without finding
a new SP or equivalence set (in real-life language acquisition this process may never stop).
Generalization. A collection of patterns distilled from a corpus can be seen as an empirical grammar of sorts; cf. [6], p.63: "the grammar of a language is simply an inventory of
linguistic units." The patterns can eventually become highly abstract, thus endowing the
model with an ability to generalize to unseen inputs. Generalization is possible, for example, when two equivalence classes are placed next to each other in a pattern, creating new
paths among the members of the equivalence classes (dashed lines in Figure 1(b)). Generalization can also ensue from partial activation of existing patterns by novel inputs. This
function is supported by the input module, designed to process a novel sentence by forming
its distributed representation in terms of activities of existing patterns (Figure 6). These
are computed by propagating activation from bottom (the terminals) to top (the patterns) of
the RDS. The initial activities $w_j$ of the terminals $c_j$ are calculated given the novel input
$s_1, \ldots, s_k$ as follows:
$$ w_j = \max_{m=1..k} \{ I(s_m, c_j) \} \qquad (2) $$
[Figure 2 graphics. Corpus sentences include #101 (the cat is eat -ing), #102 (do you see the cat?), and #103 (are you sure?). Panel (c) shows PATTERN 230: the cat is {eat, play, stay} -ing, with Equivalence Class 230: {stay, eat, play}; panel (d) shows PATTERN 231: BEGIN {they, we} {230} here.]
Figure 2: (a) A small portion of the RDS graph for a simple corpus, with sentence #101
(the cat is eat -ing) indicated by solid arcs. (b) This sentence joins a pattern the cat
is {eat, play, stay} -ing, in which two others (#109,121) already participate. (c) The
abstracted pattern, and the equivalence class associated with it (edges that belong to sequences not subsumed by this pattern, e.g., #131, are untouched). (d) The identification
of new significant patterns is done using the acquired equivalence classes (e.g., #230). In
this manner, the system "bootstraps" itself, recursively distilling more and more complex
patterns.
where $I(s_m, c_j)$ is the mutual information between $s_m$ and $c_j$. For an equivalence class, the
value propagated upwards is the strongest non-zero activation of its members; for a pattern,
it is the average weight of the children nodes, on the condition that all the children were
activated by adjacent inputs. Activity propagation continues until it reaches the top nodes
of the pattern lattice. When the algorithm encounters a novel word, all the members of
the terminal equivalence class contribute a small fixed value, which is then propagated upwards
as usual. This enables the model to make an educated guess as to the meaning of the
unfamiliar word, by considering the patterns that become active (Figure 6(b)).
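The propagation scheme can be sketched as follows. This is a simplified sketch: nodes are represented as tagged tuples, and the adjacency condition is replaced by simply requiring every child of a pattern to be active; the encoding and names are ours.

```python
def propagate(node, weights):
    """Propagate terminal activations up a pattern tree (cf. eq. 2).

    node is a tagged tuple:
        ("terminal", word)       leaf; activation looked up in `weights`
        ("equiv", [children])    equivalence class; passes on the strongest
                                 member activation
        ("pattern", [children])  pattern; averages its children, but only
                                 if every child is active
    """
    kind, payload = node
    if kind == "terminal":
        return weights.get(payload, 0.0)
    acts = [propagate(child, weights) for child in payload]
    if kind == "equiv":
        return max(acts)
    return sum(acts) / len(acts) if all(a > 0.0 for a in acts) else 0.0

# toy tree resembling the pattern "is {eat, stay} -ing"
tree = ("pattern", [("terminal", "is"),
                    ("equiv", [("terminal", "eat"), ("terminal", "stay")]),
                    ("terminal", "ing")])
```

With `weights = {"is": 1.0, "stay": 0.8, "ing": 1.0}`, the equivalence class contributes 0.8 (its strongest member) and the pattern activation is the average (1.0 + 0.8 + 1.0) / 3; if any child is silent, the pattern does not fire at all.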
3 Results
We now briefly describe the results of several studies designed to evaluate the viability of
the ADIOS model, in which it was exposed to corpora of varying size and complexity.
[Figure 3 graphics. (a) Grammar fragment, e.g. propnoun: "Joe" | "Beth" | "Jim" | "Cindy" | "Pam" | "George". (b) Sample sentences, e.g. "the horse is living very extremely far away.", "the cow is working at least until Thursday.", "Jim loved Pam." (c) Parse tree of "George is working extremely far away", rooted in PATTERN.ID=144 (SIGNIFICANCE=0.11, OCCURRENCES=38, SEQUENCE=(120)+(101), MEAN.LENGTH=29.4).]
Figure 3: (a) A part of a simple grammar. (b) Some sentences generated by this grammar.
(c) The structure of a sample sentence (pattern #144), presented in the form of a tree that
captures the hierarchical relationships among constituents. Three equivalence classes are
shown explicitly (highlighted).
Emergence of syntactic structures. Figure 3 shows an example of a sentence from a
corpus produced by a simple artificial grammar and its ADIOS analysis (the use of a simple grammar, constructed with Rmutt, http://www.schneertz.com/rmutt, in these initial
experiments allowed us to examine various properties of the model on tightly controlled
data). The abstract representation of the sample sentence in Figure 3(c) looks very much
like a parse tree, indicating that our method successfully identified the grammatical structure used to generate its data. To illustrate the gradual emergence of our model's ability for
such concise representation of syntactic structures, we show in Figure 4, top, four trees built
for the same sentence after exposing the model to progressively more data from the same
corpus. Note that both the number of distinct patterns and the average number of patterns
per sentence asymptote for this corpus after exposure to about 500 sentences (Figure 4,
bottom).
Novel inputs; systematicity. An important characteristic of a cognitive representation
scheme is its systematicity, measured by the ability to deal properly with structurally related
items (see [7] for a definition and discussion). We have assessed the systematicity of the
ADIOS model by splitting the corpus generated by the grammar of Figure 3 into training
and test sets. After training the model on the former, we examined the representations of
unseen sentences from the test set. A typical result appears in Figure 5; the general finding
was of Level 3 systematicity according to the nomenclature of [7]. This example can be
also understood using the concept of generating novel sentences from patterns, explained
in detail below; the novel sentence (Beth is playing on Sunday) can be produced by
the same pattern (#173) that accounts for the familiar sentence (the horse is playing on
Thursday) that is a part of the training corpus.
The ADIOS system?s input module allows it to process a novel sentence by forming its
distributed representation in terms of activities of existing patterns. Figure 6 shows the
activation of two patterns (#141 and #120) by a phrase that contains a word in a novel
context (stay), as well as another word never before encountered in any context (5pm).
Figure 4: Top: the build-up of structured information with progressive exposure to a corpus
generated by the simple grammar of Figure 3. (a) Prior to exposure. (b) 100 sentences. (c)
200 sentences. (d) 400 sentences. Bottom: the total number of detected patterns and
the average number of patterns per sentence, plotted vs. corpus size.
(a) Unseen: Beth is playing on Sunday.
(b) the horse is playing on Thursday.
[Figure 5 graphics: both sentences receive the same tree structure, rooted in pattern #173.]
Figure 5: (a) Structured representation of an "unseen" sentence that had been excluded
from the corpus used to learn the patterns; note that the detected structure is identical to
that of (b), a "seen" sentence. The identity between the structures detected in (a) and (b)
is a manifestation of Level-3 systematicity of the ADIOS model ("Novel Constituent: the
test set contains at least one atomic constituent that did not appear anywhere in the training
set"; see [7], pp. 3-4).
[Figure 6 graphics: the two most active patterns, #141 (activation level 0.972) and #120 (activation level 0.667).]
Figure 6: the input module in action (the two most relevant, highly active, patterns
responding to the input Joe and Beth are staying until 5pm). Leaf activation is proportional to the mutual information between inputs and various members of the equivalence
classes (e.g., on the left W15 = 0.8 is the mutual information between stay and liv, which is
a member of equivalence class #112). It is then propagated upwards by taking the average
at each junction.
Working with real data: the CHILDES corpus. To illustrate the scalability of our
method, we describe here briefly the outcome of applying the PA algorithm to a subset of
the CHILDES collection [8], which consists of transcribed speech produced by, or directed
at, children. The corpus we selected contained 9665 sentences (74500 words) produced
by parents. The results, one of which is shown in Figure 7, were encouraging: the algorithm found intuitively significant SPs and produced semantically adequate corresponding
equivalence sets. Altogether, 1062 patterns and 775 equivalence classes were established.
Representing the corpus in terms of these constituents resulted in a significant compression: the average number of constituents per sentence dropped from 6.70 in the raw data to
2.18 after training, and the entropy per letter was reduced from 2.6 to 1.5.
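The entropy-per-letter figure quoted above can be read as a zeroth-order (unigram) character entropy of the corpus representation. A minimal sketch of such an estimator follows; whether the paper used exactly this estimator is not stated, so this is only an illustration:

```python
import math
from collections import Counter

def entropy_per_letter(text):
    """Zeroth-order (unigram) character entropy in bits per letter."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())
```

A uniform four-symbol text yields 2 bits per letter; a one-symbol text yields 0, so the drop from 2.6 to 1.5 bits reflects the more predictable constituent encoding.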
[Figure 7 graphics. Left: the tree of pattern 1960, a where 's X ? frame whose slot draws on equivalence classes of names (Becky, Brennen, Eric, Miffy, mommy), determiners, adjectives (big, biggest, blue, little, yellow, and others), and nouns (chicken, one, room, side, way). Right: CHILDES/ADIOS sentence pairs, e.g. CHILDES_2764: they don't want ta go for a ride? / you don't want ta look for another ride?]
Figure 7: Left: a typical pattern extracted from a subset of the CHILDES corpora collection [8]. Hundreds of such patterns and equivalence classes (underscored in this figure)
together constitute a concise representation of the raw data. Some of the phrases that can
be described/generated by pattern 1960 are: where's the big room?; where's the yellow
one?; where's Becky?; where's that?. Right: some of the phrases generated by ADIOS
(lower lines in each pair) using sentences from CHILDES (upper lines) as examples. The
generation module works by traversing the top-level pattern tree, stringing together lower-level patterns and selecting randomly one member from each equivalence class. Extensive
testing (currently under way) is needed to determine whether the grammaticality of the
newly generated phrases (which is at present less than ideal, as can be seen here) improves
with more training data.
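The generation procedure described in the caption can be sketched as follows, using the same tagged-tuple tree encoding; the toy pattern below is hand-built by us to resemble pattern 1960 and is not the actual learned structure.

```python
import random

def generate(node, rng):
    """Generate a phrase from a pattern tree by random traversal.

    An equivalence class contributes one randomly chosen member; a pattern
    concatenates the realizations of all of its children, in order.
    """
    kind, payload = node
    if kind == "terminal":
        return [payload]
    if kind == "equiv":
        return generate(rng.choice(payload), rng)
    words = []
    for child in payload:
        words.extend(generate(child, rng))
    return words

# toy frame resembling pattern 1960: where 's { name | that | the ADJ NOUN } ?
pattern_1960 = ("pattern", [
    ("terminal", "where"), ("terminal", "'s"),
    ("equiv", [("terminal", "Becky"), ("terminal", "that"),
               ("pattern", [("terminal", "the"),
                            ("equiv", [("terminal", "big"), ("terminal", "yellow")]),
                            ("equiv", [("terminal", "room"), ("terminal", "one")])])]),
    ("terminal", "?")])

print(" ".join(generate(pattern_1960, random.Random(0))))
```

Every traversal yields a well-formed instance of the frame (e.g. "where 's the big room ?"), which is what makes the generative behavior conservative: novelty comes only from recombining members of learned equivalence classes.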
4 Concluding remarks
We have described a linguistic pattern acquisition algorithm that aims to achieve a streamlined representation by compactly representing recursively structured constituent patterns
as single constituents, and by placing strings that have an identical backbone and similar
context structure into the same equivalence class. Although our pattern-based representations may look like collections of finite automata, the information they contain is much
richer, because of the recursive invocation of one pattern by another, and because of the
context sensitivity implied by relationships among patterns. The sensitivity to context of
pattern abstraction (during learning) and use (during generation) contributes greatly both to
the conciseness of the ADIOS representation and to the conservative nature of its generative
behavior. This context sensitivity (in particular, the manner whereby ADIOS balances
syntagmatic and paradigmatic cues provided by the data) is mainly what distinguishes it
from other current work on unsupervised probabilistic learning of syntax, such as [9, 10, 4].
In summary, finding a good set of structured units leads to the emergence of a convergent
representation of language, which eventually changes less and less with progressive exposure to more data. The power of the constituent graph representation stems from the interacting ensembles of patterns and equivalence classes that comprise it. Together, the local
patterns create global complexity and impose long-range order on the linguistic structures
they encode. Some of the challenges implicit in this approach that we leave for future work
are (1) interpreting the syntactic structures found by ADIOS in the context of contemporary
theories of syntax, and (2) relating those structures to semantics.
Acknowledgments. We thank Regina Barzilai, Morten Christiansen, Dan Klein, Lillian Lee
and Bo Pang for useful discussion and suggestions, and the US-Israel Binational Science
Foundation, the Dan David Prize Foundation, the Adams Super Center for Brain Studies at
TAU, and the Horowitz Center for Complexity Science for financial support.
References
[1] Z. S. Harris. Distributional structure. Word, 10:140-162, 1954.
[2] R. Kazman. Simulating the child's acquisition of the lexicon and syntax - experiences
with Babel. Machine Learning, 16:87-120, 1994.
[3] J. L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[4] M. van Zaanen and P. Adriaans. Comparing two unsupervised grammar induction
systems: Alignment-based learning vs. EMILE. Report 05, School of Computing,
Leeds University, 2001.
[5] M. Gross. The construction of local grammars. In E. Roche and Y. Schabès, ed.,
Finite-State Language Processing, 329-354. MIT Press, Cambridge, MA, 1997.
[6] R. W. Langacker. Foundations of cognitive grammar, volume I: theoretical prerequisites. Stanford University Press, Stanford, CA, 1987.
[7] T. J. van Gelder and L. Niklasson. On being systematically connectionist. Mind and
Language, 9:288-302, 1994.
[8] B. MacWhinney and C. Snow. The child language exchange system. Journal of
Computational Linguistics, 12:271-296, 1985.
[9] D. Klein and C. D. Manning. Natural language grammar induction using a
constituent-context model. In T. G. Dietterich, S. Becker, and Z. Ghahramani, ed.,
Adv. in Neural Information Proc. Systems 14, Cambridge, MA, 2002. MIT Press.
[10] A. Clark. Unsupervised Language Acquisition: Theory and Practice. PhD thesis,
COGS, University of Sussex, 2001.
Hidden Markov Model of Cortical Synaptic
Plasticity: Derivation of the Learning Rule
Michael Eisele
W. M. Keck Center
for Integrative Neuroscience
San Francisco, CA 94143-0444
[email protected]
Kenneth D. Miller
W. M. Keck Center
for Integrative Neuroscience
San Francisco, CA 94143-0444
[email protected]
Abstract
Cortical synaptic plasticity depends on the relative timing of pre- and
postsynaptic spikes and also on the temporal pattern of presynaptic spikes
and of postsynaptic spikes. We study the hypothesis that cortical synaptic plasticity does not associate individual spikes, but rather whole firing episodes, and depends only on when these episodes start and how
long they last, but as little as possible on the timing of individual spikes.
Here we present the mathematical background for such a study. Standard methods from hidden Markov models are used to define what "firing episodes" are. Estimating the probability of being in such an episode
requires not only the knowledge of past spikes, but also of future spikes.
We show how to construct a causal learning rule, which depends only
on past spikes, but associates pre- and postsynaptic firing episodes as if
it also knew future spikes. We also show that this learning rule agrees
with some features of synaptic plasticity in superficial layers of rat visual
cortex (Froemke and Dan, Nature 416:433, 2002).
1 Introduction
Cortical synaptic plasticity agrees with the Hebbian learning principle: Neurons that fire
together, wire together. But many features of cortical plasticity go beyond this simple
principle, such as the dependence on spike-timing or the nonlinear dependence on spike
frequency (see [1] or [2] for review). Studying these features may produce a better understanding of which neurons wire together in the neocortex.
Previous models of cortical synaptic plasticity [3]-[5] differed in their details, but they
agreed that nonlinear learning rules are needed to model cortical plasticity. In linear learning rules, the weight change induced by a presynatic spike would depend only on the postsynaptic spikes, but not on all the other presynaptic spikes. In the cortex, by contrast, the
contribution from a presynaptic spike is stronger when it occurs alone than when it occurs right after another presynaptic spike [5]. Similar results hold for postsynaptic spikes.
Consequently, the weight change depends in a complex way on the whole temporal pattern
of pre- and postsynaptic spikes. Even though this nonlinear dependence can be modeled
phenomenologically [3]-[5], its biological function remains unknown. We will not propose such a function here, but reduce this complex dependence to a few principles, whose
[Figure 1 graphics. Panel D shows the three-state chain with transitions a01 (state 0 to 1), a12 = 1 (state 1 to 2), and a20 (state 2 to 0), self-loops 1 - a01 on state 0 and 1 - a20 on state 2, and emission probabilities e0(1) = 0, e1(1) = 1, e2(1) > 0.]
Figure 1: A: Usually, models of cortical synaptic plasticity associate pre- and postsynaptic
spikes directly. They produce long-term potentiation (LTP) when the presynaptic spike
(pre) precedes the postsynaptic spike (post), and long-term depression (LTD) if the order is
reversed. When several pre- and postsynaptic spikes are interleaved in time, the outcome
depends in a complicated way on the whole spike pattern (LTP or LTD). B: In our model,
pre- and postsynaptic spikes are paired only indirectly. Each spike train is used to estimate
when firing episodes start and end. C: These firing episodes are then associated, with LTP
being induced if the presynaptic firing episode starts before the postsynaptic one and LTD
if the order is reversed and if the episodes are short. D: Hidden Markov model used to
estimate when firing episodes occur.
function may be easier to understand in future studies.
2 Basic learning principle
The basic principle behind our model is illustrated in fig. 1. We propose that the learning
rule does not associate pre- and postsynaptic spikes directly, but rather uses them to estimate whether the pre- or postsynaptic neuron is currently in a period of rapid firing (?firing
episode?) or a period of little or no firing. It then associates the firing episodes. When
the pre- and postsynaptic firing episodes overlap, the synapse is strengthened or weakened
depending on which one started first, but independent of the precise temporal patterns of
spikes within a firing episode. As a consequence, the contribution of each spike to synaptic
plasticity will depend on whether it occurs alone, or surrounded by other spikes, and the
learning rule will be nonlinear. For the right parameter choice, the nonlinear features of
this rule will agree well with nonlinear features of cortical synaptic plasticity.
Implementation of this rule will be done in two steps. Firstly, we will define what "firing episodes" are. Secondly, we will associate the pre- and postsynaptic firing episodes. The first step uses standard methods from hidden Markov models (see e.g. [6]). The pre- and postsynaptic neuron will each be described by a Markov model with three states (fig. 1D), which correspond to firing episodes (state 2; firing probability e2(1) > 0), to the silence between responses (state 0; firing probability e0(1) = 0), and to the first spike of a new firing episode (state 1; firing probability e1(1) = 1; duration = 1 time step). As usual, the parameters of the Markov model are the transition probabilities a_ij, which determine how long firing episodes and silent periods are expected to last, and the emission rates e_i, which determine the firing rates. o_t is the binary observable at time step t (o_t = 1 at spikes and o_t = 0 otherwise), e_i(1) is the firing probability per time step in state i, and e_i(0) = 1 − e_i(1). In general, the pre- and postsynaptic neuron will have different parameters a_ij and e_i.
Once the Markov model is defined, one can use standard algorithms (forward and backward algorithm) to estimate, for any given spike sequence, the state probabilities over time. To model cortical synaptic plasticity, we will increase the synaptic weight whenever the pre- and the postsynaptic neuron have simultaneous firing episodes (both in state 2), and decrease the weight whenever the postsynaptic firing episode starts first (pre in state 1 while post already in state 2):

    Δw(s_pre, s_post, t) =  A_p   for (s_pre, s_post) = (2, 2)
                           −A_d   for (s_pre, s_post) = (1, 2)
                            0     otherwise                                          (1)

where A_p and A_d are the amplitudes of synaptic potentiation and depression. In general, the states are not known with certainty, only their probabilities are, and the actual weight change is therefore defined as:

    Δw(t) = Σ_{s_pre, s_post} Δw(s_pre, s_post, t) P(s_pre(t), s_post(t) | o_1, ..., o_T)        (2)

where the sum is over all possible pre- and postsynaptic states and P is the probability given the whole spike sequence o_1, ..., o_T. As fig. 2 shows, this straightforward learning rule produces weight changes that are similar to those seen in cortex [5]. (One can show that this particular Markov model depends on the transition and emission parameters only through two combinations per state, where Δt is the time step. To fit the data on spike pairs and triplets [5], we set the time constants of the pre- and postsynaptic models to 15 ms, 34 ms, 20 ms, and 70 ms, and the firing rate within episodes to 96 Hz.)
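To make the rule concrete, here is a minimal NumPy sketch (our illustration, not the authors' code; the transition values a01 = 0.05, a20 = 0.2, the within-episode rate p = 0.5, and the amplitudes A_plus, A_minus are arbitrary choices). It estimates the state posteriors of the three-state model for each train with the standard forward-backward algorithm and then applies eqs. (1)-(2); since the two chains are independent given their spike trains, the joint state probability factorizes into the product of the two marginals.

```python
import numpy as np

def episode_hmm(a01=0.05, a20=0.2, p=0.5):
    """Three-state model: 0 = silence, 1 = episode onset, 2 = firing episode."""
    A = np.array([[1 - a01, a01, 0.0],      # A[i, j] = P(s(t+1) = j | s(t) = i)
                  [0.0,     0.0, 1.0],      # a12 = 1: the onset lasts one time step
                  [a20,     0.0, 1 - a20]])
    E = np.array([[1.0,   0.0],             # E[i, o]: e0(1) = 0 (no spikes in silence)
                  [0.0,   1.0],             # e1(1) = 1 (the onset state always spikes)
                  [1 - p, p]])              # e2(1) = p > 0
    return A, E

def smoothed_posteriors(spikes, A, E):
    """P(s(t) = i | whole spike train) by forward-backward; train must start silent."""
    T = len(spikes)
    alpha = np.zeros((T, 3)); beta = np.ones((T, 3))
    a = np.array([1.0, 0.0, 0.0]) * E[:, spikes[0]]
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ A) * E[:, spikes[t]]
        alpha[t] = a / a.sum()
    for t in range(T - 2, -1, -1):
        b = A @ (E[:, spikes[t + 1]] * beta[t + 1])
        beta[t] = b / b.sum()
    g = alpha * beta
    return g / g.sum(axis=1, keepdims=True)

def weight_change(pre, post, A_plus=1.0, A_minus=1.5):
    """Eq. (2): sum_t [A+ P(pre=2)P(post=2) - A- P(pre=1)P(post=2)]."""
    A, E = episode_hmm()
    gpre, gpost = smoothed_posteriors(pre, A, E), smoothed_posteriors(post, A, E)
    return float(np.sum(A_plus * gpre[:, 2] * gpost[:, 2]
                        - A_minus * gpre[:, 1] * gpost[:, 2]))

T = 60
pre, post = np.zeros(T, dtype=int), np.zeros(T, dtype=int)
pre[[10, 12, 14, 16, 18]] = 1          # presynaptic episode starts first ...
post[[15, 17, 19, 21, 23]] = 1         # ... postsynaptic episode starts 5 steps later
dw_pre_leads = weight_change(pre, post)
dw_post_leads = weight_change(post, pre)
```

When the presynaptic episode leads, its onset falls into postsynaptic silence and only the potentiation term accumulates; when the postsynaptic episode leads, the presynaptic onset (state 1) coincides with an ongoing postsynaptic episode (state 2) and the depression term fires, so dw_pre_leads comes out larger than dw_post_leads.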
This learning rule is, however, not biologically plausible, because it violates causality. The
estimates of state probabilities depend not only on past, but also on future observables,
while real synaptic plasticity can depend only on past spikes. To solve this causality problem, we will rewrite the learning rule, essentially deriving a new algorithm in place of the
familiar hidden Markov algorithms. We will derive this causal learning rule not only for
this specific 3-state model, but for general Markov models.
3 General form of the learning rule
3.1 Learning goal
To derive the general form of the learning rule for arbitrary pre- and postsynaptic Markov models, we assume that the transition probabilities a_ij and emission probabilities e_i(o) are given and that the weight change is some function

    Δw(s_pre(t), s_post(t), t)        (3)

of the pre- and postsynaptic states s(t) at time t and the time t itself. If the pre- and postsynaptic state sequences s_pre and s_post were known, the weight at time t would simply be the initial weight w(0) plus all the previous weight changes:

    w(t) = w(0) + Σ_{t'=1}^{t} Δw(s_pre(t'), s_post(t'), t')        (4)

In the current context, the state sequences are unknown and have to be estimated from the spike trains o_pre and o_post. Ideally, we would like to set the weight at time t equal to the expectation value of w(t), given the spike trains o_pre and o_post. But only part of these spike trains is known at time t. Of the sequence o, the synapse has already seen the past values o_1, ..., o_{t−1}, which we will call o_past, and the present value o_t. But
[Figure 2 appears here: six surfaces of the weight change dw as a function of the spike timings t1 and t2, for 2/1 triplets (upper row) and 1/2 triplets (lower row), under the phenomenological model, the hidden Markov model, and a linear rule.]
Figure 2: Weight change produced by spike triplets in various models. Our learning rule
(second column), which depends on the timing of firing episodes but only weakly on the
timing of individual spikes, and which was implemented using hidden Markov models,
agrees well with the phenomenological model (first column) that was used in [5, fig 3b]
to fit data from superficial layers in rat visual cortex. It certainly agrees better than a
purely linear rule (third column). Parameters were set so that all three models produce the
same results for spike pairs (1 presynaptic and 1 postsynaptic spike). Upper row: Weight
change produced by 2 presynaptic and 1 postsynaptic spikes (2/1 triplet). Lower row: 1
presynaptic and 2 postsynaptic spikes (1/2 triplet). t1 and t2 are the times between pre- and postsynaptic spikes. The small boxes on the right show examples of spike patterns for positive and negative t1 and t2.
it has not yet seen the future sequence o_{t+1}, o_{t+2}, ..., which we will call o_future. All one can do is to make some assumption about what the future spikes will be, set w accordingly, and correct w in the future, when the real spike sequence becomes known. Our algorithm assumes no future spikes and sets the weight at time t equal to:

    w(t) = E[ w(t) | o_past, o_t, o_future = 0 ]        (5)

where E[ · | · ] is the expectation value given the spike sequences. The condition that all future spikes are 0 is written as o_future = 0, for both the pre- and the postsynaptic train. One could make other assumptions about the future spikes, but all these assumptions would affect only when the weight changes, but not how much it changes in the long run. This is because the expectation value of a past weight change:

    E[ Δw(s_pre(t₀), s_post(t₀), t₀) | o_past, o_t, o_future ]        (6)

will depend little on the future spikes o_{t+1}, o_{t+2}, ..., if the time t₀ is much earlier than the time t. As t grows, most weight changes will lie in the distant past and depend only weakly on our assumptions about future spikes.
Next we will show how to compute the expectation value in eq. (5) without having to store the past spike trains. To simplify the notation, we will regard each pair of pre- and postsynaptic states (s_pre, s_post) as a state i of a combined pre- and postsynaptic Markov model. We will also combine the pre- and postsynaptic spikes (o_pre, o_post), each of which can take the two values 0 or 1, to a single observable o, which can take 4 values. The desired weight is then equal to:

    w(t) = w(0) + Σ_{t'=1}^{t} E[ Δw(i(t'), t') ],   with   E[ · ] = E[ · | o_past, o_t, o_future = 0 ]        (7)
3.2 Running estimate of state probabilities

To compute w(t), it is helpful to first compute the probabilities

    P_i(t) = P( i(t) = i | o_past, o_t, o_future = 0 )        (8)

of the states given the past and present spikes and assuming that there are no future spikes. The P_i(t) can be computed recursively, in terms of the P_j(t−1) (this is similar to the familiar forward algorithm for hidden Markov models). Write P_i(t) as:

    P_i(t) = P( i(t) = i, o_t, o_future = 0 | o_past ) / P( o_t, o_future = 0 | o_past )        (9)
           = Σ_j P( o_future = 0, o_t, i(t) = i | i(t−1) = j ) P( i(t−1) = j | o_past ) / P( o_t, o_future = 0 | o_past )        (10)

Because of the Markov property, the future and present spikes o_future and o_t depend only on the present state i(t), but not on the past state i(t−1) or on o_past. Similarly, i(t) depends only on i(t−1) but not on o_past. Thus the numerator of the last expression is equal to:

    Σ_j β_i(t) e_i(o_t) a_ji P( i(t−1) = j | o_past )        (11), (12)

with

    β_i(t) = P( o_future = 0 | i(t) = i )        (13)

The probabilities β_i(t) of having no future spikes after state i can be computed by the backward algorithm:

    β_i(t) = Σ_j a_ij e_j(0) β_j(t+1)        (14)

This is a linear equation with constant coefficients. As long as the end of the Markov chain is far enough in the future, this equation reduces to an eigenvalue problem with the solution β_i(t) = λ^(T−t) v_i, where λ is the largest eigenvalue of the matrix with elements a_ij e_j(0) and v is the corresponding eigenvector. As the matrix elements are positive, λ will be real, and the eigenvector will be unique up to a constant factor (except for quite exceptional, disconnected Markov chains, in which it may depend on the choice of end state). The last unknown factor in eq. (12) is P( i(t−1) = j | o_past ), which can be expressed in terms of P_j(t−1):

    P( i(t−1) = j | o_past ) ∝ P_j(t−1) / β_j(t−1)        (15)

where the Markov property was used again. Putting everything together, one gets the update rule for P_i(t):

    P_i(t) = (1/N(t)) Σ_j M_ij(o_t) P_j(t−1)        (16)

with

    M_ij(o_t) = a_ji e_i(o_t) β_i(t) / β_j(t−1)        (17)
              = a_ji e_i(o_t) v_i / (λ v_j)        (18)

The ratio β_i(t)/β_j(t−1) = v_i/(λ v_j) does not really depend on t but only on the eigenvalue λ and the relative size of the elements of the eigenvector v. If there is no pre- or postsynaptic spike at time t (o_t = 0), the normalization factor N(t) is equal to 1, and M_ij no longer depends on t or o_t. In this case, eq. (16) is a linear equation with constant coefficients, which can be integrated analytically from one spike to the next, thereby speeding up the numerical simulation. At pre- or postsynaptic spikes (o_t ≠ 0), N(t) can be computed by summing eq. (16) over i and using Σ_i P_i(t) = 1:

    N(t) = Σ_{i,j} M_ij(o_t) P_j(t−1)        (19)
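The recursion (16)-(19) can be transcribed directly (again an illustrative NumPy sketch with arbitrarily chosen parameter values, not the authors' code). Two consequences of the derivation can be checked numerically: the normalization N(t) equals 1 at spike-free steps, and the causal estimate after the last observation coincides with the forward probabilities reweighted by the eigenvector v, i.e. with P(i(T) | o_1..T, no future spikes).

```python
import numpy as np

# Three-state episode model (0 = silence, 1 = onset, 2 = episode); arbitrary values.
a01, a20, p = 0.05, 0.2, 0.5
A = np.array([[1 - a01, a01, 0.0],
              [0.0,     0.0, 1.0],
              [a20,     0.0, 1 - a20]])
E = np.array([[1.0,   0.0],
              [0.0,   1.0],
              [1 - p, p]])

# Eq. (14) far from the end of the chain: lam * v_i = sum_j a_ij e_j(0) v_j.
eigvals, eigvecs = np.linalg.eig(A * E[:, 0][None, :])
k = np.argmax(eigvals.real)
lam, v = eigvals[k].real, np.abs(eigvecs[:, k].real)

def M(o):
    """Eqs. (17)-(18): M[i, j] = a_ji e_i(o) v_i / (lam v_j)."""
    return (A.T * E[:, o][:, None]) * v[:, None] / (lam * v[None, :])

spikes = np.zeros(40, dtype=int)
spikes[[10, 12, 14, 20]] = 1

rho = np.array([1.0, 0.0, 0.0])       # start in the silent state
norms = []
for o in spikes:
    r = M(o) @ rho                    # eq. (16), before normalization
    N = r.sum()                       # eq. (19)
    rho = r / N
    norms.append((o, N))

# Reference: forward probabilities reweighted by v give
# P(i(T) = i | o_1..T, no spikes after T), which rho should reproduce.
alpha = np.array([1.0, 0.0, 0.0])
for o in spikes:
    alpha = (A.T @ alpha) * E[:, o]
    alpha /= alpha.sum()
ref = alpha * v
ref /= ref.sum()
```

Between spikes M is constant and N = 1, which is what allows the analytic integration from one spike to the next mentioned above.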
3.3 Running estimate of weights

Using the knowledge of the probabilities P_i(t), one can now compute the weight

    w(t) = w(0) + Σ_{t'=1}^{t} E[ Δw(i(t'), t') | o_past, o_t, o_future = 0 ]        (20)

The expectation value of the current term (t' = t) in this equation is equal to

    E[ Δw(i(t), t) ] = Σ_i Δw(i, t) P_i(t)        (21)

and, if there is no pre- or postsynaptic spike at time t (o_t = 0), the new observation coincides with what was already assumed, so the expectation values of the earlier terms do not change. In between spikes, the weight therefore changes as:

    w(t) = w(t−1) + Σ_i Δw(i, t) P_i(t)        (22)

At the time of spikes, the weight change is more complex, because earlier weight changes have to be modified according to the new state information given by the spikes. To compute it, let us introduce the quantities

    W_i(t) = E[ ( w(0) + Σ_{t'=1}^{t} Δw(i(t'), t') ) 1{ i(t) = i } | o_past, o_t, o_future = 0 ]        (23)

The weight is equal to the sum of these W_i:

    w(t) = Σ_i W_i(t)        (24)

and, as we will see next, the W_i(t) can be computed in a recursive way, even in the presence of spikes. Start with:

    W_i(t) = Δw(i, t) P_i(t) + E[ ( w(0) + Σ_{t'=1}^{t−1} Δw(i(t'), t') ) 1{ i(t) = i } ]        (25)
           = Δw(i, t) P_i(t) + Σ_j E[ w(0) + Σ_{t'<t} Δw(i(t'), t') | i(t−1) = j, o_past ] P( i(t−1) = j, i(t) = i | o_past, o_t, o_future = 0 )        (26)

Because of the Markov property, the last expectation value depends only on j and o_past, but not on i, o_t, or o_future, and it is thus equal to W_j(t−1)/P_j(t−1). The other two factors

    P( i(t−1) = j, i(t) = i | o_past, o_t, o_future = 0 ) / P_j(t−1)        (27)

combine to give the same expression that already occurred in equation (9). As shown above (eq. (16)), this expression is equal to

    M_ij(o_t) / N(t)        (28)

with the same M and N as before. Putting everything together, one gets the update rule for W_i(t):

    W_i(t) = Δw(i, t) P_i(t) + (1/N(t)) Σ_j M_ij(o_t) W_j(t−1)        (29)

Together with eqs. (16), (17), (19), and (24) this constitutes our learning rule. It is causal, because it depends only on past, not on future signals, but in the long run it will give the same weight change as the standard hidden Markov rule (2). In between spikes, the P_i in eq. (16) and the W_i in eq. (29) evolve according to linear rules, and the weight changes according to the simple rule (22). These simplifications are a consequence of assuming, in the definition of w(t), that there are no future spikes. Other assumptions are possible: One could, for example, set o_future equal to its expected value under the model, assuming that future spikes occur with the rate predicted by the Markov model, and one could also derive a causal learning rule for this (not shown), but then the evolution of P_i and W_i between spikes would be nonlinear and the evolution of w would also be more complex.

This learning rule still has a rather unusual form. Usually, one writes w(t) as the sum of w(t−1) plus some weight change. Our rule can also be written in this form, if the W_i are replaced by:

    δ_i(t) = W_i(t) − w(t) P_i(t)        (30)

δ_i(t) is a measure for how much the weight should be changed if one suddenly learned, with certainty, that the neurons are in state i. By definition, the δ_i sum to zero:

    Σ_i δ_i(t) = 0        (31)

Inserting the update rule for W_i(t) gives the update rule for δ_i(t):

    δ_i(t) = ( Δw(i, t) − Σ_j Δw(j, t) P_j(t) ) P_i(t) + δ~_i(t) − P_i(t) Σ_j δ~_j(t)        (32)

with

    δ~_i(t) = (1/N(t)) Σ_j M_ij(o_t) δ_j(t−1)        (33)

Summing over i gives the update rule for w(t):

    w(t) = w(t−1) + Σ_i Δw(i, t) P_i(t) + Σ_i δ~_i(t)        (34)

The last, δ-dependent sum is nonzero only if spikes arrive. It occurs because a new spike changes the probability estimates of previous states, and thereby the desired weight.
3.4 Summary of the learning algorithm

To simplify notation, we combined the pre- and postsynaptic Markov models into a single one. How does the learning rule look in terms of the original pre- and postsynaptic parameters? If the presynaptic model has n_pre states and the postsynaptic one n_post, then the combined model has n_pre · n_post states. At each time step, we have to update not only the weight but n_pre · n_post signal traces δ, which we will now write as δ_kl(t), where k denotes the presynaptic and l the postsynaptic state. However, one needs to update only n_pre + n_post of the signal traces P, because they factorize into a pre- and a postsynaptic part: P_kl(t) = P_k^pre(t) P_l^post(t). The learning algorithm is then given by:

Initialization (t = 0): Define the states and the parameters a_ij and e_i of the pre- and postsynaptic Markov model. Define the weight change Δw(k, l, t) for all possible state pairs. Find the leading eigenvector of both Markov chains in the absence of spikes:

    λ^pre v_k^pre = Σ_k' a_kk'^pre e_k'^pre(0) v_k'^pre,   and analogously for λ^post, v^post        (35)

Initialize P_k^pre(0) and P_l^post(0) (equal to 1 for an arbitrary start state and 0 otherwise), w(0), and δ_kl(0) = 0.

Recursion (t = 1, 2, ...):

    M_kk'^pre(t) = a_k'k^pre e_k^pre(o_t^pre) v_k^pre / (λ^pre v_k'^pre)        (36)
    P_k^pre(t) = (1/N^pre(t)) Σ_k' M_kk'^pre(t) P_k'^pre(t−1),   N^pre(t) = Σ_{k,k'} M_kk'^pre(t) P_k'^pre(t−1)        (37)

and analogous equations for M^post and P^post. Then

    P_kl(t) = P_k^pre(t) P_l^post(t)        (38)
    δ~_kl(t) = (1/(N^pre(t) N^post(t))) Σ_{k',l'} M_kk'^pre(t) M_ll'^post(t) δ_k'l'(t−1)        (39)
    w(t) = w(t−1) + Σ_{k,l} Δw(k, l, t) P_kl(t) + Σ_{k,l} δ~_kl(t)        (40)
    δ_kl(t) = ( Δw(k, l, t) − Σ_{k',l'} Δw(k', l', t) P_k'l'(t) ) P_kl(t) + δ~_kl(t) − P_kl(t) Σ_{k',l'} δ~_k'l'(t)        (41)

Terminate at the end of the spike sequences o^pre and o^post.
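The summary above can be exercised end-to-end. The sketch below (our NumPy illustration; model parameters and LTP/LTD amplitudes are arbitrary) runs the causal recursion for a pair of spike trains and checks it against the batch computation it is supposed to reorganize: smoothing each train with the forward-backward algorithm, using the no-spike eigenvector v as the backward value at the last step, and summing Σ_t Σ_kl Δw(k, l) P_kl(t). The two weights agree to machine precision, and the δ traces sum to zero, as required by their definition.

```python
import numpy as np

def model(a01=0.05, a20=0.2, p=0.5):
    """Three-state episode model plus its no-spike eigen-pair (lam, v)."""
    A = np.array([[1 - a01, a01, 0.0], [0.0, 0.0, 1.0], [a20, 0.0, 1 - a20]])
    E = np.array([[1.0, 0.0], [0.0, 1.0], [1 - p, p]])
    ev, V = np.linalg.eig(A * E[:, 0][None, :])   # lam v_i = sum_j a_ij e_j(0) v_j
    k = np.argmax(ev.real)
    return A, E, ev[k].real, np.abs(V[:, k].real)

def filter_one_chain(spikes, A, E, lam, v):
    """Eqs. (36)-(37): per-step normalized propagators M~(t) and estimates P(t)."""
    rho, out = np.array([1.0, 0.0, 0.0]), []
    for o in spikes:
        M = (A.T * E[:, o][:, None]) * v[:, None] / (lam * v[None, :])
        r = M @ rho
        N = r.sum()
        rho = r / N
        out.append((M / N, rho))
    return out

A_, E_, lam, v = model()
T = 60
pre, post = np.zeros(T, dtype=int), np.zeros(T, dtype=int)
pre[[10, 12, 14, 16, 18]] = 1
post[[15, 17, 19, 21, 23]] = 1
DW = np.zeros((3, 3)); DW[2, 2] = 1.0; DW[1, 2] = -1.5   # eq. (1)

# Causal algorithm, eqs. (38)-(41).
delta, w = np.zeros((3, 3)), 0.0
for (Mp, rp), (Mq, rq) in zip(filter_one_chain(pre, A_, E_, lam, v),
                              filter_one_chain(post, A_, E_, lam, v)):
    P = np.outer(rp, rq)                              # eq. (38)
    dtil = Mp @ delta @ Mq.T                          # eq. (39)
    wbar = (DW * P).sum()
    w += wbar + dtil.sum()                            # eq. (40)
    delta = (DW - wbar) * P + dtil - P * dtil.sum()   # eq. (41)

# Batch reference: smoothing with the no-future-spikes tail vector v at the end.
def smooth(spikes, A, E, v):
    Tn = len(spikes)
    alpha, beta = np.zeros((Tn, 3)), np.zeros((Tn, 3))
    a = np.array([1.0, 0.0, 0.0])
    for t, o in enumerate(spikes):
        a = (A.T @ a) * E[:, o]; a /= a.sum(); alpha[t] = a
    beta[Tn - 1] = v
    for t in range(Tn - 2, -1, -1):
        b = A @ (E[:, spikes[t + 1]] * beta[t + 1]); b /= b.sum(); beta[t] = b
    g = alpha * beta
    return g / g.sum(axis=1, keepdims=True)

gp, gq = smooth(pre, A_, E_, v), smooth(post, A_, E_, v)
w_ref = sum(float((DW * np.outer(gp[t], gq[t])).sum()) for t in range(T))
```

The joint posterior factorizes into the product of the two marginals because the chains are independent given their own spike trains, which is what makes the n_pre + n_post bookkeeping of eq. (37) sufficient.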
4 Conclusion
This demonstrates that the basic principle of associating not individual spikes, but whole
firing episodes, can be implemented in a causal learning rule, which depends only on past
signals. This rule does not have to store the time of all past spikes, but only a few signal
traces P and δ, and may thus be biologically plausible. For the right parameter choice,
it agrees well with some nonlinear features of cortical synaptic plasticity (fig. 2). This
does not imply that actual synaptic plasticity follows the same rule, but only that these
particular features are consistent with our basic principle. Based on the predictions of
this rule, one could design more precise experimental tests of whether cortical synaptic
plasticity associates individual spikes or whole firing episodes.
Acknowledgments
This work was supported by R01-EY11001. We thank T. Sejnowski for his comments on a
similar type of learning rules, which he suggested to call ?hidden Hebbian learning?. The
second author (KM) would like to emphasize that his contribution to this paper was limited
to assistance in writing.
References
[1] G.-Q. Bi and M.-M. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Ann. Rev. Neurosci., 24:139–166, 2001.
[2] O. Paulsen and T. J. Sejnowski. Natural patterns of activity and long-term synaptic plasticity. Curr. Opin. Neurobiol., 10:172–179, 2000.
[3] W. Senn, H. Markram, and M. Tsodyks. An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comput., 13:35–67, 2001.
[4] P. J. Sjostrom, G. G. Turrigiano, and S. B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32:1149–1164, 2001.
[5] R. C. Froemke and Y. Dan. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416:433–438, 2002.
[6] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77:257–286, 1989.
The Effect of Singularities in a Learning
Machine when the True Parameters Do
Not Lie on Such Singularities
Sumio Watanabe
Precision and Intelligence Laboratory
Tokyo Institute of Technology
4259 Nagatsuta, Midori-ku, Yokohama, 226-8503 Japan
E-mail: [email protected]
Shun-ichi Amari
Laboratory for Mathematical Neuroscience
RIKEN Brain Science Institute
Hirosawa, 2-1, Wako-shi, Saitama, 351-0198, Japan
E-mail: [email protected]
Abstract
A lot of learning machines with hidden variables used in information science have singularities in their parameter spaces. At
singularities, the Fisher information matrix becomes degenerate,
so that the learning theory of regular statistical models does
not hold. Recently, it was proven that, if the true parameter is
contained in singularities, then the coefficient of the Bayes generalization error is equal to the pole of the zeta function of the
Kullback information. In this paper, under the condition that the
true parameter is almost but not contained in singularities, we
show two results. (1) If the dimension of the parameter from inputs to hidden units is not larger than three, then there exists a
region of true parameters where the generalization error is larger
than that of regular models; otherwise, for any true parameter, the generalization error is smaller than that of
regular models. (2) The symmetry of the generalization error and
the training error does not hold in singular models in general.
1
Introduction
A lot of learning machines with hidden parts such as multi-layer perceptrons [8],
Gaussian mixtures [2], Boltzmann machines, and Bayesian networks with latent variables [4] are nonidentifiable statistical models. In such learning machines, the mapping from the parameter to the probability distribution is not one-to-one. Moreover,
they have complex singularities. In this paper, a parameter w of a parametric probability density function p(x|w) is called a singularity if and only if det I(w) = 0,
where I(w) is the Fisher information matrix at w. If a learning machine has singularities, then neither the maximum likelihood estimator nor the Bayes a posteriori
distribution converges to the normal distribution in general [1][5].
Recently, despite the mathematical difficulty of such learning machines, the
asymptotic Bayes generalization error has been clarified using algebraic geometrical
methods [5][6]. The Bayes generalization error G(n), which is defined as the average
Kullback distance from the true distribution to the Bayes predictive distribution,
is equal to
?
1
G(n) = + o( )
n
n
where n is the number of training samples and (??) is the rational number that is
equal to the largest pole of the zeta function of the Kullback information and the
prior [6][7]. If the true parameter is not a singular point, then ? = d/2, where d
is the dimension of the parameter space, whereas, if the set of the true parameters
consists of singularities, then ? is different from d/2 [6][8].
In almost learning machines, singularities of the parameter space correspond to
smaller models contained in the parametric model. However, in practical applications, the true distribution is seldom contained completely in a finite model, and it
often happens that the true parameter is almost but not completely contained in
singularities.
In this paper, in order to clarify the effect of singularities when the true parameter
lies in the neighborhood of singularities, we propose a new scaling method by which
the Kullback distance from the singularities to the true distribution is equal to c/n,
where n is the number of training samples and c is a controlling parameter. This
scaling method, which is often used in comparing the powers of statistical hypothesis
testing algorithms, enables us to clarify the effect of singularities.
We show two results. (1) If the number of the parameters from inputs to hidden
units is not larger than three, then there exists c > 0 such that the generalization
error is larger than that of the corresponding regular model. Otherwise, for an arbitrary c ≥ 0, the generalization error is made smaller by the
singularities. (2) The symmetry of the generalization error and the training error
does not hold in nonidentifiable learning machines in general.
2
A Singular Model
Since singularities in learning machines with hidden variables have quite complex geometrical structures in general, treating them in a general manner requires the advanced methods of modern algebraic geometry [6]. In this paper, we study a simple hierarchical model. Even in this simple model, a universal phenomenon caused by singularities can be found. Let us consider a learning problem:

    Learner:  p(y|x, a, b) = (1/√(2π)) exp( −(1/2) (y − a f(b, x))² ),        (1)
    True:     q(y|x) = (1/√(2π)) exp( −(1/2) (y − (a0/√n) f(b0, x))² ),        (2)

where y ∈ R¹ is an output and x ∈ R^M is an input with the probability distribution q(x). The parameter space is defined by {(a, b) ∈ R¹ × R^N}. The Kullback distance from q(y|x) to p(y|x, a, b) at the singularities (a f(b, x) ≡ 0) is equal to (1/2n) a0² E_x[f(b0, x)²], where E_x denotes the expectation value over x. If f(0, x) ≡ 0, then an arbitrary point in {a = 0} ∪ {b = 0} is a singularity. We assume that the a priori distribution π(a, b) is a C¹-class function and π(b) ≡ π(0, b) has a compact support.
Let D_n = {(x_i, y_i); i = 1, 2, ..., n} be a set of training samples independently taken from q(x)q(y|x). Both the Bayes a posteriori distribution p(a, b|D_n) and the Bayes predictive distribution p(y|x, D_n) are respectively defined by

    p(a, b|D_n) = (1/C_n) π(a, b) Π_{i=1}^{n} p(y_i|x_i, a, b),
    p(y|x, D_n) = ∫ p(y|x, a, b) p(a, b|D_n) da db,
where C_n is a normalizing constant. The generalization error G(n) and the training error T(n) are respectively defined by

    Generalization Error:  G(n) = E[ log( q(y_{n+1}|x_{n+1}) / p(y_{n+1}|x_{n+1}, D_n) ) ],
    Training Error:        T(n) = E[ (1/n) Σ_{k=1}^{n} log( q(y_k|x_k) / p(y_k|x_k, D_n) ) ],

where E shows the expectation value over all sets of training samples D_n and the testing samples (x_{n+1}, y_{n+1}). If the learning machine is a regular statistical model, then both G(n) = d/(2n) + o(1/n) and T(n) = −d/(2n) + o(1/n) hold, where d is the dimension of the parameter space; hence the coefficient d does not depend on the true parameter. In this paper, we show that this property does not hold in a singular learning machine.
We assume that the learning machine satisfies the condition

    f(b, x) = Σ_{j=1}^{J} f_j(b) e_j(x)        (3)

where {e_j(x)} is a set of orthonormal functions, E_x[e_i(x) e_j(x)] = δ_ij. Then it follows that ‖f(b)‖² ≡ Σ_{j=1}^{J} f_j(b)² = E_x[f(b, x)²]. Then we have the following theorem.
Theorem 1 The Bayes generalization and training errors can be asymptotically expanded as

    G(n) = λ(a0, b0)/(2n) + o(1/n),
    T(n) = μ(a0, b0)/(2n) + o(1/n).

Here λ(a0, b0) and μ(a0, b0) are constant functions of n defined by

    λ(a0, b0) = 1 + a0² ‖f(b0)‖² − E_g[ Σ_{j=1}^{J} a0 f_j(b0) (1/Z(g)) ∂Z/∂g_j ],
    μ(a0, b0) = λ(a0, b0) − E_g[ Σ_{j=1}^{J} 2 g_j (1/Z(g)) ∂Z/∂g_j ],

where g = (g_j) is the J dimensional gaussian distribution whose average and covariance matrix are respectively zero and the identity, E_g shows the expectation value over g, and

    Z(g) = ∫ exp[ ( Σ_{j=1}^{J} (g_j + a0 f_j(b0)) f_j(b) )² / (2 ‖f(b)‖²) ] (π(b)/‖f(b)‖) db.
Proof of Theorem 1. We use the rescaling parameter ξ = √n a and define the average ⟨S(ξ, b)⟩ of a function S(ξ, b) by

    ⟨S(ξ, b)⟩ = ∫ exp(−L(ξ, b)) S(ξ, b) π(ξ/√n, b) dξ db / ∫ exp(−L(ξ, b)) π(ξ/√n, b) dξ db

where we use the notations d(ξ, b, x) = ξ f(b, x) − a0 f(b0, x) and

    L(ξ, b) = (1/n) Σ_{i=1}^{n} L_i(ξ, b),
    L_i(ξ, b) = (1/2) d(ξ, b, x_i)² − √n ε_i d(ξ, b, x_i).

Here ε_i ≡ y_i − a0 f(b0, x_i)/√n is a sample from the standard normal distribution. The Bayes generalization and training errors are respectively equal to

    G(n) = E[ −log ⟨ exp(−L_{n+1}(ξ, b)/n) ⟩ ],
    T(n) = E[ −(1/n) Σ_{k=1}^{n} log ⟨ exp(−L_k(ξ, b)/n) ⟩ ].

When n → ∞, the central limit theorem ensures the convergences in probability and in law respectively,

    (1/n) Σ_{i=1}^{n} e_j(x_i) e_k(x_i) → δ_jk,   (1/√n) Σ_{i=1}^{n} ε_i e_j(x_i) → g_j,

where g = (g_j) is subject to the normal distribution whose average and covariance matrix are respectively equal to zero and the identity. Then by using log(1 − t) = −t − t²/2 + o(t²) for small t, it follows that

    lim_{n→∞} 2n G(n) = Σ_{j=1}^{J} E_g[ { (1/Z) ∂Z/∂g_j − a0 f_j(b0) }² ],
    lim_{n→∞} 2n T(n) = lim_{n→∞} 2n G(n) − 2 E_g[ Σ_{j=1}^{J} g_j (1/Z) ∂Z/∂g_j ],

where E_g shows the expectation value over the random variable g and

    Z(g) = ∫ exp[ −(1/2) Σ_{j=1}^{J} ξ² f_j(b)² + Σ_{j=1}^{J} ξ f_j(b) (g_j + a0 f_j(b0)) ] π(b) dξ db.

By using the identity

    { (1/Z) ∂Z/∂g_j }² = (1/Z) ∂²Z/∂g_j² − (∂/∂g_j){ (1/Z) ∂Z/∂g_j },

and E_g[(∂/∂g_j) f(g)] = E_g[g_j f(g)] for an arbitrary function f(g), we obtain Theorem 1. (End of Proof: Theorem 1).
Theorem 1 shows that, if a0 = 0, then λ(a0, b0) = 1, which coincides with the general theory for the case when the true parameter is contained in the singularities [6]. In fact, if a0 = 0, the zeta function of the Kullback information

    ζ(z) = ∫ a^{2z} ‖b‖^{2z} π(a, b) da db,

has the largest pole at z = −1/2. The new point of this paper is that the learning coefficient λ(a0, b0) for a0 ≠ 0, b0 ≠ 0 is obtained. Unfortunately it cannot be represented by any simple function.
3
The Effect of Singularities
In order to study the effect of singularities, we adopt the simple learning machine

    a f(b, x) = Σ_{j=1}^{N} a b_j e_j(x)        (4)

where a ∈ R¹, b ∈ R^N, x ∈ R^M (N > 1). Also we assume that π(b) depends only on the norm ‖b‖, that is to say, π(b) can be rewritten as π(‖b‖). In this learning machine, if the true regression function is y = 0, then the set of true parameters is {(a, b); a = 0 or b = 0}.

Remark. By using the re-parameterization w_i = a b_i, the learning machine eq. (4) results in

    p(y|x, w) = (1/√(2π)) exp( −(1/2) (y − Σ_{j=1}^{N} w_j e_j(x))² ).

This learner is a regular statistical model, hence both G(n) = N/(2n) + o(1/n) and T(n) = −N/(2n) + o(1/n) hold. Therefore, by comparing λ(a0, b0) and −μ(a0, b0) with N, let us clarify the effect of singularities.
Theorem 2 Let us consider the learning machine and the true distribution given by eq. (1) and eq. (2), which are restricted as eq. (4). If N ≥ 2, then the Bayes generalization and training errors are respectively given by

    λ(a0, b0) = 1 + E_g[ (a0²‖b0‖² + a0 b0 · g) Y_N(g)/Y_{N−2}(g) ]        (5)
    μ(a0, b0) = 1 − 2N + E_g[ (a0²‖b0‖² + 3 a0 b0 · g + 2‖g‖²) Y_N(g)/Y_{N−2}(g) ]        (6)

where

    Y_N(g) = ∫₀^{π/2} sin^N θ exp( −(1/2) ‖a0 b0 + g‖² sin² θ ) dθ.
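Eq. (5) can be evaluated numerically by one-dimensional quadrature for Y_N combined with Monte Carlo over g ~ N(0, I); the following sketch (our own illustration; the sample size, grid, and test points are arbitrary choices) reproduces the qualitative behaviour of Figure 1: λ = 1 exactly when a0 b0 = 0, and λ < N for N = 4 at large a0‖b0‖.

```python
import numpy as np

def Y(N, K2, theta):
    """Y_N = int_0^{pi/2} sin^N(t) exp(-K2 sin(t)^2 / 2) dt (trapezoid rule).
    K2 is a vector of squared norms ||a0 b0 + g||^2, one entry per MC sample."""
    s = np.sin(theta)[None, :]
    f = s**N * np.exp(-0.5 * K2[:, None] * s**2)
    d = theta[1] - theta[0]
    return d * (f[:, 1:-1].sum(axis=1) + 0.5 * (f[:, 0] + f[:, -1]))

def lam(N, c, n_samples=4000, seed=0):
    """Monte Carlo estimate of eq. (5); c stands for the vector a0 * b0 in R^N."""
    g = np.random.default_rng(seed).standard_normal((n_samples, N))
    theta = np.linspace(0.0, np.pi / 2, 2001)
    K2 = ((c[None, :] + g)**2).sum(axis=1)
    ratio = Y(N, K2, theta) / Y(N - 2, K2, theta)
    term = (np.dot(c, c) + g @ c) * ratio
    return 1.0 + term.mean()

lam_origin = lam(4, np.zeros(4))                    # true parameter on the singularity
lam_far = lam(4, np.array([10.0, 0.0, 0.0, 0.0]))   # a0 ||b0|| = 10, N = 4
```

lam_origin equals 1 because the integrand vanishes identically, while lam_far comes out slightly below N = 4, consistent with the curves of Figure 1; for N = 2 the same estimator exceeds N in the region indicated there.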
Proof of Theorem 2. We introduce the general polar coordinate b = (r, θ). The function Z(g) in Theorem 1 is given by

    Z(g) = ∫ exp( ((g + a0 b0) · θ)² / 2 ) π(r) r^{N−2} dr dθ.

Since Z(g) is independent of the direction of g + a0 b0, we can assume g + a0 b0 = ‖g + a0 b0‖ · (1, 0, ..., 0) without loss of generality. By representing θ = b/r as

    b_i/r = sin θ₁ ··· sin θ_{i−1} cos θ_i   (1 ≤ i ≤ N−1),
    b_N/r = sin θ₁ ··· sin θ_{N−1},

we obtain

    Z(g) = const. × ∫₀^{π/2} sin^{N−2} θ₁ exp( (‖a0 b0 + g‖²/2) cos² θ₁ ) dθ₁,

which completes the proof. (End of Proof: Theorem 2).
Unfortunately, the function λ(a0, b0) in eq. (5) cannot be represented by any classically analytic function. Figure 1 shows the value λ(a0, b0) given by eq. (5), obtained by numerical calculation, for the cases N = 2, 3, ..., 6. The horizontal and vertical axes respectively show |a0|‖b0‖ and λ(a0, b0)/N. The generalization error
Figure 1: Coefficients of generalization errors λ(a0, b0)/N as functions of a0‖b0‖
Figure 2: Coefficients of training errors μ(a0, b0)/N as functions of a0‖b0‖
is smaller than that of the corresponding regular statistical model if and only if λ(a0, b0)/N < 1.

For all cases 2 ≤ N ≤ 6, λ(a0, b0) converges to the dimension N when |a0|‖b0‖ → ∞. If N = 2 or N = 3, λ(a0, b0) becomes larger than N if the true parameter mismatches the singularities. When N = 2, in the region |a0|‖b0‖ > 2.8, λ(a0, b0) > N. When N = 3, only in the interval 3.8 < |a0|‖b0‖ < 6.8, λ(a0, b0) > N.

On the other hand, if N ≥ 4, the learning coefficient λ(a0, b0) is always smaller than N, even if the true parameter is not contained in singularities. If the dimension of the parameter is large, then singularities make the Bayes generalization error smaller than regular statistical models, independently of the place of the true parameter. This result can be analyzed more precisely by the asymptotic expansion.
Theorem 3 The coefficients can be asymptotically expanded when |a0|‖b0‖ → ∞:
\[
\lambda(a_0, b_0) = N - \frac{(N-1)(N-3)}{a_0^2 \|b_0\|^2} + o\Big( \frac{1}{a_0^2 \|b_0\|^2} \Big),
\]
\[
\mu(a_0, b_0) = -N + \frac{2(N-1)^2}{a_0^2 \|b_0\|^2} + o\Big( \frac{1}{a_0^2 \|b_0\|^2} \Big).
\]
In this theorem, a0²‖b0‖²/2 is equal to the Kullback distance from the singularities
to the true distribution. It should be emphasized that the symmetric relation
λ(a0, b0) + μ(a0, b0) = 0 does not hold near the singularities. In the generalization
error, the coefficient of 1/(a0²‖b0‖²) is positive if N = 2, whereas it is negative if
N ≥ 4; when N = 3, the coefficient is equal to zero.
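This sign pattern is exactly that of the coefficient −(N−1)(N−3) appearing in the expansion of λ(a0, b0) in Theorem 3; a one-line check:

```python
# Coefficient of 1/(a0² ||b0||²) in the expansion of the generalization error:
# positive for N = 2, zero for N = 3, negative for N >= 4.
coef = {N: -(N - 1) * (N - 3) for N in range(2, 7)}
print(coef)  # {2: 1, 3: 0, 4: -3, 5: -8, 6: -15}
```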
Proof of Theorem 3. The function Y_N(g) in Theorem 2 is rewritten as
\[
Y_N(g) = \frac{1}{\|a_0 b_0 + g\|^{N+1}} \int_0^{\|a_0 b_0 + g\|} \frac{x^N}{\sqrt{1 - \frac{x^2}{\|a_0 b_0 + g\|^2}}}\, \exp\Big( -\frac{x^2}{2} \Big)\, dx.
\]
Then by using
\[
\frac{1}{\sqrt{1 - \frac{x^2}{\|a_0 b_0 + g\|^2}}} \simeq 1 + \frac{x^2}{2 \|a_0 b_0 + g\|^2},
\]
we have an asymptotic expansion,
\[
\lambda(a_0, b_0) = 1 + E_g\Bigg[ (a_0^2 \|b_0\|^2 + a_0\, b_0 \cdot g)\,
\frac{\displaystyle \frac{C_N}{\|a_0 b_0 + g\|^{N+1}} + \frac{C_{N+2}}{2\|a_0 b_0 + g\|^{N+3}}}
     {\displaystyle \frac{C_{N-2}}{\|a_0 b_0 + g\|^{N-1}} + \frac{C_N}{2\|a_0 b_0 + g\|^{N+1}}} \Bigg],
\]
where C_N = 2^{(N-1)/2} Γ((N+1)/2). The training error can be obtained in the same way.
(End of Proof: Theorem 3)
4 Discussion
Let us briefly discuss three points.
Firstly, in this paper we compared a simple layered model with a regular statistical
model. If we employ a linear learner
\[
y = \sum_{j=1}^{N} b_j e_j(x),
\]
then we can expect more precise statistical estimation by turning it into the
hierarchical model
\[
y = \sum_{j=1}^{N} a\, b_j e_j(x),
\]
if N ≥ 4 and Bayesian estimation is applied.
Secondly, Bayesian model selection is usually carried out by minimizing the
stochastic complexity,
\[
F(D_n) = -\log \int \prod_{i=1}^{n} p(y_i \,|\, x_i, a, b)\, \varphi(a, b)\, da\, db.
\]
Let us consider the model selection problem between the model y = 0 and the model in
eq.(1). If the Kullback distance from the singularities to the true parameter is
equal to c/n and n is sufficiently large, then for an arbitrary c, y = 0 is selected
with probability one. Theoretically speaking, this fact shows that the minimum
stochastic complexity criterion is not equivalent to the minimum generalization error
criterion.
And lastly, we have shown that, if the true parameter is in the neighborhood of
singularities, then the symmetry of the generalization error and the training error
does not hold. Therefore the generalization error cannot be estimated from
the training error by the conventional method.
These three points are the important problems for future study.
5 Conclusion
The effect of singularities when the true parameter mismatches them has been clarified. Singularities make the Bayes generalization error small if the dimension of the
inputs to the hidden units is large. We expect that this research will be a basis for clarifying
the reason why neural information processing systems need hierarchical structures.
This work was supported by the Ministry of Education, Science, Sports, and Culture
in Japan, Grant-in-aid for scientific research 12680370.
Improving Transfer Rates in Brain Computer
Interfacing: A Case Study
Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann and Helge Ritter
University of Bielefeld
Bielefeld, Germany
{pmeinick, mkaper, fhoppe, helge}@techfak.uni-bielefeld.de
Abstract
In this paper we present results of a study on brain computer interfacing.
We adopted an approach of Farwell & Donchin [4], which we tried to
improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic
setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used
state-of-the-art machine learning techniques for signal classification and
on the other hand we augmented the data space by using more electrodes
for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we
accumulated the values of the classification function in order to combine
several classifications, which finally lead to significantly improved rates
as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer
rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min.
1 Introduction
Some neurological diseases result in the so-called locked-in syndrome. People suffering
from this syndrome have lost control over their muscles and are therefore unable to communicate.
Consequently, their brain signals should be used for communication. Besides the clinical
application, developing such a brain-computer interface (BCI) is in itself an exciting goal
as indicated by a growing research interest in this field.
Several EEG-based techniques have been proposed for realization of BCIs (see [6, 12], for
an overview). There are at least four distinguishable basic approaches, each with its own
advantages and shortcomings:
1. In the first approach, participants are trained to control their EEG frequency pattern for binary decisions. Whether specific frequency components (the µ and β rhythms)
are heightened in power or not results in upward or downward cursor
movements. A further version extended this basic approach to 2D movements.
Transfer rates of 20-25 bits/min were reported [12].
2. Imaginations of movements, resulting in the ?Bereitschaftspotential? over sensorimotor cortex areas, are used to transmit information in the device of Pfurtscheller
Figure 1: Stimulus matrix with one column highlighted.
et al. [8], which is in use by a tetraplegic patient. Blankertz et al. [2] applied
sophisticated methods for data-analysis to this approach and reached fast transfer
rates of 23 bits/min when classifying brain signals preceding overt muscle activity.
3. The thought translation device by Birbaumer et al. [5, 1] is based on slow cortical
potentials, i.e. large shifts in the EEG-signal. They trained people in a biofeedback
scenario to control this component. It is rather slow (<6 bits/min) and requires
intensively trained participants but is in practical use.
4. Farwell & Donchin [4, 3, 10] developed a BCI-System by utilizing specific positive deflections (P300) in EEG-signals accompanying rare events (as discussed in
detail below). It is moderately fast (up to 12 bits/min) and needs no practice of the
participant, but requires visual attention.
For BCIs, it is very desirable to have fast transfer rates. In our own studies, we therefore
tried to accelerate the fourth approach by using state-of-the-art machine learning techniques
and fusing data from different electrodes for data-analysis. For that purpose we utilized the
basic setup of Farwell & Donchin (referred to as F&D) [4] who used the well-studied
P300 component to create a BCI system. They presented a 6×6 matrix (see Fig. 1), filled
with letters and digits, and highlighted all rows and columns sequentially in random order. People were instructed to focus on one symbol in the matrix, and mentally count its
highlightings. From EEG-research it is known, that counting a rare specific event (oddballstimulus) in a series of background stimuli evokes a P300 for the oddball stimulus. Hence,
highlighting the attended symbol in the 6×6 matrix should result in a P300, a characteristic positive deflection with a latency of around 300ms in the EEG-signal. It is therefore
possible to infer the selected symbol by detecting the P300 in EEG-signals. Under suitable
circumstances, most brains expose a P300. Thus, no training of the participants is necessary. For identification of the right column and row associated with a P300, Farwell &
Donchin used the model-based techniques Area and Peak picking (both described in section
2) to detect the P300. In addition, as a data-driven approach, they used Stepwise Discriminant Analysis (SWDA). Using SWDA in a later study [3] resulted in transfer rates between
4.8 and 7.8 symbols per minute at an accuracy of 80% with a temporal distance of 125ms
between two highlightings.
In our work reported here we could improve several aspects of the F&D-approach by utilizing very recent machine learning techniques and a larger number of EEG-electrodes. First
of all, we could increase the transfer rate by using Support Vector Machines (SVM) [11] for
classification. Inspired by a recent approach to learning of discriminative densities [7] we
utilized the values of the SVM classification function as a measure of confidence which we
accumulate over certain classifications in order to speed up the transfer rate. In addition,
we enhanced classification rates by augmenting the data-space. While Farwell & Donchin
employed only data from a single electrode for classification, we used the data from 10
electrodes simultaneously.
2 Methods
In the following we describe the techniques used for acquisition, preprocessing and analysis
of the EEG-data.
Data acquisition. All results of this paper stem from offline analyses of data acquired
during EEG-experiments. The experimental setup was the following: participants were
seated in front of a computer screen presenting the matrix (see Fig. 1) and user instructions. EEG-data were recorded with 10 Ag/AgCl electrodes at positions of the extended
international 10-20 system (Fz, Cz, Pz, C3, C4, P3, P4, Oz, OL, OR1) sampled at 200Hz
and low-pass filtered at 30Hz. The participants had to perform a certain number of trials.
For the duration of a trial, they were instructed
to focus their attention on a target symbol specified by the program,
to mentally count the highlightings of the target symbol, and
to avoid any body movement (especially eye moves and blinks).
Each trial is subdivided into a certain number of subtrials. During each subtrial, 12 stimuli
are presented, i.e. the 6 rows and the 6 columns are highlighted in random order. For
different BCI-setups, the time between stimulus onsets, the interstimulus interval (ISI),
was either 150, 300 or 500ms, while a highlighting always lasts 150ms. To each stimulus
corresponds an epoch, a time frame of 600ms after stimulus onset.2 During this interval a
P300 should be evoked if the stimulus contains the target symbol.
There is no pause between subtrials, but between trials. During the pause, the participants
had time to focus on the next target symbol, before they initiated the next trial. The target
symbol was chosen randomly from the available set of symbols and was presented by the
program in order to create a data set of labelled EEG-signals for the subsequent offline
analysis.
Data preprocessing. To compensate for slow drifts of the DC potential, in a first step the
linear trend of the raw data in each electrode over the duration of a trial was eliminated. In
a second step, the data was normalized to zero mean and unit standard deviation. This was
separately done for each electrode taking the data of all trials into account.
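The two preprocessing steps can be sketched as follows; a minimal numpy version (here the normalization statistics are computed from the trial itself so the snippet is self-contained, whereas the paper pools the data of all trials):

```python
import numpy as np

def preprocess(trial):
    """trial: (samples, electrodes) array for one trial.
    Step 1: remove each electrode's linear trend over the trial.
    Step 2: normalize each electrode to zero mean and unit standard deviation."""
    t = np.arange(trial.shape[0], dtype=float)
    out = np.empty_like(trial, dtype=float)
    for e in range(trial.shape[1]):
        slope, intercept = np.polyfit(t, trial[:, e], 1)   # linear trend fit
        out[:, e] = trial[:, e] - (slope * t + intercept)  # detrended signal
    return (out - out.mean(axis=0)) / out.std(axis=0)
```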
Classification of Epochs. Test and training sets were created by taking the data of one symbol as the test set and the data of the other symbols as the training set, in a
cross-validation scheme.
The task of classifying a subtrial for the identification of a target symbol has to be distinguished from the classification of a single epoch for detection of a signal, correlated with
oddball-stimuli, which we briefly refer to as a "P300 component" in a simplified manner
in the following. In case of using a subtrial to select a symbol, two P300 components have
to be detected within epochs: one corresponding to a row-, another to a column-stimulus.
The detection algorithm works on the data of an epoch and has to compute a score which
reflects the presence of a P300 within that epoch. Therefore, 12 epochs have to be evaluated
for the selection of one target symbol.
For the P300-detection, we utilized two model-based methods which had been proposed by
F&D, and one completely data-driven method based on Support Vector Machines (SVMs)
[11]. For training of the classifiers, we built up a sets of epochs containing an equal number
of positive and negative examples, i.e. epochs with and without a P300 component.
1 OL denotes the position halfway between O1 and T5, and OR between O2 and T6, respectively.
2 With an ISI shorter than 450ms, there is a time overlap of consecutive epochs.
Figure 2: Trials, subtrials and epochs in the course of time (left); each subtrial consists of 12 stimulus onsets with one epoch of 600ms per stimulus. Model-based methods for analysis (right): Area calculates the surface in the P300 window, Peak picking calculates differences between peaks.
The first model-based method uses as its score, as shown in Fig. 2, the area in the
P300 window ("Area" method), while the second model-based method uses the difference
between the lowest point before and the highest point within the P300 window ("Peak
picking" method). Hyperparameters of the model-based methods were the boundaries
of the P300 window; they were selected on the average of the epochs containing the
P300, by taking the boundaries giving the largest area.
For the completely data-driven approach, SVMs were optimized to distinguish between the
two classes (w/o P300) implied by the training set. As compared with many traditional
classifiers, such as the SWDA method used by F&D, SVMs can realize Bayes-consistent
classifiers under very general conditions without requiring any specific assumptions about
the underlying data distributions and decision boundaries. Thereby convergence to the
Bayes optimum can be achieved by a suitable choice of hyperparameters.
When using SVMs, it is not clear what measure to take as the score of an epoch. The
problem is that the SVM has first of all been designed to assign binary class labels to its
input without any measure of confidence on the resulting decision.
However, a recent approach to learning of discriminative densities [7] suggests an
interpretation of the usual discrimination function for SVMs with positive kernels in
terms of scaled density differences. This finding provides us with a well-motivated
score of an epoch: with x denoting the data vector of an epoch and y_i ∈ {−1, +1} the
class label of the i-th training example, which is positive/negative for epochs
with/without target stimulus, the SVM score is computed as
\[
s(x) = \sum_i \alpha_i\, y_i\, k(x, x_i),
\tag{1}
\]
where k is a Gaussian kernel function with bandwidth σ (selected, together with the
weight C for the soft-margin penalties, by cross-validation on the training set),
evaluated at the i-th data example x_i. The mixing weights α_i were estimated by
quadratic optimization of an SVM objective with linear soft-margin penalties, for
which we used the SMO algorithm [9].
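Once the mixing weights are trained, the score of eq. (1) is a plain kernel expansion; a minimal sketch with a Gaussian kernel (the weights and training points below are placeholders; in the paper they result from SMO training of the soft-margin SVM objective):

```python
import numpy as np

def gaussian_kernel(x, xi, sigma):
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    return float(np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2)))

def svm_score(x, X_train, y_train, alpha, sigma):
    """Kernel expansion s(x) = sum_i alpha_i * y_i * k(x, x_i); a large positive
    score indicates an epoch containing the target-related component."""
    return sum(a * y * gaussian_kernel(x, xi, sigma)
               for a, y, xi in zip(alpha, y_train, X_train))
```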
Combination of subtrials. Because EEG-data possess a very poor signal-to-noise ratio
(SNR), identification of the target symbol from a single subtrial is usually not reliable
enough to achieve a reasonable classification rate. Therefore, several subtrials have to be
combined for classification, slowing down the transfer rate. Thus, an important goal is to
decrease the number of subtrials which have to be combined for a satisfactory classification
rate.
An important constraint for the development of the specific offline-analysis programs was
to realize a testing scheme which should be as close as possible to a corresponding online
evaluation. Therefore, we tested a method for certain n-combinations of subtrials in the
following way: different series of successive subtrials were taken out of a test set and the
corresponding single classifications were combined as explained below. Thereby, the test
series contained only subtrials belonging to identical symbols and these were combined in
their original temporal order.3
In contrast, Farwell & Donchin randomly chose samples from a test set, built from subtrials
taken from different trials and belonging to different symbols. With this procedure, they
broke up the time course of the recorded data and did not distinguish between different
symbols, i.e. different positions in the matrix on the screen.
!
Based on the data of n subtrials, one has to choose a row and a column in order to
identify the target symbol,4 i.e. to classify a trial. Therefore, in a first step, the
single scores s_i^(j) of the epochs corresponding to the stimulus associated with the
i-th row of the j-th subtrial were summed up to the total score S_i = Σ_j s_i^(j).
Then, the target row was chosen as argmax_i S_i with i ∈ {1, ..., 6}. Equivalent steps
were performed to choose the target column. Based on these decisions the target symbol
was finally selected in accordance with the presented matrix.
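The accumulation scheme above can be sketched as follows; the per-epoch scores used here are synthetic stand-ins for classifier outputs:

```python
import numpy as np

def select_symbol(row_scores, col_scores, matrix):
    """row_scores, col_scores: (n_subtrials, 6) arrays with one epoch score per
    highlighted row/column.  Scores are accumulated over the subtrials, and the
    row and column with the largest total score select the symbol."""
    row = int(np.argmax(np.asarray(row_scores).sum(axis=0)))
    col = int(np.argmax(np.asarray(col_scores).sum(axis=0)))
    return matrix[row][col]
```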
3 Experimental Results
Before going into details, we outline our investigations about improving the usability of the
F&D-BCI. First, the different methods were compared to classify the data of the Pz electrode, which was originally used by Farwell & Donchin. Second, further single electrodes
were taken as input source. This revealed information about interesting scalp positions to
record a P300 and on the other hand indicated which channels may contain a useful signal.
Third, the SVM classification rate with respect to epochs was improved by increasing the
data-space. Therefore, the input vector for the classifier was extended by combining data
from the same epoch but from different electrodes. These tests indicated that the best classification rates could be achieved using as detection method an SVM with all ten electrodes
as input sources.
Since the results of the first three steps were established based on the data of one initial
experiment with only one participant, we evaluated the generality of these techniques by
testing different subjects and BCI parameters. Finally, the BCI performance in terms of
attainable communication rates is estimated from these analyses.
Method comparison using the Pz electrode as input source. All four methods were
applied to the data of one initial experiment with an ISI of 500ms and 3 subtrials per trial.
Figure 3 presents the classification rates of up to 10 subtrials.
The SVM method achieved best performance, its epoch classification rate was 76.3%
(SD=1.0) in a 10-fold crossvalidation with about 380 subtrial samples in the training sets,
and about 40 in the test sets. Of each subtrial in the training set, 4 epochs (2 with, 2 without
a P300) were taken as training samples, whereas all 12 epochs of the subtrials of the test
set were classified. For each training set, hyperparameters were selected by another 3-fold
crossvalidation on this set.
3
For a higher number of subtrial combinations, subtrials from different trials had to be combined.
However, real-world-application of this BCI don?t require such combinations with respect to the
finally achieved transfer rates reported in section 3.
4
The method index is omitted in the following.
Figure 3: (left) Method comparison on the Pz electrode: The three techniques were applied
to the data of the initial experiment. (right) Classification rates for different numbers of
electrodes.
Figure 4: Electrode comparison on the data of the initial experiment (left: SVM, right: Peak picking; classification rate (%) versus time (s), for the electrodes P3, P4, OL, OR, OZ, Fz, Cz, Pz, C3 and C4).
Different electrodes as input source. The method comparison tests were repeated for
each electrode. The results of the Peak picking and SVM method are shown in Figure 4.
The SVM is able to extract useful information from all ten electrodes, whereas the Peak
picking performance varies for different scalp positions. Especially, the electrodes over the
visual cortex areas OZ, OR and OL are useless for the model-based techniques, as the same
characteristics are revealed by tests with the Area method.
Higher-dimensional data-space. While Farwell & Donchin used only one electrode for
data-analysis, we extended the data-space by using larger numbers of electrodes. We calculated classification rates for Pz alone, three, seven, and ten electrodes. A signal correlated
with oddball-stimuli was classified at rates of 76.8%, 76.8%, 90.9%, and 94.5%, respectively for the different data-spaces of 120, 360, 840, and 1200 dimensions. These rates were
calculated with 850 positive and 850 negative epoch samples and a 3-fold crossvalidation.
This classified signal might be more than solely the traditional P300 component. Applying data-space augmentation for classification to infer symbols in the matrix results in the
classification rates depicted in Figure 3 (right) for an ISI of 500ms. Using ten electrodes
simultaneously, combined in one data vector, outperforms lower-dimensional data-spaces.
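The dimensions quoted above follow from the recording parameters: a 600 ms epoch sampled at 200 Hz gives 120 samples per electrode, so 1, 3, 7 or 10 electrodes yield 120-, 360-, 840- and 1200-dimensional input vectors. A sketch of the concatenation:

```python
import numpy as np

FS = 200                       # sampling rate in Hz
EPOCH_SAMPLES = int(0.6 * FS)  # 600 ms epoch -> 120 samples per electrode

def augment(epoch):
    """epoch: (samples, electrodes) array -> one concatenated feature vector."""
    assert epoch.shape[0] == EPOCH_SAMPLES
    return epoch.T.reshape(-1)  # electrode-wise concatenation

print(augment(np.zeros((EPOCH_SAMPLES, 10))).shape)  # (1200,)
```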
Figure 5: Mean-classification rates (left) and transfer rates (right) for different ISIs. Error
bars range from best to worst results. Note that a subtrial takes a specific amount of time.
Therefore, the time-dependent transfer rates decrease with the number of subtrials.
Reducing the ISI and using more participants. The improved classification rates encouraged further experiments. To accelerate the system, we reduced the ISI to 300ms and
150ms. Additionally, to generalize the results, we recruited four participants. Means, best
and worst classification rates are presented in Figure 5, as well as average and best transfer
rates. The latter were calculated according to
(
(
where is the number of choices (36 here), the probability for classification, and the
time required for classification.
Using an ISI of 300ms results in slower transfer rates than using an ISI of 150ms. The
latter ISI results on the average in classifying a symbol after 5.4s with an accuracy of 80%
(disregarding delays between trials). The poorest performer needs 9s to reach this criterion,
the best performer achieves an accuracy of 95.2% already after 3.6s. The transfer rates, with
a maximum of 84.7 bits/min and an average of 50.5 bits/min, outperform the EEG-based
BCI-systems we know.
4 Conclusion
With an application of the data-driven SVM-method to classification of single-channel
EEG-signals, we could improve transfer rates as compared with model-based techniques.
Furthermore, by increasing the number of EEG-channels, even higher classification and
transfer rates could be achieved. Accumulating the value of the classification function as
measure of confidence proved to be practical to handle series of classifications in order to
identify a symbol. This resulted in high transfer rates with a maximum of 84.7 bits/min.
5 Acknowledgements
We thank Thorsten Twellmann for supplying the SVM-algorithms and the Department of
Cognitive Psychology at the University of Bielefeld for providing the experimental environment. This work was supported by Grant Ne 366/4-1 and the project SFB 360 from the
German Research Council (Deutsche Forschungsgemeinschaft).
References
[1] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler,
J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature,
398:297-298, 1999.
[2] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain
computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors,
Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT
Press.
[3] E. Donchin, K.M. Spencer, and R. Wijesinghe. The mental prosthesis: Assessing the
speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation
Engineering, 8(2):174-179, 2000.
[4] L.A. Farwell and E. Donchin. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical
Neurophysiology, 70(S2):510-523, 1988.
[5] A. Kübler, B. Kotchoubey, T. Hinterberger, N. Ghanayim, J. Perelmouter, M. Schauer,
C. Fritsch, E. Taub, and N. Birbaumer. The thought translation device: a neurophysiological approach to communication in total motor paralysis. Experimental Brain
Research, 124:223-232, 1999.
[6] A. Kübler, B. Kotchoubey, J. Kaiser, J.R. Wolpaw, and N. Birbaumer. Brain-computer
communication: Unlocking the locked in. Psychological Bulletin, 127(3):358-375,
2001.
[7] P. Meinicke, T. Twellmann, and H. Ritter. Maximum contrast classifiers. In Proc. of
the Int. Conf. on Artificial Neural Networks, Berlin, 2002. Springer. In press.
[8] G. Pfurtscheller, C. Neuper, C. Guger, B. Obermaier, M. Pregenzer, H. Ramoser, and
A. Schlögl. Current trends in Graz brain-computer interface (BCI) research. IEEE
Transactions on Rehabilitation Engineering, pages 216-219, 2000.
[9] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel
Methods - Support Vector Learning, pages 185-208, Cambridge, MA, 1999. MIT
Press.
[10] J.B. Polikoff, H.T. Bunnell, and W.J. Borkowski. Toward a P300-based computer
interface. RESNA '95 Annual Conference, RESNAPRESS, Arlington, VA, pages
178-180, 1995.
[11] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
[12] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan.
Brain-computer interfaces for communication and control. Clinical Neurophysiology,
113:767-791, 2002.
towards:1 labelled:1 reducing:1 called:1 total:2 pas:1 experimental:4 neuper:1 select:1 people:3 support:4 latter:2 tested:1 correlated:2 |
Cluster Kernels for Semi-Supervised Learning
Olivier Chapelle, Jason Weston, Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
{first.last}@tuebingen.mpg.de
Abstract
We propose a framework to incorporate unlabeled data in a kernel
classifier, based on the idea that two points in the same cluster are
more likely to have the same label. This is achieved by modifying
the eigenspectrum of the kernel matrix. Experimental results assess
the validity of this approach.
1 Introduction
We consider the problem of semi-supervised learning, where one has usually few
labeled examples and a lot of unlabeled examples. One of the first semi-supervised
algorithms [1] was applied to web page classification. This is a typical example
where the number of unlabeled examples can be made as large as possible since
there are billions of web pages, but labeling is expensive since it requires human
intervention. Since then, there has been a lot of interest in this paradigm in the
machine learning community; an extensive review of existing techniques can be
found in [10].
It has been shown experimentally that under certain conditions, the decision function can be estimated more accurately, yielding lower generalization error [1, 4, 6].
However, in a discriminative framework, it is not obvious how unlabeled data, or even perfect knowledge of the input distribution P(x), can help in
the estimation of the decision function. Without any assumption, it turns out that
this information is actually useless [10].
Thus, to make use of unlabeled data, one needs to formulate assumptions. One
which is made, explicitly or implicitly, by most of the semi-supervised learning
algorithms is the so-called "cluster assumption" saying that two points are likely to
have the same class label if there is a path connecting them passing through regions
of high density only. Another way of stating this assumption is to say that the
decision boundary should lie in regions of low density. In real world problems, this
makes sense: let us consider handwritten digit recognition and suppose one tries to
classify digits 0 from 1. The probability of having a digit which is in between a 0 and
a 1 is very low.
In this article, we will show how to design kernels which implement the cluster
assumption, i.e. kernels such that the induced distance is small for points in the
same cluster and larger for points in different clusters.
Figure 1: Decision function obtained by an SVM with the kernel (1). On this
toy problem, this kernel implements perfectly the cluster assumption: the decision
function cuts a cluster only when necessary.
2 Kernels implementing the cluster assumption
In this section, we explore different ideas on how to build kernels which take into
account the fact that the data is clustered. In section 3, we will propose a framework
which unifies the methods proposed in [11] and [5].
2.1 Kernels from mixture models
It is possible to design directly a kernel taking into account the generative model
learned from the unlabeled data. Seeger [9] derived such a kernel in a Bayesian
setting. He proposes to use the unlabeled data to learn a mixture of models and
he introduces the Mutual Information kernel which is defined in such way that
two points belonging to different components of the mixture model will have a
low dot product. Thus, in the case of a mixture of Gaussians, this kernel is an
implementation of the cluster assumption. Note that in the case of a single mixture
model, the Fisher kernel [3] is an approximation of this Mutual Information kernel.
Independently, another extension of the Fisher kernel has been proposed in [12],
which leads, in the case of a mixture of Gaussians (μ_k, Σ_k), to the Marginalized
kernel, whose behavior is similar to the Mutual Information kernel:

K(x, y) = Σ_{k=1}^{q} P(k|x) P(k|y) x^⊤ Σ_k^{-1} y.    (1)
To understand the behavior of the Marginalized kernel, we designed a 2D-toy problem (figure 1): 200 unlabeled points have been sampled from a mixture of two
Gaussians, whose parameters have then been learned with EM applied to these
points. An SVM has been trained on 3 labeled points using the Marginalized kernel
(1). The behavior of this decision function is intuitively very satisfying: on the
one hand, when not enough label data is available, it takes into account the cluster
assumption and does not cut clusters (right cluster), but on the other hand, the
kernel is flexible enough to cope with different labels in the same cluster (left side).
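The Marginalized kernel is straightforward to prototype. The sketch below is a minimal numpy rendering for an already-fitted mixture of Gaussians; the helper names are mine, and the per-component weighting x^⊤ Σ_k^{-1} y follows my reading of equation (1), so treat it as an illustration rather than the authors' code.

```python
import numpy as np

def gaussian_posteriors(X, weights, means, covs):
    """Responsibilities P(k | x) under a fitted mixture of Gaussians."""
    n, d = X.shape
    log_p = np.empty((n, len(weights)))
    for k, (w, mu, cov) in enumerate(zip(weights, means, covs)):
        diff = X - mu
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        _, logdet = np.linalg.slogdet(cov)
        log_p[:, k] = np.log(w) - 0.5 * (d * np.log(2 * np.pi) + logdet + maha)
    log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exponentiating
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

def marginalized_kernel(X, Y, weights, means, covs):
    """K(x, y) = sum_k P(k|x) P(k|y) x^T Sigma_k^{-1} y, my reading of eq. (1)."""
    Px = gaussian_posteriors(X, weights, means, covs)
    Py = gaussian_posteriors(Y, weights, means, covs)
    K = np.zeros((len(X), len(Y)))
    for k, cov in enumerate(covs):
        inner = X @ np.linalg.inv(cov) @ Y.T     # pairwise x^T Sigma_k^{-1} y
        K += np.outer(Px[:, k], Py[:, k]) * inner
    return K
```

Two points assigned to different mixture components have almost no posterior overlap, so their kernel value is nearly zero — the cluster-assumption behavior illustrated in figure 1.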
2.2 Random walk kernel
The kernels presented in the previous section have the drawback of depending on
a generative model: first, they require an unsupervised learning step, but more
importantly, in a lot of real world problems, they cannot model the input distribution with sufficient accuracy. When applying the mixture of Gaussians method
(presented above) to real world problems, one cannot expect the "ideal" result of
figure 1.
For this reason, in clustering and semi-supervised learning, there has been a lot
of interest to find algorithms which do not depend on a generative model. We
will present two of them, find out how they are related and present a kernel which
extends them. The first one is the random walk representation proposed in [11] .
The main idea is to compute the RBF kernel matrix (with the labeled and unlabeled
points) K_ij = exp(−‖x_i − x_j‖² / 2σ²) and to interpret it as a transition matrix of
a random walk on a graph with vertices x_i, with

P(x_i → x_j) = K_ij / Σ_p K_ip.

After t steps (where t is a parameter to be determined), the probability of going from a point
x_i to a point x_j should be quite high if both points belong to the same cluster and
should stay low if they are in two different clusters.
Let D be the diagonal matrix whose elements are D_ii = Σ_j K_ij. The one-step
transition matrix is D^{-1}K and after t steps it is P^t = (D^{-1}K)^t. In [11], the
authors design a classifier which uses directly those transition probabilities. One
would be tempted to use P^t as a kernel matrix for an SVM classifier. However, it
is not possible to directly use P^t as a kernel matrix since it is not even symmetric.
We will see in section 3 how a modified version of P^t can be used as a kernel.
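As a small self-contained sketch (my own illustration, not code from [11]), the t-step matrix can be formed directly from the RBF affinities:

```python
import numpy as np

def t_step_transition(X, sigma, t):
    """t-step random-walk matrix P^t = (D^{-1} K)^t over the given points.

    K is the RBF affinity K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2));
    D^{-1} K is the one-step, row-stochastic transition matrix."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    P = K / K.sum(axis=1, keepdims=True)   # D^{-1} K
    return np.linalg.matrix_power(P, t)
```

Each row of P^t sums to one, and P^t is in general not symmetric — exactly the obstacle noted above, which motivates the symmetrized, normalized variant of section 3.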
2.3 Kernel induced by a clustered representation
Another idea to implement the cluster assumption is to change the representation
of the input points such that points in the same cluster are grouped together in the
new representation. For this purpose, one can use tools of spectral clustering (see
[13] for a review). Using the first eigenvectors of a similarity matrix, a representation
where the points are naturally well clustered has been recently presented in [5]. We
suggest to train a discriminative learning algorithm in this representation. This
algorithm, which resembles kernel PCA, is the following:
1. Compute the affinity matrix K, which is an RBF kernel matrix but with
diagonal elements being 0 instead of 1.
2. Let D be a diagonal matrix with diagonal elements equal to the sum of the
rows (or the columns) of K and construct the matrix L = D^{-1/2} K D^{-1/2}.
3. Find the eigenvectors (v_1, ..., v_k) of L corresponding to the first k eigenvalues.
4. The new representation of the point x_i is (v_{i1}, ..., v_{ik}) and is normalized
to have length one: φ(x_i)_p = v_{ip} / (Σ_{j=1}^{k} v_{ij}²)^{1/2}.
The reason to consider the first eigenvectors of the affinity matrix is the following.
Suppose there are k clusters in the dataset infinitely far apart from each other. One
can show that in this case, the first k eigenvalues of the affinity matrix will be 1 and
the eigenvalue k + 1 will be strictly less than 1 [5]. The value of this gap depends
on how well connected each cluster is: the better connected, the larger the gap is
(the smaller the k + 1st eigenvalue). Also, in the new representation in ℝ^k there
will be k vectors z_1, ..., z_k orthonormal to each other such that each training point
is mapped to one of those k points depending on the cluster it belongs to.
This simple example shows that in this new representation points are naturally
clustered, and we suggest to train a linear classifier on the mapped points.
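A compact sketch of steps 1–4 (my own numpy rendering, with hypothetical function names) also lets one check the eigenvalue argument numerically: for well-separated clusters, each cluster contributes an eigenvalue close to 1, and points of the same cluster collapse onto essentially a single unit vector.

```python
import numpy as np

def clustered_representation(X, sigma, k):
    """Spectral representation of [5]: rows are the length-normalized
    entries of the first k eigenvectors of L = D^{-1/2} K D^{-1/2},
    where K is an RBF affinity with a zeroed diagonal."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                     # step 1: zero diagonal
    d = K.sum(axis=1)                            # step 2: degree normalization
    L = K / np.sqrt(np.outer(d, d))
    eigvals, eigvecs = np.linalg.eigh(L)         # eigh returns ascending order
    V = eigvecs[:, ::-1][:, :k]                  # step 3: top-k eigenvectors
    return V / np.linalg.norm(V, axis=1, keepdims=True)   # step 4
```

Training a linear classifier on these rows is then the suggestion made in the text.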
3 Extension of the cluster kernel
Based on the ideas of the previous section, we propose the following algorithm:
1. As before, compute the RBF matrix K from both labeled and unlabeled
points (this time with 1 on the diagonal and not 0) and D, the diagonal
matrix whose elements are the sum of the rows of K.
2. Compute L = D^{-1/2} K D^{-1/2} and its eigendecomposition L = U Λ U^⊤.
3. Given a transfer function φ, let λ̃_i = φ(λ_i), where the λ_i are the eigenvalues
of L, and construct L̃ = U Λ̃ U^⊤.
4. Let D̃ be a diagonal matrix with D̃_ii = 1/L̃_ii and compute K̃ = D̃^{1/2} L̃ D̃^{1/2}.
The new kernel matrix is K̃. Different transfer functions lead to different kernels:

Linear φ(λ) = λ. In this case L̃ = L and D̃ = D (since the diagonal elements of
K are 1). It turns out that K̃ = K and no transformation is performed.

Step φ(λ) = 1 if λ ≥ λ_cut and 0 otherwise. If λ_cut is chosen to be equal to the k-th
largest eigenvalue of L, then the new kernel matrix K̃ is the dot product
matrix in the representation of [5] described in the previous section.

Linear-step Same as the step function, but with φ(λ) = λ for λ ≥ λ_cut. This is
closely related to the approach consisting in building a linear classifier in
the space given by the first Kernel PCA components [8]: if the normalization matrices D and D̃ were equal to the identity, both approaches would be
identical. Indeed, if the eigendecomposition of K is K = U Λ U^⊤, the coordinates of the training points in the kernel PCA representation are given
by the matrix U Λ^{1/2}.
Polynomial φ(λ) = λ^t. In this case L̃ = L^t and
K̃ = D̃^{1/2} D^{1/2} (D^{-1}K)^t D^{-1/2} D̃^{1/2}.
The matrix D^{-1}K is the transition
matrix in the random walk described in section 2.2, and K̃ can be interpreted as a normalized and symmetrized version of the transition matrix
corresponding to a t-step random walk.
This makes the connection between the idea of the random walk kernel of section
2.2 and a linear classifier trained in a space induced by either the spectral clustering
algorithm of [5] or the Kernel PCA algorithm.
How to handle test points If test points are available during training and if
they are also drawn from the same distribution as the training points (an assumption
which is commonly made), then they should be considered as unlabeled points and
the matrix K described above should be built using training, unlabeled and test
points.
However, it might happen that test points are not available during training. This is
a problem, since our method produces a new kernel matrix, but not an analytic form
of the effective new kernel that could readily be evaluated on novel test points. In
this case, we propose the following solution: approximate a test point x as a linear
combination of the training and unlabeled points, and use this approximation to
express the required dot product between the test point and other points in the
feature space. More precisely, let
α⁰ = argmin_α ‖Φ(x) − Σ_{i=1}^{n+u} α_i Φ(x_i)‖ = K^{-1} v,
[Figure 2 plot: test-error curves for the Linear (normal SVM), Polynomial, Step, and Poly-step kernels against the number of labeled points.]
Figure 2: Test error on a text classification problem for training set size varying
from 2 to 128 examples. The different kernels correspond to different kinds of transfer
functions.
with v_i = K(x, x_i)¹. Here, Φ is the feature map corresponding to K, i.e., K(x, x′) =
Φ(x) · Φ(x′). The new dot product between the test point x and the other points
is expressed as a linear combination of the dot products of K̃,

K̃(x, x_i) = (K̃ α⁰)_i = (K̃ K^{-1} v)_i.

Note that for a linear transfer function, K̃ = K, and the new dot product is the
standard one.

4 Experiments
4.1 Influence of the transfer function
We applied the different cluster kernels of section 3 to the text classification task of
[11], following the same experimental protocol. There are two categories mac and
windows with respectively 958 and 961 examples of dimension 7511. The width of
the RBF kernel was chosen as in [11], giving σ = 0.55. Out of all examples, 987
were taken away to form the test set. Out of the remaining points, 2 to 128 were
randomly selected to be labeled and the other points remained unlabeled. Results
are presented in figure 2 and averaged over 100 random selections of the labeled
examples. The following transfer functions were compared: linear (i.e. standard
SVM), polynomial φ(λ) = λ⁵, step keeping only the first n + 10 eigenvalues, where n is the number of
labeled points, and poly-step defined in the following way (with 1 ≥ λ_1 ≥ λ_2 ≥ · · ·):

φ(λ_i) = √λ_i   for i ≤ n + 10,
φ(λ_i) = λ_i²   for i > n + 10.
For large sizes of the (labeled) training set, all approaches give similar results. The
interesting cases are small training sets. Here, the step and poly-step functions work
very well. The polynomial transfer function does not give good results for very small
training sets (but nevertheless outperforms the standard SVM for medium sizes).
This might be due to the fact that in this example, the second largest eigenvalue is
0.073 (the largest is by construction 1). Since the polynomial transfer function tends
¹We consider here an RBF kernel and for this reason the matrix K is always invertible.
to push to 0 the small eigenvalues, it turns out that the new kernel has "rank almost
one" and it is more difficult to learn with such a kernel. To avoid this problem,
the authors of [11] consider a sparse affinity matrix with non-zero entries only for
neighbor examples. In this way the data are by construction more clustered and
the eigenvalues are larger. We verified experimentally that the polynomial transfer
function gave better results when applied to a sparse affinity matrix.
Concerning the step transfer function, the value of the cut-off index corresponds
to the number of dimensions in the feature space induced by the kernel, since the
latter is linear in the representation given by the eigendecomposition of the affinity
matrix. Intuitively, it makes sense to have the number of dimensions increase with
the number of training examples, that is the reason why we chose a cutoff index
equal to n + 10.
The poly-step transfer function is somewhat similar to the step function, but is not as
rough: the square root tends to put more importance on dimensions corresponding
to large eigenvalues (recall that they are smaller than 1) and the square function
tends to discard components with small eigenvalues. This method achieves the best
results.
4.2 Automatic selection of the transfer function
The choice of the poly-step transfer function in the previous section corresponds to
the intuition that more emphasis should be put on the dimensions corresponding to
the largest eigenvalues (they are useful for cluster discrimination) and less on the
dimensions with small eigenvalues (corresponding to intra-cluster directions). The
general form of this transfer function is
φ(λ_i) = λ_i^{1/p}   for i ≤ r,
φ(λ_i) = λ_i^q      for i > r,      (2)

where p, q ∈ ℝ and r ∈ ℕ are 3 hyperparameters. As before, it is possible to choose
qualitatively some values for these parameters, but ideally, one would like a method
which automatically chooses good values. It is possible to do so by gradient descent
on an estimate of the generalization error [2]. To assess the possibility of estimating
accurately the test error associated with the poly-step kernel, we computed the span
estimate [2] in the same setting as in the previous section. We fixed p = q = 2 and
the number of training points to 16 (8 per class). The span estimate and the test
error are plotted on the left side of figure 3.
Another possibility would be to explore methods that take into account the spectrum of the kernel matrix in order to predict the test error [7].
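For readers without a span-estimate implementation at hand, the same selection loop can be sketched with a plain cross-validation score on the labeled points. The kernel ridge classifier and all names below are my own stand-ins, not the span estimate of [2]:

```python
import numpy as np

def cv_error_per_r(kernel_for_r, r_values, y, labeled_idx, n_folds=4, ridge=1e-2):
    """Score each cutoff index r by k-fold cross-validated error of a
    kernel ridge classifier restricted to the labeled points."""
    folds = np.array_split(np.asarray(labeled_idx), n_folds)
    scores = {}
    for r in r_values:
        K = kernel_for_r(r)                      # full (modified) kernel matrix for this r
        errs = []
        for f in range(n_folds):
            test = folds[f]
            train = np.concatenate([folds[g] for g in range(n_folds) if g != f])
            A = K[np.ix_(train, train)] + ridge * np.eye(len(train))
            alpha = np.linalg.solve(A, y[train])
            pred = np.sign(K[np.ix_(test, train)] @ alpha)
            errs.append(np.mean(pred != y[test]))
        scores[r] = float(np.mean(errs))
    return scores
```

One would then pick `min(scores, key=scores.get)`; figure 3 shows the kind of curve to expect, with a clear minimum at a sensible r.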
4.3 Comparison with other algorithms
We summarized the test errors (averaged over 100 trials) of different algorithms
trained on 16 labeled examples in the following table.
The transductive SVM algorithm consists in maximizing the margin on both labeled
and unlabeled points. To some extent it also implements the cluster assumption since it
tends to put the decision function in low density regions. This algorithm has been
successfully applied to text categorization [4] and is a state-of-the-art algorithm for
[Table of test errors not legible in the source. Figure 3 plots: two panels showing the span estimate and the test error against the cutoff index r.]
Figure 3: The span estimate predicts accurately the minimum of the test error
for different values of the cutoff index r in the poly-step kernel (2). Left: text
classification task, right: handwritten digit classification.
performing semi-supervised learning. The result of the Random walk kernel is taken
directly from [11]. Finally, the cluster kernel performance has been obtained with
p = q = 2 and r = 10 in the transfer function (2). The value of r was the one
minimizing the span estimate (see left side of figure 3).
Future experiments include for instance the Marginalized kernel (1) with the standard generative model used in text classification by the Naive Bayes classifier [6].
4.4 Digit recognition
In a second set of experiments, we considered the task of classifying the handwritten
digits 0 to 4 against 5 to 9 of the USPS database. The cluster assumption should
apply fairly well on this database since the different digits are likely to be clustered.
2000 training examples have been selected and divided into 50 subsets of 40 examples. For a given run, one of the subsets was used as the labeled training set,
whereas the other points remained unlabeled. The width of the RBF kernel was set
to 5 (it was the value minimizing the test error in the supervised case).
The mean test error for the standard SVM is 17.8% (standard deviation 3.5%),
whereas the transductive SVM algorithm of [4] did not yield a significant improvement (17.6% ± 3.2%). As for the cluster kernel (2), the cutoff index r was again
selected by minimizing the span estimate (see right side of figure 3). It gave a test
error of 14.9% (standard deviation 3.3%). It is interesting to note in figure 3 the
local minimum at r = 10, which can be interpreted easily since it corresponds to
the number of different digits in the database.
It is somewhat surprising that the transductive SVM algorithm did not improve the
test error on this classification problem, whereas it did for text classification. We
conjecture the following explanation: the transductive SVM is more sensitive to
outliers in the unlabeled set than the cluster kernel methods since it directly tries
to maximize the margin on the unlabeled points. For instance, in the top middle
part of figure 1, there is an unlabeled point which would have probably perturbed
this algorithm. However, in high dimensional problems such as text classification,
the influence of outlier points is smaller. Another explanation is that this method
can get stuck in local minima, but that again, in higher dimensional space, it is
easier to get out of local minima.
5 Conclusion
In a discriminative setting, a reasonable way to incorporate unlabeled data is
through the cluster assumption. Based on the ideas of spectral clustering and
random walks, we proposed a framework for constructing kernels which implement
the cluster assumption: the induced distance depends on whether the points are in
the same cluster or not. This is done by changing the spectrum of the kernel matrix. Since there exist several bounds for SVMs which depend on the shape of this
spectrum, the main direction for future research is to perform automatic model selection based on these theoretical results. Finally, note that the cluster assumption
might also be useful in a purely supervised learning task.
Acknowledgments
The authors would like to thank Martin Szummer for helpful discussion on this
topic and for having provided us with his database.
References
[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training.
In COLT: Proceedings of the Workshop on Computational Learning Theory. Morgan
Kaufmann Publishers, 1998.
[2] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1-3):131-159, 2002.
[3] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, volume 11, pages 487–493. The
MIT Press, 1998.
[4] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning,
pages 200–209. Morgan Kaufmann, San Francisco, CA, 1999.
[5] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an
algorithm. In Advances in Neural Information Processing Systems, volume 14, 2001.
[6] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text
from labeled and unlabeled documents. In Proceedings of AAAI-98, 15th Conference
of the American Association for Artificial Intelligence, pages 792–799, Madison, US,
1998. AAAI Press, Menlo Park, US.
[7] B. Schölkopf, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Generalization
bounds via eigenvalues of the Gram matrix. Technical Report 99-035, NeuroColt,
1999.
[8] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel
eigenvalue problem. Neural Computation, 10:1299–1310, 1998.
[9] M. Seeger. Covariance kernels from Bayesian generative models. In Advances in
Neural Information Processing Systems, volume 14, 2001.
[10] M. Seeger. Learning with labeled and unlabeled data. Technical report, Edinburgh
University, 2001.
[11] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random
walks. In Advances in Neural Information Processing Systems, volume 14, 2001.
[12] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 2002. To appear. Also presented at ISMB 2002.
[13] Y. Weiss. Segmentation using eigenvectors: A unifying view. In International Conference on Computer Vision, pages 975–982, 1999.
Critical Lines in Symmetry of Mixture Models
and its Application to Component Splitting
Kenji Fukumizu
Institute of Statistical
Mathematics
Tokyo 106-8569 Japan
[email protected]
Shotaro Akaho
AIST
Tsukuba 305-8568 Japan
[email protected]
Shun-ichi Amari
RIKEN
Wako 351-0198 Japan
[email protected]
Abstract
We show the existence of critical points as lines for the likelihood function of mixture-type models. They are given by embedding of a critical
point for models with less components. A sufficient condition that the
critical line gives local maxima or saddle points is also derived. Based
on this fact, a component-split method is proposed for a mixture of Gaussian components, and its effectiveness is verified through experiments.
1 Introduction
The likelihood function of a mixture model often has a complex shape so that calculation
of an estimator can be difficult, whether the maximum likelihood or Bayesian approach
is used. In the maximum likelihood estimation, convergence of the EM algorithm to the
global maximum is not guaranteed, while it is a standard method. Investigation of the likelihood function for mixture models is important to develop effective methods for learning.
This paper discusses the critical points of the likelihood function for mixture-type models
by analyzing their hierarchical symmetric structure. As generalization of [1], we show that,
given a critical point of the likelihood for the model with (H ? 1) components, duplication
of any of the components gives critical points as lines for the model with H components.
We call them critical lines of mixture models. We derive also a sufficient condition that
the critical lines give maxima or saddle points of the larger model, and show that given a
maximum of the likelihood for a mixture of Gaussian components, an appropriate split of
any component always gives an ascending direction of the likelihood. Based on this theory,
we propose a stable method of splitting a component, which works effectively with the
EM optimization for avoiding the dependency on the initial condition and improving the
optimization. The usefulness of the algorithm is verified through experiments.
2 Hierarchical Symmetry and Critical Lines of Mixture Models

2.1 Symmetry of Mixture Models
Suppose f_H(x | θ^(H)) is a mixture model with H components, defined by

f_H(x | θ^(H)) = Σ_{j=1}^H c_j p(x | ξ_j),   c_j = β_j / (β_1 + ⋯ + β_H),   (1)

where p(x | ξ) is a probability density function with a parameter ξ. We write, for simplicity, β^(H) = (β_1, …, β_H), ξ^(H) = (ξ_1, …, ξ_H), and θ^(H) = (β^(H); ξ^(H)).
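As a concrete illustration, eq.(1) with 1-D Gaussian components can be evaluated directly. This is only a sketch: the variable names (beta for the mixing parameters, xi as (mean, variance) pairs) are our own choices, since the original Greek symbols were lost in extraction.

```python
import math

def gauss_pdf(x, mean, var):
    """1-D Gaussian density p(x | xi) with xi = (mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, beta, xi):
    """Eq.(1): f_H(x | theta) = sum_j c_j p(x | xi_j), with c_j = beta_j / sum(beta)."""
    total = sum(beta)
    return sum(b / total * gauss_pdf(x, m, v) for b, (m, v) in zip(beta, xi))

# Two components; the mixing parameters beta need not sum to one,
# only the normalized c_j do.
beta = [2.0, 6.0]                  # c = (0.25, 0.75)
xi = [(-1.0, 1.0), (2.0, 0.5)]     # (mean, variance) per component
fx = mixture_density(0.0, beta, xi)
```

Note that rescaling all β_j by a common factor leaves the density unchanged, since only the normalized c_j enter eq.(1).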
The key of our discussion is the following two symmetric properties, which are satisfied by mixture models;

(S-1) f_H(x | β^(H); ξ^(H−2), ξ_{H−1}, ξ_{H−1}) = f_{H−1}(x | β^(H−2), β_{H−1} + β_H; ξ^(H−1)).

(S-2) There exists a function A(β) such that for j = H − 1 and H,

∂f_H/∂ξ_j (x | β^(H); ξ^(H−2), ξ_{H−1}, ξ_{H−1}) = (β_j / A(β)) ∂f_{H−1}/∂ξ_{H−1} (x | β^(H−2), β_{H−1} + β_H; ξ^(H−1)).

In mixture models, the function A(β) is simply given by A(β) = β_1 + ⋯ + β_H.
Hereafter, we discuss in general a model with the assumptions (S-1) and (S-2). The results in Sections 2.1 and 2.2 depend only on these assumptions 1. While in mixture models similar conditions are satisfied with any choice of two components, we describe only the case of H − 1 and H just for simplicity. We write Θ_H for the space of the parameter θ^(H).
Another example which satisfies (S-1) and (S-2) is Latent Dirichlet Allocation (LDA, [2]), which models data of a group structure (e.g. a document as a set of words). For x = (x_1, …, x_M), LDA with H components is defined by

f_H(x | θ^(H)) = ∫_{Δ_{H−1}} D_H(u^(H) | β^(H)) Π_{ν=1}^M Σ_{j=1}^H u_j p(x_ν | ξ_j) du^(H),   (2)

where D_H(u^(H) | β^(H)) = (Γ(Σ_j β_j) / Π_j Γ(β_j)) Π_{j=1}^H u_j^{β_j − 1} is the Dirichlet distribution over the (H − 1)-dimensional simplex Δ_{H−1}. It is easy to see that (S-1) and (S-2) hold for LDA by using Lemma 6 in the Appendix. LDA includes mixture models eq.(1) as the special case of M = 1.
It is straightforward from (S-1) that, given a parameter θ^(H−1) = (β^(H−1); ξ^(H−1)) of the model with (H − 1) components and a scalar λ, the parameter θ_λ ∈ Θ_H defined by

β_j = β_j (1 ≤ j ≤ H − 2),   β_{H−1} = λβ_{H−1},   β_H = (1 − λ)β_{H−1},
ξ_j = ξ_j (1 ≤ j ≤ H − 2),   ξ_{H−1} = ξ_H = ξ_{H−1},   (3)

gives the same function as f_{H−1}(x | θ^(H−1)). In mixture models/LDA, this corresponds to duplication of the (H − 1)-th component with partitioning of the mixing/Dirichlet parameter in the ratio λ : (1 − λ). Since λ is arbitrary, a point in the smaller model is embedded into the larger model as a line in the parameter space Θ_H. This implies that the parameter to realize f_{H−1}(x | θ^(H−1)) lacks identifiability in Θ_H. Such singular structure of a model causes various interesting phenomena in estimation, learning, and generalization ([3]).
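The embedding eq.(3) can be checked numerically: duplicating a component and splitting its weight in the ratio λ : (1 − λ) leaves the density unchanged for every λ, which is exactly why a whole line of parameters realizes the same function. The sketch below assumes 1-D Gaussian components and uses our own variable names.

```python
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, beta, xi):
    total = sum(beta)
    return sum(b / total * gauss_pdf(x, m, v) for b, (m, v) in zip(beta, xi))

def embed(beta, xi, lam):
    """Eq.(3): duplicate the last component, splitting its weight lam : (1 - lam)."""
    beta_new = beta[:-1] + [lam * beta[-1], (1 - lam) * beta[-1]]
    xi_new = xi[:-1] + [xi[-1], xi[-1]]
    return beta_new, xi_new

beta, xi = [1.0, 3.0], [(-1.0, 1.0), (2.0, 0.5)]
# For every lam, the (H+1)-component model realizes the same density:
for lam in (0.1, 0.5, 0.9):
    b2, x2 = embed(beta, xi, lam)
    assert abs(mixture_density(0.3, b2, x2) - mixture_density(0.3, beta, xi)) < 1e-12
```

Since λ can be anything, the embedded parameters form a line in the larger parameter space along which the likelihood is constant.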
2.2 Critical Lines: Embedding of a Critical Point
Given a sample {X^(1), …, X^(N)}, we define an objective function for learning by

L_H(θ^(H)) = Σ_{n=1}^N φ_n(f_H(X^(n) | θ^(H))),   (4)

where the φ_n(f) are differentiable functions, which may depend on n. The objective of learning is to maximize L_H. If φ_n(f) = log f for all n, maximization of L_H(θ^(H)) is equal to maximum likelihood estimation.
Suppose θ*^(H−1) = (β*_1, …, β*_{H−1}; ξ*_1, …, ξ*_{H−1}) is a critical point of L_{H−1}(θ^(H−1)), that is, ∂L_{H−1}/∂θ^(H−1) (θ*^(H−1)) = 0. Embedding of this point into Θ_H gives a critical line;
1
The results do not require that p(x | ξ) is a density function. Thus, they can be easily extended to function fitting in regression, which gives the results on multilayer neural networks in [1].
Theorem 1 (Critical Line). Suppose that a model satisfies (S-1) and (S-2). Let θ*^(H−1) be a critical point of L_{H−1} with β*_{H−1} ≠ 0, and θ_λ be the parameter given by eq.(3) for θ*^(H−1). Then, θ_λ is a critical point of L_H(θ^(H)) for all λ.
Proof. Although this is essentially the same as Theorem 1 in [1], the following proof gives better intuition. Let (s, t; φ, ψ) be a reparametrization of (β_{H−1}, β_H; ξ_{H−1}, ξ_H), defined by

s = β_{H−1} + β_H,   t = β_{H−1} − β_H,   ξ_{H−1} = φ + β_H ψ,   ξ_H = φ − β_{H−1} ψ.   (5)

This is a one-to-one correspondence, if β_{H−1} + β_H ≠ 0. Note that ψ = 0 is equivalent to the condition ξ_{H−1} = ξ_H. Let ω = (β^(H−2), s, t; ξ^(H−2), φ, ψ) be the new coordinate, ℓ_H(ω) be the objective function eq.(4) under this parametrization, and ω_λ be the parameter corresponding to θ_λ. Since we have, by definition, ℓ_H(ω) = L_H(β^(H−2), (s+t)/2, (s−t)/2; ξ^(H−2), φ + ((s−t)/2)ψ, φ − ((s+t)/2)ψ), the condition (S-1) means

ℓ_H(β^(H−2), s, t; ξ^(H−2), φ, 0) = L_{H−1}(β^(H−2), s; ξ^(H−2), φ).   (6)

Then, it is clear that the first derivatives of ℓ_H at ω_λ with respect to β^(H−2), s, ξ^(H−2), and φ are equal to those of L_{H−1}(θ^(H−1)) at θ*^(H−1), and they are zero. The derivative ∂ℓ_H(ω_λ)/∂t vanishes from eq.(6), and ∂ℓ_H(ω_λ)/∂ψ = 0 from the following Lemma 2.
Lemma 2. Let H be the hyperplane given by {ω | ψ = 0}. Then, for all ω_o ∈ H, we have

∂f_H/∂ψ (x | ω_o) = 0.   (7)

Proof. Straightforward from the assumption (S-2) and ∂/∂ψ = β_H ∂/∂ξ_{H−1} − β_{H−1} ∂/∂ξ_H.
Given that a maximum of L_H is larger than that of L_{H−1}, Theorem 1 implies that the function L_H always has critical points which are not the global maximum. Those points lie on lines in the parameter space. Further embedding of the critical lines into larger models gives high-dimensional critical planes in the parameter space. This property is very general, and in LDA and mixture models we do not need any assumptions on p(x | ξ). In these models, by the permutation symmetry of components, there are many choices for embedding, which induces many critical lines and planes for L_H.
2.3 Embedding of a Maximum Point in LDA and Mixture Models
The next question is whether or not the critical lines from a maximum of L_{H−1} give maxima of L_H. The answer requires information on the second derivatives, and depends on the model. We show a general result on LDA, and one on mixture models as its corollary.
Theorem 3. Suppose that the model is LDA defined by eq.(2). Let θ*^(H−1) be an isolated maximum point of L_{H−1}, and θ_λ be its embedding given by eq.(3). Define a symmetric matrix R of the size dim ξ by

R = Σ_{n=1}^N φ'_n(f_{H−1}(X^(n) | θ*^(H−1))) { Σ_{ν=1}^M I_ν^(n) ∂²p(X_ν^(n) | ξ*_{H−1}) / ∂ξ∂ξ + Σ_{ν,τ=1, ν≠τ}^M J_{ν,τ}^(n) ∂p(X_ν^(n) | ξ*_{H−1})/∂ξ (∂p(X_τ^(n) | ξ*_{H−1})/∂ξ)^T },

where φ'(f) denotes the derivative of φ(f) w.r.t. f, and

I_ν^(n) = (β*_{H−1} / Σ_{j=1}^{H−1} β*_j) ∫ D_{H−1}(u | β*_1, …, β*_{H−2}, β*_{H−1} + 1) Π_{τ≠ν} Σ_{j=1}^{H−1} u_j p(X_τ^(n) | ξ*_j) du^(H−1),

J_{ν,τ}^(n) = (β*_{H−1}(β*_{H−1} + 1) / ((Σ_{j=1}^{H−1} β*_j)(Σ_{j=1}^{H−1} β*_j + 1))) ∫ D_{H−1}(u | β*_1, …, β*_{H−2}, β*_{H−1} + 2) Π_{σ≠ν,τ} Σ_{j=1}^{H−1} u_j p(X_σ^(n) | ξ*_j) du^(H−1).

Then, we have
(i) If R is negative definite, the parameter θ_λ is a maximum of L_H for all λ ∈ (0, 1).
(ii) If R has a positive eigenvalue, the parameter θ_λ is a saddle point for all λ ∈ (0, 1).

Remark: The conditions on R depend only on the parameter θ*^(H−1).
Proof. We use the parametrization ω defined by eq.(5). For each t, let H_t be the hyperplane with t fixed, and L̃_{H,t} be the function ℓ_H restricted to H_t. The hyperplane H_t is a slice transversal to the critical line, along which ℓ_H has the same value. Therefore, if the Hessian matrix of L̃_{H,t} on H_t is negative definite at the intersection ω_λ (λ = (t + 1)/2), the point is a maximum of L_H, and if the Hessian has a positive eigenvalue, ω_λ is a saddle point.

Since in the ω coordinate we have L̃_{H,t}(β^(H−2), s; ξ^(H−2), φ, 0) = L_{H−1}(β^(H−2), s; ξ^(H−2), φ), the Hessian of L̃_{H,t} at ω_λ is given by

Hess L̃_{H,t}(ω_λ) = [ Hess L_{H−1}(θ*^(H−1))   O ;   O   ∂²L̃_{H,t}(ω_λ)/∂ψ∂ψ ].   (8)

The off-diagonal blocks are zero, because we have ∂²L̃_{H,t}(ω_λ)/∂ψ∂ω_a = 0 for ω_a ≠ ψ from Lemma 2. By assumption, Hess L_{H−1}(θ*^(H−1)) is negative definite. Noting that the terms including ∂f_H(X^(n); ω_λ)/∂ψ vanish from Lemma 2, it is easy to obtain ∂²L̃_{H,t}(ω_λ)/∂ψ∂ψ = λ(1 − λ)(β*_{H−1})³ / (Σ_{j=1}^{H−1} β*_j) · R by using Lemma 6 and the definition of ψ.
By setting M = 1 in the LDA model, we have the sufficient conditions for mixture models.

Corollary 4. For a mixture model, the same assertions as in Theorem 3 hold for

R̃ = Σ_{n=1}^N φ'_n(f_{H−1}(X^(n) | θ*^(H−1))) ∂²p(X^(n) | ξ*_{H−1}) / ∂ξ∂ξ.   (9)

Proof. For M = 1, J_{ν,τ}^(n) = 0 and I^(n) = β*_{H−1} / Σ_{j=1}^{H−1} β*_j. The assertion is obvious.

2.4 Critical Lines in Various Models
We further investigate the critical lines for specific models. Hereafter, we consider the maximum likelihood estimation, setting φ_n(f) = log f for all n.
Gaussian Mixture, Mixture of Factor Analyzers, and Mixture of PCA
Assume that each component is the D-dimensional Gaussian density with mean μ and variance-covariance matrix V as parameters, which is denoted by φ(x; μ, V). The matrix R̃ in eq.(9) has the form

R̃ = [ S2  S3 ; S3^T  S4 ],

where S2, S3, and S4 correspond to the second derivatives with respect to (μ, μ), (μ, V), and (V, V), respectively. It is well known that the second derivative ∂²φ/∂μ∂μ of a Gaussian density is equal to the first derivative ∂φ/∂V. Then, S2 is equal to zero by the condition of a critical point. If the data is randomly generated, S3 and S4 are of full rank almost surely. A matrix of this type necessarily has a positive eigenvalue. It is not difficult to extend this discussion to models with scalar or diagonal variance-covariance matrices as variable parameters.
Similar arguments hold for the mixture of factor analyzers (MFA, [4]) and the mixture of probabilistic PCA (MPCA, [5]). In factor analyzers and probabilistic PCA, the variance-covariance matrix is restricted to the form

V = F F^T + S,

where F is a factor loading matrix of rank k and S is a diagonal or scalar matrix. Because the first derivative of φ(x; μ, F F^T + S) with respect to F is (∂φ(x; μ, F F^T + S)/∂V) F, the block in R̃ corresponding to the second derivatives on μ is not of full rank. In a similar manner to Gaussian mixtures, R̃ has a positive eigenvalue. In summary, we have the following

Theorem 5. Suppose that the model is a Gaussian mixture, MFA, or MPCA. If R̃ is of full rank, every point θ_λ on the critical line is a saddle point of L_H.

This theorem means that if we have the maximum likelihood estimator for H − 1 components, we can find an ascending direction of the likelihood by splitting a component and modifying the means and variance-covariance matrices in the direction of the positive eigenvector. This leads to a component splitting method, which will be shown in Section 3.1.
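Corollary 4 and Theorem 5 can be checked numerically in the simplest setting: fit a single 1-D Gaussian by maximum likelihood (so H − 1 = 1 and φ'_n = 1/p), build the 2×2 matrix R̃ over ξ = (μ, v) from the analytic second derivatives of the Gaussian density, and observe that its (μ, μ) entry vanishes at the maximum while the matrix has one positive and one negative eigenvalue, so every point on the embedded critical line is a saddle. The derivative formulas are standard Gaussian calculus; the data set is an arbitrary skewed sample of our own choosing.

```python
import math

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def second_derivs(x, m, v):
    """Second derivatives of the 1-D Gaussian density w.r.t. xi = (mu, v)."""
    p, z = gauss_pdf(x, m, v), x - m
    d_mm = p * (z * z / v ** 2 - 1 / v)
    d_mv = p * (z ** 3 / (2 * v ** 3) - 3 * z / (2 * v ** 2))
    d_vv = p * (z ** 4 / (4 * v ** 4) - 3 * z * z / (2 * v ** 3) + 3 / (4 * v ** 2))
    return d_mm, d_mv, d_vv

data = [0.0, 0.5, 1.5, 4.0]                      # skewed, so the (mu, v) cross term is nonzero
m = sum(data) / len(data)                        # MLE mean
v = sum((x - m) ** 2 for x in data) / len(data)  # MLE variance
# R~ = sum_n (1/p(X_n)) d^2 p(X_n)/dxi dxi   (phi_n = log, single fitted component)
R = [[0.0, 0.0], [0.0, 0.0]]
for x in data:
    p = gauss_pdf(x, m, v)
    d_mm, d_mv, d_vv = second_derivs(x, m, v)
    R[0][0] += d_mm / p
    R[0][1] += d_mv / p
    R[1][0] += d_mv / p
    R[1][1] += d_vv / p
# Eigenvalues of the symmetric 2x2 matrix
tr = R[0][0] + R[1][1]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eig_lo, eig_hi = (tr - disc) / 2, (tr + disc) / 2
```

At the maximum likelihood estimate the (μ, μ) entry is exactly zero (S2 = 0), so the determinant is −R[0][1]² < 0 whenever the weighted third moment is nonzero, forcing a positive and a negative eigenvalue.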
Latent Dirichlet Allocation
We consider LDA with multinomial components. Using the D-dimensional random vector x = (x^a) ∈ {(1, 0, …, 0)^T, …, (0, …, 0, 1)^T}, which indicates a chosen element, the multinomial distribution over D elements is expressed as an exponential family by

p(x | ξ) = Π_{a=1}^D (p^a)^{x^a} = exp{ Σ_{a=1}^{D−1} ξ^a x^a − log(1 + Σ_{a=1}^{D−1} e^{ξ^a}) },

where p^a is the expectation of x^a, and ξ ∈ R^{D−1} is the natural parameter given by ξ^a = log(p^a / p^D). It is easy to obtain

R = Σ_{n=1}^N φ'(f_{H−1}(X^(n) | θ*^(H−1))) Σ_{ν=1}^M Σ_{τ≠ν} J_{ν,τ}^(n) p(X_ν^(n) | ξ*_{H−1}) p(X_τ^(n) | ξ*_{H−1}) (X̃_ν^(n) − p̃*_{(H−1)}) (X̃_τ^(n) − p̃*_{(H−1)})^T,   (10)

where X̃_ν^(n) is the truncated (D−1)-dimensional vector, and p̃*_{(H−1)} ∈ (0, 1)^{D−1} is the expectation parameter for the (H−1)-th component of θ*^(H−1).

In general, the J_{ν,τ}^(n) are intractable in large problems. We explain a simple case of H = 2 and M = D. Let p̂ be the frequency vector of the D elements, which is the maximum likelihood estimator for the one-component multinomial model. In this case, we have J_{ν,τ}^(n) = 1 and

R = Σ_{n=1}^N { Σ_{ν,τ=1}^M (X̃_ν^(n) − p̂)(X̃_τ^(n) − p̂)^T − Σ_{ν=1}^M (X̃_ν^(n) − p̂)(X̃_ν^(n) − p̂)^T }.

First, suppose we have a data set with X_ν^(n) = e_ν for all n and 1 ≤ ν ≤ D = M, where e_j is the D-dimensional vector with the j-th component 1 and the others zero. Then, we have p̂ = (1/D, …, 1/D) and Σ_{ν=1}^D (X̃_ν^(n) − p̂) = 0, which means R < 0. The critical line gives maxima for LDA with H = 2. Next, suppose the data consists of D groups, and every data point in the j-th group is given by X_ν^(n) = e_j. While we have again p̂ = (1/D, …, 1/D), the matrix R is Σ_{j=1}^D (N/D) · D(D−1)(ẽ_j − p̂)(ẽ_j − p̂)^T > 0. Thus, all the points on the critical lines are saddle points. These examples explain two extreme cases; in the former we have no advantage in using two components because all the data X^(n) are the same, while in the latter the multiple components fit better to the variety of X^(n).
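The two extreme cases can be verified directly from the H = 2, M = D formula for R above. The sketch below uses D = 3, builds R from truncated (D−1)-dimensional vectors, and checks its sign through the eigenvalues of the resulting 2×2 matrix; all variable names are ours.

```python
import math

D = 3
p_hat = [1.0 / D] * (D - 1)            # truncated uniform frequency vector

def truncated(j):
    """Truncated (D-1)-dimensional version of the indicator vector e_j."""
    return [1.0 if a == j else 0.0 for a in range(D - 1)]

def R_matrix(samples):
    """R = sum_n [ (sum_nu d_nu)(sum_tau d_tau)^T - sum_nu d_nu d_nu^T ], d = X~ - p_hat."""
    R = [[0.0, 0.0], [0.0, 0.0]]
    for X in samples:                  # X: list of M chosen-element indices
        d = [[truncated(j)[a] - p_hat[a] for a in range(D - 1)] for j in X]
        s = [sum(row[a] for row in d) for a in range(D - 1)]
        for a in range(D - 1):
            for b in range(D - 1):
                R[a][b] += s[a] * s[b] - sum(row[a] * row[b] for row in d)
    return R

def eig2(R):
    """Eigenvalues (low, high) of a symmetric 2x2 matrix."""
    tr = R[0][0] + R[1][1]
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr - disc) / 2, (tr + disc) / 2

# Case 1: every datum contains each element exactly once -> R negative definite (maxima).
R1 = R_matrix([[0, 1, 2]] * 5)
# Case 2: D pure groups, every datum repeats one element -> R positive definite (saddles).
R2 = R_matrix([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
```

In case 1 the deviations sum to zero within each datum, so only the negative diagonal term survives; in case 2 all deviations within a datum are equal, so the pair term dominates.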
3 Component Splitting Method in Mixture of Gaussian Components

3.1 EM with Component Splitting

It is well known that the EM algorithm suffers from strong dependency on initialization. In addition, because the likelihood of a mixture of Gaussian components is not upper bounded
Algorithm 1: EM with component splitting for Gaussian mixture

1. Initialization: calculate the sample mean μ_1 and variance-covariance matrix V_1.
2. H := 1.
3. For all 1 ≤ h ≤ H, diagonalize V*_h as V*_h = U_h Λ_h U_h^T, and calculate R̃_h according to eq.(12) in the Appendix.
4. For 1 ≤ h ≤ H, calculate the eigenvector (r_h, W_h) of R̃_h corresponding to the largest eigenvalue.
5. For 1 ≤ h ≤ H, optimize ε by line search to maximize the likelihood for

   c_h = (1/2)c*_h,   μ_h = μ*_h − εr_h,   V_h = U_h e^{−εW_h} Λ_h e^{−εW_h} U_h^T,
   c_{H+1} = (1/2)c*_h,   μ_{H+1} = μ*_h + εr_h,   V_{H+1} = U_h e^{εW_h} Λ_h e^{εW_h} U_h^T.   (11)

   Let ε^o_h be the optimizer and L_h be the likelihood.
6. For h* := arg max_h L_h, split the h*-th component according to eq.(11) with ε^o_{h*}.
7. Optimize the parameter θ^(H+1) using the EM algorithm. Let θ*^(H+1) be the result.
8. If H + 1 = MAX_H, then END. Otherwise, H := H + 1 and go to 3.
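The split step eq.(11) can be sketched in a few lines. This is only an illustration: the direction (r, W) is supplied by hand (in the algorithm it comes from the top eigenvector of R̃_h via eq.(12)), and W is taken diagonal so that e^{±εW} is an elementwise exponential, which is enough to show that both offspring covariances remain positive definite.

```python
import math

def split_component(c, mu, lam_diag, r, w_diag, eps):
    """Eq.(11) for a diagonalized covariance V = U Lam U^T with U = I and diagonal W.
    Returns the two offspring components as (weight, mean, covariance diagonal)."""
    child_a = (c / 2,
               [m - eps * ri for m, ri in zip(mu, r)],
               [l * math.exp(-2 * eps * w) for l, w in zip(lam_diag, w_diag)])
    child_b = (c / 2,
               [m + eps * ri for m, ri in zip(mu, r)],
               [l * math.exp(2 * eps * w) for l, w in zip(lam_diag, w_diag)])
    return child_a, child_b

# Hypothetical component: weight 0.4, mean (1, -2), covariance diag(2.0, 0.5).
a, b = split_component(0.4, [1.0, -2.0], [2.0, 0.5],
                       r=[1.0, 0.0], w_diag=[0.3, -0.1], eps=0.8)
```

The two children share the parent's weight equally, are displaced symmetrically along ±r, and their eigenvalues are scaled by e^{∓2εw}, so they stay strictly positive for any ε.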
(a) Data   (b) Success   (c) Failure

Figure 1: Spiral data. In (b) and (c), the lines represent the factor loading vectors F_h and −F_h at the mean values, and the radius of a sphere is the scalar part of the variance.
for small variances, we should use an optimization technique to give an appropriate maximum. Sequential splitting of components can give a solution to these problems. From Theorem 5, a stable and effective way of splitting a Gaussian component is derived to increase the likelihood. We propose EM with component splitting, which adds components one by one after maximizing the likelihood at each size. Ueda et al. ([6]) propose Split and Merge EM, in which the components repeatedly split and merge in a triplet, keeping the total number fixed. While their method works well, it requires a large number of trials of EM for candidate triplets, and its splitting method is heuristic. Our splitting method is well grounded in theory, and EM with splitting gives a series of estimators for all model sizes in a single run.
Algorithm 1 gives the procedure of learning. We show only the case of a mixture of Gaussians. The exact algorithm for the mixture of PCA/FA will be shown in a forthcoming paper. It is noteworthy that in splitting a component, not only the means but also the variance-covariance matrices must be modified. The simple additive rule V_new = V_old + ΔV tends to fail, because it may make the matrix non-positive definite. To solve this problem, we use a Lie algebra expression to add a vector in an ascending direction. Let V = UΛU^T be the diagonalization of V, and consider V(W) = U e^W Λ e^W U^T for a symmetric matrix W. This gives a local coordinate of the positive definite matrices around V = V(0). Modification of V through W gives a stable way of updating variance-covariance matrices.
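The difference between the additive and the Lie-algebra update can be seen on a 2×2 example: an additive step larger than the smallest eigenvalue destroys positive definiteness, while V(W) = U e^W Λ e^W U^T is positive definite for every symmetric W. Here W is taken diagonal so the exponential is elementwise, and the numbers are arbitrary.

```python
import math

def is_pos_def_2x2(V):
    """Sylvester's criterion for a symmetric 2x2 matrix."""
    return V[0][0] > 0 and V[0][0] * V[1][1] - V[0][1] * V[1][0] > 0

V = [[1.0, 0.0], [0.0, 0.25]]          # diagonal, so U = I and Lam = diag(1, 0.25)

# Additive update with a step larger than the smallest eigenvalue: fails.
dV = [[0.0, 0.0], [0.0, -0.5]]
V_add = [[V[i][j] + dV[i][j] for j in range(2)] for i in range(2)]

# Lie-algebra update V(W) = U e^W Lam e^W U^T in a shrinking "direction": always PD.
W = [-0.1, -2.0]                       # even a large step only rescales eigenvalues
V_lie = [[math.exp(2 * W[0]) * 1.0, 0.0],
         [0.0, math.exp(2 * W[1]) * 0.25]]
```

The exponential parametrization only multiplies the eigenvalues by strictly positive factors e^{2w_i}, which is exactly why line search over ε in Algorithm 1 never leaves the positive definite cone.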
3.2 Experimental results
We show through experiments how the proposed EM with component splitting effectively maximizes the likelihood. In the first experiment, the mixture of PCA with 8 components of rank 1 is employed to fit 150 synthesized data points generated along a piecewise linear spiral (Fig.1). Table 1-(a) shows the results over 30 trials with different random numbers. We use the on-line EM algorithm ([7]), presenting the data one by one in a random order. EM with random initialization reaches the best state (Fig.1-(b)) only 6 times, while EM with component splitting achieves it 26 times. Fig.1-(c) shows an example of failure.
The next experiment is an image compression problem, in which the image "Lenna" of 160×160 pixels (Fig.2) is used. The image is partitioned into 20×20 blocks of 8×8 pixels, which are regarded as 400 data points in R^64. We use the mixture of PCA with 10 components of rank 4, and obtain a compressed image by X̂ = F_h(F_h^T F_h)^{−1} F_h^T X, where X is a 64-dimensional block and h indicates the component of the shortest Euclidean distance ‖X − μ_h‖. Table 1-(b) shows the residual square error (RSE), Σ_{j=1}^{400} ‖X_j − X̂_j‖², which shows the quality of the compression. In both experiments, we can see the better optimization performance of the proposed algorithm.
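The reconstruction X̂ = F(FᵀF)⁻¹FᵀX is an orthogonal projection onto the column space of F; for rank 1 it reduces to a dot-product formula, which is enough to sketch the per-block compression and the RSE. The blocks and the factor vector below are made up for illustration.

```python
def project_rank1(F, X):
    """X_hat = F (F^T F)^{-1} F^T X for a rank-1 factor loading F (a vector)."""
    scale = sum(f * x for f, x in zip(F, X)) / sum(f * f for f in F)
    return [scale * f for f in F]

def rse(blocks, F):
    """Residual square error: sum_j ||X_j - X_hat_j||^2."""
    total = 0.0
    for X in blocks:
        Xh = project_rank1(F, X)
        total += sum((x - xh) ** 2 for x, xh in zip(X, Xh))
    return total

blocks = [[1.0, 2.0, 3.0, 4.0],   # lies in span(F): reconstructed exactly
          [2.0, 4.0, 6.0, 8.0],   # also in span(F)
          [1.0, 0.0, 0.0, 0.0]]   # not in span(F): contributes residual error
F = [1.0, 2.0, 3.0, 4.0]
err = rse(blocks, F)
```

Blocks inside the factor subspace contribute nothing to the RSE, so better-fitting factor loadings (and the right component choice h per block) directly lower the error in Table 1-(b).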
(a) Likelihood for spiral data (30 runs)
         EM                 EMCS
Best     -534.9 (6 times)   -534.9 (26 times)
Worst    -648.1             -587.9
Av.      -583.9             -541.3

Figure 2: "Lenna".

(b) RSE for "Lenna" (10 runs), ×10^4
         EM      EMCS
Best     5.94    5.38
Worst    6.40    6.12
Av.      6.15    5.78
Table 1: Experimental results. EM is the conventional EM with random initialization, and
EMCS is the proposed EM with component splitting.
4 Discussions
In EM with component splitting, we obtain the estimators up to the specified number of components. We need a model selection technique to choose the best one, which is another important problem. We do not discuss it in this paper, because our method can be combined with many techniques that select a model after obtaining the estimators. However, we should note that some famous methods such as AIC and MDL, which are based on statistical asymptotic theory, cannot be applied to mixture models because of the unidentifiability of the parameter. Further studies are necessary on model selection for mixture models.

Although the computation of the matrix R is not cheap for a mixture of Gaussian components, full variance-covariance matrices are not always necessary in practical problems, which can save the computation drastically. Methods to reduce the computational cost should also be investigated further.

In selecting a component to split, we try a line search for all the components and choose the one giving the largest likelihood. While this works well in our experiments, the proposed method of component splitting can be combined with other criteria for selecting a component. One of them is to select the component giving the largest eigenvalue of R̃_h. In Gaussian mixture models, this is very natural; the block of the second derivatives w.r.t. V in R̃ is equal to the weighted fourth cumulant, and a component with a large cumulant should be split. However, in mixtures of FA and PCA, this does not necessarily work well, because the decomposition V = FF^T + S does not give a natural parametrization. Although we have discussed only local properties, a method incorporating global information might be preferable. These are left as future work.
Appendix

Lemma 6. Suppose Φ(u^(H); ξ^(H)) satisfies the assumption (S-1). Define I_H(β^(H); ξ^(H)) = ∫_{Δ_{H−1}} Φ(u^(H); ξ^(H)) D_H(u^(H) | β^(H)) du^(H). Then, I_H also satisfies (S-1);

I_H(β^(H); ξ^(H−2), ξ_{H−1}, ξ_{H−1}) = I_{H−1}(β^(H−2), β_{H−1} + β_H; ξ^(H−1)).

Proof. Direct calculation.
Matrix R̃_h for the Gaussian mixture

We omit the index h for simplicity, and use Einstein's convention. Let U = (u_1, …, u_D) and Λ = diag(λ_1, …, λ_D). For V(W) = U e^W Λ e^W U^T, we have ∂V(O)/∂W_ab = (λ_a + (1 − δ_ab)λ_b)(u_a u_b^T + u_b u_a^T), where δ_ab is Kronecker's delta. Let T^(3) and T^(4) be the weighted third and fourth sample moments, respectively, with weights φ(x^(n); μ*, V*)/f_{H−1}(x^(n); θ*^(H−1)). T̃^(3) and T̃^(4) are defined by T̃^(3)_abc = V^ap V^bq V^cr T^(3)_pqr and T̃^(4)_abcd = V^ap V^bq V^cr V^ds T^(4)_pqrs, respectively, where V^ap is the (a, p)-component of V^{−1}. Direct calculation shows that the matrix

R̃ = [ O  B ; B^T  C ],

where the decomposition corresponds to ξ = (μ, W), is given by

B_{μ^a, W_bc} = (λ_b + (1 − δ_bc)λ_c) u_b^T T̃^(3)_{··a} u_c,
C_{W_ab, W_cd} = (λ_a u_b u_a^T + (1 − δ_ab)λ_b u_a u_b^T)_pq (λ_c u_d u_c^T + (1 − δ_cd)λ_d u_c u_d^T)_rs
    × (T̃^(4)_pqrs − (V^pq V^rs + V^pr V^qs + V^ps V^qr)).   (12)

In the above equation, T̃^(3)_{··a} is the D × D matrix obtained from T̃^(3)_bca with fixed a.
References

[1] K. Fukumizu and S. Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317–327, 2000.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Advances in Neural Information Processing Systems, 14, 2002. MIT Press.
[3] S. Amari, H. Park, and T. Ozeki. Geometrical singularities in the neuromanifold of multilayer perceptrons. Advances in Neural Information Processing Systems, 14, 2002. MIT Press.
[4] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, Department of Computer Science, 1997.
[5] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11:443–482, 1999.
[6] N. Ueda, R. Nakano, Z. Ghahramani, and G. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109–2128, 2000.
[7] M. Sato and S. Ishii. On-line EM algorithm for the normalized Gaussian network. Neural Computation, 12(2):2209–2225, 2000.
How the Poverty of the Stimulus
Solves the Poverty of the Stimulus
Willem Zuidema
Language Evolution and Computation Research Unit
and Institute for Cell, Animal and Population Biology
University of Edinburgh
40 George Square, Edinburgh EH8 9LL, United Kingdom
[email protected]
Abstract
Language acquisition is a special kind of learning problem because
the outcome of learning of one generation is the input for the next.
That makes it possible for languages to adapt to the particularities
of the learner. In this paper, I show that this type of language
change has important consequences for models of the evolution and
acquisition of syntax.
1 The Language Acquisition Problem
For both artificial systems and non-human animals, learning the syntax of natural
languages is a notoriously hard problem. All healthy human infants, in contrast,
learn any of the approximately 6000 human languages rapidly, accurately and spontaneously. Any explanation of how they accomplish this difficult task must specify
the (innate) inductive bias that human infants bring to bear, and the input data
that is available to them. Traditionally, the inductive bias is termed - somewhat unfortunately - "Universal Grammar", and the input data "primary linguistic data".
Over the last 30 years or so, a view on the acquisition of the syntax of natural
language has become popular that has put much emphasis on the innate machinery.
In this view, that one can call the "Principles and Parameters" model, the Universal
Grammar specifies most aspects of syntax in great detail [e.g. 1]. The role of
experience is reduced to setting a limited number (30 or so) of parameters. The main
argument for this view is the argument from the poverty of the stimulus [2]. This
argument states that children have insufficient evidence in the primary linguistic
data to induce the grammar of their native language.
Mark Gold [3] provides the most well-known formal basis for this argument. Gold
introduced the criterion "identification in the limit" for evaluating the success of a
learning algorithm: with an infinite number of training samples all hypotheses of
the algorithm should be identical, and equivalent to the target. Gold showed that
the class of context-free grammars is not learnable in this sense by any algorithm
from positive samples alone (and neither are other superfinite classes). This proof
is based on the fact that no matter how many samples from an infinite language a
learning algorithm has seen, the algorithm cannot decide with certainty that the
samples are drawn from the infinite language or from a finite language that contains all samples. Because natural languages are thought to be at least as complex
as context-free grammars, and negative feedback is assumed to be absent in the
primary linguistic data, Gold's analysis, and subsequent work in learnability theory
[1], is usually interpreted as strong support for the argument from the poverty of the
stimulus, and, in the extreme, for the view that grammar induction is fundamentally
impossible (a claim that Gold would not subscribe to).
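Gold's point can be made concrete with a toy sketch: a conservative learner that always conjectures the finite language consisting of exactly the strings seen so far is consistent with any finite amount of positive data drawn from the infinite language {aⁿ : n ≥ 1}, yet it changes its hypothesis at every step and so never converges to the infinite target. The learner and the presentation order below are our own illustration, not Gold's construction.

```python
def finite_guess_learner(samples_so_far):
    """Conjecture the finite language containing exactly the observed strings."""
    return frozenset(samples_so_far)

# Positive-only presentation of the infinite language {a^n : n >= 1}.
presentation = ["a" * n for n in range(1, 21)]

hypotheses, seen = [], []
for s in presentation:
    seen.append(s)
    hypotheses.append(finite_guess_learner(seen))

# Every hypothesis is consistent with the data seen so far ...
consistent = all(set(seen[:k + 1]) <= hypotheses[k] for k in range(len(hypotheses)))
# ... but the hypothesis changes at every step, so it never stabilizes.
changes = sum(1 for k in range(1, len(hypotheses)) if hypotheses[k] != hypotheses[k - 1])
```

Since a consistent finite hypothesis exists after any finite sample, positive data alone never force the jump to the infinite language, which is the intuition behind the non-learnability of superfinite classes.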
Critics of this "nativist" approach [e.g. 4, 5] have argued for different assumptions
on the appropriate grammar formalism (e.g. stochastic context-free grammars), the
available primary data (e.g. semantic information) or the appropriate learnability
criterion. In this paper I will take a different approach. I will present a model that
induces context-free grammars without a priori restrictions on the search space, semantic information or negative evidence. Gold's negative results thus apply. Nevertheless, acquisition of grammar is successful in my model, because another process
is taken into account as well: the cultural evolution of language.
2 The Language Evolution Problem
Whereas in language acquisition research the central question is how a child acquires
an existing language, in language evolution research the central question is how this
language and its properties have emerged in the first place. Within the nativist
paradigm, some have suggested that the answer to this question is that Universal
Grammar is the product of evolution under selection pressures for communication
[e.g. 6]. Recently, several formal models have been presented to evaluate this view.
For this paper, the most relevant of those is the model of Nowak et al. [7].
In that model it is assumed that there is a finite number of grammars, that newcomers (infants) learn their grammar from the population, that more successful
grammars have a higher probability of being learned and that mistakes are made in
learning. The system can thus be described in terms of the changes in the relative
frequencies Xi of each grammar type i in the population. The first result that Nowak
et al. obtain is a "coherence threshold". This threshold is the necessary condition
for grammatical coherence in a population, i.e. for a majority of individuals to use
the same grammar. They show that this coherence depends on the chances that a
child has to correctly acquire its parents' grammar. This probability is described
with the parameter q. Nowak et al. show analytically that there is a minimum value
for q to keep coherence in the population. If q is lower than this value, all possible
grammar types are equally frequent in the population and the communicative success is minimal. If q is higher than this value, one grammar type is dominant; the
communicative success is much higher than before and reaches 100% if q = 1.
The second result relates this required fidelity (called q₁) to a lower bound (b_c)
on the number of sample sentences that a child needs. Nowak et al. make the
crucial assumption that all languages are equally expressive and equally different
from each other. With that assumption they can show that b_c is proportional to
the total number of possible grammars N. Of course, the actual number of sample
sentences b is finite; Nowak et al. conclude that only if N is relatively small can
a stable grammar emerge in a population. I.e. the population dynamics require a
restrictive Universal Grammar.
The models of Gold and Nowak et al. have in common that they implicitly assume
that every possible grammar is equally likely to become the target grammar for
learning. If even the best possible learning algorithm cannot learn such a grammar,
the set of allowed grammars must be restricted. There is, however, reason to believe
that this assumption is not the most useful for language learning. Language learning
is a very particular type of learning problem, because the outcome of the learning
process at one generation is the input for the next. The samples from which a child
learns with its learning procedure, are therefore biased by the learning of previous
generations that used the same procedure [8].
In [9] and other papers, Kirby, Hurford and students have developed a framework
to study the consequences of that fact. In this framework, called the "Iterated
Learning Model" (ILM), a population of individuals is modeled that can each produce and interpret sentences, and have a language acquisition procedure to learn
grammar from each other. In the ILM one individual (the parent) presents a relatively small number of examples of form-meaning pairs to the next individual (the
child). The child then uses these examples to induce his own grammar. In the next
iteration the child becomes the parent, and a new individual becomes the child.
This process is repeated many times. Interestingly, Kirby and Hurford have found
that in these iterated transmission steps the language becomes easier and easier to
learn, because the language adapts to the learning algorithm by becoming more
and more structured. The structure of language in these models thus emerges from
the iteration of learning. The role of biological evolution, in this view, is to shape
the learning algorithms, such that the complex results of the iterated learning is
biologically adaptive [10]. In this paper I will show that if one adopts this view on
the interactions between learning, cultural evolution and biological evolution, the
models such as those of Gold [3] and Nowak et al. [7] can no longer be taken as
evidence for an extensive, innate pre-specification of human language.
3  A Simple Model of Grammar Induction
To study the interactions between language adaptation and language acquisition, I
have first designed a grammar induction algorithm that is simple, but can nevertheless deal with some non-trivial induction problems. The model uses context-free
grammars to represent linguistic abilities. In particular, the representation is limited to grammars G where all rules are of one of the following forms: (1) A → t, (2)
A → BC, (3) A → Bt. The nonterminals A, B, C are elements of the non-terminal
alphabet V_nt, which includes the start symbol S. t is a string of terminal symbols from the terminal alphabet V_t.¹ For determining the language L of a certain
grammar G I use simple depth-first exhaustive search of the derivation tree. For
computational reasons, the depth of the search is limited to a certain depth d, and
the string length is limited to length l. The set of sentences (L' ⊆ L) used in training and in communication is therefore finite (and strictly speaking not context-free,
but regular); in production, strings are drawn from a uniform distribution over L'.
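The depth- and length-limited enumeration of L' can be sketched as follows. This is a hypothetical minimal implementation, not the paper's C++ code; it follows the convention (used in the sketches below as well) that uppercase letters are nonterminals and lowercase letters are terminals.

```python
def language(rules, d, l):
    """All terminal strings derivable from S in at most d leftmost expansion
    steps, pruning any sentential form longer than l symbols."""
    out = set()

    def expand(s, depth):
        if depth > d or len(s) > l:        # depth and length limits
            return
        nt = next((c for c in s if c.isupper()), None)
        if nt is None:                     # no nonterminals left: a sentence
            out.add(s)
            return
        i = s.index(nt)
        for lhs, rhs in rules:             # expand the leftmost nonterminal
            if lhs == nt:
                expand(s[:i] + rhs + s[i + 1:], depth + 1)

    expand("S", 0)
    return out
```

Restricting to leftmost expansions loses nothing here, since every context-free derivation has an equivalent leftmost one.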
The grammar induction algorithm learns from a set of sample strings (sentences)
that are provided by a teacher. The design of the learning algorithm is originally
inspired by [11] and is similar to the algorithm in [12]. The algorithm fits within a
tradition of algorithms that search for compact descriptions of the input data [e.g.
13, 14, 15]. It consists of three operations:
Incorporation: extend the language, such that it includes the encountered string;
if string s is not already part of the language, add a rule S → s to the
grammar.
¹Note that the restrictions on the rule types above do not limit the scope of languages
that can be represented (they are essentially equivalent to Chomsky Normal Form). They
are, however, relevant for the language acquisition algorithm.
Compression: substitute frequent and long substrings with a nonterminal, such
that the grammar becomes smaller and the language remains unchanged;
for every valid substring z of the right-hand sides of all rules, calculate the
compression effect v(z) of substituting z with a nonterminal A; replace all
valid occurrences of the substring z' = argmax_z v(z) with A if v(z') > 0, and
add a rule A → z' to the grammar. "Valid substrings" are those substrings
which can be replaced while keeping all rules of the forms 1–3 described
above. The compression effect is measured as the difference between the
number of symbols in the grammar before and after the substitution. The
compression step is repeated until the grammar does not change anymore.
Generalization: equate two nonterminals, such that the grammar becomes smaller
and the language larger; for every combination of two nonterminals A and
B (B ≠ S), calculate the compression effect v of equating A and B. Equate
the combination (A', B') = argmax_{A,B} v(A, B) if v(A', B') > 0, i.e. replace
all occurrences of B with A. The compression effect is measured as the
difference between the number of symbols before and after replacing and
deleting redundant rules. The generalization step is repeated until the
grammar does not change anymore.
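The three operations can be sketched in a few dozen lines of Python. This is a simplified reconstruction, not the paper's C++ implementation: it assumes single-character nonterminals (uppercase) and terminals (lowercase), skips the membership test in the incorporation step, and approximates "valid occurrences" by replacing all non-overlapping occurrences when that yields a legal rule, falling back to only the leftmost one otherwise.

```python
import re

# Rule forms (1) A -> t, (2) A -> BC, (3) A -> Bt, with uppercase letters as
# nonterminals and lowercase letters as terminals.
FORMS = [re.compile(p) for p in (r"^[a-z]+$", r"^[A-Z][A-Z]$", r"^[A-Z][a-z]+$")]

def valid(rhs):
    return any(f.match(rhs) for f in FORMS)

def size(rules):
    # one symbol for each left-hand side plus one per right-hand-side symbol
    return sum(1 + len(rhs) for _, rhs in rules)

def fresh_nt(rules):
    used = {lhs for lhs, _ in rules} | {c for _, r in rules for c in r if c.isupper()}
    return next(c for c in "XYZABCDEFG" if c not in used)

def replace_in_rhs(rhs, z, nt):
    # Approximate "valid occurrences": replace all non-overlapping occurrences
    # if the result is still a legal rule form, else only the leftmost, else none.
    for cand in (rhs.replace(z, nt), rhs.replace(z, nt, 1)):
        if cand != rhs and valid(cand):
            return cand
    return rhs

def compress_once(rules):
    nt = fresh_nt(rules)
    best_gain, best = 0, None
    subs = {r[i:j] for _, r in rules
            for i in range(len(r)) for j in range(i + 2, len(r) + 1)}
    for z in filter(valid, subs):          # the new rule nt -> z must be legal
        new = [(lhs, replace_in_rhs(rhs, z, nt)) for lhs, rhs in rules]
        if new != rules:
            new.append((nt, z))
            gain = size(rules) - size(new)  # the compression effect v(z)
            if gain > best_gain:
                best_gain, best = gain, new
    return best

def generalize_once(rules):
    nts = {lhs for lhs, _ in rules}
    best_gain, best = 0, None
    for a in sorted(nts):
        for b in sorted(nts - {"S", a}):    # equate a and b (b may not be S)
            renamed = [(l.replace(b, a), r.replace(b, a)) for l, r in rules]
            new = list(dict.fromkeys(renamed))   # delete redundant rules
            gain = size(rules) - size(new)
            if gain > best_gain:
                best_gain, best = gain, new
    return best

def induce(samples):
    rules = [("S", s) for s in samples]      # incorporation
    while (new := compress_once(rules)):     # compression to a fixed point
        rules = new
    while (new := generalize_once(rules)):   # generalization to a fixed point
        rules = new
    return rules
```

On the three training sentences of the worked example below, this sketch reproduces the grammars (a)–(c) and the size reduction 24 → 19 → 16.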
4  Learnable and Unlearnable Classes
The algorithm described above is implemented in C++ and tested on a variety of
target grammars². I will not present a detailed analysis of the learning behavior
here, but limit myself to a simple example that shows that the algorithm can learn
some (recursive) grammars, while it cannot learn others. The induction algorithm
receives three sentences (abcd, abcabcd, abcabcabcd). The incorporation, compression (repeated twice) and generalization steps successively yield the following
grammars:
(a) Incorporation      (b) Compression      (c) Generalization
S → abcd               S → Yd               S → Xd
S → abcabcd            S → Xd               S → Xabcd
S → abcabcabcd         S → Xabcd            X → XX
                       X → YY               X → abc
                       Y → abc
In (b) the substrings "abcabc" and "abc" are subsequently replaced by the nonterminals X and Y. In (c) the non-terminals X and Y are equated, which leads to
the deletion of the second rule in (b). One can check that the total size of the
grammar reduces from 24 to 19 and further down to 16 characters.
From this example it is also clear that learning is not always successful. Any of the
three grammars above ((a) and (b) are equivalent) could have generated the training data, but with these three input strings the algorithm always yields grammar
(c). Consistent with Gold's general proof [3], many target grammars will never be
learned correctly, no matter how many input strings are generated. In practice,
each finite set of randomly generated strings from some target grammar, might
yield a different result. Thus, for some number of input strings T, some set of target grammars are always acquired, some are never acquired, and some are some of
the time acquired. If we can enumerate all possible grammars, we can describe this
with a matrix Q, where each entry Qij describes the probability that the algorithm
learning from sample strings from a target grammar i, will end up with grammar
²The source code is available at http://www.ling.ed.ac.uk/~jelle
of type j. Qii is the probability that the algorithm finds the target grammar. To
make learning successful, the target grammars that are presented to the algorithm
have to be biased. The following section will show that for this we need nothing
more than to assume that the output of one learner is the input for the next.
5  Iterated Learning: the Emergence of Learnability
To study the effects of iterated learning, we extend the model with a population
structure. In the new version of the model individuals (agents, that each represent
a generation) are placed in a chain. The first agent induces its grammar from a
number E of randomly generated strings. Every subsequent agent (the child) learns
its grammar from T sample sentences that are generated by the previous one (the
(parent). To avoid insufficient expressiveness, we also extend the generalization step
with a check if the number E_G of different strings the grammar G can recognize is
larger than or equal to E. If not, E − E_G random new strings are generated and
incorporated in the grammar. Using the matrix Q from the previous section, we can
formalize this iterated learning model with the following general equation, where x_i
is the probability that grammar i is the grammar of the current generation:

    ẋ_i = Σ_{j=0}^{N} x_j Q_{ji}    (1)
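Read as a discrete generational update, equation 1 says that the grammar frequencies evolve as a Markov chain, x'_i = Σ_j x_j Q_{ji}. A small sketch with an invented 3-grammar Q matrix (grammar 0 is made the easiest to acquire, purely for illustration):

```python
# Q[i][j]: probability that a learner exposed to sample sentences from
# grammar i ends up acquiring grammar j (rows sum to 1). The 3x3 matrix
# below is invented for illustration; grammar 0 is the easiest to acquire.
Q = [[0.8, 0.1, 0.1],
     [0.2, 0.7, 0.1],
     [0.3, 0.3, 0.4]]

def generation(x, Q):
    """One generation of iterated learning: x'_i = sum_j x_j * Q[j][i]."""
    n = len(x)
    return [sum(x[j] * Q[j][i] for j in range(n)) for i in range(n)]

x = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    x = generation(x, Q)
# The frequencies converge to the stationary distribution of the Markov
# chain defined by Q, in which the most easily acquired grammar dominates.
```

Without the selection term added in section 6, this is pure transmission dynamics: the long-run frequencies are fixed entirely by the learning biases encoded in Q.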
In simulations such as the one of figure 1, communicative success between child and
parent, a measure for the learnability of a grammar, rises steadily from a low
value (here 0.65) to a high value (here 1.0). In the initial stage the grammar shows
no structure, and consequently almost every string that the grammar produces
is idiosyncratic. A child in this stage typically hears strings like "ada", "ddac",
"adba", "bcbd", or "cdca" from its parent. It can not discover many regularities in
these strings. The child therefore can not do much better than simply reproduce the
strings it heard (i.e. T random draws from at least E different strings), and generate
random new strings, if necessary to make sure its language obeys the minimum
number (E) of strings. However, in these randomly generated strings, sometimes
regularities appear. I.e., a parent may use the randomly generated strings "dcac",
"bcac", "caac" and "daac". When this happens the child tends to analyze these
strings as different combinations with the building block "ac". Thus, typically,
the learning algorithm generates a grammar with the rules S → dcX, S → bcX,
S → caX, S → daX, and X → ac. When this happens to another set of strings as
well, say with a new rule Y → b, the generalization procedure can decide to equate
the non-terminals X and Y. The resulting grammar can then generalize from the
observed strings, to the unobserved strings "dcb", "bcb", "cab" and "dab". The
child still needs to generate random new strings to reach the minimum E, but fewer
than in the case considered above.
The interesting aspect of this becomes clear when we consider the next step in the
simulation, when the child becomes itself the parent of a new child. This child
is now presented with a language with more regularities than before, and has a
fair chance of correctly generalizing to unseen examples. If, for instance, it only
sees the strings "dcac", "bcac", "caac", "bcb", "cab" and "dab", it can, through
the same procedure as above, infer that "daac" and "dcb" are also part of the
target language. This means that (i) the child shares more strings with its parent
than just the ones it observes and consequently shows a higher between-generation
communicative success, and (ii) regularities that appear in the language by chance,
have a fair chance to remain in the language. In the process of iterated learning,
languages can thus become more structured and better learnable.
[Figure 1 shows three panels: (a) Learnability, (b) Number of rules, (c) Expressiveness.]
Figure 1: Iterated Learning: although initially the target language is unstructured
and difficult to learn, over the course of 20 generations (a) the learnability (the fraction of successful communications with the parent) steadily increases, (b) the number of rules steadily decreases (combinatorial and recursive strategies are used), and
(c) after an initial phase of overgeneralization, the expressiveness remains close to its
minimally required level. Parameters: V_t = {a,b,c,d}, V_nt = {S,X,Y,Z,A,B,C},
T=30, E=20, l₀=3. Shown are the average values of 2 simulations.
Similar results with different formalisms were already reported before [e.g. 11, 16],
but here I have used context-free grammars and the results are therefore directly
relevant for the interpretation of Gold's proof [3]. Whereas in the usual interpretation of that proof [e.g. 1] it is assumed that we need innate constraints on the
search space in addition to a smart learning procedure, here I show that even a
simple learning procedure can lead to successful acquisition, because restrictions
on the search space automatically emerge in the iteration of learning. If one considers learnability a binary feature, as is common in generative linguistics, this
is a rather trivial phenomenon: languages that are not learnable will not occur in
the next generation. However, if there are gradations in learnability, the cultural
evolution of language can be an intricate process where languages get shaped over
many generations.
6  Language Adaptation and the Coherence Threshold
When we study this effect in a version of the model where selection does play a
role, it is also relevant for the analysis in [7]. The model is therefore extended such
that at every generation there is a population of agents, agents of one generation
communicate with each other and the expected number of offspring of an agent (the
fitness) is determined by the number of successful interactions it had. Children still
acquire their grammar from sample strings produced by their parent. Adapting
equation 1, this system can now be described with the following equation, where
x_i is now the relative fraction of grammar i in the population (assuming an infinite
population size):
    ẋ_i = Σ_{j=0}^{N} x_j f_j Q_{ji} − φ x_i    (2)
Here, f_i is the relative fitness (quality) of grammars of type i and equals
f_i = Σ_j x_j F_{ij}, where F_{ij} is the expected communicative success from an interaction
between an individual of type i and an individual of type j. The relative fitness f of a
grammar thus depends on the frequencies of all grammar types, hence it is frequency
dependent. φ is the average fitness in the population and equals φ = Σ_i x_i f_i. This
term is needed to keep the sum of all fractions at 1. This equation is essentially the
model of Nowak et al. [7]. Recall that the main result of that paper is a "coherence
threshold": a minimum value for the learning accuracy q to keep coherence in the
population. In previous work [unpublished] I have reproduced this result and shown
that it is robust against variations in the Q-matrix, as long as the value of q (i.e.
the diagonal values) remains equal for all grammars.
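Equation 2 can be integrated numerically to see the coherence threshold at work. The sketch below uses illustrative parameter choices, not the paper's actual settings: a uniform confusion matrix Q with learning accuracy q on the diagonal, and payoffs F with F_ii = 1 and F_ij = a for i ≠ j.

```python
def simulate(q, N=10, a=0.5, x0=0.5, dt=0.1, steps=5000):
    """Euler-integrate eq. 2 with Q[i][i] = q, Q[i][j] = (1-q)/(N-1)
    otherwise, and payoffs F[i][i] = 1, F[i][j] = a, starting with
    grammar 0 at frequency x0. Returns the final maximum frequency."""
    Q = [[q if i == j else (1 - q) / (N - 1) for j in range(N)]
         for i in range(N)]
    x = [x0] + [(1 - x0) / (N - 1)] * (N - 1)
    for _ in range(steps):
        f = [a + (1 - a) * xi for xi in x]            # f_i = sum_j x_j F_ij
        phi = sum(xi * fi for xi, fi in zip(x, f))    # average fitness
        x = [xi + dt * (sum(x[j] * f[j] * Q[j][i] for j in range(N)) - phi * xi)
             for i, xi in enumerate(x)]
    return max(x)

# With accurate learning (high q) an initial majority grammar stays dominant
# and coherence is maintained; with inaccurate learning (low q) the same
# initial majority decays toward the uniform mix 1/N.
```

Whether `simulate` ends up coherent or uniform depends only on q here, which is exactly the point under discussion: in the grammar-induction model, the language itself can move q across this boundary.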
[Figure 2 plots communicative success against generations (0–100).]
Figure 2: Results from a run under fitness proportional selection. This figure shows
that there are regions of grammar space where the dynamics are apparently under
the "coherence threshold" [7], while there are other regions where the dynamics are
above this threshold. The parameters, including the number of sample sentences T,
are still the same, but the language has adapted itself to the bias of the learning
algorithm. Parameters are: V_t = {0,1,2,3}, V_nt = {S,a,b,c,d,e,f}, P=20, T=100,
E=100, l₀=12. Shown are the average values of 20 agents.
Figure 2, however, shows results from a simulation with the grammar induction
algorithm described above, where this condition is violated. Whereas in the simulations of figure 1 the target languages have been relatively easy (the initial string
length is short, i.e. 6), here the learning problem is very difficult (initial string
length is long, i.e. 12). For a long period the learning is therefore not very successful, but around generation 70 the success suddenly rises. With always the same
T (number of sample sentences), and with always the same grammar space, there
are regions where the dynamics are apparently under the "coherence threshold",
while there are other regions where the dynamics are above this threshold. The
language has adapted to the learning algorithm, and, consequently, the coherence
in the population does not satisfy the prediction of Nowak et al.
7  Conclusions
I believe that these results have some important consequences for our thinking
about language acquisition. In particular, they offer a different perspective on the
argument from the poverty of the stimulus, and thus on one of the most central
"problems" of language acquisition research: the logical pmblern of lang'uage acquisition. My results indicate that in iterated learning it is not necessary to put the
(whole) explanatory burden on the representation bias. Although the details of the
grammatical formalism (context-free grammars) and the population structure are
deliberately close to [3] and [7] respectively, I do observe successful acquisition of
grammars from a class that is unlearnable by Gold's criterion. Further, I observe
grammatical coherence even though many more grammars are allowed in principle
than Nowak et al. calculate as an upper bound. The reason for these surprising
results is that language acquisition is a very particular type of learning problem:
it is a problem where the target of the learning process is itself the outcome of a
learning process. That opens up the possibility of language itself to adapt to the
language acquisition procedure of children. In such iterated learning situations [11],
learners are only presented with targets that other learners have been able to learn.
Isn't this the traditional Universal Grammar in disguise? Learnability is, consistent
with the undisputed proof of [3], still achieved by constraining the set of targets.
However, unlike in usual interpretations of this proof, these constraints are not
strict (some grammars are better learnable than others, allowing for an infinite
"Grammar Universe"), and they are not a-priori: they are the outcome of iterated
learning. The poverty of the stimulus is now no longer a problem; instead, the
ancestors' poverty is the solution for the child's.
Acknowledgements  This work was performed while I was at the AI Laboratory
of the Vrije Universiteit Brussel. It builds on previous work that was done in close
collaboration with Paulien Hogeweg of Utrecht University. I thank her and Simon
Kirby, John Batali, Aukje Zuidema and my colleagues at the AI Lab and the LEC
for valuable hints, questions and remarks. Funding from the Concerted Research
Action fund of the Flemish Government and the VUB, from the Prins Bernhard
Cultuurfonds and from a Marie Curie Fellowship of the European Commission are
gratefully acknowledged.
References
[1] Stefano Bertolo, editor. Language Acquisition and Learnability. Cambridge University Press, 2001.
[2] Noam Chomsky. Aspects of the theory of syntax. MIT Press, Cambridge, MA, 1965.
[3] E. M. Gold. Language identification in the limit. Information and Control (now Information and Computation), 10:447–474, 1967.
[4] Michael A. Arbib and Jane C. Hill. Language acquisition: Schemas replace universal grammar. In John A. Hawkins, editor, Explaining Language Universals. Basil Blackwell, New York, USA, 1988.
[5] J. Elman, E. Bates, et al. Rethinking innateness. MIT Press, 1996.
[6] Steven Pinker and Paul Bloom. Natural language and natural selection. Behavioral and Brain Sciences, 13:707–784, 1990.
[7] Martin A. Nowak, Natalia Komarova, and Partha Niyogi. Evolution of universal grammar. Science, 291:114–118, 2001.
[8] Terrence Deacon. Symbolic species, the co-evolution of language and the human brain. The Penguin Press, 1997.
[9] S. Kirby and J. Hurford. The emergence of linguistic structure: An overview of the iterated learning model. In Angelo Cangelosi and Domenico Parisi, editors, Simulating the Evolution of Language, chapter 6, pages 121–148. Springer Verlag, London, 2002.
[10] Kenny Smith. Natural selection and cultural selection in the evolution of communication. Adaptive Behavior, 2003. To appear.
[11] Simon Kirby. Syntax without natural selection: How compositionality emerges from vocabulary in a population of learners. In C. Knight et al., editors, The Evolutionary Emergence of Language. Cambridge University Press, 2000.
[12] J. Gerard Wolff. Language acquisition, data compression and generalization. Language & Communication, 2(1):57–89, 1982.
[13] A. Stolcke. Bayesian Learning of Probabilistic Language Models. PhD thesis, Dept. of Electrical Engineering and Computer Science, University of California at Berkeley, 1994.
[14] Menno van Zaanen and Pieter Adriaans. Comparing two unsupervised grammar induction systems: Alignment-based learning vs. EMILE. In Ben Kröse et al., editors, Proceedings of BNAIC 2001, 2001.
[15] Zach Solan, Eytan Ruppin, David Horn, and Shimon Edelman. Automatic acquisition and efficient representation of syntactic structures. This volume.
[16] Henry Brighton. Compositional syntax from cultural transmission. Artificial Life, 8(1), 2002.
Baum
The Perceptron Algorithm Is Fast for
Non-Malicious Distributions
Eric B. Baum
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
Abstract: Within the context of Valiant's protocol for learning, the Perceptron
algorithm is shown to learn an arbitrary half-space in time O(r;;) if D, the probability distribution of examples, is taken uniform over the unit sphere Sⁿ. Here ε is
the accuracy parameter. This is surprisingly fast, as "standard" approaches involve
solution of a linear programming problem involving O( 7') constraints in n dimensions. A modification of Valiant's distribution independent protocol for learning
is proposed in which the distribution and the function to be learned may be chosen by adversaries, however these adversaries may not communicate. It is argued
that this definition is more reasonable and applicable to real world learning than
Valiant's. Under this definition, the Perceptron algorithm is shown to be a distribution independent learning algorithm. In an appendix we show that, for uniform
distributions, some classes of infinite V-C dimension including convex sets and a
class of nested differences of convex sets are learnable.
§1: Introduction
The Perceptron algorithm was proved in the early 1960s [Rosenblatt, 1962] to
converge and yield a half space separating any set of linearly separable classified
examples. Interest in this algorithm waned in the 1970's after it was emphasized [Minsky and Papert, 1969] (1) that the class of problems solvable by a single
half space was limited, and (2) that the Perceptron algorithm, although converging in finite time, did not converge in polynomial time. In the 1980's, however, it
has become evident that there is no hope of providing a learning algorithm which
can learn arbitrary functions in polynomial time and much research has thus been
restricted to algorithms which learn a function drawn from a particular class of
functions. Moreover, learning theory has focused on protocols like that of [Valiant,
1984] where we seek to classify, not a fixed set of examples, but examples drawn
from a probability distribution. This allows a natural notion of "generalization" .
There are very few classes which have yet been proven learnable in polynomial time,
and one of these is the class of half spaces. Thus there is considerable theoretical
interest now in studying the problem of learning a single half space, and so it is
natural to reexamine the Perceptron algorithm within the formalism of Valiant.
The Perceptron Algorithm Is Fast for Non-Malicious Distributions
In Valiant's protocol, a class of functions is called learnable if there is a learning algorithm which works in polynomial time independent of the distribution D
generating the examples. Under this definition the Perceptron learning algorithm
is not a polynomial time learning algorithm. However we will argue in section 2
that this definition is too restrictive. We will consider in section 3 the behavior of
the Perceptron algorithm if D is taken to be the uniform distribution on the unit
sphere Sⁿ. In this case, we will see that the Perceptron algorithm converges remarkably rapidly. Indeed we will give a time bound which is faster than any bound
known to us for any algorithm solving this problem. Then, in section 4, we will
present what we believe to be a more natural definition of distribution independent
learning in this context, which we will call Nonmalicious distribution independent
learning. We will see that the Perceptron algorithm is indeed a polynomial time nonmalicious distribution independent learning algorithm. In Appendix A, we sketch
proofs that, if one restricts attention to the uniform distribution, some classes with
infinite Vapnik-Chervonenkis dimension such as the class of convex sets and the
class of nested differences of convex sets (which we define) are learnable. These
results support our assertion that distribution independence is too much to ask for,
and may also be of independent interest.
§2: Distribution Independent Learning
In Valiant's protocol [Valiant, 1984], a class F of Boolean functions on R^n is
called learnable if a learning algorithm A exists which satisfies the following conditions. Pick some probability distribution D on R^n. A is allowed to call examples,
which are pairs (x, f(x)), where x is drawn according to the distribution D. A is a
valid learning algorithm for F if for any probability distribution D on R^n, for any
0 < δ, ε < 1, for any f ∈ F, A calls examples and, with probability at least 1 − δ,
outputs in time bounded by a polynomial in n, 1/δ, and 1/ε a hypothesis g such
that the probability that f(x) ≠ g(x) is less than ε for x drawn according to D.
This protocol includes a natural formalization of 'generalization' as prediction. For more discussion see [Valiant, 1984]. The definition is restrictive in demanding that A work for an arbitrary probability distribution D. This demand
is suggested by results on uniform convergence of the empirical distribution to the
actual distribution. In particular, if F has Vapnik-Chervonenkis (V-C) dimension^1
d, then it has been proved [Blumer et al., 1987] that all A needs to do to be a valid
learning algorithm is to call M_0(ε, δ, d) = max((4/ε) log(2/δ), (8d/ε) log(13/ε)) examples and to
find in polynomial time a function g ∈ F which correctly classifies these.
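This sample bound is easy to evaluate numerically. The sketch below assumes base-2 logarithms and the constants as reconstructed above, so treat the exact numbers as indicative only:

```python
import math

def m0(eps, delta, d):
    """Blumer et al. (1987) style sample bound: calling this many examples
    and outputting any hypothesis from a class of V-C dimension d that is
    consistent with them yields error < eps with confidence 1 - delta."""
    return math.ceil(max((4.0 / eps) * math.log2(2.0 / delta),
                         (8.0 * d / eps) * math.log2(13.0 / eps)))

# Half spaces in R^10 have V-C dimension n + 1 = 11:
print(m0(0.1, 0.05, 11))
```

The bound grows linearly in the V-C dimension d and essentially as 1/ε in the accuracy parameter.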
Thus, for example, it is simple to show that the class H of half spaces is
Valiant learnable [Blumer et al., 1987]. The V-C dimension of H is n + 1. All we
need to do to learn H is to call M_0(ε, δ, n + 1) examples and find a separating half
space using Karmarkar's algorithm [Karmarkar, 1984]. Note that the Perceptron
algorithm would not work here, since one can readily find distributions for which
the Perceptron algorithm would be expected to take arbitrarily long times to find
a separating half space.
^1 We say a set S ⊂ R^n is shattered by a class F of Boolean functions if F
induces all Boolean functions on S. The V-C dimension of F is the cardinality of
the largest set S which F shatters.
Baum
Now, however, it seems from three points of view that the distribution independent definition is too strong. First, although the results of [Blumer et al., 1987]
tell us we can gather enough information for learning in polynomial time, they say
nothing about when we can actually find an algorithm A which learns in polynomial
time. So far, such algorithms have only been found in a few cases, and (see, e.g.
[Baum, 1989a]) these cases may be argued to be trivial.
Second, a few classes of functions have been proved (modulo strong but plausible complexity-theoretic hypotheses) unlearnable by construction of cryptographically secure subclasses. Thus for example [Kearns and Valiant, 1988] show that
the class of feedforward networks of threshold gates of some constant depth, or of
Boolean gates of logarithmic depth, is not learnable by construction of a cryptographically secure subclass. The relevance of such results to learning in the natural
world is unclear to us. For example, these results do not rule out a learning algorithm that would learn almost any log depth net. We would thus prefer a less
restrictive definition of learnability, so that if a class were proved unlearnable, it
would provide a meaningful limit on pragmatic learning.
Third, the results of [Blumer et al., 1987] imply that we can only expect to learn
a class of functions F if F has finite V-C dimension. Thus we are in the position
of assuming an enormous amount of information about the class of functions to be
learned, namely that it be some specific class of finite V-C dimension, but nothing
whatever about the distribution of examples. In the real world, by contrast, we
are likely to know at least as much about the distribution D as we know about the
class of functions F. If we relax the distribution independence criterion, then it can
be shown that classes of infinite Vapnik-Chervonenkis dimension are learnable. For
example, for the uniform distribution, the class of convex sets and a class of nested
differences of convex sets ( both of which trivially have infinite V-C dimension) are
shown to be learnable in Appendix A.
§3: The Perceptron Algorithm and Uniform Distributions

The Perceptron algorithm yields, in finite time, a half-space (w_H, θ_H) which
correctly classifies any given set of linearly separable examples [Rosenblatt, 1962].
That is, given a set of classified examples {x_±} such that, for some (w_t, θ_t),
w_t · x_+ > θ_t and w_t · x_− < θ_t for all the examples, the algorithm converges in finite time to output a
(w_H, θ_H) such that w_H · x_+ ≥ θ_H and w_H · x_− < θ_H. We will normalize so that
w_t · w_t = 1. Note that |w_t · x − θ_t| is the Euclidean distance from x to the separating
hyperplane {y : w_t · y = θ_t}.
The algorithm is the following. Start with some initial candidate (w_0, θ_0),
which we will take to be (0, 0). Cycle through the examples. For each example, test
whether that example is correctly classified. If so, proceed to the next example. If
not, modify the candidate by

(w_{k+1}, θ_{k+1}) = (w_k ± x, θ_k ∓ 1)    (1)

where the sign of the modification is determined by the classification of the misclassified example.
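A minimal sketch of this update step in code (the ± signs follow the reconstruction of eqn. (1) above, so treat the exact form as an assumption):

```python
import numpy as np

def perceptron_update(w, theta, x, label):
    """One cycle step for the candidate half space {x : w.x >= theta}.
    `label` is +1 or -1; on a misclassified example the candidate moves
    by (w, theta) <- (w + label*x, theta - label)."""
    if label * (np.dot(w, x) - theta) <= 0:  # misclassified (or on the boundary)
        w = w + label * x
        theta = theta - label
    return w, theta
```

Cycling this step over a linearly separable set reproduces the finite-time convergence described above.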
In this section we will apply the Perceptron algorithm to the problem of learning
in the probabilistic context described in section 2, where however the distribution
D generating examples is uniform on the unit sphere S^n. Rather than have a
fixed set of examples, we apply the algorithm in a slightly novel way: we call an
example, perform a Perceptron update step, discard the example, and iterate until
we converge to accuracy ε.^2 If we applied the Perceptron algorithm in the standard
way, it seemingly would not converge as rapidly. We will return to this point at the
end of this section.
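In code, the call-update-discard protocol looks like the following sketch (uniform examples on S^n with θ_t = 0; all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n):
    """Uniform point on S^n, the unit sphere in R^{n+1}."""
    v = rng.standard_normal(n + 1)
    return v / np.linalg.norm(v)

def online_perceptron(w_target, n_calls=20000):
    """Call an example, update on a mistake, discard it, and iterate."""
    w = np.zeros_like(w_target)
    for _ in range(n_calls):
        x = uniform_sphere(len(w_target) - 1)
        label = 1.0 if np.dot(w_target, x) > 0 else -1.0
        if label * np.dot(w, x) <= 0:  # mistake: perform one update
            w = w + label * x
    return w
```

The error of the returned hypothesis can then be estimated on fresh draws from the same distribution.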
Now the number of updates the Perceptron algorithm must make to learn a
given set of examples is well known to be O(1/γ²), where γ is the minimum distance
from an example to the classifying hyperplane (see e.g. [Minsky and Papert, 1969]).
In order to learn to ε accuracy in the sense of Valiant, we will observe that for
the uniform distribution we do not need to correctly classify examples closer to the
target separating hyperplane than O(ε/√n). Thus we will prove that the Perceptron
algorithm will converge (with probability 1 − δ) after O(n/ε²) updates, which will
occur after O(n/ε³) presentations of examples.
Indeed take θ_t = 0 so the target hyperplane passes through the origin. Parallel
hyperplanes a distance κ/2 above and below the target hyperplane bound a band
B of probability measure

P(κ) = (A_{n−1}/A_n) ∫_{−κ/2}^{κ/2} (√(1 − z²))^{n−2} dz ≤ κ A_{n−1}/A_n    (2)

(for n > 2), where A_n = 2π^{(n+1)/2}/Γ((n+1)/2) is the area of S^n. See figure 1. Using the readily
Figure 1: The target hyperplane intersects the sphere S^n along its equator (if
θ_t = 0), shown as the central line. Points in (say) the upper hemisphere are classified
as positive examples and those in the lower as negative examples. The band B is
formed by intersecting the sphere with two planes parallel to the target hyperplane
and a distance κ/2 above and below it.
^2 We say that our candidate half space has accuracy ε when the probability that
it misclassifies an example drawn from D is no greater than ε.
obtainable (e.g. by Stirling's formula) bound that A_{n−1}/A_n < √n, and the fact that
the integrand is nowhere greater than 1, we find that for κ = ε/(2√n), the band has
measure less than ε/2. If θ_t ≠ 0, a band of width κ will have less measure than it
would for θ_t = 0. We will thus continue to argue (without loss of generality) by
assuming the worst case condition that θ_t = 0.
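A quick Monte Carlo check of this measure bound (sizes illustrative; the uniform sphere samples come from normalized Gaussians):

```python
import numpy as np

rng = np.random.default_rng(1)

def band_measure(n, kappa, samples=200000):
    """Fraction of uniform points on S^n lying within distance kappa/2 of
    the equatorial hyperplane w_t . x = 0 (taking w_t = e_1)."""
    x = rng.standard_normal((samples, n + 1))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return float(np.mean(np.abs(x[:, 0]) <= kappa / 2))

eps, n = 0.2, 50
kappa = eps / (2 * np.sqrt(n))
print(band_measure(n, kappa), "vs", eps / 2)
```

The empirical measure comes out well below ε/2, as the Stirling bound predicts.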
Since B has measure less than ε/2, if we have not yet converged to accuracy ε,
there is no more than probability 1/2 that the next example on which we update will
be in B. We will show that once we have made m_0 = max(144 ln(2/δ), 48/κ²) updates, we
have converged unless more than 7/12 of the updates are in B. The probability of
making this fraction of the updates in B, however, is less than δ/2 if the probability
of each update lying in B is not more than 1/2. We conclude with confidence 1 − δ/2
that the probability our next update will be in B is greater than 1/2 and thus that
we have converged to ε-accuracy.
Indeed, consider the change in the quantity

N = (w_k − αw_t)² + (θ_k − αθ_t)²    (3)

when we update:

ΔN = ±2(w_k · x_± − θ_k) ∓ 2α(w_t · x_± − θ_t) + x_±² + 1.    (4)

Now note that ±(w_k · x_± − θ_k) ≤ 0 since x was misclassified by (w_k, θ_k) (else we
would not update). Let λ = ∓(w_t · x_± − θ_t). If x ∈ B, then λ < 0. If x ∉ B, then
λ ≤ −κ/2. Recalling x² = 1, we see that ΔN < 2 for x ∈ B and ΔN < −ακ + 2
for x ∉ B. If we choose α = 8/κ, we find that ΔN ≤ −6 for x ∉ B. Recall that,
for k = 0, with (w_0, θ_0) = (0, 0), we have N = α² = 64/κ². Thus we see that if we
have made o updates on points outside B, and i updates on points in B, N < 0 if
6o − 2i > 64/κ². But N is positive semidefinite. Once we have made 48/κ² total
updates, at least 7/12 of the updates must thus have been on examples in B.
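The bookkeeping in this counting argument can be checked directly; the split below sits exactly at the 7/12 boundary (numbers illustrative):

```python
# Start from N = 64/kappa^2; each update outside B lowers N by at least 6,
# each update in B raises it by less than 2, and N can never go negative.
kappa = 0.05
total = 48 / kappa**2            # total number of updates considered
outside = 5 * total / 12         # suppose exactly 5/12 fell outside B
inside = 7 * total / 12          # ... and 7/12 fell inside B
# The guaranteed net decrease then equals the entire initial budget
# 64/kappa^2, so any larger outside fraction would drive N below zero.
assert abs((6 * outside - 2 * inside) - 64 / kappa**2) < 1e-6
print("balance checks out for kappa =", kappa)
```

The same identity holds for any κ, since 6·(5/12) − 2·(7/12) = 4/3 and (4/3)·48 = 64.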
If we assume that the probability of updates falling in B is less than 1/2 (and
thus that our hypothesis half space is not yet at ε-accuracy), then the probability
that more than 7/12 of m_0 = max(144 ln(2/δ), 48/κ²) updates fall in B is less than δ/2.
To see this, define LE(p, m, r) as the probability of having at most r successes in m
independent Bernoulli trials with probability of success p and recall [Angluin and
Valiant, 1979], for 0 < β < 1, that

LE(p, m, ⌊(1 − β)mp⌋) ≤ e^{−β²mp/2}.    (5)

Applying this formula with m = m_0, p = 1/2, β = 1/6 (to the number of updates
falling outside B) shows the desired result. We conclude that the probability of
making m_0 updates without converging to ε-accuracy is less than δ/2.
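The Chernoff-style bound (5) can be sanity-checked against the exact binomial tail; δ below is illustrative, and m uses only the ln(2/δ) branch of m_0:

```python
import math

def le(p, m, r):
    """Exact LE(p, m, r): probability of at most r successes in m
    independent Bernoulli(p) trials."""
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(r + 1))

delta = 0.1
m = math.ceil(144 * math.log(2 / delta))
# "more than 7/12 of the updates in B" == "at most 5/12 of them outside B"
tail = le(0.5, m, math.floor(5 * m / 12))
print(m, tail, delta / 2)
```

The exact tail comes out far below δ/2, confirming that the constant 144 is comfortable.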
However, as it approaches 1 − ε accuracy, the algorithm will only update on a
fraction ε of the examples. To get, with confidence 1 − δ/2, m_0 updates, it suffices to
call M = 2m_0/ε examples. Thus we see that the Perceptron algorithm converges,
with confidence 1 − δ, after we have called

M = (2/ε) max(144 ln(2/δ), 192n/ε²)    (6)

examples.
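Plugging numbers into (6) is straightforward; the constants here follow the reconstruction above, so treat the absolute values as indicative only:

```python
import math

def example_bound(n, eps, delta):
    """Eqn. (6): number of examples after which the Perceptron algorithm
    has reached eps-accuracy with confidence 1 - delta, up to the
    reconstructed constants."""
    return (2 / eps) * max(144 * math.log(2 / delta), 192 * n / eps**2)

print(example_bound(100, 0.1, 0.05))   # grows as O(n / eps^3)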
Each example could be processed in time of order 1 on a "neuron" which
computes w_k · x in time 1 and updates each of its "synaptic weights" in parallel.
On a serial computer, however, processing each example will take time of order n,
so that we have a time of order O(n²/ε³) for convergence on a serial computer.
This is remarkably fast. The general learning procedure, described in section 2,
is to call M_0(ε, δ, n+1) examples and find a separating halfspace, by some polynomial
time algorithm for linear programming such as Karmarkar's algorithm. This linear
programming problem thus contains O(n/ε) constraints in n dimensions. Even to
write down the problem thus takes time O(n²/ε). The upper time bound to solve this
given by [Karmarkar, 1984] is O(n^{3.5} ε^{−2}). For large n the Perceptron algorithm is
faster by a factor of n^{1.5}. Of course it is likely that Karmarkar's algorithm could
be proved to work faster than O(n^{3.5}) for the particular distribution of examples
of interest. If, however, Karmarkar's algorithm requires a number of iterations
depending even logarithmically on n, it will scale worse (for large n) than the
Perceptron algorithm.^3
Notice also that if we simply called M_0(ε, δ, n + 1) examples and used the
Perceptron algorithm, in the traditional way, to find a linear separator for this set
of examples, our time performance would not be nearly as good. In fact, equation
2 tells us that we would expect one of these examples to be a distance O(ε/n^{1.5}) from
the target hyperplane, since we are calling O(n/ε) examples and a band of width
O(ε/n^{1.5}) has measure O(ε/n). Thus this approach would take time O(n⁴/ε³), or a factor
of n² worse than the one we have proposed.
An alternative approach to learning using only O(n/ε) examples would be to
call M_0(ε/2, δ, n + 1) examples and apply the Perceptron algorithm to these until a
fraction 1 − ε/2 had been correctly classified. This would suffice to assure that the
hypothesis half space so generated would (with confidence 1 − δ) have error less
than ε, as is seen from [Blumer et al., 1987, Theorem A3.3]. It is unclear to us what
time performance this procedure would yield.
§4: Non-Malicious Distribution Independent Learning
Next we propose a modification of the distribution independence assumption,
which we have argued is too strong to apply to real world learning. We begin
with an informal description. We allow an adversary (adversary 1) to choose the
^3 We thank P. Vaidya for a discussion on this point.
function f in the class F to present to the learning algorithm A. We allow a second
adversary (adversary 2) to choose the distribution D arbitrarily. We demand that
(with probability 1 − δ) A converge to produce an ε-accurate hypothesis g. Thus
far we have not changed Valiant's definition. Our restriction is simply that before
their choice of distribution and function, adversaries 1 and 2 are not allowed to
exchange information. Thus they must work independently. This seems to us an
entirely natural and reasonable restriction in the real world.
Now if we pick any distribution and any hyperplane independently, it is highly
unlikely that the probability measure will be concentrated close to the hyperplane.
Thus we expect to see that under our restriction, the Perceptron algorithm is a
distribution independent learning algorithm for H and converges in time O(n²/(ε³δ²))
on a serial computer.

If adversary 1 and adversary 2 do not exchange information, the least we can
expect is that they have no notion of a preferred direction on the sphere. Thus our
informal demand that these two adversaries do not exchange information should
imply, at least, that adversary 1 is equally likely to choose any w_t (relative e.g. to
whatever direction adversary 2 takes as his z axis). This formalizes, sufficiently for
our current purposes, the notion of Non-malicious Distribution Independence.
Theorem 1: Let U be the uniform probability measure on S^n and D any other
probability distribution on S^n. Let R be any region on S^n of U-measure εδ, and
let z label some point in R. Choose a point y on S^n randomly according to U.
Consider the region R′ formed by translating R rigidly so that z is mapped to y.
Then the probability that the measure D(R′) > ε is less than δ.
Proof: Fix any point z ∈ S^n. Now choose y and thus R′. The probability that
z ∈ R′ is εδ. Thus in particular, if we choose a point p according to D and then
choose R′, the probability that p ∈ R′ is εδ. Now assume that there is probability
greater than δ that D(R′) > ε. Then we arrive immediately at a contradiction,
since we discover that the probability that p ∈ R′ is greater than εδ. Q.E.D.
Corollary 2: The Perceptron algorithm is a non-malicious distribution independent learning algorithm for half spaces on the unit sphere which converges, with
confidence 1 − δ, to accuracy 1 − ε in time of order O(n²/(ε³δ²)) on a serial computer.

Proof sketch: Let κ′ = εδ/(2√n). Apply Theorem 1 to show that a band formed by
hyperplanes a distance κ′/2 on either side of the target hyperplane has probability
less than δ of having measure for examples greater than ε/2. Then apply the
arguments of the last section, with κ′ in place of κ. Q.E.D.
Appendix A: Convex Sets Are Learnable for Uniform Distribution
In this appendix we sketch proofs that two classes of functions with infinite
V-C dimension are learnable. These classes are the class of convex sets and a class
of nested differences of convex sets which we define. These results support our
conjecture that full distribution independence is too restrictive a criterion to ask
for if we want our results to have interesting applications. We believe these results
are also of independent interest.
Theorem 3: The class C of convex sets is learnable in time polynomial in 1/ε and
1/δ if the distribution of examples is uniform on the unit square in d dimensions.
Remarks: (1) C is well known to have infinite V-C dimension. (2) So far as we
know, C is not learnable in time polynomial in d as well.
Proof sketch:^4 We work, for simplicity, in 2 dimensions. Our arguments can readily
be extended to d dimensions.

The learning algorithm is to call M examples (where M will be specified). The
positive examples are by definition within the convex set to be learned. Let M+ be
the set of positive examples. We classify examples as negative if they are linearly
separable from M+, i.e. outside of c+, the convex hull of M+.
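A compact 2-D sketch of this learner in pure Python (the hull and the membership test both use cross products; `classify` is a hypothetical helper name, not from the paper):

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def classify(p, positive_examples):
    """Label p positive iff it lies in c+, the convex hull of the positive
    examples seen so far; anything linearly separable from them is negative."""
    hull = convex_hull(positive_examples)
    if len(hull) < 3:
        return tuple(p) in hull
    m = len(hull)
    return all(cross(hull[i], hull[(i + 1) % m], p) >= 0 for i in range(m))
```

As the proof notes, this learner never misclassifies a negative example; its only errors are positives falling between c+ and the target set.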
Clearly this approach will never misclassify a negative example, but may misclassify positive examples which are outside c+ and inside c_t. To show ε-accuracy,
Figure 2: The boundary of the target concept c_t is shown. The set I_1 of little
squares intersecting the boundary of c_t are hatched vertically. The set I_2 of squares
just inside I_1 are hatched horizontally. The set I_3 of squares just inside I_2 are
hatched diagonally. If we have an example in each square in I_2, the convex hull of
these examples contains all points inside c_t except possibly those in I_1, I_2, or I_3.
^4 This proof is inspired by arguments presented in [Pollard, 1984], pp. 22-24. After
this proof was completed, the author heard D. Haussler present related, unpublished
results at the 1989 Snowbird meeting on Neural Computation.
we must choose M large enough so that, with confidence 1 − δ, the symmetric
difference of the target set c_t and c+ has area less than ε.
Divide the unit square into k² equal subsquares. (See figure 2.) Call the set
of subsquares which the boundary of c_t intersects I_1. It is easy to see that the
cardinality of I_1 is no greater than 4k. The set I_2 of subsquares just inside I_1 also
has cardinality no greater than 4k, and likewise for the set I_3 of subsquares just
inside I_2. If we have an example in each of the squares in I_2, then c_t and c+ clearly
have symmetric difference at most equal to the area of I_1 ∪ I_2 ∪ I_3 < 12k × k^{−2} = 12/k.
Thus take k = 12/ε. Now choose M sufficiently large so that after M trials there is
less than δ probability that we have not got an example in each of the 4k squares in I_2.
Thus we need LE(k^{−2}, M, 4k) < δ. Using equation 5, we see that M = O(k² log(k/δ)) examples will
suffice. Q.E.D.
Actually, one can learn (for uniform distributions) a more complex class of
functions formed out of nested convex regions. For any set {c_1, c_2, ..., c_l} of l convex
regions in R^d, let R_1 = c_1 and for j = 2, ..., l let R_j = R_{j−1} ∩ c_j. Then define a
concept f = R_1 − R_2 + R_3 − ... R_l. The class C of concepts so formed we call nested
convex sets. See figure 3.
Figure 3: c_1 is the five sided region, c_2 is the triangular region, and c_3 is the
square. The positive region c_1 − c_2 ∩ c_1 + c_3 ∩ c_2 ∩ c_1 is shaded.
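For intuition, membership in such a concept is easy to code for convex polygons in the plane (a sketch; `nested_label` and the CCW vertex convention are assumptions of this illustration): a point is positive exactly when the longest prefix of regions containing it has odd length.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_polygon(p, poly):
    """poly lists the vertices of a convex region in CCW order."""
    m = len(poly)
    return all(cross(poly[i], poly[(i + 1) % m], p) >= 0 for i in range(m))

def nested_label(p, regions):
    """f = R1 - R2 + R3 - ..., with R_j the intersection of the first j
    convex regions: p is positive iff its containment depth is odd."""
    depth = 0
    for poly in regions:
        if in_convex_polygon(p, poly):
            depth += 1
        else:
            break
    return depth % 2 == 1
```

Because the R_j are nested, the depth counter can stop at the first region that excludes the point.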
This class can be learned by an iterative procedure which peels the onion. Call
a sufficient number of examples. (One can easily see that a number polynomial in
l, ε, and δ but of course exponential in d will suffice.) Let the set of examples so
obtained be called S. Those negative examples which are linearly separable from all
positive examples are in the outermost layer. Class these in set S_1. Those positive
examples which are linearly separable from all negative examples in S − S_1 lie in
the next layer; call this set of positive examples S_2. Those negative examples in
S − S_1 linearly separable from all positive examples in S − S_2 lie in the next layer,
S_3. In this way one builds up l + 1 sets of examples. (Some of these sets may
be empty.) One can then apply the methods of Theorem 3 to build a classifying
function from the outside in. If the innermost layer S_{l+1} is (say) negative examples,
then any future example is called negative if it is not linearly separable from S_{l+1},
or is linearly separable from S_l and not linearly separable from S_{l−1}, or is linearly
separable from S_{l−2} but not linearly separable from S_{l−3}, etc.
Acknowledgement: I would like to thank L.E. Baum for conversations and L.G.
Valiant for comments on a draft. Portions of the work reported here were performed while the author was an employee of Princeton University and of the Jet
Propulsion Laboratory, California Institute of Technology, and were supported by
NSF grant DMR-8518163 and agencies of the US Department of Defense including
the Innovative Science and Technology Office of the Strategic Defense Initiative
Organization.
References

ANGLUIN, D., and VALIANT, L.G. (1979), Fast probabilistic algorithms for Hamiltonian circuits and matchings, J. of Computer and Systems Sciences, 18, pp. 155-193.

BAUM, E.B. (1989), On learning a union of half spaces, Journal of Complexity, V5, N4.

BLUMER, A., EHRENFEUCHT, A., HAUSSLER, D., and WARMUTH, M. (1987), Learnability and the Vapnik-Chervonenkis dimension, U.C.S.C. tech. rep. UCSC-CRL-87-20, and J. ACM, to appear.

KARMARKAR, N. (1984), A new polynomial time algorithm for linear programming, Combinatorica, 4, pp. 373-395.

KEARNS, M., and VALIANT, L. (1989), Cryptographic limitations on learning Boolean formulae and finite automata, Proc. 21st ACM Symp. on Theory of Computing, pp. 433-444.

MINSKY, M., and PAPERT, S. (1969), Perceptrons: An Introduction to Computational Geometry, MIT Press, Cambridge, MA.

POLLARD, D. (1984), Convergence of Stochastic Processes, New York: Springer-Verlag.

ROSENBLATT, F. (1962), Principles of Neurodynamics, Spartan Books, N.Y.

VALIANT, L.G. (1984), A theory of the learnable, Comm. of ACM, V27, N11, pp. 1134-1142.
Handling Missing Data with Variational
Bayesian Learning of ICA
Kwokleung Chan, Te-Won Lee and Terrence Sejnowski
The Salk Institute, Computational Neurobiology Laboratory,
10010 N. Torrey Pines Road,
La Jolla, CA 92037, USA
{kwchan,tewon,terry}@salk.edu
Abstract
Missing data is common in real-world datasets and is a problem for many
estimation techniques. We have developed a variational Bayesian method
to perform Independent Component Analysis (ICA) on high-dimensional
data containing missing entries. Missing data are handled naturally in the
Bayesian framework by integrating the generative density model. Modeling the distributions of the independent sources with mixtures of Gaussians allows sources to be estimated with different kurtosis and skewness.
The variational Bayesian method automatically determines the dimensionality of the data and yields an accurate density model for the observed data without overfitting problems. This allows direct probability
estimation of missing values in the high dimensional space and avoids
dimension reduction preprocessing which is not feasible with missing
data.
1 Introduction
Data density estimation is an important step in many machine learning problems. Often we
are faced with data containing incomplete entries. The data may be missing due to measurement or recording failure. Another frequent cause is difficulty in collecting complete
data. For example, it could be expensive and time consuming to perform some biomedical
tests. Data scarcity is not uncommon and it would be very undesirable to discard those data
points with missing entries when we already have a small dataset. Traditionally, missing
data are filled in by mean imputation or regression imputation during preprocessing. This
could introduce biases into the data cloud density and adversely affect subsequent analysis. A more principled way would be to use probability density estimates of the missing
entries instead of point estimates. A well known example of this approach is the use of
Expectation-Maximization (EM) algorithm in fitting incomplete data with a single Gaussian density [5].
Independent Component Analysis (ICA) [4] tries to locate independent axes within the data
cloud and was developed for blind source separation. It has been applied to speech separation and analyzing fMRI and EEG data. ICA is also used to model data density, describing
data as linear mixture of independent features and finding projections that may uncover interesting structure in the data. Maximum likelihood learning of ICA with incomplete data
has been studied by [6], in the limited case of a square mixing matrix and predefined source
densities.
Many real-world datasets have intrinsic dimensionality smaller then that of the observed
data. With missing data, principal component analysis cannot be used to perform dimension reduction as preprocessing for ICA. Instead, the variational Bayesian method applied
to ICA can handle small datasets with high observed dimension [1, 2]. The Bayesian
method prevents overfitting and performs automatic dimension reduction. In this paper, we
extend the variational Bayesian ICA method to problems with missing data. The probability density estimate of the missing entries can be used to fill in the missing values. This
also allows the density model to be refined and made more accurate.
2 Model and Theory
2.1 ICA generative model with missing data
Consider a data set of T data points in an N-dimensional space: X = {x_t ∈ R^N},
t = {1, ..., T}. Assume a noisy ICA generative model for the data:

P(x_t | θ) = ∫ N(x_t | A s_t + ν, Λ) P(s_t | θ_s) ds_t    (1)

where A is the mixing matrix, ν is the observation mean and Λ^{−1} is the diagonal
noise variance. The hidden source s_t is assumed to have L dimensions. Each component of s_t is
modeled by a mixture of K Gaussians to allow for source densities of various kurtosis and
skewness,

P(s_t | θ_s) = ∏_{l}^{L} ( Σ_{k_l}^{K} π_{l k_l} N(s_t(l) | μ_{l k_l}, β_{l k_l}) )    (2)

Split each data point into a missing part and an observed part: x_t^⊤ = (x_t^{o⊤}, x_t^{m⊤}). In this
paper, we only consider the random missing case [3], i.e. the probability for the missing
entries x_t^m is independent of the value of x_t^m, but could depend on the value of x_t^o. The
likelihood of the dataset is then defined to be

L(θ; X) = ∏_t P(x_t^o | θ) ,    (3)

P(x_t^o | θ) = ∫ P(x_t | θ) dx_t^m = ∫ N(x_t^o | [A s_t + ν]^{o_t}, [Λ]^{o_t}) P(s_t | θ_s) ds_t    (4)
Here we have introduced the notation [?]ot , which means taking only the observed dimensions (corresponding to the tth data point) of whatever is inside the square brackets. Since
eqn. (4) is similar to eqn. (1), the variational Bayesian ICA [1, 2] can be extended naturally to handled missing data, but only if care is taken in discounting missing entries in the
learning rules.
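The observed-data likelihood (4) integrates the complete-data model over the missing dimensions, which for a Gaussian noise model amounts to evaluating the Gaussian over the observed entries only. A crude Monte-Carlo sketch of this idea, with arbitrary toy parameters and a plain standard-normal source prior standing in for the mixture prior (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik_observed(x, observed, A, nu, psi, sample_sources, n_mc=5000):
    """Monte-Carlo estimate of log P(x^o | theta) (eqn 4): draw sources from the
    prior and average the Gaussian likelihood over the *observed* entries only."""
    s = sample_sources(n_mc)                      # (n_mc, L) draws from P(s)
    mean = s @ A.T + nu                           # (n_mc, N) predicted x
    o = observed                                  # boolean mask, True = observed
    # log N(x_n | mean_n, 1/psi_n), summed over observed dims only
    ll = -0.5 * ((x[o] - mean[:, o]) ** 2 * psi[o]
                 - np.log(psi[o]) + np.log(2 * np.pi)).sum(axis=1)
    m = ll.max()                                  # log-sum-exp for stability
    return m + np.log(np.exp(ll - m).mean())

# Toy model: 2 Gaussian sources, 3 observed dims, one entry missing.
L, N = 2, 3
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
nu = np.zeros(N)
psi = np.full(N, 25.0)
draw = lambda n: rng.normal(size=(n, L))

x = np.array([0.3, -0.2, np.nan])                 # last entry missing
obs = np.array([True, True, False])
print(log_lik_observed(x, obs, A, nu, psi, draw))
```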
2.2 Variational Bayesian method

In a full Bayesian treatment, the posterior distribution of the parameters θ is obtained by

    P(θ|X) = P(X|θ)P(θ) / P(X) = ∏_t P(x_t^o|θ) P(θ) / P(X)    (5)

where P(X) is the marginal likelihood of the data, given as:

    P(X) = ∫ ∏_t P(x_t^o|θ) P(θ) dθ    (6)

The ICA model for P(X) is defined with the following priors on the parameters P(θ):

    P(π_l) = D(π_l | d_o(π_l))    P(A_{nl}) = N(A_{nl} | 0, α_l)    P(μ_{l,k_l}) = N(μ_{l,k_l} | μ_o(μ_{l,k_l}), Λ_o(μ_{l,k_l}))    (7)
    P(α_l) = G(α_l | a_o(α_l), b_o(α_l))    P(β_{l,k_l}) = G(β_{l,k_l} | a_o(β_{l,k_l}), b_o(β_{l,k_l}))
    P(ν_n) = N(ν_n | μ_o(ν_n), Λ_o(ν_n))    P(Ψ_n) = G(Ψ_n | a_o(Ψ_n), b_o(Ψ_n))    (8)

where N(·), G(·) and D(·) are the normal, gamma and Dirichlet distributions; a_o(·), b_o(·),
d_o(·), μ_o(·), and Λ_o(·) are prechosen hyperparameters for the priors.
Under the variational Bayesian treatment, instead of performing the integration in eqn. (6)
to solve for P(θ|X) directly, we approximate it by Q(θ) and opt to minimize the Kullback–Leibler distance between them:

    −KL(Q(θ) ‖ P(θ|X)) = ∫ Q(θ) log [ P(θ|X) / Q(θ) ] dθ
                       = ∫ Q(θ) [ Σ_t log P(x_t^o|θ) + log P(θ)/Q(θ) ] dθ − log P(X)    (9)

Since KL(Q(θ) ‖ P(θ|X)) ≥ 0, we get a lower bound for the log marginal likelihood of
the data,

    log P(X) ≥ ∫ Q(θ) Σ_t log P(x_t^o|θ) dθ + ∫ Q(θ) log [ P(θ)/Q(θ) ] dθ,    (10)
which can also be obtained by applying Jensen's inequality to eqn. (6). Q(θ) is then
solved by functional maximization of the lower bound. A separable approximate posterior
Q(θ) will be assumed:

    Q(θ) = Q(ν)Q(Ψ) · Q(A)Q(α) · ∏_l [ Q(π_l) ∏_{k_l} Q(μ_{l,k_l}) Q(β_{l,k_l}) ]    (11)
The second term in eqn. (10), which is the negative Kullback–Leibler divergence between
the approximate posterior Q(θ) and the prior P(θ), can be expanded as

    ∫ Q(θ) log [P(θ)/Q(θ)] dθ = Σ_l ∫ Q(π_l) log [P(π_l)/Q(π_l)] dπ_l
        + Σ_l Σ_{k_l} ∫ Q(μ_{l,k_l}) log [P(μ_{l,k_l})/Q(μ_{l,k_l})] dμ_{l,k_l}
        + Σ_l Σ_{k_l} ∫ Q(β_{l,k_l}) log [P(β_{l,k_l})/Q(β_{l,k_l})] dβ_{l,k_l}
        + ∫∫ Q(A)Q(α) log [P(A|α)/Q(A)] dA dα + ∫ Q(α) log [P(α)/Q(α)] dα
        + ∫ Q(ν) log [P(ν)/Q(ν)] dν + ∫ Q(Ψ) log [P(Ψ)/Q(Ψ)] dΨ    (12)
2.3 Special treatment for missing data

Thus far the analysis follows almost exactly that of the variational Bayesian ICA on complete data, except that P(x_t|θ) is replaced by P(x_t^o|θ) in eqn. (6) and consequently the
missing entries are discounted in the learning rules. However, it would be useful to obtain
Q(x_t^m|x_t^o), i.e., the approximate distribution on the missing entries, which is given by

    Q(x_t^m|x_t^o) = ∫ Q(θ) ∫ N(x_t^m | [A s_t + ν]_t^m, [Ψ]_t^m) Q(s_t) ds_t dθ.    (13)

As noted in [6], the elements of s_t given x_t^o are dependent. More importantly, under the ICA
model, Q(s_t) is unlikely to be a single Gaussian. This is evident from figure 1, which shows
the probability density functions of the data x and the hidden variable s. The inserts show
the sample data in the two spaces. Here the hidden sources assume the density P(s_l) ∝
exp(−|s_l|^0.7). They are mixed noiselessly to give P(x) in the left graph. The cut in the
left graph represents P(x_1|x_2 = −0.5), which transforms into a highly correlated and
non-Gaussian P(s|x_2 = −0.5).
Figure 1: Pdfs for the data x (left) and hidden sources s (right). Inserts show the sample
data in the two spaces. The "cuts" show P(x_1|x_2 = −0.5) and P(s|x_2 = −0.5).
Unless we are interested only in the first and second order statistics of Q(x_t^m|x_t^o), we
should try to capture as much structure as possible of P(s_t|x_t^o) in Q(s_t). In this paper, we
take a slightly different route from [1, 2] when performing variational Bayesian learning.
First, we break down P(s_t) (eqn. 2) into a mixture of K^L Gaussians in the L-dimensional
s space:

    P(s_t) = Σ_{k_1} ··· Σ_{k_L} [ π_{1,k_1} ··· π_{L,k_L} · N(s_t(1)|μ_{1,k_1}, β_{1,k_1}) ··· N(s_t(L)|μ_{L,k_L}, β_{L,k_L}) ]
           = Σ_k π_k N(s_t | μ_k, β_k)    (14)

Here we have defined k to be a vector index. The "kth" Gaussian is centered at μ_k, with
inverse covariance β_k, in the source s space:

    π_k = π_{1,k_1} × ··· × π_{L,k_L}
    β_k = diag(β_{1,k_1}, ···, β_{L,k_L})
    μ_k = (μ_{1,k_1}, ···, μ_{l,k_l}, ···, μ_{L,k_L})^⊤
    k = (k_1, ···, k_l, ···, k_L)^⊤,    k_l = 1, ···, K
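The expansion in eqn. (14) is just an enumeration of all K^L combinations of the per-source components. A small sketch (with random made-up parameters) of building the vector-indexed quantities π_k, μ_k and β_k:

```python
import itertools
import numpy as np

# Made-up factorized MoG parameters: L sources, K components each.
L, K = 3, 2
rng = np.random.default_rng(2)
pi = rng.dirichlet(np.ones(K), size=L)     # pi[l, k], each row sums to 1
mu = rng.normal(size=(L, K))               # component means
beta = rng.gamma(2.0, 1.0, size=(L, K))    # component precisions

# Expand into K**L full Gaussians indexed by the vector index k = (k_1..k_L).
ks = list(itertools.product(range(K), repeat=L))
pi_k = np.array([np.prod([pi[l, k[l]] for l in range(L)]) for k in ks])
mu_k = np.array([[mu[l, k[l]] for l in range(L)] for k in ks])
beta_k = np.array([[beta[l, k[l]] for l in range(L)] for k in ks])  # diag precisions

assert len(ks) == K ** L
assert np.isclose(pi_k.sum(), 1.0)         # expanded weights still normalize
```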
The log likelihood for x_t^o is then expanded using Jensen's inequality,

    log P(x_t^o|θ) = log Σ_k π_k ∫ P(x_t^o|s_t, θ) N(s_t|μ_k, β_k) ds_t    (15)

    ≥ Σ_k Q(k_t) log ∫ P(x_t^o|s_t, θ) N(s_t|μ_k, β_k) ds_t + Σ_k Q(k_t) log [ π_k / Q(k_t) ]    (16)

Here Q(k_t) is a short form for Q(k_t = k). k_t is a discrete hidden variable and Q(k_t = k)
is the probability that the tth data point belongs to the kth Gaussian. Recognizing that s_t is
just a dummy variable, we introduce Q(s_kt), apply Jensen's inequality again and get

    log P(x_t^o|θ) ≥ Σ_k Q(k_t) [ ∫ Q(s_kt) log P(x_t^o|s_kt, θ) ds_kt
        + ∫ Q(s_kt) log [ N(s_kt|μ_k, β_k) / Q(s_kt) ] ds_kt ] + Σ_k Q(k_t) log [ π_k / Q(k_t) ]    (17)
Substituting log P(x_t^o|θ) back into eqn. (10), the variational Bayesian method can be continued as usual. We have drawn in figure 2 a simplified graphical representation for the
generative model of variational ICA. x_t is the observed variable, k_t and s_t are hidden variables and the rest are model parameters, where k_t indicates which of the K^L expanded
Gaussians generated s_t.

Figure 2: A simplified directed graph for the generative model of variational ICA. x_t is the
observed variable, k_t and s_t are hidden variables and the rest are model parameters. k_t
indicates which of the K^L expanded Gaussians generated s_t.
3 Learning Rules

Combining eqns. (10), (12) and (17), we perform functional maximization on the lower bound
of the log marginal likelihood log P(X) w.r.t. Q(θ) (eqn. 11), Q(k_t) and Q(s_kt) (eqn. 17),
and obtain the following learning rules for the sufficient statistics of Q(θ) and Q(s_kt):

    Λ(ν_n) = Λ_o(ν_n) + ⟨Ψ_n⟩ Σ_t o_nt
    μ(ν_n) = [ Λ_o(ν_n) μ_o(ν_n) + ⟨Ψ_n⟩ Σ_t o_nt Σ_k Q(k_t) ⟨x_nt − A_n· s_kt⟩ ] / Λ(ν_n)    (18)

    a(Ψ_n) = a_o(Ψ_n) + ½ Σ_t o_nt
    b(Ψ_n) = b_o(Ψ_n) + ½ Σ_t o_nt Σ_k Q(k_t) ⟨(x_nt − A_n· s_kt − ν_n)²⟩

    Σ(A_n·) = diag(⟨α_1⟩, ···, ⟨α_L⟩) + ⟨Ψ_n⟩ Σ_t o_nt Σ_k Q(k_t) ⟨s_kt s_kt^⊤⟩    (19)

    μ(A_n·) = [ ⟨Ψ_n⟩ Σ_t o_nt (x_nt − ⟨ν_n⟩) Σ_k Q(k_t) ⟨s_kt^⊤⟩ ] Σ(A_n·)⁻¹    (20)

    a(α_l) = a_o(α_l) + N/2        b(α_l) = b_o(α_l) + ½ Σ_n ⟨A_nl²⟩    (21)

    d(π_lk) = d_o(π_lk) + Σ_t Σ_{k_l=k} Q(k_t)    (22)

    Λ(μ_{l,k_l}) = Λ_o(μ_{l,k_l}) + ⟨β_{l,k_l}⟩ Σ_t Σ_{k_l=k} Q(k_t)
    μ(μ_{l,k_l}) = [ Λ_o(μ_{l,k_l}) μ_o(μ_{l,k_l}) + ⟨β_{l,k_l}⟩ Σ_t Σ_{k_l=k} Q(k_t) ⟨s_kt(l)⟩ ] / Λ(μ_{l,k_l})    (23)

    a(β_{l,k_l}) = a_o(β_{l,k_l}) + ½ Σ_t Σ_{k_l=k} Q(k_t)
    b(β_{l,k_l}) = b_o(β_{l,k_l}) + ½ Σ_t Σ_{k_l=k} Q(k_t) ⟨(s_kt(l) − μ_{l,k_l})²⟩    (24)

    Q(s_kt) = N(s_kt | μ(s_kt), Σ(s_kt))
    Σ(s_kt) = diag(⟨β_{1,k_1}⟩, ···, ⟨β_{L,k_L}⟩) + ⟨A^⊤ diag(o_1t Ψ_1, ···, o_Nt Ψ_N) A⟩    (25)
    Σ(s_kt) μ(s_kt) = ⟨β_{1,k_1} μ_{1,k_1}, ···, β_{L,k_L} μ_{L,k_L}⟩^⊤ + ⟨A^⊤ diag(o_1t Ψ_1, ···, o_Nt Ψ_N) (x_t − ν)⟩
Figure 3: The approximation of Q(x_t^m|x_t^o) from the full missing ICA (solid line) and
the polynomial missing ICA (dashed line). The shaded area is the exact posterior P(x_t^m|x_t^o)
corresponding to the noiseless mixture in fig. 1 with observed x_2 = −2. Dotted lines are the
contributions from the individual Q(x_kt^m|x_t^o, k).
In the above equations, ⟨·⟩ denotes the expectation over the posterior distributions Q(·),
A_n· is the nth row of the mixing matrix A, Σ_{k_l=k} means picking out those Gaussians
such that the lth element of their indices k has the value k, and o_nt is a binary indicator
variable for whether or not x_nt is observed.

For a model of equal noise variance among all the observation dimensions, the summation
in the learning rules for Q(Ψ) would be over both t and n. Note that there exist scale
and translational degeneracies in the model, as given by eqns. (1) and (2). After each update
of Q(π_l), Q(μ_{l,k_l}) and Q(β_{l,k_l}), it is better to rescale P(s_t(l)) to have zero mean and unit
variance. Q(s_kt), Q(A), Q(ν), Q(Ψ) and Q(α) have to be adjusted correspondingly.
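As a concrete illustration of this rescaling step: the mean and variance of each 1-D source mixture follow from its parameters (mean = Σ_k π_k μ_k, variance = Σ_k π_k (1/β_k + μ_k²) − mean²), and the mixture parameters can be standardized accordingly. A minimal sketch with arbitrary numbers (the corresponding adjustment of Q(A), Q(ν), etc. mentioned in the text is omitted here):

```python
import numpy as np

def standardize_mog(pi, mu, beta):
    """Rescale a 1-D mixture of Gaussians (weights pi, means mu, precisions beta)
    so it has zero mean and unit variance, fixing the scale/translation
    degeneracy noted in the text."""
    mean = np.dot(pi, mu)
    var = np.dot(pi, 1.0 / beta + mu ** 2) - mean ** 2
    std = np.sqrt(var)
    # Shifting/scaling s -> (s - mean)/std maps means and precisions as follows:
    return (mu - mean) / std, beta * var

pi = np.array([0.3, 0.7])
mu = np.array([-2.0, 1.0])
beta = np.array([4.0, 1.0])
mu2, beta2 = standardize_mog(pi, mu, beta)
# After rescaling the mixture has zero mean and unit variance:
assert np.isclose(np.dot(pi, mu2), 0.0)
assert np.isclose(np.dot(pi, 1.0 / beta2 + mu2 ** 2) - np.dot(pi, mu2) ** 2, 1.0)
```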
Finally, Q(k_t) is given by

    log Q(k_t) = ⟨ log P(x_t^o|s_kt, θ) + log N(s_kt|μ_k, β_k) − log Q(s_kt) + log π_k ⟩ − log z_t    (26)

where z_t is a normalization constant. The lower bound E(X, Q(θ)|H) for the log marginal
likelihood,

    E(X, Q(θ)|H) = Σ_t log z_t + ∫ Q(θ) log [ P(θ)/Q(θ) ] dθ,    (27)

can be monitored during learning and used for comparison of different solutions or models.
4 Filling in missing entries

The approximate distribution Q(x_t^m|x_t^o) can be obtained by a summation over Q(x_kt^m|x_t^o, k):

    Q(x_t^m|x_t^o) = Σ_k Q(k_t) ∫ δ(x_t^m − x_kt^m) Q(x_kt^m|x_t^o, k) dx_kt^m,    (28)

    Q(x_kt^m|x_t^o, k) = ∫ Q(θ) ∫ N(x_kt^m | [A s_kt + ν]_t^m, [Ψ]_t^m) Q(s_kt) ds_kt dθ    (29)
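To make the structure of eqns. (28)–(29) concrete, here is the analogous fill-in computation for a plain two-component 2-D Gaussian mixture over (x_o, x_m): the responsibilities play the role of Q(k_t), and each component contributes its Gaussian conditional mean. All numbers are made up for illustration.

```python
import numpy as np
from math import sqrt, pi as PI, exp

# Two-component 2-D Gaussian mixture over (x_o, x_m); we condition on x_o.
w = np.array([0.5, 0.5])
means = np.array([[-1.0, -1.0], [1.0, 1.0]])
covs = np.array([[[1.0, 0.8], [0.8, 1.0]],
                 [[1.0, 0.8], [0.8, 1.0]]])

def fill_in(x_o):
    """Posterior mean of x_m given x_o: responsibilities times per-component
    conditional means (the MoG analogue of eqns 28-29)."""
    resp = np.empty(2)
    cond_mean = np.empty(2)
    for k in range(2):
        mo, mm = means[k]
        voo, vom = covs[k, 0, 0], covs[k, 0, 1]
        # marginal likelihood of x_o under component k (plays the role of Q(k_t))
        resp[k] = w[k] * exp(-0.5 * (x_o - mo) ** 2 / voo) / sqrt(2 * PI * voo)
        # Gaussian conditional: E[x_m | x_o, k] = mm + vom/voo * (x_o - mo)
        cond_mean[k] = mm + vom / voo * (x_o - mo)
    resp /= resp.sum()
    return np.dot(resp, cond_mean)

print(fill_in(1.0))   # dominated by the second component
```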
Estimation of Q(x_t^m|x_t^o) using the above equations is demonstrated in fig. 3. The shaded
area is the exact posterior P(x_t^m|x_t^o) for the noiseless mixing in fig. 1 with observed x_2 = −2,
and the solid line is the approximation by eqns. (28)–(29). We have modified the variational
ICA of [1] by discounting missing entries in the learning rules; the dashed line is the
approximation of Q(x_t^m|x_t^o) from this modified method. The treatment of fully expanding
the K^L hidden source Gaussians discussed in section 2.3 is called "full missing ICA", and the
modified method is "polynomial missing ICA". The full missing ICA gives a more
accurate fit for P(x_t^m|x_t^o) and a better estimate for ⟨x_t^m|x_t^o⟩.
Figure 4: a)–d) Source density modeling by variational missing ICA of the synthetic data.
Histograms: recovered source distributions; dashed lines: original probability densities;
solid lines: mixture-of-Gaussians modeled probability densities; dotted lines: individual
Gaussian contributions. e) E(X, Q(θ)|H) as a function of the number of hidden source dimensions.
5 Experiment

5.1 Synthetic Data

In the first experiment, 200 data points were generated by mixing 4 sources randomly in a 7-dimensional space. The generalized Gaussian, gamma and beta distributions were used to
represent source densities of various skewness and kurtosis (fig. 4 a)–d)). Noise at −26 dB
level was added to the data and missing entries were created with a probability of 0.3. In
fig. 4 a)–d), we plot the histograms of the recovered sources and the probability density
functions (pdf) of the 4 sources. The dashed line is the exact pdf used to generate the data
and the solid line is the pdf modeled by a mixture of two 1-D Gaussians (eqn. 2). Fig. 4 e)
plots the lower bound of the log marginal likelihood (eqn. 27) for models assuming different
numbers of intrinsic dimensions. As expected, the Bayesian treatment allows us to infer
the intrinsic dimension of the data cloud. In the figure, we also plot E(X, Q(θ)|H) from
the polynomial missing ICA. It is clear that the full missing ICA gave a better fit to the data
density. Furthermore, the polynomial missing ICA converges more slowly per epoch of learning,
suffers from many more local minima, and these problems get worse with higher missing rates.
5.2 Mixing Images

This experiment demonstrates the ability of the proposed method to fill in missing values
while performing demixing. The 1st column in fig. 5 shows the 2 original 380-by-380
pixel images. They were linearly mixed into 3 images and −20 dB noise was added. 20%
missing entries were introduced randomly. The denoised mixtures and recovered sources
are in the 3rd and 4th columns of fig. 5. 0.8% of the pixels were missing from all 3 mixed
images and could not be recovered. 38.4% of the pixels were missing from only 1 mixed
image and could be filled in with low uncertainty. 9.6% of the pixels were missing from
two of the mixed images; estimation of their values incurred high uncertainty. From
fig. 5, we can see that the source images were well separated and the mixed images were
nicely denoised. The denoised mixed images in this example are only meant to visually
illustrate the method. However, if (x_1, x_2, x_3) represent cholesterol, blood sugar and uric
acid levels, for example, it would be possible to fill in the third when only two are available.
6 Conclusion

In this paper, we derived the learning rules for variational Bayesian ICA with missing data.
The complexity of the method is exponential in L. However, this exponential growth in
complexity is manageable and worthwhile for small data sets containing missing entries
in a high dimensional space. The proposed method shows promise in analyzing and identifying projections of datasets that have a very limited number of expensive data points
yet contain missing entries due to data scarcity. We have applied the variational missing
ICA to a primate brain volumetric dataset containing 44 examples in 57 dimensions. Very
encouraging results were obtained and will be reported in another paper.

Figure 5: A demonstration of recovering missing values. The original images are in the 1st
column. 20% of the pixels in the mixed images (2nd column) are missing, while only 0.8%
are missing from the denoised mixed (3rd column) and separated images (4th column).
References
[1] Kwokleung Chan, Te-Won Lee, and Terrence J. Sejnowski. Variational learning of
clusters of undercomplete nonsymmetric independent components. Journal of Machine
Learning Research, 3:99?114, 2002.
[2] Rizwan A. Choudrey and Stephen J. Roberts. Flexible Bayesian independent component analysis for blind source separation. In 3rd International Conference on Independent Component Analysis and Blind Signal Separation, pages 90?95, San Diego, Dec.
09-12 2001.
[3] Z. Ghahramani and M. Jordan. Learning from incomplete data. Technical Report CBCL Paper No. 108, Center for Biological and Computational Learning, Massachusetts Institute of Technology, 1994.
[4] Aapo Hyvarinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis. J.
Wiley, New York, 2001.
[5] R. J. A. Little and D. B. Rubin. Statistical Analysis with Missing Data. Wiley, New
York, 1987.
[6] Max Welling and Markus Weber. Independent component analysis of incomplete data.
In 1999 6th Joint Symposium on Neural Computation Proceedings, volume 9, pages
162–168. UCSD, May 22, 1999.
Koby Crammer
[email protected]
Ran Gilad-Bachrach
[email protected]
Amir Navot
[email protected]
Naftali Tishby
[email protected]
School of Computer Science and Engineering and
Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem, Israel
Abstract
Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Neighbour (NN) classifiers. In this paper
we discuss theoretical and algorithmic aspects of such algorithms. On
the theory side, we present margin-based generalization bounds that suggest that these kinds of classifiers can be more accurate than the 1-NN
rule. Furthermore, we derive a training algorithm that selects a good set
of prototypes using large margin principles. We also show that the
20-year-old Learning Vector Quantization (LVQ) algorithm emerges naturally from our framework.
1 Introduction

Though fifty years have passed since the introduction of One Nearest Neighbour (1-NN) [1],
it is still a popular algorithm. 1-NN is a simple and intuitive algorithm, yet at the same time
it achieves state of the art results [2]. However, on large, high-dimensional data sets it often
becomes infeasible. One approach to this computational problem is to approximate
the nearest neighbour [3] using various techniques. An alternative approach is to choose a
small data set (aka prototypes) which represents the original training sample, and to apply the
nearest neighbour rule only with respect to this small data set. This solution maintains the
"spirit" of the original algorithm while making it feasible. Moreover, it might improve the
accuracy by reducing noise over-fitting.

In this setting, the goal of the learning stage is to choose the prototypes wisely, i.e., in a way
that will yield good generalization¹. In this paper we use the Maximal Margin principle
[4, 5] for this purpose. The training data is used to measure the margin of each proposed
positioning of the prototypes. We combine these measurements to calculate a risk for each
prototype set and select the prototypes that minimize the risk.

Roughly speaking, margins measure the level of confidence a classifier has with respect
to its decisions. This tool has become a primary method in machine learning during the
last decade. Two of the most powerful algorithms in the field, Support Vector Machines
¹Good generalization means that the probability of misclassifying a new example is small.
(SVM) [4] and AdaBoost [5], are motivated and analyzed by margins. Since the introduction
of these algorithms, dozens of papers have been published on different aspects of margins in
supervised learning [6, 7, 8].

Learning Vector Quantization (LVQ) [9] is a well-known algorithm that deals with the
same problem of selecting prototypes. LVQ iterates over the training data and updates the
prototypes' positions. Although it has been known for more than 20 years, and in spite of its
popularity, no adequate generalization bounds or theory have been suggested for this algorithm.
In this paper we show that algorithms derived from the maximal margin principle contain
LVQ as a special case. We use this result to present generalization bounds and insights for
the LVQ algorithm.
Buckingham and Geva [10] were the first to explore the relations between the maximal margin
principle and LVQ. They presented a variant named LMVQ and analyzed it. As in most of
the literature on LVQ, they look at the algorithm as trying to estimate a density function
(or a function of the density) at each point; after estimating the density, the Bayesian
decision rule is used. We take a different point of view on the problem and look at the
geometry of the decision boundary induced by the decision rule. Note that in order to
generate a good classification rule the only significant factor is where the decision boundary
lies (it is a well-known fact that classification is easier than density estimation [11]).
Summary of the Results In section 2 we present the model and outline the LVQ family
of algorithms. A discussion and definition of margin is provided in section 3. The two
fundamental results are a bound on the generalization error and a theoretical justification for
the LVQ family of algorithms. In section 4 we present a bound on the gap between the
empirical and the generalization accuracy. This provides a guarantee on the performance
over unseen instances based on the empirical evidence. Although LVQ was designed as an
approximation to nearest neighbour, the theorem suggests that the former is more accurate
in many cases. Indeed, a simple experiment shows this prediction to be true. In section 5
we show how the LVQ family of algorithms emerges from the generalization bound. These
algorithms minimize the bound using gradient descent. The different variants correspond
to different tradeoffs between opposing quantities. In practice the tradeoff is controlled by
loss functions.
2 Problem Setting and the LVQ algorithm

The framework we are interested in is supervised learning for classification problems. In
this framework the task is to find a map from R^n into a finite set of labels Y. We focus
on classification functions of the following form: the classifiers are parameterized by a set
of points μ_1, ..., μ_k ∈ R^n which we refer to as prototypes. Each prototype is associated
with a label y ∈ Y. Given a new instance x ∈ R^n we predict that it has the same label as
the closest prototype, similar to the 1-nearest-neighbour rule (1-NN). We denote the label
predicted using a set of prototypes {μ_j}_{j=1}^k by μ(x). The goal of the learning process in
this model is to find a set of prototypes which will predict accurately the labels of unseen
instances.

The Learning Vector Quantization (LVQ) family of algorithms works in this model. The
algorithm gets as input a labelled sample S = {(x_l, y_l)}_{l=1}^m, where x_l ∈ R^n and y_l ∈ Y,
and uses it to find a good set of prototypes. All the variants of LVQ share the following
common scheme. The algorithm maintains a set of prototypes, each assigned a
predefined label, which is kept constant during the learning process. It cycles through the
training data S and on each iteration modifies the set of prototypes in accordance with one
instance (x_t, y_t). If the prototype μ_j has the same label as y_t it is attracted to x_t, but if the
label of μ_j is different it is repelled from it. Hence LVQ updates the closest prototypes to
x_t according to the rule:

    μ_j ← μ_j ± α_t (x_t − μ_j),    (1)

where the sign is positive if the labels of x_t and μ_j agree, and negative otherwise. The
parameter α_t is updated using a predefined scheme and controls the rate of convergence
of the algorithm. The variants of LVQ differ in which prototypes they choose to update in
each iteration and in the specific scheme used to modify α_t.

For instance, LVQ1 and OLVQ1 update only the closest prototype to x_t in each iteration. Another example is LVQ2.1, which modifies the two closest prototypes μ_i and μ_j
to x_t. It uses the same update rule (1) but applies it only if the following two conditions hold:

1. Exactly one of the prototypes has the same label as x_t, i.e. y_t.
2. The ratio of their distances from x_t falls in a window: 1/s ≤ ||x_t − μ_i|| / ||x_t − μ_j|| ≤ s,
where s is the window size.

More variants of LVQ can be found in [9].
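A minimal sketch of the LVQ1 variant described above: one pass over toy 2-D data, where the data, the prototype initialization and the learning rate are all illustrative choices.

```python
import numpy as np

def lvq1_epoch(protos, proto_labels, X, y, alpha):
    """One pass of LVQ1 (update rule (1)): move the single closest prototype
    toward the instance if their labels agree, away from it otherwise."""
    protos = protos.copy()
    for x_t, y_t in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - x_t, axis=1))
        sign = 1.0 if proto_labels[j] == y_t else -1.0
        protos[j] += sign * alpha * (x_t - protos[j])
    return protos

# Two 1-prototype classes on a toy 2-D problem.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-2, 0], 0.3, (50, 2)),
               rng.normal([+2, 0], 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos = np.array([[-0.5, 0.0], [0.5, 0.0]])
labels = np.array([0, 1])

protos = lvq1_epoch(protos, labels, X, y, alpha=0.1)
```

After one epoch each prototype has been attracted toward its own class cluster.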
3 Margins
Margin plays an important role in current research of machine learning. It measures the
confidence of a classifier with respect to its predictions. One approach is to define margin
as the distance between an instance and the decision boundary induced by the classification
rule as illustrated in figure 1(a). Support Vector Machines [4] are based on this definition of
margin, which we refer to as Sample-Margin. However, an alternative definition, Hypothesis Margin, exists. In this definition the margin is the distance that the classifier can travel
without changing the way it labels any of the sample points. Note that this definition requires a distance measure between classifiers. This type of margin is used in AdaBoost [5]
and is illustrated in figure 1(b).
It is possible to apply these two types of margin
in the context of LVQ. Recall that in our model a
classifier is defined by a set of labeled prototypes.
Such a classifier generates a decision boundary by
Voronoi tessellation. Although using the sample margin is more natural as a first choice, it turns out
that this type of margin is both hard to compute
and numerically unstable in our context, since
small relocations of the prototypes might lead to a
dramatic change in the sample margin. Hence we
focus on the hypothesis margin and thus have to
define a distance measure between two classifiers.
We choose to define it as the maximal distance
between prototype pairs, as illustrated in figure 2.
Formally, let μ = {μ_j}_{j=1}^k and μ̃ = {μ̃_j}_{j=1}^k define two classifiers; then

    Δ(μ, μ̃) = max_{i=1,...,k} ||μ_i − μ̃_i||₂.
Note that this definition is not invariant to permutations of the prototypes but it upper bounds
the invariant definition. Furthermore, the induced
margin is easy to compute (lemma 1) and lower
bounds the sample-margin (lemma 2).
Figure 1: Sample Margin (figure 1(a)) measures how much
an instance can travel before it hits
the decision boundary. On the
other hand, Hypothesis Margin (figure 1(b)) measures how much
the hypothesis can travel before it hits
an instance.
Lemma 1 Let μ = {μ_j}_{j=1}^k be a set of prototypes and x a sample point. Then the hypothesis margin of μ with respect to x is θ = ½(||μ_j − x|| − ||μ_i − x||), where μ_i (μ_j) is the
closest prototype to x with the same (alternative) label.

Lemma 2 Let S = {x_l}_{l=1}^m be a sample and μ = (μ_1, ..., μ_k) be a set of prototypes. Then

    sample-margin_S(μ) ≥ hypothesis-margin_S(μ)
Lemma 2 shows that if we find a set of prototypes with large hypothesis margin then it has
large sample margin as well.
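Lemma 1 makes the hypothesis margin directly computable from pairwise distances. A small sketch, with toy prototypes chosen only for illustration:

```python
import numpy as np

def hypothesis_margin(x, y, protos, proto_labels):
    """Hypothesis margin of a prototype set at a labeled point (lemma 1):
    half the gap between the distance to the nearest prototype of a
    different label and the distance to the nearest prototype of the
    same label.  Positive iff x is classified correctly."""
    d = np.linalg.norm(protos - x, axis=1)
    same = d[proto_labels == y].min()
    other = d[proto_labels != y].min()
    return 0.5 * (other - same)

protos = np.array([[0.0, 0.0], [4.0, 0.0]])
labels = np.array([0, 1])
m = hypothesis_margin(np.array([1.0, 0.0]), 0, protos, labels)
assert np.isclose(m, 1.0)   # (3 - 1) / 2
```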
4 Margin Based Generalization Bound

In this section we present a bound on the generalization error of LVQ-type classifiers.
When a classifier is applied to training data, it
is natural to use the training error as a prediction of
the generalization error (the probability of
misclassification of an unseen instance). In prototype-based hypotheses the classifier assigns a confidence level, i.e. a margin, to its predictions. Taking the margin into account by counting instances
with small margin as mistakes gives a better prediction and provides a bound on the generalization
error. This bound is given in terms of the number of prototypes, the sample size, the margin and
the margin-based empirical error. The following
theorem states this result formally.

Figure 2: The distance measure on
the LVQ hypothesis class. The distance between the white and black
prototype sets is the maximal distance between prototype pairs.
Theorem 1 In the following setting:

• Let S = {(x_i, y_i)}_{i=1}^m ⊆ R^n × Y be a training sample drawn by some underlying distribution D.
• Assume that ∀i, ||x_i|| ≤ R.
• Let μ be a set of prototypes with k prototypes from each class.
• Let 0 < γ < 1/2.
• Let ε̂_S^γ(μ) = (1/m) |{i : margin_μ(x_i) < γ}|.
• Let e_D(μ) be the generalization error: e_D(μ) = Pr_{(x,y)∼D}[μ(x) ≠ y].
• Let δ > 0.

Then with probability 1 − δ over the choices of the training data:

    e_D ≤ ε̂_S^γ(μ) + sqrt( (8/m) ( d log₂(32m/γ²) + log(4/δ) ) )    (2)

where d is the VC dimension:

    d = min( n + 1, 64R²/γ² ) · 2k|Y| log(ek²)    (3)
This theorem leads to a few observations. First, note that the bound is dimension free, in
the sense that the generalization error is bounded independently of the input dimension (n),
much like in SVM. Hence it makes sense to apply these algorithms with kernels.

Second, note that the VC dimension grows as the number of prototypes grows (3). This
suggests that using too many prototypes might result in poor performance; therefore there
is a non-trivial optimal number of prototypes. One should not be surprised by this result,
as it is a realization of the Structural Risk Minimization (SRM) [4] principle. Indeed, a
simple experiment supports this prediction. Hence not only are prototype-based methods
faster than Nearest Neighbour, they are more accurate as well. Due to space limitations,
proofs are provided in the full version of this paper only.
5 Maximizing Hypothesis Margin Through Loss Function
Once margin is properly defined it is natural to ask for an algorithm that maximizes it. We will
show that this is exactly what LVQ does. Before going any further we have to understand
why maximizing the margin is a good idea.
In theorem 1 we saw that the generalization error can be bounded by a function of the margin γ and
the empirical γ-error (ε̂). Therefore it is natural to seek prototypes that obtain a small γ-error for a large
γ. We are faced with two contradicting goals: small γ-error versus large γ. A natural way to solve this
problem is through the use of loss functions.
Loss functions are a common technique in machine
learning for finding the right balance between opposed quantities [12]. The idea is to associate a
margin-based loss (a "cost") with each hypothesis
with respect to a sample. More formally, let L be a
function such that:

1. For every γ: L(γ) ≥ 0.
2. For every γ < 0: L(γ) ≥ 1.
Figure 3: Different loss functions.
SVM, LVQ1 and OLVQ1 use the
"hinge" loss: (1 − γ)+. LVQ2.1
uses the broken linear loss: min(2, (1 −
2γ)+). AdaBoost uses the exponential loss (e^{−γ}).
We use L to compute the loss of an hypothesis with
respect to one instance. When a training set is available we sum the loss over the instances: L(μ) = Σ_l L(γ_l), where γ_l is the margin of the
l'th instance in the training data. The two axioms of loss functions guarantee that L(μ)
bounds the empirical error. It is common to add more restrictions on the loss function, such
as requiring that L is a non-increasing function. However, the only assumption we make
here is that the loss function L is differentiable.
Different algorithms use different loss functions [12]. AdaBoost uses the exponential loss
function L(γ) = e^{−βγ} while SVM uses the "hinge" loss L(γ) = (1 − βγ)+, where β > 0
is a scaling factor. See figure 3 for a demonstration of these loss functions.
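As a concrete illustration, the three losses can be written down directly (a minimal sketch of our own, not code from the paper; the scaling factor β defaults to 1):

```python
import math

def hinge_loss(margin, beta=1.0):
    # SVM / LVQ1 / OLVQ1: (1 - beta*margin)+
    return max(0.0, 1.0 - beta * margin)

def broken_linear_loss(margin, beta=1.0):
    # LVQ2.1: min(2, (1 - beta*margin)+); constant for |margin| > 1/beta
    return min(2.0, max(0.0, 1.0 - beta * margin))

def exponential_loss(margin, beta=1.0):
    # AdaBoost: exp(-beta*margin)
    return math.exp(-beta * margin)

# Both loss-function axioms hold: L(margin) >= 0 everywhere,
# and L(margin) >= 1 whenever margin < 0.
```

Note that the hinge and broken linear losses agree for positive margins; they differ only in how strongly instances with very negative margins are penalized.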
Once a loss function is chosen, the goal of the learning algorithm is to find a hypothesis
that minimizes it. Gradient descent is a natural, simple choice for this task. Recall that in
our case γ_l = (||x_l − μ_i|| − ||x_l − μ_j||)/2, where μ_j and μ_i are the closest prototypes to x_l
with the correct and incorrect labels respectively. Hence we have that²

    dγ_l/dμ_r = S_l(r) (x_l − μ_r) / ||x_l − μ_r||

where S_l(r) is a sign function such that

    S_l(r) = { 1   if μ_r is the closest prototype with the correct label,
               −1  if μ_r is the closest prototype with the incorrect label,
               0   otherwise. }
² Note that if x_l = μ_j the derivative is not defined. This extreme case does not affect our conclusions, hence for the sake of clarity we avoid the treatment of such extreme cases in this paper.
Algorithm 1 Online Loss Minimization.
Recall that L is a loss function, and α_t decays to zero as the algorithm proceeds.
1. Choose initial positions for the prototypes {μ_j}_{j=1}^k.
2. For t = 1 : T (or ∞)
   (a) Receive a labelled instance x_t, y_t
   (b) Compute the closest correct and incorrect prototypes to x_t, μ_j and μ_i, and the
       margin of x_t, i.e. γ_t = (||x_t − μ_i|| − ||x_t − μ_j||)/2
   (c) Apply the update rule for r = i, j:

       μ_r ← μ_r + α_t (dL(γ_t)/dγ) S_l(r) (x_t − μ_r) / ||x_t − μ_r||
Taking the derivative of L with respect to μ_r using the chain rule, we obtain

    dL/dμ_r = Σ_l (dL(γ_l)/dγ_l) S_l(r) (x_l − μ_r) / ||x_l − μ_r||        (4)

By setting the derivative to zero, we get that the optimal solution is achieved when
μ_r = Σ_l w_l^r x_l, where α_l^r = (dL(γ_l)/dγ_l) S_l(r) / ||x_l − μ_r|| and w_l^r = α_l^r / Σ_l α_l^r. This leads to two conclusions. First, the optimal solution is in the span of the training instances. Furthermore, from
its definition it is clear that w_l^r ≠ 0 only for the closest prototypes to x_l. In other words,
w_l^r ≠ 0 if and only if μ_r is either the closest prototype to x_l which has the same label
as x_l, or the closest prototype to x_l with an alternative label. Therefore the notion of support
vectors [4] applies here as well.
5.1
Minimizing The Loss
Using (4) we can find a local minimum of the loss function by a gradient descent algorithm.
The iteration at time t computes:

    μ_r(t+1) ← μ_r(t) + α_t Σ_l (dL(γ_l)/dγ) S_l(r) (x_l − μ_r(t)) / ||x_l − μ_r(t)||

where α_t approaches zero as t increases. This computation can be done iteratively where
in each step we update μ_r only with respect to one sample point x_l. This leads to the
following basic update step:

    μ_r ← μ_r + α_t (dL(γ_l)/dγ) S_l(r) (x_l − μ_r) / ||x_l − μ_r||

Note that S_l(r) differs from zero only for the closest correct and incorrect prototypes to x_l,
therefore a simple online algorithm is obtained and presented as algorithm 1.
5.2
LVQ1 and OLVQ1
The online loss minimization (algorithm 1) is a general algorithm applicable with different choices of loss functions. We will now apply it with a couple of loss functions and
see how LVQ emerges. First let us consider the "hinge" loss function. Recall that the
hinge loss is defined to be L(γ) = (1 − βγ)+. The derivative³ of this loss function is
³ The "hinge" loss has no derivative at the point γ = 1/β. Again, as in other cases in this paper,
this fact is neglected.
    dL(γ)/dγ = { 0    if γ > 1/β
                 −β   otherwise }

If β is chosen to be large enough, the update rule in
the online loss minimization is

    μ_r ← μ_r − α_t β S_l(r) (x_t − μ_r) / ||x_t − μ_r||
Figure 4: The "hinge" loss function (Σ_l (1 − γ_l)+) vs. the number
of iterations of OLVQ1. One can
clearly see that it decreases.

This is the same update rule as in the LVQ1 and
OLVQ1 algorithms [9], apart from the extra factor of
β/||x_t − μ_r||. However, this is a minor difference, since
β/||x_t − μ_r|| is just a normalizing factor. A demonstration of the effect of OLVQ1 on the "hinge" loss
function is provided in figure 4. We applied the algorithm to a simple toy problem consisting of three classes and a training set of 800 points.
We allowed the algorithm 10 prototypes. As expected, the loss decreases as the algorithm
proceeds. For this purpose we used the lvq pak package [13].
5.3
LVQ2.1
The idea behind the definition of margin, and especially hypothesis margin, was that a
minor change in the hypothesis cannot change the way it labels an instance which had a
large margin. Hence, when making small updates (i.e. small α_t), one should focus only on
the instances which have margins close to zero. The same idea appeared also in Freund's
boost-by-majority algorithm [14].
Kohonen adapted this idea to his LVQ2.1 algorithm [9]. The major difference between
the LVQ1 and LVQ2.1 algorithms is that LVQ2.1 updates μ_r only if the margin of x_t falls inside a certain window. The suitable loss function for LVQ2.1 is the broken linear loss
function (see figure 3). The broken linear loss is defined to be L(γ) = min(2, (1 − βγ)+).
Note that for |γ| > 1/β the loss is constant (i.e. the derivative is zero); this causes the
learning algorithm to overlook instances with too high or too low a margin. There exist several differences between LVQ2.1 and the online loss minimization presented here; however,
these differences are minor.
6
Conclusions and Further Research
In this paper we used the maximal margin principle together with loss functions to derive
algorithms for prototype positioning. We saw that LVQ can be considered a special case
of this general algorithm. We also provided generalization bounds for any prototype-based
classifier.
This formulation allows the derivation of new algorithms in several different ways. The first
is to use other loss functions, such as the exponential loss. A second way is to use other
classification rules, such as k-NN or the Parzen window. The proper way to adapt the algorithm
to the chosen rule is to define the margin accordingly and modify the minimization process
in the training stage. We have conducted some basic experiments using the k-NN rule.
The performance of the modified classifier did not exceed that of the 1-NN rule. We
suggest the following explanation of these results. Usually the k-NN rule performs better
than the 1-NN rule as it filters noise better, and in our setting the noise filtering is already
achieved by using a small number of prototypes.
Another extension is to use a different distance measure instead of the l2 norm. This may
result in a more complicated formula for the derivative of the loss function, but may improve
the results significantly in some cases. One specific interesting distance measure is the
Tangent Distance [2].
We also presented a generalization guarantee for prototype-based classifiers that is based
on the margin training error. The bound is dimension-free, and thus a kernel version of
the algorithm may yield good performance. This modification is straightforward, as the
algorithm can be expressed as a function of inner products only. We performed preliminary
experiments with a kernelized version of the algorithm. It seems that it improves the accuracy when it is used with a small number of prototypes. However, allowing the standard
version more prototypes achieves the same improvement.
A possible explanation of this phenomenon is the following. Recall that a classifier is
parametrised by a set of labelled prototypes that define a Voronoi tessellation. The decision
boundary of such a classifier is built of some of the lines of the Voronoi tessellation. In the
standard version these lines are straight; in the kernel version they are smooth
non-linear curves. As the number of prototypes grows, the decision boundary consists
of more, and shorter, lines. Now, if we remember the fact that any smooth curve can be
approximated by a piecewise-linear one, we come to the conclusion that any classifier that can
be generated by the kernel version can be approximated by one generated by the
standard version when it is applied with more prototypes.
Acknowledgement We thank Yoram Singer and Gal Chechik for their helpful remarks.
References
[1] E. Fix and J. Hodges. Discriminatory analysis. Nonparametric discrimination: Consistency
properties. Technical Report 4, USAF School of Aviation Medicine, 1951.
[2] P. Y. Simard, Y. A. Le Cun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems, volume 5, pages 50-58,
1993.
[3] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of
dimensionality. In Proceedings of the 30th ACM Symposium on the Theory of Computing,
pages 604-613, 1998.
[4] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[5] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[6] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation
for the effectiveness of voting methods. Annals of Statistics, 1998.
[7] L. Mason, P. Bartlett, and J. Baxter. Direct optimization of margins improves generalization
in combined classifiers. Advances in Neural Information Processing Systems, 11:288-294, 1999.
[8] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In
International Conference on Machine Learning, 2000.
[9] T. Kohonen. Self-Organizing Maps. Springer-Verlag, 1995.
[10] L. Buckingham and S. Geva. LVQ is a maximum margin algorithm. In Pacific Knowledge
Acquisition Workshop PKAW'2000, 2000.
[11] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer,
New York, 1996.
[12] Y. Singer and D. D. Lewis. Machine learning for information retrieval: Advanced techniques.
Presented at ACM SIGIR 2000, 2000.
[13] T. Kohonen, J. Hynninen, J. Kangas, J. Laaksonen, and K. Torkkola. LVQ PAK: the learning
vector quantization program package. http://www.cis.hut.fi/research/lvq pak, 1995.
[14] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation,
121(2):256-285, 1995.
"Name That Song!": A Probabilistic Approach
to Querying on Music and Text
Eric Brochu
Department of Computer Science
University of British Columbia
Vancouver, BC, Canada
[email protected]
Nando de Freitas
Department of Computer Science
University of British Columbia
Vancouver, BC, Canada
[email protected]
Abstract
We present a novel, flexible statistical approach for modelling music and
text jointly. The approach is based on multi-modal mixture models and
maximum a posteriori estimation using EM. The learned models can be
used to browse databases with documents containing music and text, to
search for music using queries consisting of music and text (lyrics and
other contextual information), to annotate text documents with music,
and to automatically recommend or identify similar songs.
1 Introduction
Variations on "name that song"-type games are popular on radio programs. DJs play a
short excerpt from a song and listeners phone in to guess the name of the song. Of course,
callers often get it right when DJs provide extra contextual clues (such as lyrics, or a piece
of trivia about the song or band). We are attempting to reproduce this ability in the context
of information retrieval (IR). In this paper, we present a method for querying with words
and/or music.
We focus on monophonic and polyphonic musical pieces of known structure (MIDI files,
full music notation, etc.). Retrieving these pieces in multimedia databases, such as the
Web, is a problem of growing interest [1, 2]. A significant step was taken by Downie [3],
who applied standard text IR techniques to retrieve music by, initially, converting music to
text format. Most research (including [3]) has, however, focused on plain music retrieval.
To the best of our knowledge, there has been no attempt to model text and music jointly.
We propose a joint probabilistic model for documents with music and/or text. This model
is simple, easily extensible, flexible and powerful. It allows users to query multimedia
databases using text and/or music as input. It is well-suited for browsing applications as
it organizes the documents into "soft" clusters. The document of highest probability in
each cluster serves as a music thumbnail for automated music summarisation. The model
allows one to query with an entire text document to automatically annotate the document
with musical pieces. It can be used to automatically recommend or identify similar songs.
Finally, it allows for the inclusion of different types of text, including website content,
lyrics, and meta-data such as hyper-text links. The interested reader may further wish to
consult [4], in which we discuss an application of our model to the problem of jointly
modelling music, as well as text and images.
2 Model specification
The training data consists of documents with text (lyrics or information about the song) and
musical scores in GUIDO notation [5]. (GUIDO is a powerful language for representing
musical scores in an HTML-like notation. MIDI files, plentiful on the World Wide Web,
can be easily converted to this format.) We model the data with a Bayesian multi-modal
mixture model. Words and scores are assumed to be conditionally independent given the
mixture component label.
We model musical scores with first-order Markov chains, in which each state corresponds
to a note, rest, or the start of a new voice. Notes' pitches are represented by the interval
change (in semitones) from the previous note, rather than by absolute pitch, so that a score
or query transposed to a different key will still have the same Markov chain. Rhythm is
similarly represented as a scalar relative to the previous value. Rest states are represented similarly,
save that pitch is not represented. See Figure 1 for an example.
Polyphonic scores are represented by chaining the beginning of a new voice to the end of
a previous one. In order to ensure that the first note in each voice appears in both the row
and column of the Markov transition matrix, a special "new voice" state with no interval or
rhythm serves as a dummy state marking the beginning of a new voice. The first note of a
voice has a distinguishing "first note" interval value, and the first note or rest has a duration
value of one.
[ *3/4 b&1*3/16 b1/16 c#2*11/16 b&1/16 a&1*3/16 b&1/16 f#1/2 ]
STATE     0         1     2          3   4   5   6   7   8
INTERVAL  newvoice  rest  firstnote  +1  +2  -2  -2  +3  -5
Figure 1: Sample melody (the opening notes to "The Yellow Submarine" by The Beatles)
in different notations. From top: GUIDO notation, standard musical notation (generated
automatically from GUIDO notation), and as a series of states in a first-order Markov
chain (also generated automatically from GUIDO notation).
The Markov chain representation of a piece of music is then mapped to a sparse transition
frequency table M_d, where M_d(i, j) denotes the number of times we observe the transition
from state j to state i in document d. We use M_d0 to denote the initial state of the Markov
chain. The associated text is modeled using a standard sparse term frequency vector T_d,
where T_d(w) denotes the number of times word w appears in document d. For notational
simplicity, we group the music and text variables as follows: x_d = {M_d, T_d}. In essence,
this Markovian approach is akin to a text bigram model, save that the states are transitions
between musical notes and rests rather than words.
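For illustration (our own sketch, not the authors' code), the interval-state sequence of Figure 1 can be mapped to a sparse transition frequency table by counting consecutive state pairs; the duration components of the states are omitted here for brevity:

```python
from collections import Counter

# Interval states for the opening of "The Yellow Submarine" (cf. Figure 1).
states = ["newvoice", "firstnote", "+1", "+2", "-2", "-2", "+3", "-5"]

def transition_table(state_sequence):
    # table[(j, i)] = number of observed transitions from state j to state i
    table = Counter()
    for j, i in zip(state_sequence[:-1], state_sequence[1:]):
        table[(j, i)] += 1
    return table

M = transition_table(states)
initial_state = states[0]  # the initial state of the Markov chain
```

Because pitches are stored as interval changes, a transposed copy of the melody produces exactly the same table.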
Our multi-modal mixture model is as follows:

    p(x_d | θ) = Σ_{c=1}^{C} p(c) [ Π_u p(M_0 = u | c)^{δ_u(M_d0)} ] [ Π_{u,v} p(u | v, c)^{M_d(u,v)} ] [ Π_w p(w | c)^{T_d(w)} ]        (1)

where θ encompasses all the model parameters and δ_u(M_d0) is 1 if the first
entry of the chain belongs to state u and is 0 otherwise. The three-dimensional
matrix p(u | v, c) denotes the estimated probability of transitioning from state v
to state u in cluster c, and the matrix p(M_0 = u | c) denotes the initial probability of being in state u,
given membership in cluster c. The vector p(c) denotes the probability of each cluster. The
matrix p(w | c) denotes the probability of word w in cluster c. The mixture model
is defined on the standard probability simplex: p(c) ≥ 0 for all c and Σ_{c=1}^{C} p(c) = 1.

We introduce the latent allocation variables z_d ∈ {1, ..., C} to indicate that a particular
sequence x_d belongs to a specific cluster c. These indicator variables {z_d ; d = 1, ..., D}
correspond to an i.i.d. sample from the distribution p(z_d = c) = p(c).
This simple model is easy to extend. For browsing applications, we might prefer a hierarchical structure with levels l:

    p(x_d | θ) = Σ_c p(c) Σ_l p(l | c) p(x_d | c, l)        (2)

This is still a multinomial model, but by applying appropriate parameter constraints we can
produce a tree-like browsing structure [6]. It is also easy to formulate the model in terms
of aspects and clusters as suggested in [7, 8].
2.1 Prior specification

We follow a hierarchical Bayesian strategy, where the unknown parameters θ and the allocation variables z are regarded as being drawn from appropriate prior distributions. We
acknowledge our uncertainty about the exact form of the prior by specifying it in terms
of some unknown parameters (hyperparameters). The allocation variables z_d are assumed
to be drawn from a multinomial distribution with parameter p(c). We place a conjugate
Dirichlet prior on the mixing coefficients p(c). Similarly, we place Dirichlet prior distributions on each of the transition, initial-state, and word probability matrices, and assume that these
priors are independent.
The posterior for the allocation variables will be required. It can be obtained easily using
Bayes' rule:

    p(z_d = c | x_d, θ) = [ p(c) Π_u p(M_0 = u | c)^{δ_u(M_d0)} Π_{u,v} p(u | v, c)^{M_d(u,v)} Π_w p(w | c)^{T_d(w)} ]
                          / [ Σ_{c'=1}^{C} p(c') Π_u p(M_0 = u | c')^{δ_u(M_d0)} Π_{u,v} p(u | v, c')^{M_d(u,v)} Π_w p(w | c')^{T_d(w)} ]        (3)
3 Computation
The parameters of the mixture model cannot be computed analytically unless one knows
the mixture indicator variables. We have to resort to numerical methods. One can implement a Gibbs sampler to compute the parameters and allocation variables. This is done by
sampling the parameters from their Dirichlet posteriors and the allocation variables from
their multinomial posterior. However, this algorithm is too computationally intensive for
the applications we have in mind. Instead we opt for expectation maximization (EM) algorithms to compute the maximum likelihood (ML) and maximum a posteriori (MAP) point
estimates of the mixture model.
3.1 Maximum likelihood estimation with the EM algorithm

After initialization, the EM algorithm for ML estimation iterates between the following
two steps:

1. E step: Compute the expectation of the complete log-likelihood with respect to the distribution of the allocation variables: Q_ML(θ; θ^(old)) = E[ log p(x, z | θ) | x, θ^(old) ], where
θ^(old) represents the value of the parameters at the previous time step.

2. M step: Maximize over the parameters: θ^(new) = arg max_θ Q_ML(θ; θ^(old)).

The Q_ML function expands to

    Q_ML = Σ_{d=1}^{D} Σ_{c=1}^{C} p(z_d = c | x_d, θ^(old)) [ log p(c) + Σ_u δ_u(M_d0) log p(M_0 = u | c)
           + Σ_{u,v} M_d(u,v) log p(u | v, c) + Σ_w T_d(w) log p(w | c) ]        (4)

In the E step, we have to compute p(z_d = c | x_d, θ^(old)) using equation (3). The corresponding M step
requires that we maximize Q_ML subject to the constraints that all probabilities for the parameters sum up to 1. This constrained maximization can be carried out by introducing
Lagrange multipliers. The resulting parameter estimates are:

    p(c) = (1/D) Σ_{d=1}^{D} p(z_d = c | x_d, θ^(old))        (5)

    p(M_0 = u | c) = Σ_d δ_u(M_d0) p(z_d = c | x_d, θ^(old)) / Σ_d p(z_d = c | x_d, θ^(old))        (6)

    p(u | v, c) = Σ_d M_d(u,v) p(z_d = c | x_d, θ^(old)) / Σ_d Σ_{u'} M_d(u',v) p(z_d = c | x_d, θ^(old))        (7)

    p(w | c) = Σ_d T_d(w) p(z_d = c | x_d, θ^(old)) / Σ_d Σ_{w'} T_d(w') p(z_d = c | x_d, θ^(old))        (8)
3.2 Maximum a posteriori estimation with the EM algorithm

The EM formulation for MAP estimation is straightforward. One simply has to augment
the objective function in the M step, Q_ML, by adding to it the log prior densities. That is,
the MAP objective function is

    Q_MAP = Q_ML + log p(θ)        (9)

The MAP parameter estimates take the same form as the ML estimates, with the Dirichlet
prior counts added to the numerators and denominators. Writing α_c, β_{uv}, and λ_w for the
Dirichlet hyperparameters of the priors on p(c), p(u | v, c), and p(w | c) respectively:

    p(c) = [ α_c − 1 + Σ_d p(z_d = c | x_d, θ^(old)) ] / [ Σ_{c'} (α_{c'} − 1) + D ]        (10)

    p(u | v, c) = [ β_{uv} − 1 + Σ_d M_d(u,v) p(z_d = c | x_d, θ^(old)) ]
                  / [ Σ_{u'} (β_{u'v} − 1) + Σ_d Σ_{u'} M_d(u',v) p(z_d = c | x_d, θ^(old)) ]        (11)

    p(w | c) = [ λ_w − 1 + Σ_d T_d(w) p(z_d = c | x_d, θ^(old)) ]
               / [ Σ_{w'} (λ_{w'} − 1) + Σ_d Σ_{w'} T_d(w') p(z_d = c | x_d, θ^(old)) ]        (12)

with an analogous expression for the initial-state probabilities p(M_0 = u | c).
CLUSTER  SONG                                        PROBABILITY
2        Moby - Porcelain                            1
2        Nine Inch Nails - Terrible Lie              1
2        other - "Addams Family" theme               1
..       ..                                          ..
4        J. S. Bach - Invention #1                   1
4        J. S. Bach - Invention #8                   1
4        J. S. Bach - Invention #15                  1
4        The Beatles - Yellow Submarine              0.9975
..       ..                                          ..
6        other - "Wheel of Fortune" theme            1
..       ..                                          ..
7        The Beatles - Taxman                        1
7        The Beatles - Got to Get You Into My Life   0.7247
7        The Cure - Saturday Night                   1
..       ..                                          ..
9        R.E.M - Man on the Moon                     1
9        Soft Cell - Tainted Love                    1
9        The Beatles - Got to Get You Into My Life   0.2753

Figure 2: Representative probabilistic cluster allocations using MAP estimation.
These expressions can also be derived by considering the posterior modes and by replacing
the cluster indicator variable with its posterior estimate p(z_d = c | x_d, θ). This observation opens up
room for various stochastic and deterministic ways of improving EM.
4 Experiments

To test the model with text and music, we clustered a database of musical scores with
associated text documents. The database is composed of various types of musical scores
(jazz, classical, television theme songs, and contemporary pop music) as well as associated
text files. The scores are represented in GUIDO notation. The associated text files are a
song's lyrics, where applicable, or textual commentary on the score for instrumental pieces,
all of which were extracted from the World Wide Web.
The experimental database contains 100 scores, each with a single associated text document. There is nothing in the model, however, that requires this one-to-one association
of text documents and scores; this was done solely for testing simplicity and efficiency.
In a deployment such as the world wide web, one would routinely expect one-to-many or
many-to-many mappings between the scores and text.
We carried out ML and MAP estimation with EM. The Dirichlet hyper-parameters
were set to fixed values. The MAP approach resulted in sparser (regularised), more coherent clusters. Figure 2 shows some representative cluster probability
assignments obtained with MAP estimation.
By and large, the MAP clusters are intuitive. The 15 pieces by J. S. Bach each have very
high probabilities of membership in the same cluster. A few curious anomalies
exist. The Beatles' song The Yellow Submarine is included in the same cluster as the Bach
pieces, though all the other Beatles songs in the database are assigned to other clusters.
4.1 Demonstrating the utility of multi-modal queries

A major intended use of the text-score model is for searching documents on a combination
of text and music.
Consider a hypothetical example, using our database: a music fan is struggling to recall a
dimly-remembered song with a strong repeating single-pitch, dotted-eighth-note/sixteenth-note bass line, and lyrics containing the words come on, come on, get down. A search on
the text portion alone turns up four documents which contain the lyrics. A search on the
notes alone returns seven documents which have matching transitions. But a combined
search returns only the correct document (figure 3).
QUERY                                      RETRIEVED SONGS
come on, come on, get down                 Erksine Hawkins - Tuxedo Junction
                                           Moby - Bodyrock
                                           Nine Inch Nails - Last
                                           Sherwood Schwartz - "The Brady Bunch" theme song
(musical score, not shown)                 The Beatles - Got to Get You Into My Life
                                           The Beatles - I'm Only Sleeping
                                           The Beatles - Yellow Submarine
                                           Moby - Bodyrock
                                           Moby - Porcelain
                                           Gary Portnoy - "Cheers" theme song
                                           Rodgers & Hart - Blue Moon
come on, come on, get down (with score)    Moby - Bodyrock
Figure 3: Examples of query matches, using only text, only musical notes, and both text
and music. The combined query is more precise.
4.2 Precision and recall

We evaluated our retrieval system with randomly generated queries. A query is
composed of a random series of 1 to 5 note transitions and 1 to 5 words. We then
determine the actual number of matches N in the database, where a match is defined as a
song d such that every queried transition and word has a frequency of 1 or greater in that
song's data. In order to avoid skewing the results unduly, we reject any query that has
either no matches or an excessive number of matches.
To perform a query, we simply sample probabilistically without replacement from the clusters. The probability of sampling from each cluster, p(z = c | x_query, θ), is computed using equation 3.
If a cluster contains no items or later becomes empty, it is assigned a sampling probability
of zero, and the probabilities of the remaining clusters are re-normalized.
In each iteration i, a cluster is selected, and the matching criteria are applied against each
piece of music that has been assigned to that cluster until a match is found. If no match is
found, an arbitrary piece is selected. The selected piece is returned as the rank-i result.
Once all the matches have been returned, we compute the standard precision-recall curve
[9], as shown in Figure 4.
Our querying method enjoys high precision until recall grows large, and experiences a relatively modest deterioration of precision thereafter. By choosing clusters before
Figure 4: Precision-recall curve showing average results, over 1000 randomly-generated
queries, combining music and text matching criteria.
matching, we overcome the polysemy problem. For example, river banks and money banks
appear in separate clusters. We also deal with synonymy, since automobiles and cars have
a high probability of belonging to the same clusters.
4.3 Association
The probabilistic nature of our approach allows us the flexibility to use our techniques and
database for tasks beyond traditional querying. One of the more promising avenues of
exploration is associating documents with each other probabilistically. This could be used,
for example, to find suitable songs for web sites or presentations (matching on text), or for
recommending songs similar to one a user enjoys (matching on scores).
Given an input document, we first find the most likely cluster for it by computing the cluster posterior (equation 3). Input documents containing text or music only can be clustered using only those components of the database. Input documents that combine text and music are clustered using all the data. We can then find the closest association by computing the distance from the input document to the other document vectors in the cluster, using a similarity metric such as Euclidean distance, or cosine measures after carrying out latent semantic indexing [10]. A few selected examples of associations we found are shown in Figure 5. The results are often reasonable, though unexpected behavior
occasionally occurs.
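The distance-based association step can be sketched as follows, assuming documents are already represented as numeric vectors within a cluster (function and variable names are ours):

```python
import math

def closest_association(query_vec, doc_vecs):
    """Index of the in-cluster document vector with least Euclidean
    distance to the input document's vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(doc_vecs)),
               key=lambda i: dist(query_vec, doc_vecs[i]))
```

Swapping `dist` for a cosine measure (optionally after projecting the vectors with latent semantic indexing) leaves the rest of the procedure unchanged.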
5 Conclusions
We feel that the probabilistic approach to querying on music and text presented here is
powerful, flexible, and novel, and suggests many interesting areas of future research. In
the future, we should be able to incorporate audio by extracting suitable features from the
INPUT
J. S. Bach - Toccata and Fugue in D Minor (score)
Nine Inch Nails - Closer (score & lyrics)
T. S. Eliot - The Waste Land (text poem)
CLOSEST MATCH
J. S. Bach - Invention #5
Nine Inch Nails - I Do Not Want This
The Cure - One Hundred Years
Figure 5: The results of associating songs in the database with other text and/or musical
input. The input is clustered probabilistically and then associated with the existing song
that has the least Euclidean distance in that cluster. The association of The Waste Land with
The Cure's thematically similar One Hundred Years is likely due to the high co-occurrence
of relatively uncommon words such as water, death, and year(s).
signals. This will permit querying by singing, humming, or via recorded music. There are
a number of ways of combining our method with images [6, 4], opening up room for novel
applications in multimedia [11].
Acknowledgments
We would like to thank Kobus Barnard, J. Stephen Downie, Holger Hoos and Peter Carbonetto for their advice and expertise in preparing this paper.
References
[1] D Huron and B Aarden. Cognitive issues and approaches in music information retrieval. In
S Downie and D Byrd, editors, Music Information Retrieval. 2002.
[2] J Pickens. A comparison of language modeling and probabilistic text information retrieval
approaches to monophonic music retrieval. In International Symposium on Music Information
Retrieval, 2000.
[3] J S Downie. Evaluating a Simple Approach to Music Information Retrieval: Conceiving
Melodic N-Grams as Text. PhD thesis, University of Western Ontario, 1999.
[4] E Brochu, N de Freitas, and K Bao. The sound of an album cover: Probabilistic multimedia and
IR. In C M Bishop and B J Frey, editors, Ninth International Workshop on Artificial Intelligence
and Statistics, Key West, Florida, 2003. To appear.
[5] H H Hoos, K A Hamel, K Renz, and J Kilian. Representing score-level music using the GUIDO
music-notation format. Computing in Musicology, 12, 2001.
[6] K Barnard and D Forsyth. Learning the semantics of words and pictures. In International
Conference on Computer Vision, volume 2, pages 408-415, 2001.
[7] T Hofmann. Probabilistic latent semantic analysis. In Uncertainty in Artificial Intelligence,
1999.
[8] D M Blei, A Y Ng, and M I Jordan. Latent Dirichlet allocation. In T G Dietterich, S Becker, and
Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge,
MA, 2002. MIT Press.
[9] R Baeza-Yates and B Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
[10] S Deerwester, S T Dumais, G W Furnas, T K Landauer, and R Harshman. Indexing by latent
semantic analysis. Journal of the American Society for Information Science, 41(6):391-407,
1990.
[11] P Duygulu, K Barnard, N de Freitas, and D Forsyth. Object recognition as machine translation:
Learning a lexicon for a fixed image vocabulary. In ECCV, 2002.
Nash Propagation for Loopy Graphical Games
Luis E. Ortiz
Michael Kearns
Department of Computer and Information Science
University of Pennsylvania
leortiz,mkearns @cis.upenn.edu
Abstract
We introduce NashProp, an iterative and local message-passing algorithm for computing Nash equilibria in multi-player games represented
by arbitrary undirected graphs. We provide a formal analysis and experimental evidence demonstrating that NashProp performs well on large
graphical games with many loops, often converging in just a dozen iterations on graphs with hundreds of nodes.
NashProp generalizes the tree algorithm of (Kearns et al. 2001), and
can be viewed as similar in spirit to belief propagation in probabilistic inference, and thus complements the recent work of (Vickrey and
Koller 2002), who explored a junction tree approach. Thus, as for probabilistic inference, we have at least two promising general-purpose approaches to equilibria computation in graphs.
1 Introduction
There has been considerable recent interest in representational and algorithmic issues
arising in multi-player game theory. One example is the recent work on graphical
games (Kearns et al. 2001) (abbreviated KLS in the sequel). Here a multi-player game
is represented by an undirected graph. The interpretation is that while the global equilibria
of the game depend on the actions of all players, individual payoffs for a player are determined solely by his own action and the actions of his immediate neighbors in the graph.
Like graphical models in probabilistic inference, graphical games may provide an exponentially more succinct representation than the standard ?tabular? or normal form of the game.
Also as for probabilistic inference, the problem of computing equilibria on arbitrary graphs
is intractable in general, and so it is of interest to identify both natural special topologies
permitting fast Nash computations, and good heuristics for general graphs.
KLS gave a dynamic programming algorithm for computing Nash equilibria in graphical
games in which the underlying graph is a tree, and drew analogies to the polytree algorithm
for probabilistic inference (Pearl 1988). A natural question following from this work is
whether there are generalizations of the basic tree algorithm analogous to those for probabilistic inference. In probabilistic inference, there are two main approaches to generalizing
the polytree algorithm. Roughly speaking, the first approach is to take an arbitrary graph
and ?turn it into a tree? via triangulation, and subsequently run the tree-based algorithm on
the resulting junction tree (Lauritzen and Spiegelhalter 1988). This approach has the merit
of being guaranteed to perform inference correctly, but the drawback of requiring the computation to be done on the junction tree. On highly loopy graphs, junction tree computations
may require exponential time. The other broad approach is to simply run (an appropriate
generalization of) the polytree algorithm on the original loopy graph. This method garnered considerable interest when it was discovered that it sometimes performed quite well
empirically, and was closely connected to the problem of decoding in Turbo Codes. Belief
propagation has the merit of each iteration being quite efficient, but the drawback of having no guarantee of convergence in general (though recent theoretical work has established
convergence for certain special cases (Weiss 2000)).
In recent work, (Vickrey and Koller 2002) proposed a number of heuristics for equilibria
computation in graphical games, including a constraint satisfaction generalization of KLS
that essentially provides a junction tree approach for arbitrary graphical games. They also
gave promising experimental results for this heuristic on certain loopy graphs that result in
manageable junction trees.
In this work, we introduce the NashProp algorithm, a different KLS generalization which
provides an approach analogous to loopy belief propagation for graphical games. Like
belief propagation, NashProp is a local message-passing algorithm that operates directly
on the original graph of the game, requiring no triangulation or moralization[1] operations.
NashProp is a two-phase algorithm. In the first phase, nodes exchange messages in the form
of two-dimensional tables. The table player sends to neighboring player in the graph
indicates the values ?believes? he can play given a setting of and the information he has
received in tables from his other neighbors, a kind of conditional Nash equilibrium. In the
second phase of NashProp, the players attempt to incrementally construct an equilibrium
obeying constraints imposed by the tables computed in the first phase.
Interestingly, we can provide rather strong theory for the first phase, proving that the tables
must always converge, and result in a reduced search space that can never eliminate an
equilibrium. When run using a discretization scheme introduced by KLS, the first phase of
NashProp will actually converge in time polynomial in the size of the game representation.
We also report on a number of controlled experiments with NashProp on loopy graphs,
including some that would be difficult via the junction tree approach due to the graph
topology. The results appear to be quite encouraging, thus growing the body of heuristics
available for computing equilibria in compactly represented games.
2 Preliminaries
The normal or tabular form of an n-player, two-action[2] game is defined by a set of n matrices M_i (1 ≤ i ≤ n), each with n indices. The entry M_i(x_1, ..., x_n) = M_i(x) specifies the payoff to player i when the joint action of the n players is x ∈ {0,1}^n. Thus, each M_i has 2^n entries. The actions 0 and 1 are the pure strategies of each player, while a mixed strategy for player i is given by the probability p_i ∈ [0,1] that the player will play 0. For any joint mixed strategy, given by a product distribution p = (p_1, ..., p_n), we define the expected payoff to player i as M_i(p) = E_{x~p}[M_i(x)], where x ~ p indicates that each x_j is 0 with probability p_j and 1 with probability 1 - p_j.

We use p[i : p_i'] to denote the vector which is the same as p except in the ith component, where the value has been changed to p_i'. A (Nash) equilibrium for the game is a mixed strategy p such that for any player i, and for any p_i' ∈ [0,1], M_i(p) ≥ M_i(p[i : p_i']). (We say that p_i is a best response to the rest of p.) In other words, no player can improve their expected payoff by deviating unilaterally from a Nash equilibrium. The classic theorem of (Nash 1951) states that for any game, there exists a Nash equilibrium in the space of joint mixed strategies. We will also use a straightforward definition for approximate Nash equilibria. An ε-Nash equilibrium is a mixed strategy p such that for any player i, and for any value p_i' ∈ [0,1], M_i(p) ≥ M_i(p[i : p_i']) - ε. (We say that p_i is an ε-best response to the rest of p.) Thus, no player can improve their expected payoff by more than ε by deviating unilaterally from an approximate Nash equilibrium.

[1] Unlike for inference, moralization may be required for games even on undirected graphs.
[2] For simplicity, we describe our results for two actions, but they generalize to multi-action games.
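These definitions can be made concrete with a small brute-force sketch. Here `M` is assumed to be a dictionary mapping joint pure-action tuples to one player's payoffs, and the deviation check is restricted to a finite grid of mixed strategies; this is illustrative code, not the paper's.

```python
from itertools import product

def expected_payoff(M, p):
    """E_{x~p}[M(x)] for a two-action game: M maps joint pure-action
    tuples to payoffs; p[j] is the probability that player j plays 0."""
    total = 0.0
    for x in product((0, 1), repeat=len(p)):
        prob = 1.0
        for x_j, p_j in zip(x, p):
            prob *= p_j if x_j == 0 else (1.0 - p_j)
        total += prob * M[x]
    return total

def is_eps_best_response(M, p, i, eps, grid):
    """True if p[i] is an eps-best response to the rest of p, checking
    unilateral deviations on a finite grid of mixed strategies."""
    base = expected_payoff(M, p)
    for q in grid:
        dev = list(p)
        dev[i] = q
        if expected_payoff(M, dev) > base + eps:
            return False
    return True
```

With eps = 0 and a sufficiently fine grid this approximates the exact best-response condition; the discretization of Section 4 makes the grid restriction rigorous.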
The following definitions are due to KLS. An n-player graphical game is a pair (G, M), where G is an undirected graph on n vertices and M is a set of n matrices M_i (1 ≤ i ≤ n), called the local game matrices. Each player is represented by a vertex in G, and the interpretation is that each player's payoff is determined solely by the actions in their local neighborhood in G. Thus the matrix M_i has an index for each of the k neighbors of i, and an index for i itself, and for x ∈ {0,1}^{k+1}, M_i(x) denotes the payoff to i when he and his k neighbors play x. The expected payoff under a mixed strategy p ∈ [0,1]^{k+1} is defined analogously. Note that in the two-action case, M_i has 2^{k+1} entries, which may be considerably smaller than 2^n.

Note that any game can be trivially represented as a graphical game by choosing G to be the complete graph, and letting the local game matrices be the original tabular form matrices. However, any time the local neighborhoods in G can be bounded by k ≪ n, the graphical representation is exponentially smaller than the normal form. We are interested in heuristics that can exploit this succinctness computationally.
3 NashProp: Table-Passing Phase
The table-passing phase of NashProp proceeds in a series of rounds. In each round, every
node will send a different binary-valued table to each of its neighbors in the graph. Thus,
if vertices u and v are neighbors, the table sent from v to u in round r shall be denoted T^r_{v→u}(w_u, w_v). Since the vertices are always clear from the table indices, we shall drop the subscript and simply write T^r(w_u, w_v). This table is indexed by the continuum of possible mixed strategies w_u, w_v ∈ [0,1] for players u and v, respectively. Intuitively, the binary value T^r(w_u, w_v) = 1 indicates player v's (possibly incorrect) "belief" that there exists a (global) Nash equilibrium in which u plays w_u and v plays w_v.

As these tables are indexed by continuous values, it is not clear how they can be finitely represented. However, as in KLS, we shall shortly introduce a finite discretization of these tables whose resolution is dependent only on local neighborhood size, yet is sufficient to compute global (approximate) equilibria. For the sake of generality we shall work with the exact tables in the ensuing formal analysis, which will immediately apply to the approximation algorithm as well.

For every edge (u, v), the table-passing phase initialization is T^0(w_u, w_v) = 1 for all w_u, w_v ∈ [0,1]. Let us denote the neighbors of v other than u (if any) by u_1, ..., u_k. For r ≥ 1, the table entry T^r(w_u, w_v) is assigned the value 1 if and only if there exists a vector of mixed strategies w = (w_{u_1}, ..., w_{u_k}) for u_1, ..., u_k such that

1. T^{r-1}(w_v, w_{u_i}) = 1 for all 1 ≤ i ≤ k; and
2. w_v is a best response to (w_u, w_{u_1}, ..., w_{u_k}).

We shall call such a w a witness to T^r(w_u, w_v) = 1. If v has no neighbors other than u, we define Condition 1 above to hold vacuously. If either condition is violated, we set T^r(w_u, w_v) = 0.
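One round of this update, on a discretized grid and with witnesses found by brute force, might look like the following simplified sketch (function and argument names are our own assumptions, not the paper's implementation):

```python
from itertools import product

def table_update(grid, incoming, br_check):
    """Compute the round-r table T(w_u, w_v) that v sends to u.

    grid     : discretized mixed-strategy values (multiples of tau)
    incoming : {t: T_prev} for each neighbor t of v other than u, where
               T_prev[(w_v, w_t)] is the round-(r-1) table sent from t to v
    br_check : br_check(w_v, w_u, witness_dict) -> True iff w_v is a
               best response to w_u plus the witness values
    """
    others = sorted(incoming)
    T_new = {}
    for w_u in grid:
        for w_v in grid:
            found = False
            # Brute-force search for a witness over the other neighbors.
            for witness in product(grid, repeat=len(others)):
                if all(incoming[t][(w_v, w_t)] == 1
                       for t, w_t in zip(others, witness)) \
                        and br_check(w_v, w_u, dict(zip(others, witness))):
                    found = True
                    break
            T_new[(w_u, w_v)] = 1 if found else 0
    return T_new
```

When `incoming` is empty (v's only neighbor is u), the witness loop runs once with an empty tuple, so Condition 1 holds vacuously, as in the definition above.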
Lemma 1 For all edges (u, v) and all r ≥ 1, the table sent from v to u can only contract or remain the same: {(w_u, w_v) : T^r(w_u, w_v) = 1} ⊆ {(w_u, w_v) : T^{r-1}(w_u, w_v) = 1}.

Proof: By induction on r. The base case r = 1 holds trivially due to the table initialization to contain all 1 entries. For the induction, assume for contradiction that for some r, there exists a pair of neighboring players u and v and a strategy pair (w_u, w_v) such that T^{r+1}(w_u, w_v) = 1 yet T^r(w_u, w_v) = 0. Since T^{r+1}(w_u, w_v) = 1, the definition of the table-passing phase implies that there exists a witness w = (w_{u_1}, ..., w_{u_k}) for the neighbors of v other than u meeting Conditions 1 and 2 above. By induction, the fact that T^r(w_v, w_{u_i}) = 1 in Condition 1 implies that T^{r-1}(w_v, w_{u_i}) = 1 for all 1 ≤ i ≤ k. Since T^r(w_u, w_v) = 0, it must be that w_v is not a best response to (w_u, w_{u_1}, ..., w_{u_k}). But then w cannot be a witness to T^{r+1}(w_u, w_v) = 1, a contradiction.
Since all tables begin filled with 1 entries, and Lemma 1 states entries can only change
from 1 to 0, the table-passing phase must converge:
Theorem 2 For all edges (u, v) and all (w_u, w_v), the limit T^∞(w_u, w_v) = lim_{r→∞} T^r(w_u, w_v) exists.
It is also immediately obvious that the limit tables T^∞(w_u, w_v) must all simultaneously balance each other, in the sense of obeying Conditions 1 and 2. That is, for all edges (u, v) and all (w_u, w_v), T^∞(w_u, w_v) = 1 implies the existence of a witness w = (w_{u_1}, ..., w_{u_k}) for the neighbors of v other than u such that T^∞(w_v, w_{u_i}) = 1 for all i, and w_v is a best response to (w_u, w_{u_1}, ..., w_{u_k}). If this were not true the tables would be altered by a single round of the table-passing phase.
We next establish that the table-passing phase will never eliminate any global Nash equilibria. Let p ∈ [0,1]^n be any mixed strategy for the entire population of players, and let us use p_i to denote the mixed strategy assigned to player i by p.

Lemma 3 Let p ∈ [0,1]^n be a Nash equilibrium. Then for all rounds r ≥ 0 of the table-passing phase, and every edge (u, v), T^r(p_u, p_v) = 1.

Proof: By induction on r. The base case r = 0 holds trivially by the table initialization. By induction, for every v and every neighbor u_i of v, T^{r-1}(p_v, p_{u_i}) = 1, satisfying Condition 1 for T^r(p_u, p_v) = 1. Condition 2 is immediately satisfied since p is a Nash equilibrium.

We can now establish a strong sense in which the set of balanced limit tables T^∞ characterizes the Nash equilibria of the global game. We say that p is consistent with the T^∞ if for every vertex v with neighbors u_1, ..., u_k we have T^∞(p_v, p_{u_i}) = 1 for all i, and (p_{u_1}, ..., p_{u_k}) is a witness to these values. In other words, every edge assignment made in p is "allowed" by the T^∞, and the neighborhood assignments made by p are witnesses.

Theorem 4 Let p ∈ [0,1]^n be any global mixed strategy. Then p is consistent with the balanced limit tables T^∞ if and only if it is a Nash equilibrium.

Proof: The forward direction is easy. If p is consistent with the T^∞, then by definition, for every v, p_v is a best response to the local neighborhood assignment (p_{u_1}, ..., p_{u_k}). Hence, p is a Nash equilibrium. For the other direction, if p is a Nash equilibrium, then for every v, p_v is certainly a best response to the strategies (p_{u_1}, ..., p_{u_k}) of its neighbors. So for consistency with the T^∞ it remains to show that for every player v and each of its neighbors u_i, T^∞(p_v, p_{u_i}) = 1, with the neighborhood assignments as witnesses. This has already been established in Lemma 3.
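Theorem 4 suggests a direct consistency test; the sketch below is our own, with assumed dictionary shapes for the tables and strategies:

```python
def consistent_with_tables(p, T, neighbors, br_check):
    """Check the consistency condition of Theorem 4 for joint strategy p.

    p         : {v: mixed strategy of v}
    T         : {(u, v): table}, where table[(p[v], p[u])] is the
                limit-table entry of the table sent from u to v
    neighbors : {v: list of neighbors of v}
    br_check  : br_check(v, w_v, nbr_assign) -> True iff w_v is a best
                response to the neighbor assignment
    """
    for v, nbrs in neighbors.items():
        # Every edge value pair used by p must be allowed by the limit tables.
        if any(T[(u, v)][(p[v], p[u])] != 1 for u in nbrs):
            return False
        # The neighborhood assignment must be a witness:
        # p[v] is a best response to it.
        if not br_check(v, p[v], {u: p[u] for u in nbrs}):
            return False
    return True
```

By Theorem 4, a strategy passing this test (with exact limit tables and an exact best-response check) is a Nash equilibrium.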
Theorem 4 is important because it establishes that the table-passing phase provides us with an alternative, and hopefully vastly reduced, search space for Nash equilibria. Rather than search for equilibria in the space of all mixed strategies, Theorem 4 asserts that we can limit our search to the space of joint strategies p that are consistent with the balanced limit tables T^∞, with no fear of missing equilibria. The demand for consistency with the limit
tables is a locally stronger demand than merely asking for a player to be playing a best
response to its neighborhood. Heuristics for searching this constrained space are the topic
of Section 5.
!
But first let us ask in what ways the search space defined by the
might constitute
a significant reduction. The most obvious case is that in which many of the tables contain
a large fraction of 0 entries, since every such entry eliminates all mixed strategies in which
the corresponding pair of vertices plays the corresponding pair of values. As we shall see
in the discussion of experimental results, such behavior seems to occur in many ? but
certainly not all ? interesting cases. We shall also see that even when such reduction
does not occur, the underlying graphical structure of the game may still yield significant
computational benefits in the search for a consistent mixed strategy.
4 Approximate Tables
Thus far we have assumed that the binary-valued tables have continuous indices w_u and w_v, and thus it is not clear how they can be finitely represented[3]. Here we briefly address this issue by asserting that it can be handled using the discretization scheme of KLS. More precisely, in that work it was established that if we restrict all table indices to only assume discrete values that are multiples of τ, and we relax Condition 2 in the definition of the table-passing phase to ask that w_v be only an ε-best response to (w_u, w_{u_1}, ..., w_{u_k}), then a choice of resolution τ that is exponentially small in d (but independent of the total number of players, as quantified in KLS) suffices to preserve ε-Nash equilibria in the tables. Here d is the maximum degree of any node in the graph. The total number of entries in each table will be (1/τ)^2 and thus exponential in d, but the payoff matrices for the players are already exponential in d, so our tables remain polynomial in the size of the graphical game representation. The crucial point established in KLS is that the required resolution is independent of the total number of players. It is easily verified that none of the key results establishing this fact (specifically, Lemmas 2, 3 and 4 of KLS) depend on the underlying graph being a tree, but hold for all graphical games.

Precise analogues of all the results of the preceding section can thus be established for the discretized instantiation of the table-passing phase (details omitted). In particular, the table-passing phase will now converge to finite balanced limit tables, and consistency with these tables characterizes ε-Nash equilibria. Furthermore, since every round prior to convergence must change at least one entry in one table, the table-passing phase must thus converge in a number of rounds bounded by the total number of table entries, (1/τ)^2 per edge, which is again polynomial in the size of the game representation. Each round of the table-passing phase takes at most a number of computational steps polynomial in the size of the game representation (though possibly considerably less), giving a total running time to the table-passing phase that scales polynomially with the size of the game.
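Constructing such a discretized grid, and counting the (1/τ)^2 entries per table, is straightforward; the helper below is illustrative and assumes τ evenly divides 1:

```python
def tau_grid(tau):
    """Mixed-strategy values restricted to multiples of tau in [0, 1]."""
    steps = int(round(1.0 / tau))
    return [i * tau for i in range(steps + 1)]

def table_entries(tau):
    """Number of entries in one two-player table at resolution tau."""
    return len(tau_grid(tau)) ** 2
```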
We note that the discretization of each player's space of mixed strategies allows one to formulate the problem of computing an approximate NE in a graphical game as a CSP (Vickrey and Koller 2002), and there is a precise connection between NashProp and constraint propagation algorithms for (generalized) arc consistency in constraint networks[4].
5 NashProp: Assignment-Passing Phase
We have already suggested that the tables T^∞(w_u, w_v) represent a solution space that may be considerably smaller than the set of all mixed strategies. We now describe heuristics for searching this space for a Nash equilibrium. For this it will be convenient to define, for each vertex v, its projection set P_v(w_v), which is indexed by the possible values w_v ∈ [0,1] (or by their allowed values in the aforementioned discretization scheme). The purpose of P_v(w_v) is simply to consolidate the information sent to v by all of its neighbors. Thus, if u_1, ..., u_k are all the neighbors of v, we define P_v(w_v) to be 1 if and only if there exists w = (w_{u_1}, ..., w_{u_k}) (again called a witness to P_v(w_v) = 1) such that T^∞(w_v, w_{u_i}) = 1 for all i, and w_v is a best response to w; otherwise we define P_v(w_v) to be 0.
If p is any global mixed strategy, it is easily verified that p is consistent with the T^∞ if and only if P_v(p_v) = 1 for all nodes v, with the assignment of the neighbors of v in p as a witness. The first step of the assignment-passing phase of NashProp is thus the computation of the projection P_v at each vertex v, which is again a local computation in the graph. Neighboring nodes u and v also exchange their projections P_u and P_v.

[3] We note that the KLS proof that the exact tables must admit a rectilinear representation holds generally, but we cannot bound their complexity here.
[4] We are grateful to Michael Littman for helping us establish this connection.
Let us begin by noting that the search space for a Nash equilibrium is immediately reduced
to the cross-product of the projection sets by Theorem 4, so if the table-passing phase
has resulted in many 0 values in the projections, even an exhaustive search across this
(discretized) cross-product space may sometimes quickly yield a solution. However, we
would obviously prefer a solution that exploits the local topology of the solution space
given by the graph. At a high level, such a local search algorithm is straightforward:
1. Initialization: Choose any node v and any value w_v such that P_v(w_v) = 1, with witness w = (w_{u_1}, ..., w_{u_k}). v assigns itself the value w_v, and assigns each of its neighbors u_i the value w_{u_i}.

2. Pick the next node v (in some fixed ordering) that has already been assigned some value w_v. If there is a partial assignment to the neighbors of v, attempt to extend it to a witness w = (w_{u_1}, ..., w_{u_k}) to P_v(w_v) = 1, and assign any previously unassigned neighbors their values in this witness. If all the neighbors of v have been assigned, make sure w_v is a best response to their assignment.
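At heart, the assignment-passing phase is a backtracking search over the reduced candidate sets; a generic sketch (our own simplification, which omits the explicit witness bookkeeping) is:

```python
def backtracking_search(nodes, candidates, consistent, assign=None):
    """Depth-first assignment with backtracking.

    candidates[v] : values w with P_v(w) = 1 (the projection set of v)
    consistent(v, w, assign) : local check that assigning w to v does not
        violate the table/best-response constraints given the partial assign
    Returns a complete assignment dict, or None if no assignment exists.
    """
    if assign is None:
        assign = {}
    if len(assign) == len(nodes):
        return dict(assign)
    v = next(n for n in nodes if n not in assign)
    for w in candidates[v]:
        assign[v] = w
        if consistent(v, w, assign):
            result = backtracking_search(nodes, candidates, consistent, assign)
            if result is not None:
                return result
        del assign[v]
    return None
```

In NashProp proper, `consistent` would enforce the limit-table entries and best-response conditions described above, so any complete assignment returned is (approximately) a Nash equilibrium.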
Thus, the first vertex chosen assigns both itself and all of its neighbors, but afterwards vertices assign only (some of) their neighbors, and receive their own values from a neighbor. It
is easily verified that if this process succeeds in assigning all vertices, the resulting mixed strategy is consistent with the T^∞ and thus a Nash equilibrium (or approximate equilibrium in the discretized case). The difficulty, of course, is that the inductive step of the assignment-passing phase may fail due to cycles in the graph: we may reach a node whose neighbor partial assignment cannot be extended, or whose assigned value w_v is not a best response to its complete neighborhood assignment. In this case, as with any structured local search phase, we have reached a failure point and must backtrack.
The overall NashProp algorithm thus consists of the (always converging) table-passing
phase followed by the backtracking local assignment-passing phase. NashProp directly
generalizes the algorithm of KLS, and as such, on certain special topologies such as trees
may provably yield efficient computation of equilibria. Here we have shown that NashProp
enjoys several natural and desirable properties even on arbitrary graphs. We now turn to
some experimental investigation of NashProp on graphs containing cycles.
6 Experimental Results
We have implemented the NashProp algorithm (with distinct table-passing and assignment-passing[5] phases) as described, and run a series of controlled experiments on loopy graphs
of varying size and topology. As discussed in Section 4, there is a relationship suggested
by the KLS analysis between the table resolution τ and the global approximation quality ε, but in practice this relationship may be pessimistic (Vickrey and Koller 2002). Our implementation thus takes both τ and ε as inputs, and attempts to find an ε-Nash equilibrium by running NashProp on tables of resolution τ.
We first draw attention to Figure 1, in which we provide a visual display of the evolution of
the tables computed by the NashProp table-passing phase for a small (3 by 3) grid game.
Note that for this game, the table-passing phase constrains the search space tremendously
? so much so that the projection sets entirely determine the unique equilibrium, and the
assignment-passing phase is superfluous. This is of course ideal behavior.
The main results of our controlled experiments are summarized in Figure 2. One of our
[5] We did not implement backtracking, but this caused an overall rate of failure of only 3% across all 3000 runs described here.
Figure 1: Visual display of the NashProp table-passing phase after rounds 1, 2, 3 and 8 (where
convergence occurs). Each row shows first the projection set, then the four outbound tables, for each
of the 9 players in a 3 by 3 grid. For the reward functions, each player has a distinct preference
for one of his two actions. For 15 of the 16 possible settings of his 4 neighbors, this preference is
the same, but for the remaining setting it is reversed. It is easily verified that every player's payoff
depends on all of his neighbors. (Settings used:
).
primary interests is how the number of rounds in each of the two phases, and therefore
the overall running time, scales with the size and complexity of the graph. More detail
is provided in the caption, but we created graphs varying in size from 5 to 100 nodes with
a number of different topologies: single cycles; single cycles to which a varying number
of chords were added, which generates considerably more cycles in the graph; grids; and
"ring of rings" (Vickrey and Koller 2002). We also experimented with local payoff matrices
in which each entry was chosen randomly from , and with "biased" rewards, in which,
for some fixed number of the settings of its neighbors, each node has a strong preference
for one of its actions, and in the remaining settings, a strong preference for the other. The
settings were chosen randomly subject to the constraint that no neighbor is marginalized
(thus no simplification of the graph is possible). These classes of graphs seem to generate
a nice variability in the relative speed of the table-passing and assignment-passing phases
of NashProp, which is why we chose them.
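For concreteness, a chordal-cycle topology of the kind described above can be generated along the following lines. This is a hypothetical sketch; the function name and the exact sampling scheme are ours, not from the paper:

```python
import random

def cycle_with_chords(n, chord_fraction, seed=0):
    """Sketch of one experimental topology: an n-cycle plus a fraction of
    random chords, which introduces many additional cycles in the graph."""
    rng = random.Random(seed)
    edges = {(i, (i + 1) % n) for i in range(n)}      # the base cycle
    target = n + int(chord_fraction * n)
    while len(edges) < target:
        u, v = rng.sample(range(n), 2)
        # reject duplicates of existing cycle edges or chords (either order)
        if (u, v) not in edges and (v, u) not in edges:
            edges.add((min(u, v), max(u, v)))         # a new chord
    return sorted(edges)
```

Every node keeps degree at least 2 from the base cycle, and each added chord creates further cycles in the graph.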
We now make a number of remarks regarding the NashProp experiments. First, and most
basically, these preliminary results indicate that the algorithm performs well across a range
of loopy topologies, including some (such as grids and cycles with many chords) that might
pose computational challenges for junction tree approaches as the number of players becomes large. Excluding the small fraction of trials in which the assignment-passing phase
failed to find a solution, even on grid and loopy chord graphs with 100 nodes, we find
convergence of both the table and assignment-passing phases in less than a dozen rounds.
We next note that there is considerable variation across topologies (and little within) in the
amount of work done by the table-passing phase, both in terms of the expected number of
rounds to convergence, and the fraction of 0 entries that have been computed at completion. For example, for cycles the amount of work in both senses is at its highest, while
for grids with random rewards it is lowest. For grids and chordal cycles, decreasing the
value of (and thus increasing the bias of the payoff matrices) generally causes more to
be accomplished by the table-passing phase. Intuitively, when rewards are entirely random
and unbiased, nodes with large degrees will tend to rarely or never compute 0s in their
[Figure 2 panels: "Table-Passing Phase" (left) and "Assignment-Passing Phase" (right), plotting number of rounds against number of players for the cycle, grid, chordal and ringofrings topologies.]
Figure 2: Plots showing the number of rounds taken by the NashProp table-passing (left) and
assignment-passing (right) phases in computing an equilibrium, for a variety of different graph
topologies. The x-axis shows the total number of vertices in the graph. Topologies and rewards
examined included cycles, grids and "ring of rings" (Vickrey and Koller 2002) with random rewards
(denoted cycle, grid and ringofrings in the legend); cycles with a fraction of random chords added
and with biased rewards, with separate bias settings for nodes of degree 2, 3 and 4 (see text),
denoted chordal(·); and grids with biased rewards, denoted grid(·). Each data point represents
averages over 50 trials for the given topology and number of vertices. In the table-passing plot,
each curve is also annotated with the average fraction of 1 values in the converged tables.
outbound tables: they have too many neighbors whose combined settings can act as
witnesses for a 1 in an outbound table.
However, as suggested by the theory, greater progress (and computation) in the table-passing phase pays dividends in the assignment-passing phase, since the search space may
have been dramatically reduced. For example, for chordal and grid graphs with biased
rewards, the ordering of plots by convergence time is essentially reversed from the table-passing to the assignment-passing phase. This suggests that, when it occurs, the additional
convergence time in the table-passing phase is worth the investment. However, we again
note that even for the least useful table-passing phase (for grids with random rewards), the
assignment-passing phase (which thus exploits the graph structure alone) still manages to
find an equilibrium rapidly.
References
M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the
Conference on Uncertainty in Artificial Intelligence, pages 253-260, 2001.
S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures and
their application to expert systems. J. Royal Stat. Soc. B, 50(2):157-224, 1988.
J. F. Nash. Non-cooperative games. Annals of Mathematics, 54:286-295, 1951.
J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
D. Vickrey and D. Koller. Multi-agent algorithms for solving graphical games. In Proceedings of the
National Conference on Artificial Intelligence (AAAI), 2002. To appear.
Yair Weiss. Correctness of local probability propagation in graphical models with loops. Neural
Computation, 12(1):1-41, 2000.
1,389 | 2,264 | Annealing and the Rate Distortion Problem
Albert E. Parker
Department of Mathematical Sciences
Montana State University
Bozeman, MT 59771
[email protected]
Tom?as? Gedeon
Department of Mathematical Sciences
Montana State University
[email protected]
Alexander G. Dimitrov
Center for Computational Biology
Montana State University
[email protected]
Abstract
In this paper we introduce methodology to determine the bifurcation structure of
optima for a class of similar cost functions from Rate Distortion Theory, Deterministic Annealing, Information Distortion and the Information Bottleneck Method.
We also introduce a numerical algorithm which uses the explicit form of the bifurcating branches to find optima at a bifurcation point.
1
Introduction
This paper analyzes a class of optimization problems
max_{q∈Δ} [G(q) + βD(q)]     (1)

where Δ is a linear constraint space, G and D are continuous, real valued functions of q,
smooth in the interior of Δ, and max_{q∈Δ} G(q) is known. Furthermore, G and D are invariant
under the group of symmetries S_N. The goal is to solve (1) for β = B ∈ [0, ∞).
This type of problem, which appears to be NP-hard, arises in Rate Distortion Theory [1, 2],
Deterministic Annealing [3], Information Distortion [4, 5, 6] and the Information Bottleneck
Method [7, 8].
The following basic algorithm, various forms of which have appeared in [3, 4, 6, 7, 8], can
be used to solve (1) for β = B.
Algorithm 1 Let q_0 be the maximizer of

max_{q∈Δ} G(q)     (2)

and let β_0 = 0. For k ≥ 0, let (q_k, β_k) be a solution to (1). Iterate the following steps until
β_ℓ = B for some ℓ.

1. Perform β-step: Let β_{k+1} = β_k + d_k where d_k > 0.
2. Take q^(0)_{k+1} = q_k + η, where η is a small perturbation, as an initial guess for the
solution q_{k+1} at β_{k+1}.
3. Optimization: solve

max_{q∈Δ} [G(q) + β_{k+1} D(q)]

to get the maximizer q_{k+1}, using initial guess q^(0)_{k+1}.
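The structure of this annealing loop can be sketched in code. The following is a minimal illustration on an unconstrained scalar toy problem with a generic gradient-ascent inner solver; the function names and the toy choices of G and D are ours, not the paper's:

```python
import numpy as np

def anneal(G_grad, D_grad, q0, B, d=0.1, eta=1e-3, inner_steps=2000, lr=1e-2):
    """Sketch of Algorithm 1: anneal beta from 0 to B, tracking a maximizer of
    F(q, beta) = G(q) + beta * D(q) by gradient ascent at each beta value."""
    q, beta = float(q0), 0.0
    while beta < B:
        beta = min(beta + d, B)                # step 1: the beta-step
        q = q + eta * np.random.randn()        # step 2: perturb the previous optimum
        for _ in range(inner_steps):           # step 3: inner optimization at fixed beta
            q = q + lr * (G_grad(q) + beta * D_grad(q))
    return q, beta

# Toy problem: G(q) = -(q - 1)^2 and D(q) = q, so argmax_q F(q, beta) = 1 + beta / 2.
q_star, beta = anneal(lambda q: -2.0 * (q - 1.0), lambda q: 1.0, q0=1.0, B=2.0)
```

The small perturbation in step 2 matters in practice: it lets the iterate leave a solution that is about to lose stability at the next β value.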
We introduce methodology to efficiently perform algorithm 1. Specifically, we implement
numerical continuation techniques [9, 10] to effect steps 1 and 2. We show how to detect
bifurcation and we rely on bifurcation theory with symmetries [11, 12, 13] to search for the
desired solution branch. This paper concludes with the improved algorithm 6 which solves
(1).
2
The cost functions
The four problems we analyze are from Rate Distortion Theory [1, 2], Deterministic Annealing [3], Information Distortion [4, 5, 6] and the Information Bottleneck Method [7, 8]. We
discuss the explicit form of the cost function (i.e. G(q) and D(q)) for each of these scenarios
in this section.
2.1
The distortion function D(q)
Rate distortion theory is the information theoretic approach to the study of optimal source
coding systems, including systems for quantization and data compression [2]. To define how
well a source, the random variable Y , is represented by a particular representation using N
symbols, which we call YN , one introduces a distortion function between Y and YN
D(q(y_N|y)) = D(Y, Y_N) = E_{y,y_N} d(y, y_N) = Σ_y Σ_{y_N} q(y_N|y) p(y) d(y, y_N)

where d(y, y_N) is the pointwise distortion function on the individual elements of y ∈ Y and
y_N ∈ Y_N. q(y_N|y) is a stochastic map or quantization of Y into a representation Y_N [1, 2].
The constraint space

Δ := {q(y_N|y) | Σ_{y_N} q(y_N|y) = 1 and q(y_N|y) ≥ 0 ∀y ∈ Y}     (3)

(compare with (1)) is the space of valid quantizers in ℝ^n. A representation Y_N is optimal if
there is a quantizer q*(y_N|y) such that D(q*) = min_{q∈Δ} D(q).
In engineering and imaging applications, the distortion function is usually chosen as the mean
squared error [1, 3, 14], D̃(Y, Y_N) = E_{y,y_N} d̃(y, y_N), where the pointwise distortion
function d̃(y, y_N) is the Euclidean squared distance. In this case, D̃(Y, Y_N) is a linear function
of the quantizer. In [4, 5, 6], the information distortion measure

D_I(Y, Y_N) := Σ_{y,y_N} p(y, y_N) KL(p(x|y_N) || p(x|y)) = I(X; Y) − I(X; Y_N)

is used, where the Kullback-Leibler divergence KL is the pointwise distortion function. Unlike the pointwise distortion functions usually investigated in information theory [1, 3], this
one is nonlinear, it explicitly considers a third space, X, of inputs, and it depends on the
quantizer q(y_N|y) through p(x|y_N) = Σ_y p(x|y) q(y_N|y) p(y) / p(y_N). The only term in D_I which
depends on the quantizer is I(X; Y_N), so we can replace D_I with the effective distortion

D_eff(q) := I(X; Y_N).

D_eff(q) is the function D(q) from (1) which has been considered in [4, 5, 6, 7, 8].
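Given a joint distribution p(x, y) and a quantizer q(y_N|y), the effective distortion D_eff(q) = I(X; Y_N) can be computed directly. A minimal sketch, with helper names of our own:

```python
import numpy as np

def mutual_information(pab):
    """I(A; B) in bits for a joint distribution pab[a, b]."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0                      # 0 log 0 terms contribute nothing
    return float((pab[nz] * np.log2(pab[nz] / (pa @ pb)[nz])).sum())

def effective_distortion(pxy, q):
    """D_eff(q) = I(X; Y_N), where pxy[i, l] = p(x_i, y_l) and
    q[l, m] = q(y_N = m | y_l) is a row-stochastic quantizer."""
    pxyn = pxy @ q                    # p(x, y_N) = sum_y p(x, y) q(y_N | y)
    return mutual_information(pxyn)
```

By the data-processing inequality, D_eff(q) ≤ I(X; Y) for every quantizer, with equality for a one-to-one assignment of classes.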
2.2
Rate Distortion
There are two related methods used to analyze communication systems at a distortion D(q) ≤
D_0 for some given D_0 ≥ 0 [1, 2, 3]. In rate distortion theory [1, 2], the problem of finding a
minimum rate at a given distortion is posed as a minimal information rate distortion problem:

R(D_0) = min_{q(y_N|y)∈Δ} I(Y; Y_N)  subject to  D(Y; Y_N) ≤ D_0.     (4)

This formulation is justified by the Rate Distortion Theorem [1]. A similar exposition using
the Deterministic Annealing approach [3] is a maximal entropy problem

max_{q(y_N|y)∈Δ} H(Y_N|Y)  subject to  D(Y; Y_N) ≤ D_0.     (5)

The justification for using (5) is Jaynes' maximum entropy principle [15]. These formulations
are related since I(Y; Y_N) = H(Y_N) − H(Y_N|Y).

Let I_0 > 0 be some given information rate. In [4, 6], the neural coding problem is formulated
as an entropy problem as in (5),

max_{q(y_N|y)∈Δ} H(Y_N|Y)  subject to  D_eff(q) ≥ I_0,     (6)

which uses the nonlinear effective information distortion measure D_eff.

Tishby et al. [7, 8] use the information distortion measure to pose an information rate
distortion problem as in (4),

min_{q(y_N|y)∈Δ} I(Y; Y_N)  subject to  D_eff(q) ≥ I_0.     (7)

Using the method of Lagrange multipliers, the rate distortion problems (4), (5), (6), (7) can be
reformulated as finding the maxima of

max_{q∈Δ} F(q, β) = max_{q∈Δ} [G(q) + βD(q)]     (8)

as in (1) where β = B. For the maximal entropy problem (6),

F(q, β) = H(Y_N|Y) + βD_eff(q)     (9)

and so G(q) from (1) is the conditional entropy H(Y_N|Y). For the minimal information rate
distortion problem (7),

F(q, β) = −I(Y; Y_N) + βD_eff(q)     (10)

and so G(q) = −I(Y; Y_N).

In [3, 4, 6], one explicitly considers B = ∞. For (9), this involves taking
lim_{β→∞} max_{q∈Δ} F(q, β) = max_{q∈Δ} D_eff(q), which in turn gives min_{q(y_N|y)∈Δ} D_I. In
Rate Distortion Theory and the Information Bottleneck Method, one could be interested in
solutions to (8) for finite B, which takes into account a tradeoff between I(Y; Y_N) and D_eff.
For lack of space, here we consider (9) and (10). Our analysis extends easily to similar
formulations which use a norm based distortion such as D̃(q), as in [3].
3
Improving the algorithm
We now turn our attention back to algorithm 1 and indicate how numerical continuation
[9, 10], and bifurcation theory with symmetries [11, 12, 13] can improve upon the choice of
the algorithm?s parameters.
We begin by rewriting (8), now incorporating the Lagrange multipliers for the equality constraint Σ_{y_N} q(y_N|y_k) = 1 from (3), which must be satisfied for each y_k ∈ Y. This gives the
Lagrangian

L(q, λ, β) = F(q, β) + Σ_{k=1}^{K} λ_k ( Σ_{y_N} q(y_N|y_k) − 1 ).     (11)
There are optimization schemes, such as the Fixed Point [4, 6] and projected Augmented
Lagrangian [6, 16] methods, which exploit the structure of (11) to find local solutions to (8)
for step 3 of algorithm 1.
3.1
Bifurcation structure of solutions
It has been observed that the solutions {qk } undergo bifurcations or phase transitions [3, 4,
6, 7, 8]. We wish to pose (8) as a dynamical system in order to study the bifurcation structure
of local solutions for β ∈ [0, B]. To this end, consider the equilibria of the flow

d/dt (q, λ) = ∇_{q,λ} L(q, λ, β)     (12)

for β ∈ [0, B]. These are points (q*, λ*) where ∇_{q,λ} L(q*, λ*, β) = 0 for some β. The
Jacobian of this system is the Hessian ∇²_{q,λ} L(q, λ, β). Equilibria (q*, λ*) of (12) for which
∇²_q F(q*, β) is negative definite are local solutions of (8) [16, 17].

Let |Y| = K, |Y_N| = N, and n = NK. Thus, q* ∈ Δ ⊂ ℝ^n and λ ∈ ℝ^K. The (n + K) ×
(n + K) Hessian of (11) is

∇²_{q,λ} L(q, λ, β) = [ ∇²_q F(q, β)   J^T ]
                      [ J               0   ]

where 0 is K × K [17]. ∇²_q F is the n × n block diagonal matrix of N K × K matrices
{B_i}_{i=1}^N [4]. J is the K × n Jacobian of the vector of K constraints from (11),

J = ( I_K  I_K  ...  I_K )     (13)

with N blocks.

The kernel of ∇²_{q,λ} L plays a pivotal role in determining the bifurcation structure of solutions
to (8). This is due to the fact that bifurcations of an equilibrium (q*, λ*) of (12) at β = β*
happen when ker ∇²_{q,λ} L(q*, λ*, β*) is nontrivial. Furthermore, the bifurcating branches are
tangent to certain linear subspaces of ker ∇²_{q,λ} L(q*, λ*, β*) [12].
3.2
Bifurcations with symmetry
Any solution q*(y_N|y) to (8) gives another equivalent solution simply by permuting the
labels of the classes of Y_N. For example, if P_1 and P_2 are two vectors such that for
a solution q*(y_N|y), q*(y_N = 1|y) = P_1 and q*(y_N = 2|y) = P_2, then the quantizer q̂
where q̂(y_N = 1|y) = P_2, q̂(y_N = 2|y) = P_1 and q̂(y_N|y) = q*(y_N|y) for all other
classes y_N is a maximizer of (8) with F(q̂, β) = F(q*, β). Let S_N be the algebraic group of
all permutations on N symbols [18, 19]. We say that F(q, β) is S_N-invariant if F(q, β) =
F(σ(q), β), where σ(q) denotes the action on q by permutation of the classes of Y_N as defined
by any σ ∈ S_N [17]. Now suppose that a solution q* is fixed by all the elements of S_M
for M ≤ N. Bifurcations at β = β* in this scenario are called symmetry breaking if the
bifurcating solutions are fixed (and only fixed) by subgroups of S_M.

To determine where a bifurcation of a solution (q*, λ*, β) occurs, one determines β for
which ∇²_q F(q*, β) has a nontrivial kernel. This approach is justified by the fact that
∇²_{q,λ} L(q*, λ*, β) is singular if and only if ∇²_q F(q*, β) is singular [17]. At a bifurcation
(q*, λ*, β*) where q* is fixed by S_M for M ≤ N, ∇²_q F(q*, β*) has M identical blocks.
The bifurcation is generic if

each of the identical blocks has a single 0-eigenvector v,
and the other blocks are nonsingular.     (14)

Thus, a generic bifurcation can be detected by looking for singularity of one of the K × K
identical blocks of ∇²_q F(q*, β). We call the classes of Y_N which correspond to identical
blocks unresolved classes. The classes of Y_N that are not unresolved are called resolved
classes.

The Equivariant Branching Lemma and the Smoller-Wasserman Theorem [12, 13] ascertain
the existence of explicit bifurcating solutions in subspaces of ker ∇²_{q,λ} L(q*, λ*, β*) which
are fixed by special subgroups of S_M [12, 13]. Of particular interest are the bifurcating
solutions in subspaces of ker ∇²_{q,λ} L(q*, λ*, β*) of dimension 1 guaranteed by the following
theorem.

Theorem 2 [17] Let (q*, λ*, β*) be a generic bifurcation of (12) which is fixed (and only
fixed) by S_M, for 1 < M ≤ N. Then, for small t, with β(t = 0) = β*, there exist M
bifurcating solutions,

( (q*, λ*) + t u_m , β(t) ),  where 1 ≤ m ≤ M,     (15)

[u_m]_ν = { (M − 1)v  if ν is the mth unresolved class of Y_N
            −v         if ν is some other unresolved class of Y_N     (16)
            0          otherwise }

and v is defined as in (14). Furthermore, each of these solutions is fixed by the symmetry
group S_{M−1}.

For a bifurcation from the uniform quantizer q_{1/N}, which is identically 1/N for all y and all y_N,
all of the classes of Y_N are unresolved. In this case,

u_m = (−v^T, ..., −v^T, (N − 1)v^T, −v^T, ..., −v^T, 0^T)^T,

where (N − 1)v^T is in the mth component of u_m.

Relevant to the computationalist is that instead of looking for a bifurcation by looking for
singularity of the n × n Hessian ∇²_q F(q*, β), one may look for singularity of one of the
K × K identical blocks, where K = n/N. After bifurcation of a local solution to (8) has
been detected at β = β*, knowledge of the bifurcating directions makes finding solutions of
interest for β > β* much easier (see section 3.4.1).
3.3
The subcritical bifurcation
In all problems under consideration, the solution for β = 0 is known. For (9), (10) this
solution is q_0 = q_{1/N}. For (4) and (5), q_0 is the mean of Y. Rose [3] was able to compute
explicitly the critical value β* where q_0 loses stability for the Euclidean pointwise distortion
function. We have the following related result.

Theorem 3 [20] Consider problems (9), (10). The solution q_0 = 1/N loses stability at
β = β*, where 1/β* is the second largest eigenvalue of a discrete Markov chain on vertices
y ∈ Y, with transition probabilities p(y_l → y_k) := Σ_i p(y_k|x_i) p(x_i|y_l).

Corollary 4 Bifurcation of the solution (q_{1/N}, β) in (9), (10) occurs at β* ≥ 1.
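Theorem 3 suggests a direct numerical check: build the Markov chain on Y and invert its second largest eigenvalue. A sketch under the assumption that the joint distribution is given as an array p(x_i, y_l); the helper name is ours:

```python
import numpy as np

def critical_beta(pxy):
    """Theorem 3 sketch: beta* = 1 / (second largest eigenvalue) of the Markov
    chain on Y with p(y_l -> y_k) = sum_i p(y_k | x_i) p(x_i | y_l),
    where pxy[i, l] = p(x_i, y_l)."""
    px = pxy.sum(axis=1)                      # p(x)
    py = pxy.sum(axis=0)                      # p(y)
    p_y_given_x = pxy / px[:, None]           # rows give p(y_k | x_i)
    p_x_given_y = pxy / py[None, :]           # columns give p(x_i | y_l)
    T = p_x_given_y.T @ p_y_given_x           # T[l, k] = p(y_l -> y_k)
    eigs = np.sort(np.real(np.linalg.eigvals(T)))[::-1]
    return 1.0 / eigs[1]
```

Since T is a stochastic matrix reversible with respect to p(y), its largest eigenvalue is 1 and the rest lie in [−1, 1], consistent with Corollary 4's β* ≥ 1.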
The discriminant of the bifurcating branch (15) is defined as [17]

ζ(q*, β*, u_m) = ⟨u_m, ∇³_{q,λ} L(q*, λ*, β*)[u_m, E L† E ∇³_{q,λ} L(q*, λ*, β*)[u_m, u_m]]⟩
                 − 3 ⟨u_m, ∇⁴_{q,λ} L(q*, λ*, β*)[u_m, u_m, u_m]⟩,

where ⟨·, ·⟩ is the Euclidean inner product, ∇ⁿ_{q,λ} L[·, ..., ·] is the multilinear form of the nth
derivative of L, E is the projection matrix onto range(∇²_{q,λ} L(q*, λ*, β*)), and L† is the
Moore-Penrose generalized inverse of the Hessian ∇²_{q,λ} L(q*, λ*, β*).
Theorem 5 [17] If ζ(q*, β*, u_m) < 0, then the bifurcating branch (15) is subcritical (i.e. a
first order phase transition). If ζ(q*, β*, u_m) > 0, then (15) is supercritical.

For a data set with a joint probability distribution modelled by a mixture of four Gaussians as
in [4], Theorem 5 predicts a subcritical bifurcation from (q_{1/N}, β* ≈ 1.038706) for (9) when
N ≥ 3. The existence of a subcritical bifurcation (a first order phase transition) is intriguing.
[Figure 1 plots ||q* − q_{1/N}|| against β for the subcritical bifurcating branch of F = H(Y_N|Y) + βI(X;Y_N) from the uniform solution q_{1/N} with N = 4; local maxima and stationary solutions are marked.]

Figure 1: A joint probability space on the random variables (X, Y) was constructed from a mixture of
four Gaussians as in [4]. Using this probability space, the equilibria of (12) for F as defined in (9) were
found using Newton's method. Depicted is the subcritical bifurcation from (q_{1/4}, β* ≈ 1.038706).
In analogy to the rate distortion curve [2, 1], we can define an H-I curve for the problem (6):

H(I_0) := max_{q∈Δ, D_eff ≥ I_0} H(Y_N|Y).

Let I_max = max_{q∈Δ} D_eff. Then for each I_0 ∈ (0, I_max) the value H(I_0) is well defined
and achieved at a point where D_eff = I_0. At such a point there is a Lagrange multiplier β
such that ∇_{q,λ} L = 0 (compare with (11) and (12)) and this β solves problem (9). Therefore,
for each I ∈ (0, I_max), there is a corresponding β which solves problem (9). The existence
of a subcritical bifurcation in β implies that this correspondence is not monotone for small
values of I.
3.4
Numerical Continuation
Numerical continuation methods efficiently analyze the solution behavior of dynamical systems such as (12) [9, 10]. Continuation methods can speed up the search for the solution q_{k+1}
at β_{k+1} in step 3 of algorithm 1 by improving upon the perturbed choice q^(0)_{k+1} = q_k + η. First,
the vector (∂_β q_k^T, ∂_β λ_k^T)^T, which is tangent to the curve ∇_{q,λ} L(q, λ, β) = 0 at (q_k, λ_k, β_k),
is computed by solving the matrix system

∇²_{q,λ} L(q_k, λ_k, β_k) (∂_β q_k ; ∂_β λ_k) = −∂_β ∇_{q,λ} L(q_k, λ_k, β_k).     (17)

Now the initial guess in step 2 becomes q^(0)_{k+1} = q_k + d_k ∂_β q_k, where

d_k = Δs / sqrt(||∂_β q_k||² + ||∂_β λ_k||² + 1)

for Δs > 0. Furthermore, β_{k+1} in step 1 is found by using this
same d_k. This choice of d_k assures that a fixed step along (∂_β q_k^T, ∂_β λ_k^T)^T is taken for each
k. We use three different continuation methods which implement variations of this scheme:
Parameter, Tangent and Pseudo Arc-Length [9, 17]. These methods can greatly decrease the
optimization iterations needed to find q_{k+1} from q^(0)_{k+1} in step 3. The cost savings can be
significant, especially when continuation is used in conjunction with a Newton type optimization scheme which explicitly uses the Hessian ∇²_q F(q_k, β_k). Otherwise, the CPU time
incurred from solving (17) may outweigh this benefit.
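A single tangent-predictor step of the kind in (17) can be sketched as follows. This is a generic Euler predictor for a parametrized root problem, not the authors' implementation, and it assumes the Hessian is nonsingular (i.e., we are away from a bifurcation):

```python
import numpy as np

def tangent_step(hessian, dbeta_grad, ds):
    """One tangent (Euler) predictor step for the curve grad L(q, lam, beta) = 0:
    solve H v = -d(grad L)/dbeta for the tangent v, then scale so that a fixed
    arclength ds is taken in (q, lam, beta)-space (cf. (17))."""
    v = np.linalg.solve(hessian, -np.asarray(dbeta_grad, dtype=float))
    d = ds / np.sqrt(v @ v + 1.0)             # the step d_k taken in beta
    return d * v, d                           # predictor increment and d_k
```

The returned increment is added to the current (q_k, λ_k) to form the initial guess for the corrector (the optimization in step 3), and d is the corresponding β-step.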
3.4.1
Branch switching
Suppose that a bifurcation of a solution q* of (8) has been detected at β*. To proceed, one
uses the explicit form of the bifurcating directions {u_m}_{m=1}^M from (16) to search for the
bifurcating solution of interest, say q_{k+1}, whose existence is guaranteed by Theorem 2. To
do this, let u = u_m for some m ≤ M, then implement a branch switch [9]:

q^(0)_{k+1} = q* + d_k · u.
4
A numerical algorithm
We conclude with a numerical algorithm to solve (1). The section numbers in parentheses
indicate the location in the text supporting each step.
Algorithm 6 Let q_0 be the maximizer of max_{q∈Δ} G, let β_0 = 1 (3.3), and let Δs > 0. For k ≥ 0,
let (q_k, β_k) be a solution to (1). Iterate the following steps until β_ℓ = B for some ℓ.

1. (3.4) Perform β-step: solve (17) for (∂_β q_k^T, ∂_β λ_k^T)^T and select β_{k+1} = β_k + d_k,
where d_k = Δs / sqrt(||∂_β q_k||² + ||∂_β λ_k||² + 1).
2. (3.4) The initial guess for q_{k+1} at β_{k+1} is q^(0)_{k+1} = q_k + d_k ∂_β q_k.
3. Optimization: solve

max_{q∈Δ} [G(q) + β_{k+1} D(q)]

to get the maximizer q_{k+1}, using initial guess q^(0)_{k+1}.
4. (3.2) Check for bifurcation: compare the sign of the determinant of an identical
block of each of

∇²_q [G(q_k) + β_k D(q_k)]  and  ∇²_q [G(q_{k+1}) + β_{k+1} D(q_{k+1})].

If a bifurcation is detected, then set q^(0)_{k+1} = q_k + d_k · u, where u is defined as in (16)
for some m ≤ M, and repeat step 3.
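Step 4's bifurcation test reduces to watching the determinant of one identical Hessian block change sign between consecutive β values. A minimal sketch, with the helper name ours:

```python
import numpy as np

def bifurcation_detected(block_prev, block_next):
    """Flag a bifurcation when the determinant of one of the K x K identical
    Hessian blocks changes sign between consecutive beta values, i.e. an
    eigenvalue of the block has crossed zero."""
    return float(np.linalg.det(block_prev)) * float(np.linalg.det(block_next)) < 0.0
```

Working with a single K × K block rather than the full n × n Hessian is exactly the computational saving noted at the end of section 3.2.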
Acknowledgments
Many thanks to Dr. John P. Miller at the Center for Computational Biology at Montana State
University-Bozeman. This research is partially supported by NSF grants DGE 9972824, MRI
9871191, and EIA-0129895; and NIH Grant R01 MH57179.
References
[1] Thomas Cover and Jay Thomas. Elements of Information Theory. Wiley Series in
Communication, New York, 1991.
[2] Robert M. Gray. Entropy and Information Theory. Springer-Verlag, 1990.
[3] Kenneth Rose. Deterministic annealing for clustering, compression, classification,
regression, and related optimization problems. Proc. IEEE, 86(11):2210-2239, 1998.
[4] Alexander G. Dimitrov and John P. Miller. Neural coding and decoding: communication channels and quantization. Network: Computation in Neural Systems, 12(4):441?
472, 2001.
[5] Alexander G. Dimitrov and John P. Miller. Analyzing sensory systems with the information distortion function. In Russ B. Altman, editor, Pacific Symposium on Biocomputing 2001. World Scientific Publishing Co., 2000.
[6] Tomas Gedeon, Albert E. Parker, and Alexander G. Dimitrov. Information distortion
and neural coding. Canadian Applied Mathematics Quarterly, 2002.
[7] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck
method. The 37th annual Allerton Conference on Communication, Control, and Computing, 1999.
[8] Noam Slonim and Naftali Tishby. Agglomerative information bottleneck. In S. A.
Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, pages 617-623. MIT Press, 2000.
[9] Wolf-Jurgen Beyn, Alan Champneys, Eusebius Doedel, Willy Govaerts, Yuri A.
Kuznetsov, and Bjorn Sandstede. Handbook of Dynamical Systems III. World Scientific, 1999. Chapter in book: Numerical Continuation and Computation of Normal
Forms.
[10] Eusebius Doedel, Herbert B. Keller, and Jean P. Kernevez. Numerical analysis and control of bifurcation problems in finite dimensions. International Journal of Bifurcation
and Chaos, 1:493-520, 1991.
[11] M. Golubitsky and D. G. Schaeffer. Singularities and Groups in Bifurcation Theory I.
Springer Verlag, New York, 1985.
[12] M. Golubitsky, I. Stewart, and D. G. Schaeffer. Singularities and Groups in Bifurcation
Theory II. Springer Verlag, New York, 1988.
[13] J. Smoller and A. G. Wasserman. Bifurcation and symmetry breaking. Inventiones
mathematicae, 100:63-95, 1990.
[14] Allen Gersho and Robert M. Gray. Vector Quantization and Signal Compression.
Kluwer Academic Publishers, 1992.
[15] E. T. Jaynes. On the rationale of maximum-entropy methods. Proc. IEEE, 70:939-952,
1982.
[16] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, 2000.
[17] Albert E. Parker III. Solving the rate distortion problem. PhD thesis, Montana State
University, 2003.
[18] H. Boerner. Representations of Groups. Elsevier, New York, 1970.
[19] D. S. Dummit and R. M. Foote. Abstract Algebra. Prentice Hall, NJ, 1991.
[20] Tomas Gedeon and Bryan Roosien. Phase transitions in information distortion. In
preparation, 2003.
Branching Law for Axons
Dmitri B. Chklovskii and Armen Stepanyants
Cold Spring Harbor Laboratory
1 Bungtown Rd.
Cold Spring Harbor, NY 11724
[email protected]  [email protected]
Abstract
What determines the caliber of axonal branches? We pursue the
hypothesis that the axonal caliber has evolved to minimize signal
propagation delays, while keeping arbor volume to a minimum. We
show that for a general cost function the optimal diameters of
mother (d0) and daughter (d1, d2) branches at a bifurcation obey
a branching law: d0^(ν+2) = d1^(ν+2) + d2^(ν+2). The derivation relies on the
fact that the conduction speed scales with the axon diameter to the
power ν (ν = 1 for myelinated axons and ν = 0.5 for non-myelinated axons). We test the branching law on the available
experimental data and find a reasonable agreement.
1 Introduction
Multi-cellular organisms have solved the problem of efficient transport of nutrients
and communication between their body parts by evolving spectacular networks:
trees, blood vessels, bronchi, and neuronal arbors. These networks consist of
segments bifurcating into thinner and thinner branches. Understanding of branching
in transport networks has been advanced through the application of the optimization
theory ([1], [2] and references therein) . Here we apply the optimization theory to
explain the caliber of branching segments in communication networks , i.e. neuronal
axons.
Axons in different organisms vary in caliber from 0.1 μm (terminal segments in
neocortex) to 1000 μm (squid giant axon) [3]. What factors could be responsible for
such variation in axon caliber? According to the experimental data [4] and cable
theory [5], thicker axons conduct action potential faster, leading to shorter reaction
times and, perhaps, quicker thinking. This increases evolutionary fitness or,
equivalently, reduces costs associated with conduction delays. So, why not make all
the axons infinitely thick? It is likely that thick axons are evolutionary costly
because they require large amount of cytoplasm and occupy valuable space [6], [7].
Then, is there an optimal axon caliber, which minimizes the combined cost of
conduction delays and volume?
In this paper we derive an expression for the optimal axon diameter, which
minimizes the combined cost of conduction delay and volume. Although the relative
cost of del ay and volume is unknown, we use this expression to derive a law
describing segment caliber of branching axons with no free parameters. We test this
law on the published anatomical data and find a satisfactory agreement.
2 Derivation of the branching law
Although our theory holds for a rather general class of cost functions (see Methods),
we start, for the sake of simplicity, by deriving the branching law in a special case
of a linear cost function. Detrimental contribution to fitness , It , of an axonal
segment of length , L , can be represented as the sum of two terms , one proportional
to the conduction delay along the segment, T, and the other - to the segment
volume, V:
It =aT+ jJV.
(1)
Here, a and f3 are unknown but constant coefficients which reflect the rel ative
contribution to the fitness cost of the signal propagation delay and the axonal
volume.
Figure 1: Fitness cost of a myelinated axonal segment as a function of its diameter.
The lines show the volume cost, the delay cost, and the total cost. Notice that the
total cost has a minimum. Diameter and cost values are normalized to their
respective optimal values.
We look for the axon caliber d that minimizes the cost function E. To do this, we
rewrite E as a function of d by noticing the following relations: i) volume,
V = (π/4) L d²; ii) time delay, T = L/s; iii) conduction velocity, s = k d for
myelinated axons (for non-myelinated axons, see Methods):

E = α L / (k d) + (π/4) β L d².    (2)

This cost function contains two terms, which have opposite dependence on d, and
has a minimum, Fig. 1. Next, by setting ∂E/∂d = 0 we find that the cost is
minimized by the following axonal caliber:

d = (2α / (π k β))^(1/3).    (3)
The utility of this result may seem rather limited because the relative cost of time
delays vs. volume, α/β, is unknown.
Figure 2: A simple axonal arbor with a single branch point and three axonal
segments. Segment diameters are d0, d1, and d2. Time delays along each segment are
t0, t1, and t2. The total time delay down the first branch is T1 = t0 + t1, and down
the second T2 = t0 + t2.
However, we can apply this result to axonal branching and arrive at a testable
prediction about the relationship among branch diameters without knowing the
relative cost. To do this we write the cost function for a bifurcation consisting of
three segments, Fig. 2:

E = α1 (t0 + t1) + α2 (t0 + t2) + β (V0 + V1 + V2),    (4)

where t0 is the conduction delay along segment 0, t1 the conduction delay along
segment 1, and t2 the conduction delay along segment 2. Coefficients α1 and α2
represent relative costs of conduction delays for synapses located on the two
daughter branches and may be different. We group the terms corresponding to the
same segment together:

E = [(α1 + α2) t0 + β V0] + [α1 t1 + β V1] + [α2 t2 + β V2].    (5)
We look for segment diameters which minimize this cost function. To do this we
make the dependence on the diameters explicit and differentiate with respect to them.
Because each term in Eq. (5) depends on the diameter of only one segment, the
variables separate and we arrive at expressions analogous to Eq. (3):

d0 = (2(α1 + α2) / (π k β))^(1/3),  d1 = (2α1 / (π k β))^(1/3),  d2 = (2α2 / (π k β))^(1/3).    (6)

It is easy to see that these diameters satisfy the following branching law:

d0³ = d1³ + d2³.    (7)

A similar expression can be derived for non-myelinated axons (see Methods). In this
case, the conduction velocity scales with the square root of the segment diameter,
resulting in a branching exponent of 2.5.
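As a sanity check on Eqs. (3), (6) and (7), the three-segment cost of Eq. (5) can be minimized numerically and the result compared against the closed form. The sketch below uses plain gradient descent for the myelinated case ν = 1; all parameter values (α1 = 1, α2 = 2, β = k = L = 1) are arbitrary illustrative choices, not data from the paper.

```python
import math

def segment_cost(d, alpha, L=1.0, k=1.0, beta=1.0):
    """Per-segment cost of Eq. (2): delay term + volume term (myelinated, nu = 1)."""
    return alpha * L / (k * d) + (math.pi / 4.0) * beta * L * d ** 2

def optimal_diameter(alpha, k=1.0, beta=1.0):
    """Closed-form minimizer of Eq. (2), i.e. Eq. (3)."""
    return (2.0 * alpha / (math.pi * k * beta)) ** (1.0 / 3.0)

def minimize_branch(alpha1, alpha2, lr=1e-3, steps=200_000):
    """Gradient descent on the separable bifurcation cost of Eq. (5).

    The mother segment (index 0) carries both delay weights, alpha1 + alpha2."""
    alphas = (alpha1 + alpha2, alpha1, alpha2)
    d = [1.0, 1.0, 1.0]
    for _ in range(steps):
        for i, a in enumerate(alphas):
            # dE/dd = -a*L/(k*d^2) + (pi/2)*beta*L*d, with L = k = beta = 1
            grad = -a / d[i] ** 2 + (math.pi / 2.0) * d[i]
            d[i] -= lr * grad
    return d

d0, d1, d2 = minimize_branch(1.0, 2.0)
```

At convergence the three diameters agree with Eq. (6), so the cubic law d0³ = d1³ + d2³ of Eq. (7) holds regardless of the unknown ratio α/β.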
We note that expressions analogous to Eq. (7) have been derived for blood vessels,
tree branching, and bronchi by balancing the metabolic cost of pumping viscous fluid
and volume cost [8], [9]. Application of viscous flow to dendrites has been
discussed in [10]. However, it is hard to see how dendrites could be conduits to
viscous fluid if their ends are sealed.
Rall [11] has derived a similar law for branching dendrites by postulating
impedance matching:

d0^(3/2) = d1^(3/2) + d2^(3/2).    (8)

However, the main purpose of Rall's law was to simplify calculations of dendritic
conduction rather than to explain the actual branch caliber measurements.
3 Comparison with experiment
We test our branching law, Eq. (7), by comparing it with the data obtained from
myelinated motor fibers of the cat [12], Fig. 3. Data points represent 63 branch
points for which all three axonal calibers were available. Eq. (7) predicts that the
data should fall on the line described by:

(d1/d0)^η + (d2/d0)^η = 1,    (9)

where exponent η = 3. Despite the large spread in the data it is consistent with our
predictions. In fact, the best-fit exponent, η = 2.57, is closer to our prediction than
to Rall's law, η = 1.5.
We also show the histogram of the exponents η obtained for each of 63 branch
points from the same data set, Fig. 4. The average exponent, η = 2.67, is much
closer to our predicted value for myelinated axons, η = 3, than to Rall's law,
η = 1.5.
Figure 3: Comparison of the experimental data (asterisks) [12] with theoretical
predictions. Each axonal bifurcation (with d1 ≠ d2) is represented in the plot twice.
The lines correspond to Eq. (9) with various values of the exponent: Rall's law,
η = 1.5; the best-fit exponent, η = 2.57; and our prediction for myelinated axons,
η = 3.
Analysis of the experimental data reveals a large spread in the values of the
exponent, η. This spread may arise from the biological variability in the axon
diameters, other factors influencing axon diameters, or measurement errors due to
the finite resolution of light microscopy. Although we cannot distinguish between
these causes, we performed a simulation showing that a reasonable measurement
error is sufficient to account for the spread.

First, based on the experimental data [12], we generate a set of diameters d0, d1,
and d2 at branch points, which satisfy Eq. (7). We do this by taking all diameter
pairs at branch points from the experimental data and calculating the value of the
third diameter according to Eq. (7). Next we simulate the experimental data by
adding Gaussian noise to all branch diameters, and calculate the probability
distribution for the exponent η resulting from this procedure. The line in Fig. 4
shows that the spread in the histogram of the branching exponent could be explained
by Gaussian measurement error with standard deviation of 0.4 μm. This value of
standard deviation is consistent with the 0.5 μm precision with which diameter
measurements are reported in [12].
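Because d1, d2 < d0, the left-hand side of Eq. (9) is strictly decreasing in η, so the per-bifurcation exponent can be recovered by bisection; the noise simulation described above then amounts to perturbing each diameter and re-solving. The sketch below is illustrative only: the example triple and the noise level are stand-ins, not the cat motor-fiber data of [12].

```python
import random

def branching_exponent(d0, d1, d2, lo=0.1, hi=50.0, tol=1e-10):
    """Solve (d1/d0)**eta + (d2/d0)**eta = 1 for eta by bisection (Eq. 9).

    Requires d1 < d0 and d2 < d0, so the left-hand side decreases in eta."""
    r1, r2 = d1 / d0, d2 / d0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r1 ** mid + r2 ** mid > 1.0:   # sum still too large: raise eta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def noisy_exponents(triples, sigma, n=1000, seed=0):
    """Add Gaussian measurement noise to each diameter of a branch point and
    re-estimate eta, keeping only still-valid bifurcations (d0 largest)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        d0, d1, d2 = rng.choice(triples)
        d0n, d1n, d2n = (d + rng.gauss(0.0, sigma) for d in (d0, d1, d2))
        if d0n > d1n > 0 and d0n > d2n > 0:
            out.append(branching_exponent(d0n, d1n, d2n))
    return out
```

For a symmetric bifurcation with d0³ = 2 d1³ the recovered exponent is exactly 3; adding noise spreads the estimates around that value, mirroring the histogram of Fig. 4.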
Figure 4: Experimentally observed spread in the branching exponent may arise from
measurement errors. The histogram shows the distribution of the exponent η,
Eq. (9), calculated for each axonal bifurcation [12]. The average exponent is
η = 2.67. The line shows the simulated distribution of the exponent obtained in the
presence of measurement errors.
4 Conclusion
Starting with the hypothesis that axonal arbors had been optimized in the course of
evolution for fast signal conduction while keeping arbor volume to a minimum we
derived a branching law that relates segment diameters at a branch point. The
derivation was done for the cost function of a general form , and relies only on the
known scaling of signal propagation velocity with the axonal caliber. This law is
consistent with the available experimental data on myelinated axons. The observed
spread in the branching exponent may be accounted for by the measurement error.
More experimental testing is clearly desirable.
We note that similar considerations could be applied to dendrites. There, similar to
non-myelinated axons, time delay or attenuation of passively propagating signals
scales as one over the square root of diameter. This leads to a branching law with
exponent of 5/2. However, the presence of reflections from branch points and
active conductances is likely to complicate the picture.
5 Methods
The detrimental contribution of an axonal arbor to the evolutionary fitness can be
quantified by the cost, E. We postulate that the cost function, E, is a
monotonically increasing function of the total axonal volume per neuron, V, and of
all signal propagation delays, Tj, from the soma to the j-th synapse, where
j = 1, 2, 3, ...:

E = E(V, T1, T2, T3, ...).    (10)

Below we show that this rather general cost function (along with the biophysical
properties of axons) is minimized when the axonal caliber satisfies the following
branching law:

d0^η = d1^η + d2^η,    (11)

with branching exponent η = 3 for myelinated and η = 2.5 for non-myelinated
axons.
Although we derive Eq. (11) for a single branch point, our theory can be trivially
extended to more complex arbor topologies. We rewrite the cost function, E, in
terms of the volume contributions, Vi, of the i-th axonal segment to the total volume
of the axonal arbor, V, and the signal propagation delays, ti, incurred along the
i-th axonal segment. The cost function reduces to:

E = E(V0 + V1 + V2, t0 + t1, t0 + t2).    (12)

Next, we express the volume and signal propagation delay of each segment as a
function of segment diameter. The volume of each cylindrical segment is given by:

Vi = (π/4) Li di²,    (13)

where Li and di are segment length and diameter, correspondingly. Signal
propagation delay, ti, is given by the ratio of segment length, Li, and signal speed,
si. Signal speed along an axonal segment, in turn, depends on its diameter as:

si = k di^ν,    (14)

where ν = 1 for myelinated [4] and ν = 0.5 for non-myelinated fibers [5]. As a
result the propagation delay along segment i is:

ti = Li / (k di^ν).    (15)
Substituting Eqs. (13), (15) into the cost function, Eq. (12), we find the dependence
of the cost function on the segment diameters:

E = E( (π/4)(L0 d0² + L1 d1² + L2 d2²),
       L0/(k d0^ν) + L1/(k d1^ν),
       L0/(k d0^ν) + L2/(k d2^ν) ).    (16)
To find the diameters of all segments which minimize the cost function E, we
calculate its partial derivatives with respect to all segment diameters and set them
to zero:

∂E/∂d0 = E'_V (π/2) L0 d0 − (E'_T1 + E'_T2) ν L0 / (k d0^(ν+1)) = 0,
∂E/∂d1 = E'_V (π/2) L1 d1 − E'_T1 ν L1 / (k d1^(ν+1)) = 0,    (17)
∂E/∂d2 = E'_V (π/2) L2 d2 − E'_T2 ν L2 / (k d2^(ν+1)) = 0.
By solving these equations we find the optimal segment diameters:

d0^(ν+2) = 2ν (E'_T1 + E'_T2) / (k π E'_V),
d1^(ν+2) = 2ν E'_T1 / (k π E'_V),    (18)
d2^(ν+2) = 2ν E'_T2 / (k π E'_V).

These equations imply that the cost function is minimized when the segment
diameters at a branch point satisfy the following expression (independent of the
particular form of the cost function, which enters Eq. (18) through the partial
derivatives E'_V, E'_T1, and E'_T2):

d0^η = d1^η + d2^η,    η = ν + 2.    (19)
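The claim that Eq. (19) holds for any monotone cost of the form (10), not just the linear one, can be spot-checked numerically: pick an arbitrary increasing E(V, T1, T2), minimize it over the three diameters, and verify the exponent ν + 2. The cost below (V² + 3 T1 + T2) and all constants are arbitrary illustrative choices; ν = 0.5 gives the non-myelinated exponent η = 2.5.

```python
import math

NU = 0.5                    # non-myelinated scaling: s = k * d**NU
K = 1.0
LENGTHS = (1.0, 1.0, 1.0)   # segment lengths L0, L1, L2 (arbitrary)

def cost(d):
    """An arbitrary monotone cost E(V, T1, T2) = V**2 + 3*T1 + T2, in the
    form of Eq. (16)."""
    V = (math.pi / 4.0) * sum(L * di ** 2 for L, di in zip(LENGTHS, d))
    t = [L / (K * di ** NU) for L, di in zip(LENGTHS, d)]
    return V ** 2 + 3.0 * (t[0] + t[1]) + (t[0] + t[2])

def gradient_descent(f, x, lr=2e-3, steps=40_000, h=1e-6):
    """Minimize f by coordinate-wise gradient descent with numeric gradients."""
    x = list(x)
    for _ in range(steps):
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            x[i] -= lr * (f(xp) - f(xm)) / (2.0 * h)
    return x

d0, d1, d2 = gradient_descent(cost, [1.0, 1.0, 1.0])
ETA = NU + 2.0
```

The minimizer couples the segments through V², yet the optimal diameters still satisfy d0^2.5 = d1^2.5 + d2^2.5, as Eq. (19) predicts.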
References
[1] Brown, J. H., West, G. B., and Santa Fe Institute (Santa Fe, N.M.). (2000) Scaling in
biology. Oxford; New York: Oxford University Press.
[2] Weibel, E. R. (2000) Symmorphosis: on form and function in shaping life. Cambridge,
Mass.; London: Harvard University Press.
[3] Purves, D. (1997) Neuroscience. Sunderland, Mass.: Sinauer Associates.
[4] Rushton, W. A. H. (1951) A theory of the effects of fibre size in medullated nerve. J
Physiol 115, 101-122.
[5] Hodgkin, A. L. (1954) A note on conduction velocity. J Physiol 125, 221-224.
[6] Cajal, S. R. y. (1999) Texture of the Nervous System of Man and the Vertebrates, Volume
1. New York: Springer.
[7] Chklovskii, D. B., Schikorski, T., and Stevens, C. F. (2002) Wiring optimization in
cortical circuits. Neuron 34, 341-347.
[8] Murray, C. D. (1926) The physiological principle of minimum work. I. The vascular
system and the cost of blood volume. PNAS 12, 207-214.
[9] Murray, C. D. (1927) A relationship between circumference and weight in trees and its
bearing on branching angles. J Gen Physiol 10, 725-729.
[10] Cherniak, C., Changizi, M., and Kang, D. W. (1999) Large-scale optimization of neuron
arbors. Phys Rev E 59, 6001-6009.
[11] Rall, W. (1959) Branching dendritic trees and motoneuron membrane resistivity. Exp
Neurol 1, 491-527.
[12] Adal, M. N., and Barker, D. (1965) Intramuscular branching of fusimotor fibres. J
Physiol 177, 288-299.
FloatBoost Learning for Classification
Stan Z. Li
Microsoft Research Asia
Beijing, China
ZhenQiu Zhang
Institute of Automation
CAS, Beijing, China
Heung-Yeung Shum
Microsoft Research Asia
Beijing, China
HongJiang Zhang
Microsoft Research Asia
Beijing, China
Abstract
AdaBoost [3] minimizes an upper error bound which is an exponential
function of the margin on the training set [14]. However, the ultimate
goal in applications of pattern classification is always minimum error
rate. On the other hand, AdaBoost needs an effective procedure for
learning weak classifiers, which by itself is difficult especially for high
dimensional data. In this paper, we present a novel procedure, called
FloatBoost, for learning a better boosted classifier. FloatBoost uses a
backtrack mechanism after each iteration of AdaBoost to remove weak
classifiers which cause higher error rates. The resulting float-boosted
classifier consists of fewer weak classifiers yet achieves lower error rates
than AdaBoost in both training and test. We also propose a statistical
model for learning weak classifiers, based on a stagewise approximation
of the posterior using an overcomplete set of scalar features. Experimental comparisons of FloatBoost and AdaBoost are provided through a
difficult classification problem, face detection, where the goal is to learn
from training examples a highly nonlinear classifier to differentiate between face and nonface patterns in a high dimensional space. The results
clearly demonstrate the promises made by FloatBoost over AdaBoost.
1 Introduction
Nonlinear classification of high dimensional data is a challenging problem. While designing such a classifier is difficult, AdaBoost learning methods, introduced by Freund and
Schapire [3], provide an effective stagewise approach: they learn a sequence of more easily
learnable "weak classifiers", and boost them into a single strong classifier by a linear combination of them. It is shown that AdaBoost learning minimizes an upper error bound
which is an exponential function of the margin on the training set [14].
Boosting learning originated from the PAC (probably approximately correct) learning theory [17, 6]. Given that weak classifiers can perform slightly better than random guessing
http://research.microsoft.com/~szli
The work presented in this paper was carried out at Microsoft Research Asia.
on every distribution over the training set, AdaBoost can provably achieve arbitrarily good
bounds on its training and generalization errors [3, 15]. It is shown that such simple weak
classifiers, when boosted, can capture complex decision boundaries [1].
Relationships of AdaBoost [3, 15] to functional optimization and statistical estimation are
established recently. A number of gradient boosting algorithms are proposed [4, 8, 21]. A
significant advance is made by Friedman et al. [5] who show that the AdaBoost algorithms
minimize an exponential loss function which is closely related to Bernoulli likelihood.
In this paper, we address the following problems associated with AdaBoost:
1. AdaBoost minimizes an exponential (or some other) function of the margin over the training set. This is for convenience of theoretical and numerical
analysis. However, the ultimate goal in applications is always minimum error
rate. A strong classifier learned by AdaBoost may not necessarily be best in this
criterion. This problem has been noted, e.g. by [2], but no solutions have been
found in the literature.
2. An effective and tractable algorithm for learning weak classifiers is needed. Learning the optimal weak classifier, such as the log posterior ratio given in [15, 5],
requires estimation of densities in the input data space. When the dimensionality
is high, this is a difficult problem by itself.
We propose a method, called FloatBoost (Section 3), to overcome the first problem. FloatBoost incorporates into AdaBoost the idea of Floating Search originally proposed in [11]
for feature selection. A backtrack mechanism therein allows deletion of those weak classifiers that are non-effective or unfavorable in terms of the error rate. This leads to a strong
classifier consisting of fewer weak classifiers. Because deletion in backtracking is performed
according to the error rate, an improvement in classification error is also obtained. To solve
the second problem above, we provide a statistical model (Section 4) for learning weak
classifiers and effective feature selection in high dimensional feature space. A base set of
weak classifiers, defined as the log posterior ratio, is derived based on an overcomplete
set of scalar features. Experimental results are presented in Section 5 using a difficult
classification problem, face detection. Comparisons are made between FloatBoost and AdaBoost in terms of the error rate and complexity of the boosted classifier. Results clearly show
that FloatBoost yields a strong classifier consisting of fewer weak classifiers yet achieves
lower error rates.
2 AdaBoost Learning
In this section, we give a brief description of the AdaBoost algorithm, in the notation of RealBoost [15, 5], as opposed to the original discrete AdaBoost [3].

For two-class problems, a set of N labelled training examples is given as
(x1, y1), ..., (xN, yN), where yi ∈ {+1, −1} is the class label associated with example xi.
A stronger classifier is a linear combination of M weak classifiers:

H_M(x) = Σ_{m=1}^{M} h_m(x).    (1)

In this real version of AdaBoost, the weak classifiers can take a real value,
h_m(x) ∈ R, and have absorbed the coefficients needed in the discrete version
(there, h_m(x) = α_m b_m(x) with b_m(x) ∈ {−1, +1}). The class label for x is
obtained as sign[H_M(x)], while the magnitude |H_M(x)| indicates the confidence.
Every training example is associated with a weight. During the
learning process, the weights are updated dynamically in such a way that more emphasis is
placed on hard examples which are erroneously classified previously. It is important for the
original AdaBoost. However, recent studies [4, 8, 21] show that the artificial operation of
explicit re-weighting is unnecessary and can be incorporated into a functional optimization
procedure of boosting.
0. (Input)
   (1) Training examples {(x1, y1), ..., (xN, yN)}, where N = a + b; of which a
       examples have yi = +1 and b examples have yi = −1;
   (2) The maximum number M_max of weak classifiers to be combined.
1. (Initialization)
   w_i^(0) = 1/(2a) for those examples with yi = +1 or
   w_i^(0) = 1/(2b) for those examples with yi = −1.
   M = 0.
2. (Forward Inclusion)
   while M < M_max:
   (1) M ← M + 1;
   (2) Choose h_M according to Eq. 4;
   (3) Update w_i^(M) ← exp[−yi H_M(xi)], and normalize to Σ_i w_i^(M) = 1.
3. (Output) H(x) = sign[Σ_{m=1}^{M} h_m(x)].
Figure 1: RealBoost Algorithm.
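As an illustration of the procedure in Fig. 1, here is a minimal RealBoost sketch on 1-D toy data. The weak learner follows the spirit of Eq. (4): within each bin of a scalar feature it outputs half the log-ratio of the weighted class masses. The toy data, bin count, and smoothing constant are illustrative assumptions, not choices from the paper.

```python
import math
import random

def fit_weak(xs, ys, w, n_bins=8, eps=1e-3):
    """Weak classifier in the spirit of Eq. 4: per-bin half log-odds under
    the current example weights w."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    pos = [eps] * n_bins          # eps smooths empty bins
    neg = [eps] * n_bins
    for x, y, wi in zip(xs, ys, w):
        b = min(n_bins - 1, int((x - lo) / width))
        (pos if y > 0 else neg)[b] += wi
    table = [0.5 * math.log(p / q) for p, q in zip(pos, neg)]
    def h(x):
        b = min(n_bins - 1, max(0, int((x - lo) / width)))
        return table[b]
    return h

def realboost(xs, ys, n_rounds=10):
    """Forward inclusion of Fig. 1: re-weight by exp(-y * H(x)) each round."""
    scores = [0.0] * len(xs)
    weak = []
    for _ in range(n_rounds):
        w = [math.exp(-y * s) for y, s in zip(ys, scores)]
        z = sum(w)
        w = [wi / z for wi in w]
        h = fit_weak(xs, ys, w)
        weak.append(h)
        scores = [s + h(x) for s, x in zip(scores, xs)]
    return lambda x: sum(h(x) for h in weak)

random.seed(0)
xs = ([random.gauss(-1.0, 0.7) for _ in range(200)]
      + [random.gauss(+1.0, 0.7) for _ in range(200)])
ys = [-1] * 200 + [+1] * 200
strong = realboost(xs, ys)
errors = sum((strong(x) > 0) != (y > 0) for x, y in zip(xs, ys))
```

Note that the uniform initial weighting implied by scores = 0 differs slightly from the class-balanced 1/(2a), 1/(2b) initialization in Fig. 1; either choice fits the same framework.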
An error occurs when H(x) ≠ y, or y H_M(x) < 0. The "margin" of an example
(x, y) achieved by H_M(x) on the training set examples is defined as y H_M(x).
This can be considered as a measure of the confidence of H_M's prediction. The
upper bound on classification error achieved by H_M can be derived as the
following exponential loss function [14]:

J(H_M) = Σ_i exp(−yi H_M(xi)).    (2)

AdaBoost constructs h_M(x) by stage-wise minimization of Eq. (2). Given the
current H_{M−1}(x) = Σ_{m=1}^{M−1} h_m(x), the best h_M for the new strong
classifier H_M(x) = H_{M−1}(x) + h_M(x) is the one which leads to the minimum
cost:

h_M = arg min_h J(H_{M−1}(x) + h(x)).    (3)

It is shown in [15, 5] that the minimizer is

h_M(x) = (1/2) log [ P(y = +1 | x, w^(M−1)) / P(y = −1 | x, w^(M−1)) ],    (4)

where w^(M−1) are the weights given at time M − 1. Using
P(y | x, w) = p(x | y, w) P(y) / p(x), and letting

L_M(x) = (1/2) log [ p(x | y = +1, w^(M−1)) / p(x | y = −1, w^(M−1)) ],    (5)

T = (1/2) log [ P(y = −1) / P(y = +1) ],    (6)

we arrive at

h_M(x) = L_M(x) − T.    (7)

The half log-likelihood ratio L_M(x) is learned from the training examples of the
two classes, and the threshold T is determined by the log ratio of the prior
probabilities. T can be adjusted
to balance between detection rate and false alarm (ROC curve). The algorithm is shown
in Fig. 1. (Note: the re-weight formula in this description is equivalent to the multiplicative
rule in the original form of AdaBoost [3, 15].) In Section 4, we will present a model for
approximating the densities p(x | y, w) required in Eq. (5).
3 FloatBoost Learning
FloatBoost backtracks after the newest weak classifier h_M is added and deletes unfavorable
weak classifiers h_m from the ensemble (1), following the idea of Floating Search [11].
Floating Search [11] was originally aimed to deal with the non-monotonicity of straight sequential feature selection, non-monotonicity meaning that adding an additional feature may lead
to a drop in performance. When a new feature is added, backtracks are performed to delete
those features that cause performance drops. Limitations of sequential feature selection
are thus amended, the improvement gained at the cost of increased computation due to the
extended search.
0. (Input)
   (1) Training examples {(x1, y1), ..., (xN, yN)}, where N = a + b; of which a
       examples have yi = +1 and b examples have yi = −1;
   (2) The maximum number M_max of weak classifiers;
   (3) The error rate function ε(·), and the acceptance threshold ε*.
1. (Initialization)
   (1) w_i^(0) = 1/(2a) for those examples with yi = +1 or
       w_i^(0) = 1/(2b) for those examples with yi = −1;
   (2) ε_m^min = max-value (for m = 1, ..., M_max), M = 0, H_0 = {}.
2. (Forward Inclusion)
   (1) M ← M + 1;
   (2) Choose h_M according to Eq. 4;
   (3) Update w_i^(M) ← exp[−yi H_M(xi)], and normalize to Σ_i w_i^(M) = 1;
   (4) H_M = H_{M−1} ∪ {h_M}; If ε_M^min > ε(H_M), then ε_M^min = ε(H_M).
3. (Conditional Exclusion)
   (1) h' = arg min_{h ∈ H_M} ε(H_M − h);
   (2) If ε(H_M − h') < ε_{M−1}^min, then
       (a) H_{M−1} = H_M − h'; ε_{M−1}^min = ε(H_M − h');
       (b) M = M − 1;
       (c) goto 3.(1);
   (3) else
       (a) if M = M_max or ε(H_M) ≤ ε*, then goto 4;
       (b) goto 2.(1);
4. (Output) H(x) = sign[Σ_{m=1}^{M} h_m(x)].
Figure 2: FloatBoost Algorithm.
The FloatBoost procedure is shown in Fig. 2. Let H_M = {h_1, ..., h_M} be the so-far-best set of M weak classifiers; eps(H_M) be the error rate achieved by H_M (or a weighted sum of missing rate and false alarm rate, which is usually the criterion in one-class detection problems); and eps_min^M be the minimum error rate achieved so far with an ensemble of M weak classifiers.

In Step 2 (forward inclusion), given the classifiers already selected, the best weak classifier is added one at a time, which is the same as in AdaBoost. In Step 3 (conditional exclusion), FloatBoost removes the least significant weak classifier from H_M, subject to the condition that the removal leads to a lower error rate eps_min^{M-1}. These removals are repeated until no more can be done. The procedure terminates when the risk on the training set is below eps* or the maximum number M_max is reached.

Incorporating the conditional exclusion, FloatBoost renders both effective feature selection and classifier learning. It usually needs fewer weak classifiers than AdaBoost to achieve the same error rate.
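The forward-inclusion/conditional-exclusion loop described above can be sketched as follows. This is a simplified illustration of Fig. 2, assuming an abstract `error_of` evaluator for an ensemble; the per-size bookkeeping mirrors eps_min^M, but the details (weight updates, tie-breaking, termination guards) are our simplifications, not the paper's exact procedure.

```python
def floatboost(candidates, error_of, max_m, target_error):
    """Sketch of FloatBoost's forward inclusion / conditional exclusion loop.
    `candidates` is a list of weak classifiers; `error_of(ensemble)` returns
    the training error of a list of weak classifiers."""
    ensemble = []
    best = {}  # best[m]: lowest error seen so far for an ensemble of size m
    while True:
        # Step 2 (forward inclusion): add the best remaining candidate.
        remaining = [c for c in candidates if c not in ensemble]
        if not remaining:
            return ensemble
        h_new = min(remaining, key=lambda c: error_of(ensemble + [c]))
        ensemble.append(h_new)
        m = len(ensemble)
        best[m] = min(best.get(m, float("inf")), error_of(ensemble))
        # Step 3 (conditional exclusion): remove the least significant member
        # while doing so improves on the best error seen at the smaller size.
        while len(ensemble) > 1:
            h_least = min(ensemble,
                          key=lambda c: error_of([d for d in ensemble if d is not c]))
            reduced = [d for d in ensemble if d is not h_least]
            if error_of(reduced) < best.get(len(reduced), float("inf")):
                ensemble = reduced
                best[len(ensemble)] = error_of(ensemble)
            else:
                break
        if error_of(ensemble) <= target_error or len(ensemble) >= max_m:
            return ensemble
```

The conditional-exclusion test against the best error already achieved at the smaller ensemble size is what prevents the backtracking from cycling indefinitely.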
4 Learning Weak Classifiers
This section presents a method for computing the log-likelihood in Eq. (5) required in learning optimal weak classifiers. Since deriving a weak classifier in a high-dimensional space is a non-trivial task, here we provide a statistical model for stagewise learning of weak classifiers based on some scalar features. A scalar feature z of x is computed by a transform from the d-dimensional data space to the real line, z = phi(x). A feature can be the coefficient of, say, a wavelet transform in signal and image processing. If projection pursuit is used as the transform, z is simply one coordinate of x. A dictionary of candidate scalar features can be created, Phi = {phi_j(x)}. In the following, we use z_m to denote the feature selected in the m-th stage, while z^(j) is the feature computed from x using the j-th transform.
Assuming that Phi is an over-complete basis, a set of candidate weak classifiers for the optimal weak classifier (7) can be designed in the following way. First, at stage M, where M - 1 features z_1, ..., z_{M-1} have been selected and the weight is given as w^{(M-1)}, we can approximate p(x | y, w^{(M-1)}) by using the distributions of the M features:

    p(x | y, w^{(M-1)}) ≈ p(z_1, ..., z_{M-1}, z^(j) | y, w^{(M-1)})    (8)
                        = p(z_1 | y, w^{(M-1)}) p(z_2 | z_1, y, w^{(M-1)}) ··· p(z^(j) | z_1, ..., z_{M-1}, y, w^{(M-1)}).    (9)

Because Phi is an over-complete basis set, the approximation is good enough for large M and when the M features are chosen appropriately.
Substituting (9) into the half log-likelihood ratio (5) gives

    L_M(x) ≈ (1/2) log [ p(z_1, ..., z_{M-1}, z^(j) | y = +1, w^{(M-1)}) / p(z_1, ..., z_{M-1}, z^(j) | y = -1, w^{(M-1)}) ]    (10)
           = L_{M-1}(x) + (1/2) log [ p(z^(j) | z_1, ..., z_{M-1}, y = +1, w^{(M-1)}) / p(z^(j) | z_1, ..., z_{M-1}, y = -1, w^{(M-1)}) ].    (11)

On the right-hand side of the above equation, all conditional densities are fixed except the last one. Learning the best weak classifier at stage M is to choose the best feature z^(j) such that J is minimized according to Eq. (3).
The conditional probability densities p(z^(j) | z_1, ..., z_{M-1}, y = +1, w^{(M-1)}) for the positive class and the negative class can be estimated using the histograms computed from the weighted voting of the training examples using the weights w^{(M-1)}. Let

    ΔL^(j)(x) = (1/2) log [ p(z^(j) | z_1, ..., z_{M-1}, y = +1, w^{(M-1)}) / p(z^(j) | z_1, ..., z_{M-1}, y = -1, w^{(M-1)}) ].    (12)
Note that p(z^(j) | z_1, ..., z_{M-1}, y, w^{(M-1)}) is actually p(z^(j) | y, w^{(M-1)}), because w^{(M-1)} contains the information about the entire history of z_1, ..., z_{M-1} and accounts for the dependencies on them. Therefore, we have

    L_M^(j)(x) = L_{M-1}(x) + ΔL^(j)(x)

and h_M^(j)(x) = L_M^(j)(x) - T. We can derive the set of candidate weak classifiers as

    H^(M) = { h_M^(j)(x) : j = 1, ..., J }.    (13)
Recall that the best h_M among all candidates in H^(M) for the new strong classifier H_M(x) = H_{M-1}(x) + h_M(x) is given by Eq. (3), for which the optimal weak classifier has been derived as (7). According to the theory of gradient-based boosting [4, 8, 21], we can choose the optimal weak classifier by finding the h_M^(j) that best fits the gradient

    -∂J(H(x)) / ∂H(x) |_{H = H_{M-1}} = y exp(-y H_{M-1}(x)).    (14)

In our stagewise approximation formulation, this can be done by first finding the h_M^(j) that best fits the gradient in direction and then scaling it so that the two have the same (re-weighted) norm. An alternative selection scheme is simply to choose h_M^(j) so that the error rate (or some risk), computed from the two histograms for y = +1 and y = -1, is minimized.
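The histogram-based half log-likelihood ratio of Eq. (12) can be sketched for a single scalar feature as follows. The uniform binning scheme and the smoothing constant `eps` are our choices for illustration, not details specified in the paper.

```python
import math

def llr_weak_classifier(feature_vals, labels, weights, n_bins=8, eps=1e-6):
    """Build a weak classifier as half the log-likelihood ratio of a scalar
    feature, estimated from sample-weighted histograms of the two classes."""
    lo, hi = min(feature_vals), max(feature_vals)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width if all values equal
    pos = [eps] * n_bins  # weighted histogram for y = +1 (eps smoothing)
    neg = [eps] * n_bins  # weighted histogram for y = -1
    for z, y, w in zip(feature_vals, labels, weights):
        b = min(int((z - lo) / width), n_bins - 1)
        (pos if y > 0 else neg)[b] += w
    pos_total, neg_total = sum(pos), sum(neg)
    table = [0.5 * math.log((pos[b] / pos_total) / (neg[b] / neg_total))
             for b in range(n_bins)]
    def h(z):
        b = min(max(int((z - lo) / width), 0), n_bins - 1)
        return table[b]
    return h
```

The returned classifier outputs a positive value in feature ranges where the weighted positive-class mass dominates, and a negative value otherwise.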
5 Experimental Results
Face Detection. The face detection problem here is to classify an image of standard size (e.g. 20x20 pixels) into either face or nonface (imposter). This is essentially a one-class problem in that everything that is not a face is a nonface. It is a very hard problem. Learning-based methods have been the main approach for solving it, e.g. [13, 16, 9, 12]. Experiments here follow the framework of Viola and Jones [19, 18]. There, AdaBoost is used for learning face detection; it performs two important tasks: feature selection from a large collection of features, and constructing classifiers using the selected features.

Data Sets

A set of 5000 face images was collected from various sources. The faces were cropped and re-scaled to the size of 20x20. Another set of 5000 nonface examples of the same size was collected from images containing no faces. The 5000 examples in each set are divided into a training set of 4000 examples and a test set of 1000 examples. See Fig. 3 for a random sample of 10 face and 10 nonface examples.
Figure 3: Face (top) and nonface (bottom) examples.
Scalar Features
Three basic types of scalar features are derived from each example, as shown in Fig. 4, for constructing weak classifiers. These block differences are an extended set of the steerable-filter-like features used in [10, 20]. There are hundreds of thousands of different features for admissible values of the block parameters. Each candidate weak classifier is constructed as the log-likelihood ratio (12) computed from the two histograms of a scalar feature for the face (y = +1) and nonface (y = -1) examples (cf. the last part of the previous section).
Figure 4: The three types of simple Haar-wavelet-like features defined on a sub-window. The rectangles are of a given size and are placed a given distance apart. Each feature takes a value calculated by a weighted sum of the pixels in the rectangles.
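Block-difference features of this kind are typically computed in constant time from an integral image (summed-area table), as in Viola and Jones [19]. The sketch below illustrates the idea for a horizontal two-block difference; the function names and block layout are illustrative, not the paper's exact feature set.

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img over rows < r and cols < c."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels inside a rectangle, in O(1) from the integral image."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def two_rect_feature(ii, top, left, height, width):
    """A horizontal two-block difference: left block minus right block."""
    return (rect_sum(ii, top, left, height, width)
            - rect_sum(ii, top, left + width, height, width))
```

Once the table is built, every one of the hundreds of thousands of candidate features costs only a handful of array lookups, which is what makes exhaustive feature selection feasible.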
Performance Comparison. The same data sets are used for evaluating FloatBoost and AdaBoost. The performance is measured by the false alarm error rate with the detection rate fixed at 99.5%. While a cascade of strong classifiers is needed to achieve a very low false alarm rate [19, 7], here we present the learning curves for the first strong classifier, composed of up to one thousand weak classifiers. This is because what we aim to evaluate here is the contrast between the FloatBoost and AdaBoost learning algorithms, rather than a complete system. The interested reader is referred to [7] for a complete system which achieved a very low false alarm rate with a detection rate of 95%. (A live demo of a multi-view face detection system, the first real-time system of its kind in the world, is being submitted to the conference.)
Figure 5: Error rates of FloatBoost vs. AdaBoost for frontal face detection. The plot shows training and test error rates (AdaBoost-train, AdaBoost-test, FloatBoost-train, FloatBoost-test) as a function of the number of weak classifiers, from 100 to 1000.
The training and testing error curves for FloatBoost and AdaBoost are shown in Fig. 5, with the detection rate fixed at 99.5%. The following conclusions can be made from these curves: (1) Given the same number of learned features or weak classifiers, FloatBoost always achieves a lower training error and a lower test error than AdaBoost. For example, on the test set, by combining 1000 weak classifiers, the false alarm rate of FloatBoost is 0.427, versus 0.485 for AdaBoost. (2) FloatBoost needs many fewer weak classifiers than AdaBoost in order to achieve the same false alarm rate. For example, the lowest test error for AdaBoost is 0.481 with 800 weak classifiers, whereas FloatBoost needs only 230 weak classifiers to achieve the same performance. This clearly demonstrates the strength of FloatBoost in learning to achieve a lower error rate.
6 Conclusion and Future Work
By incorporating the idea of Floating Search [11] into AdaBoost [3, 15], FloatBoost effectively improves the learning results. It needs fewer weak classifiers than AdaBoost to achieve a similar error rate, or achieves a lower error rate with the same number of weak classifiers. This performance improvement is achieved at the cost of longer training time, about 5 times longer for the experiments reported in this paper.

The boosting algorithm may need substantial computation for training. Several methods can be used to make the training more efficient with little drop in the training performance. Noticing that only examples with large weight values are influential, Friedman et al. [5] propose to select examples with large weights, i.e. those which in the past have been wrongly classified by the learned weak classifiers, for training the weak classifier in the next round. The top examples within a fraction of the total weight mass are used.
References
[1] L. Breiman. ?Arcing classifiers?. The Annals of Statistics, 26(3):801?849, 1998.
[2] P. Buhlmann and B. Yu. ?Invited discussion on ?Additive logistic regression: a statistical view of boosting (friedman, hastie
and tibshirani)? ?. The Annals of Statistics, 28(2):377?386, April 2000.
[3] Y. Freund and R. Schapire. ?A decision-theoretic generalization of on-line learning and an application to boosting?. Journal
of Computer and System Sciences, 55(1):119?139, Aug 1997.
[4] J. Friedman. ?Greedy function approximation: A gradient boosting machine?. The Annals of Statistics, 29(5), October
2001.
[5] J. Friedman, T. Hastie, and R. Tibshirani. ?Additive logistic regression: a statistical view of boosting?. The Annals of
Statistics, 28(2):337?374, April 2000.
[6] M. J. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994.
[7] S. Z. Li, L. Zhu, Z. Q. Zhang, A. Blake, H. Zhang, and H. Shum. ?Statistical learning of multi-view face detection?. In
Proceedings of the European Conference on Computer Vision, page ???, Copenhagen, Denmark, May 28 - June 2 2002.
[8] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In A. Smola,
P. Bartlett, B. Sch?olkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 221?247. MIT Press,
Cambridge, MA, 1999.
[9] E. Osuna, R. Freund, and F. Girosi. ?Training support vector machines: An application to face detection?. In CVPR, pages
130?136, 1997.
[10] C. P. Papageorgiou, M. Oren, and T. Poggio. ?A general framework for object detection?. In Proceedings of IEEE International Conference on Computer Vision, pages 555?562, Bombay, India, 1998.
[11] P. Pudil, J. Novovicova, and J. Kittler. ?Floating search methods in feature selection?. Pattern Recognition Letters,
(11):1119?1125, 1994.
[12] D. Roth, M. Yang, and N. Ahuja. ?A snow-based face detector?. In Proceedings of Neural Information Processing Systems,
2000.
[13] H. A. Rowley, S. Baluja, and T. Kanade. ?Neural network-based face detection?. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 20(1):23?28, 1998.
[14] R. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. ?Boosting the margin: A new explanation for the effectiveness of voting
methods?. The Annals of Statistics, 26(5):1651?1686, October 1998.
[15] R. E. Schapire and Y. Singer. ?Improved boosting algorithms using confidence-rated predictions?. In Proceedings of the
Eleventh Annual Conference on Computational Learning Theory, pages 80?91, 1998.
[16] K.-K. Sung and T. Poggio. ?Example-based learning for view-based human face detection?. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 20(1):39?51, 1998.
[17] L. Valiant. ?A theory of the learnable?. Communications of ACM, 27(11):1134?1142, 1984.
[18] P. Viola and M. Jones. ?Asymmetric AdaBoost and a detector cascade?. In Proceedings of Neural Information Processing
Systems, Vancouver, Canada, December 2001.
[19] P. Viola and M. Jones. ?Rapid object detection using a boosted cascade of simple features?. In Proceedings of IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, December 12-14 2001.
[20] P. Viola and M. Jones. ?Robust real time object detection?. In IEEE ICCV Workshop on Statistical and Computational
Theories of Vision, Vancouver, Canada, July 13 2001.
[21] R. Zemel and T. Pitassi. ?A gradient-based boosting algorithm for regression problems?. In Advances in Neural Information
Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
Mixture Methods
Ron Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
[email protected]
Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598, USA
[email protected]
Abstract
We consider Bayesian mixture approaches, where a predictor is
constructed by forming a weighted average of hypotheses from some
space of functions. While such procedures are known to lead to
optimal predictors in several cases, where sufficiently accurate prior
information is available, it has not been clear how they perform
when some of the prior assumptions are violated. In this paper we
establish data-dependent bounds for such procedures, extending
previous randomized approaches such as the Gibbs algorithm to
a fully Bayesian setting. The finite-sample guarantees established
in this work enable the utilization of Bayesian mixture approaches
in agnostic settings, where the usual assumptions of the Bayesian
paradigm fail to hold. Moreover, the bounds derived can be directly
applied to non-Bayesian mixture approaches such as Bagging and
Boosting.
1 Introduction and Motivation
The standard approach to Computational Learning Theory is usually formulated
within the so-called frequentist approach to Statistics. Within this paradigm one is
interested in constructing an estimator, based on a finite sample, which possesses a
small loss (generalization error). While many algorithms have been constructed and
analyzed within this context, it is not clear how these approaches relate to standard
optimality criteria within the frequentist framework. Two classic optimality criteria
within the latter approach are the minimax and admissibility criteria, which characterize optimality of estimators in a rigorous and precise fashion [9]. Except in some
special cases [12], it is not known whether any of the approaches used within the
Learning community lead to optimality in either of the above senses of the word.
On the other hand, it is known that under certain regularity conditions, Bayesian
estimators lead to either minimax or admissible estimators, and thus to well-defined
optimality in the classical (frequentist) sense. In fact, it can be shown that Bayes
estimators are essentially the only estimators which can achieve optimality in the
above senses [9]. This optimality feature provides strong motivation for the study
of Bayesian approaches in a frequentist setting.
While Bayesian approaches have been widely studied, there have not been generally
applicable bounds in the frequentist framework. Recently, several approaches have
attempted to address this problem. In this paper we establish finite sample datadependent bounds for Bayesian mixture methods, which together with the above
optimality properties suggest that these approaches should become more widely
used.
Consider the problem of supervised learning where we attempt to construct an estimator based on a finite sample of pairs of examples S = {(x_1, y_1), ..., (x_n, y_n)}, each drawn independently according to an unknown distribution μ(x, y). Let A be a learning algorithm which, based on the sample S, constructs a hypothesis (estimator) h from some set of hypotheses H. Denoting by ℓ(y, h(x)) the instantaneous loss of the hypothesis h, we wish to assess the true loss L(h) = E_μ ℓ(y, h(x)), where the expectation is taken with respect to μ. In particular, the objective is to provide data-dependent bounds of the following form. For any h ∈ H and δ ∈ (0, 1), with probability at least 1 − δ,

    L(h) ≤ Λ(h, S) + Δ(h, S, δ),    (1)

where Λ(h, S) is some empirical assessment of the true loss, and Δ(h, S, δ) is a complexity term. For example, in the classic Vapnik-Chervonenkis framework, Λ(h, S) is the empirical error (1/n) Σ_{i=1}^n ℓ(y_i, h(x_i)) and Δ(h, S, δ) depends on the VC-dimension of H but is independent of both the hypothesis h and the sample S. By algorithm and data-dependent bounds we mean bounds where the complexity term depends on both the hypothesis (chosen by the algorithm A) and the sample S.
2 A Decision Theoretic Bayesian Framework
Consider a decision-theoretic setting where we define the sample-dependent loss of an algorithm A by R(μ, A, S) = E_μ ℓ(y, A(x, S)). Let φ_μ be the optimal predictor for y, namely the function minimizing E_μ{ℓ(y, φ(x))} over φ. It is clear that the best algorithm A (the Bayes algorithm) is the one that always returns φ_μ, assuming μ is known. We are interested in the expected loss of an algorithm averaged over samples S:

    R(μ, A) = E_S R(μ, A, S) = ∫ R(μ, A, S) dμ(S),

where the expectation is taken with respect to the sample S drawn i.i.d. from the probability measure μ. If we consider a family of measures μ, which possesses some underlying prior distribution π(μ), then we can construct the averaged risk function with respect to the prior as

    r(π, A) = E_π R(μ, A) = ∫ dμ(S) dπ(μ) ∫ R(μ, A, S) dπ(μ | S),

where dπ(μ | S) = dμ(S) dπ(μ) / ∫_μ dμ(S) dπ(μ) is the posterior distribution on the μ family, which induces a posterior distribution on the sample space as μ_S = E_{π(μ|S)} μ. An algorithm minimizing the Bayes risk r(π, A) is referred to as a Bayes algorithm. In fact, for a given prior and a given sample S, the optimal algorithm should return the Bayes-optimal predictor with respect to the posterior measure μ_S.
For many important practical problems, the optimal Bayes predictor is a linear functional of the underlying probability measure. For example, if the loss function is quadratic, namely ℓ(y, A(x)) = (y − A(x))², then the optimal Bayes predictor φ_μ(x) is the conditional mean of y, namely E_μ[y | x]. For binary classification problems, we can let the predictor be the conditional probability φ_μ(x) = μ(y = 1 | x) (the optimal classification decision rule then corresponds to a test of whether φ_μ(x) > 0.5), which is also a linear functional of μ. Clearly, if the Bayes predictor is a linear functional of the probability measure, then the optimal Bayes algorithm with respect to the prior π is given by

    A_B(x, S) = ∫_μ φ_μ(x) dπ(μ | S) = ∫_μ φ_μ(x) dμ(S) dπ(μ) / ∫_μ dμ(S) dπ(μ).    (2)

In this case, an optimal Bayesian algorithm can be regarded as the predictor constructed by averaging over all predictors with respect to a data-dependent posterior π(μ | S). We refer to such methods as Bayesian mixture methods. While the Bayes estimator A_B(x, S) is optimal with respect to the Bayes risk r(π, A), it can be shown that, under appropriate conditions (and with an appropriate prior), it is also a minimax and admissible estimator [9].
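For a finite family of candidate models, the posterior-weighted average of Eq. (2) can be sketched as follows. The `log_likelihood`/`phi` model interface is our illustrative assumption, not an interface defined in the paper.

```python
import math

def bayes_mixture_predict(x, models, prior, sample):
    """Posterior-weighted average of the per-model optimal predictors:
    each model supplies a log-likelihood of the observed sample and its
    optimal predictor phi(x); the posterior is prior times likelihood."""
    log_post = [math.log(p) + m.log_likelihood(sample)
                for m, p in zip(models, prior)]
    m_max = max(log_post)
    weights = [math.exp(lp - m_max) for lp in log_post]  # stable normalization
    z = sum(weights)
    return sum((w / z) * m.phi(x) for m, w in zip(models, weights))
```

Subtracting the maximum log-posterior before exponentiating keeps the normalization numerically stable even when the likelihoods differ by many orders of magnitude.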
In general, φ_μ is unknown. Rather, we may have some prior information about possible models for φ_μ. In view of (2) we consider a hypothesis space H, and an algorithm based on a mixture of hypotheses h ∈ H. This should be contrasted with classical approaches, where an algorithm selects a single hypothesis h from H. For simplicity, we consider a countable hypothesis space H = {h_1, h_2, ...}; the general case will be deferred to the full paper. Let q = {q_j}_{j=1}^∞ be a probability vector, namely q_j ≥ 0 and Σ_j q_j = 1, and construct the composite predictor by f_q(x) = Σ_j q_j h_j(x). Observe that in general f_q(x) may be a great deal more complex than any single hypothesis h_j. For example, if the h_j(x) are non-polynomial ridge functions, the composite predictor f_q corresponds to a two-layer neural network with universal approximation power. We denote by Q the probability distribution defined by q, namely Σ_j q_j h_j = E_{h∼Q} h.
A main feature of this work is the establishment of data-dependent bounds on L(E_{h∼Q} h), the loss of the Bayes mixture algorithm. There has been a flurry of recent activity concerning data-dependent bounds (a non-exhaustive list includes [2, 3, 5, 11, 13]). In a related vein, McAllester [7] provided a data-dependent bound for the so-called Gibbs algorithm, which selects a hypothesis at random from H based on the posterior distribution π(h | S). Essentially, this result provides a bound on the average error E_{h∼Q} L(h) rather than a bound on the error of the averaged hypothesis. Later, Langford et al. [6] extended this result to a mixture of classifiers using a margin-based loss function. A more general result can also be obtained using the covering number approach described in [14]. Finally, Herbrich and Graepel [4] showed that under certain conditions the bounds for the Gibbs classifier can be extended to a Bayesian mixture classifier. However, their bound contained an explicit dependence on the dimension (see Thm. 3 in [4]).
Although the approach pioneered by McAllester came to be known as PAC-Bayes,
this term is somewhat misleading since an optimal Bayesian method (in the decision
theoretic framework outline above) does not average over loss functions but rather
over hypotheses. In this regard, the learning behavior of a true Bayesian method is
not addressed in the PAC-Bayes analysis. In this paper, we would like to narrow the
discrepancy by analyzing Bayesian mixture methods, where we consider a predictor
that is the average of a family of predictors with respect to a data-dependent posterior distribution. Bayesian mixtures can often be regarded as a good approximation
to a true optimal Bayesian method. In fact, we have shown above that they are
equivalent for many important practical problems.
Therefore the main contribution of the present work is the extension of the above
mentioned results in PAC-Bayes analysis to a rather unified setting for Bayesian
mixture methods, where different regularization criteria may be incorporated, and
their effect on the performance easily assessed. Furthermore, it is also essential that
the bounds obtained are dimension-independent, since otherwise they yield useless
results when applied to kernel-based methods, which often map the input space into
a space of very high dimensionality. Similar results can also be obtained using the
covering number analysis in [14]. However the approach presented in the current
paper, which relies on the direct computation of the Rademacher complexity, is more
direct and gives better bounds. The analysis is also easier to generalize than the
corresponding covering number approach. Moreover, our analysis applies directly
to other non-Bayesian mixture approaches such as Bagging and Boosting.
Before moving to the derivation of our bounds, we formalize our approach. Consider a countable hypothesis space H = {h_j}_{j=1}^∞, and a probability distribution {q_j} over H. Introduce the vector notation Σ_{k=1}^∞ q_k h_k(x) = qᵀh(x). A learning algorithm within the Bayesian mixture framework uses the sample S to select a distribution Q over H and then constructs a mixture hypothesis f_q(x) = qᵀh(x). In order to constrain the class of mixtures used in constructing the mixture qᵀh, we impose constraints on the mixture vector q. Let g(q) be a non-negative convex function of q and define, for any positive A,

    Ω_A = {q ∈ S : g(q) ≤ A};   F_A = {f_q : f_q(x) = qᵀh(x), q ∈ Ω_A},    (3)

where S denotes the probability simplex. In subsequent sections we will consider different choices for g(q), which essentially acts as a regularization term. Finally, for any mixture qᵀh we define the loss by L(qᵀh) = E_μ ℓ(y, (qᵀh)(x)) and the empirical loss incurred on the sample by L̂(qᵀh) = (1/n) Σ_{i=1}^n ℓ(y_i, (qᵀh)(x_i)).
3 A Mixture Algorithm with an Entropic Constraint

In this section we consider an entropic constraint, which penalizes weights deviating significantly from some prior probability distribution π = {π_j}_{j=1}^∞, which may incorporate our prior information about the problem. The weights q themselves are chosen by the algorithm based on the data. In particular, in this section we set g(q) to be the Kullback-Leibler divergence of q from π,

    g(q) = D(q‖π);   D(q‖π) = Σ_j q_j log(q_j / π_j).
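As a small concrete aid, the divergence D(q‖π) can be computed directly from the weight vectors; the convention that terms with q_j = 0 contribute zero is standard.

```python
import math

def kl_divergence(q, pi):
    """D(q || pi) = sum_j q_j * log(q_j / pi_j), with 0 * log 0 taken as 0."""
    return sum(qj * math.log(qj / pj) for qj, pj in zip(q, pi) if qj > 0)
```

The divergence vanishes exactly when q equals the prior π, so the constraint g(q) ≤ A bounds how far the learned mixture may move away from the prior.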
Let F be a class of real-valued functions, and denote by σ_i independent Bernoulli random variables assuming the values ±1 with equal probability. We define the data-dependent Rademacher complexity of F as

    R̂_n(F) = E_σ [ sup_{f∈F} (1/n) Σ_{i=1}^n σ_i f(x_i) | S ].

The expectation of R̂_n(F) with respect to S will be denoted by R_n(F). We note that R̂_n(F) is concentrated around its mean value R_n(F) (e.g., Thm. 8 in [1]). We quote a slightly adapted result from [5].
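For a finite function class, R̂_n(F) can be estimated by Monte Carlo over random sign vectors; this sketch assumes the functions' values on the sample have been tabulated in advance, which is our simplification.

```python
import random

def empirical_rademacher(sample_outputs, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite function class: sample_outputs[f][i] holds f(x_i). Averages the
    sup over f of (1/n) * sum_i sigma_i f(x_i) over random sign vectors."""
    rng = random.Random(seed)
    n = len(sample_outputs[0])
    total = 0.0
    for _ in range(n_trials):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * v for s, v in zip(sigma, f_vals)) / n
                     for f_vals in sample_outputs)
    return total / n_trials
```

For the two-element class {f, −f} with f ≡ 1 on a sample of size n, the sup equals |mean(σ)|, so the estimate converges to E|mean(σ)| as the number of trials grows.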
Theorem 1 (Adapted from Theorem 1 in [5])
Let {x_1, x_2, ..., x_n} ⊆ X be a sequence of points generated independently at random according to a probability distribution P, and let F be a class of measurable functions from X to R. Furthermore, let φ be a non-negative Lipschitz function with Lipschitz constant κ, such that φ∘f is uniformly bounded by a constant M. Then for all f ∈ F, with probability at least 1 − δ,

    E φ(f(x)) ≤ (1/n) Σ_{i=1}^n φ(f(x_i)) + 4κ R_n(F) + M √( log(1/δ) / (2n) ).
An immediate consequence of Theorem 1 is the following.
Lemma 3.1 Let the loss function ℓ be bounded by M, and assume that it is Lipschitz with constant κ. Then for all q ∈ Ω_A, with probability at least 1 − δ,

    L(qᵀh) ≤ L̂(qᵀh) + 4κ R_n(F_A) + M √( log(1/δ) / (2n) ).
Next, we bound the empirical Rademacher average of F_A using g(q) = D(q‖π).

Lemma 3.2 The empirical Rademacher complexity of F_A is upper bounded as follows:

    R̂_n(F_A) ≤ √(2A/n) · sup_j √( (1/n) Σ_{i=1}^n h_j(x_i)² ).
j
Proof: We first recall a few facts from the theory of convex duality [10]. Let p(u)
be a convex function over a domain U, and set its dual s(z) = sup_{u∈U} { u⊤z − p(u) }.
It is known that s(z) is also convex. Setting u = q and p(q) = Σ_j q_j log(q_j/ν_j), we
find that s(v) = log Σ_j ν_j e^{v_j}. From the definition of s(z) it follows that for any
q ∈ S,

q⊤z ≤ Σ_j q_j log(q_j/ν_j) + log Σ_j ν_j e^{z_j}.
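This Fenchel-type inequality is easy to sanity-check numerically; a sketch (ours) over random draws from the simplex:

```python
import math
import random

def kl(q, nu):
    return sum(qj * math.log(qj / nj) for qj, nj in zip(q, nu) if qj > 0)

rng = random.Random(1)
for _ in range(1000):
    w = [rng.random() for _ in range(5)]
    q = [x / sum(w) for x in w]          # random point in the simplex
    nu = [0.2] * 5                        # uniform prior
    z = [rng.uniform(-3, 3) for _ in range(5)]
    lhs = sum(qj * zj for qj, zj in zip(q, z))
    rhs = kl(q, nu) + math.log(sum(nj * math.exp(zj) for nj, zj in zip(nu, z)))
    assert lhs <= rhs + 1e-12
print("q.z <= D(q||nu) + log sum_j nu_j exp(z_j) held on 1000 random draws")
```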
P
Since z is arbitrary, we set z = (?/n) i ?i h(xi ) and conclude that for q ? ?A and
any ? > 0
?
( n
"
)
#?
?
?
X
X
X
1
?
1
sup
A + log
?j exp
?i q> h(xi ) ?
?i hj (xi )
.
?
n i=1
??
n i
q??A
j
Taking the expectation with respect to σ, and using the Chernoff bound
E_σ{ exp( Σ_i σ_i a_i ) } ≤ exp( Σ_i a_i²/2 ), we have that

R̂_n(F_A) ≤ (1/λ) [ A + E_σ log Σ_j ν_j exp( (λ/n) Σ_i σ_i h_j(x_i) ) ]
         ≤ (1/λ) [ A + sup_j log E_σ exp( (λ/n) Σ_i σ_i h_j(x_i) ) ]    (Jensen)
         ≤ (1/λ) [ A + sup_j log exp( (λ²/(2n²)) Σ_i h_j(x_i)² ) ]      (Chernoff)
         = A/λ + (λ/(2n²)) sup_j Σ_i h_j(x_i)².

Minimizing the r.h.s. with respect to λ, we obtain the desired result. □
Combining Lemmas 3.1 and 3.2 yields our basic bound, where κ and M are defined
in Lemma 3.1.
Theorem 2 Let S = {(x_1, y_1), ..., (x_n, y_n)} be a sample of n i.i.d. points, each
drawn according to a distribution μ(x, y). Let H be a countable hypothesis class,
and set F_A to be the class defined in (3) with g(q) = D(q‖ν). Set

Γ_H = E { sup_j ( (1/n) Σ_{i=1}^n h_j(x_i)² )^{1/2} }.

Then for any q ∈ Ω_A, with probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + 4κ Γ_H √(2A/n) + M √( log(1/δ) / (2n) ).
Note that if the h_j are uniformly bounded, h_j ≤ c, then Γ_H ≤ c. Theorem 2 holds for a
fixed value of A. Using the so-called multiple testing Lemma (e.g. [11]) we obtain:
Corollary 3.1 Let the assumptions of Theorem 2 hold, and let {A_i, p_i} be a set of
positive numbers such that Σ_i p_i = 1. Then for all A_i and q ∈ Ω_{A_i}, with probability
at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + 4κ Γ_H √(2A_i/n) + M √( log(1/(p_i δ)) / (2n) ).
Note that the only distinction with Theorem 2 is the extra factor of log pi which is
the price paid for the uniformity of the bound.
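The quantities in these bounds are directly computable from a sample; a sketch (ours, with arbitrary illustrative numbers standing in for κ, M, A and δ) of assembling the Theorem 2 bound:

```python
import math

def gamma_H(hvals):
    """hvals[j][i] = h_j(x_i); empirical version of
    Gamma_H = sup_j ((1/n) sum_i h_j(x_i)^2)^(1/2)."""
    n = len(hvals[0])
    return max(math.sqrt(sum(v * v for v in h) / n) for h in hvals)

def theorem2_bound(emp_loss, kappa, gH, A, M, n, delta):
    """Right-hand side of the Theorem 2 bound."""
    return (emp_loss
            + 4 * kappa * gH * math.sqrt(2 * A / n)
            + M * math.sqrt(math.log(1 / delta) / (2 * n)))

# Base classifiers bounded by c = 1, so Gamma_H <= 1 as noted in the text.
h = [[1.0] * 100, [0.5] * 100, [-0.25] * 100]
g = gamma_H(h)
print(g)                                                   # 1.0
print(theorem2_bound(0.1, 1.0, g, A=2.0, M=1.0, n=100, delta=0.05))
```

The complexity term shrinks as O(1/√n) and grows only with √A, i.e., with the square root of the allowed KL radius.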
Finally, we present a data-dependent bound of the form (1).
Theorem 3 Let the assumptions of Theorem 2 hold. Then for all q ∈ S, with
probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + max(κ Γ_H, M) · √( (130 D(q‖ν) + log(1/δ)) / n ).    (4)
P
Proof sketch Pick Ai = 2i and pi = 1/i(i + 1), i = 1, 2, . . . (note that i pi = 1).
For each q, let i(q) be the smallest index for which Ai(q) ? D(qk?) implying that
log(1/pi(q) ) ? 2 log log2 (4D(qk?)). A few lines of algebra, to be presented in the
full paper, yield the desired result.
?
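The bookkeeping in this proof sketch can be checked mechanically; a sketch (ours) verifying the claimed inequality log(1/p_{i(q)}) ≤ 2 log log₂(4D) over a range of values of D = D(q‖ν):

```python
import math

def smallest_index(D):
    """Smallest i >= 1 with A_i = 2**i >= D."""
    i = 1
    while 2 ** i < D:
        i += 1
    return i

for D in [1.0, 1.5, 2.0, 7.3, 64.0, 1000.0]:
    i = smallest_index(D)
    log_inv_p = math.log(i * (i + 1))          # log(1/p_i) with p_i = 1/(i(i+1))
    penalty = 2 * math.log(math.log2(4 * D))   # 2 log log2(4 D(q||nu))
    assert 2 ** i >= D and (i == 1 or 2 ** (i - 1) < D)
    assert log_inv_p <= penalty + 1e-12
print("log(1/p_i) <= 2 log log2(4D) on all test values")
```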
The results of Theorem 3 can be compared to those derived by McAllester [8] for
the randomized Gibbs procedure. In the latter case, the first term on the r.h.s. is
E_{h∼Q} L̂(h), namely the average empirical error of the base classifiers h. In our case
the corresponding term is L̂(E_{h∼Q} h), namely the empirical error of the average
hypothesis. Since E_{h∼Q} h is potentially much more complex than any single h ∈ H,
we expect that the empirical term in (4) is much smaller than the corresponding
term in [8]. Moreover, the complexity term we obtain is in fact tighter than the
corresponding term in [8] by a logarithmic factor in n (although the logarithmic
factor in [8] could probably be eliminated). We thus expect that the Bayesian mixture
approach advocated here leads to better performance guarantees.
Finally, we comment that Theorem 3 can be used to obtain so-called oracle inequalities. In particular, let q⋆ be the optimal distribution minimizing L(q⊤h), which
can only be computed if the underlying distribution μ(x, y) is known. Consider an
algorithm which, based only on the data, selects a distribution q̂ by minimizing
the r.h.s. of (4), with the implicit constants appropriately specified. Then, using
standard approaches (e.g. [2]) we can obtain a bound on L(q̂⊤h) − L(q⋆⊤h). For
lack of space, we defer the derivation of the precise bound to the full paper.
4 General Data-Dependent Bounds for Bayesian Mixtures
The Kullback-Leibler divergence is but one way to incorporate prior information.
In this section we extend the results to general convex regularization functions
g(q). Some possible choices for g(q) besides the Kullback-Leibler divergence are
the standard Lp norms ‖q‖_p.

In order to proceed along the lines of Section 3, we let s(z) be the convex function associated with g(q), namely s(z) = sup_{q∈Ω_A} { q⊤z − g(q) }. Repeating
the arguments of Section 3, we have for any λ > 0 that

(1/n) Σ_{i=1}^n σ_i q⊤h(x_i) ≤ (1/λ) [ A + s( (λ/n) Σ_i σ_i h(x_i) ) ],

which implies that

R̂_n(F_A) ≤ inf_{λ≥0} (1/λ) { A + E_σ s( (λ/n) Σ_i σ_i h(x_i) ) }.    (5)
Pn
Assume that s(z) is second order differentiable, and that for any h = i=1 ?i h(xi )
1
2 (s(h + ?h) + s(h ? ?h)) ? s(h) ? u(?h). Then, assuming that s(0) = 0, it is
easy to show by induction that
n
?
? X
Xn
E? s (?/n)
?i h(xi ) ?
u((?/n)h(xi )).
i=1
(6)
i=1
In the remainder of the section we focus on the case of regularization based on
the Lp norm. Consider p and q such that 1/q + 1/p = 1, p ∈ (1, ∞), and let p′ =
max(p, 2) and q′ = min(q, 2). Note that if p ≤ 2 then q ≥ 2, q′ = p′ = 2, and if p > 2
then q < 2, q′ = q, p′ = p. Consider p-norm regularization g(q) = (1/p′)‖q‖_p^{p′}, in which
case s(z) = (1/q′)‖z‖_q^{q′}. The Rademacher averaging result for p-norm regularization
is known in the geometric theory of Banach spaces (type structure of the Banach
space), and it also follows from Khintchine's inequality. We show that it can be
easily obtained in our framework.
In this case, it is easy to see that s(z) = (1/q′)‖z‖_q^{q′} implies u(h(x)) ≤ ((q − 1)/q′)‖h(x)‖_q^{q′}.
Substituting in (5) we have

R̂_n(F_A) ≤ inf_{λ≥0} (1/λ) { A + ((q − 1)/q′) (λ/n)^{q′} Σ_{i=1}^n ‖h(x_i)‖_q^{q′} } ≤ (C_q / n^{1/p′}) A^{1/p′} ( (1/n) Σ_{i=1}^n ‖h(x_i)‖_q^{q′} )^{1/q′},

where C_q = 2 ((q − 1)/q′)^{1/q′}.
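For the special case p = 2 (so p′ = q′ = 2), the dual pair is g(q) = ‖q‖₂²/2 and s(z) = ‖z‖₂²/2; a numerical sanity check (ours, ignoring the simplex constraint as in the unconstrained Fenchel dual):

```python
import random

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

rng = random.Random(7)
z = [rng.uniform(-2, 2) for _ in range(6)]
s_z = inner(z, z) / 2                    # s(z) = ||z||_2^2 / 2

# q.z - ||q||_2^2 / 2 = s(z) - ||q - z||_2^2 / 2, so the sup is attained at q = z.
candidates = [[rng.uniform(-3, 3) for _ in range(6)] for _ in range(5000)] + [z]
best = max(inner(q, z) - inner(q, q) / 2 for q in candidates)
assert abs(best - s_z) < 1e-12
print("sup_q { q.z - ||q||_2^2/2 } equals ||z||_2^2/2")
```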
Combining this result with the methods described in Section 3, we establish a bound
for regularization based on the Lp norm. Assume that ‖h(x_i)‖_q is finite for all i,
and set Γ_{H,q} = E { ( (1/n) Σ_{i=1}^n ‖h(x_i)‖_q^{q′} )^{1/q′} }.
Theorem 4 Let the conditions of Theorem 3 hold and set g(q) = (1/p′)‖q‖_p^{p′}, p ∈
(1, ∞). Then for all q ∈ S, with probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + max(κ Γ_{H,q}, M) · O( ‖q‖_p / n^{1/p′} + √( (log log(‖q‖_p + 3) + log(1/δ)) / n ) ),

where O(·) hides a universal constant that depends only on p.
5 Discussion
We have introduced and analyzed a class of regularized Bayesian mixture approaches, which construct complex composite estimators by combining hypotheses
from some underlying hypothesis class using data-dependent weights. Such weighted
averaging approaches have been used extensively within the Bayesian framework,
as well as in more recent approaches such as Bagging and Boosting. While Bayesian
methods are known, under favorable conditions, to lead to optimal estimators in a
frequentist setting, their performance in agnostic settings, where no reliable assumptions can be made concerning the data generating mechanism, has not been well
understood. Our data-dependent bounds allow the utilization of Bayesian mixture
models in general settings, while at the same time taking advantage of the benefits
of the Bayesian approach in terms of incorporation of prior knowledge. The bounds
established, being independent of the cardinality of the underlying hypothesis space,
can be directly applied to kernel based methods.
Acknowledgments We thank Shimon Benjo for helpful discussions. The research
of R.M. is partially supported by the fund for promotion of research at the Technion
and by the Ollendorff foundation of the Electrical Engineering department at the
Technion.
References
[1] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. In Proceedings of the Fourteenth Annual Conference on Computational Learning Theory, pages 224–240, 2001.
[2] P.L. Bartlett, S. Boucheron, and G. Lugosi. Model selection and error estimation. Machine Learning, 48:85–113, 2002.
[3] O. Bousquet and A. Elisseeff. Stability and generalization. J. Machine Learning Research, 2:499–526, 2002.
[4] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers; why SVMs work. In Advances in Neural Information Processing Systems 13, pages 224–230, Cambridge, MA, 2001. MIT Press.
[5] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. Statis., 30(1), 2002.
[6] J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 290–297, 2001.
[7] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230–234, New York, 1998. ACM Press.
[8] D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, New York, 1999. ACM Press.
[9] C. P. Robert. The Bayesian Choice: A Decision Theoretic Motivation. Springer Verlag, New York, 1994.
[10] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, N.J., 1970.
[11] J. Shawe-Taylor, P. Bartlett, R.C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theory, 44:1926–1940, 1998.
[12] Y. Yang. Minimax nonparametric classification - part I: rates of convergence. IEEE Trans. Inf. Theory, 45(7):2271–2284, 1999.
[13] T. Zhang. Generalization performance of some learning problems in Hilbert functional space. In Advances in Neural Information Processing Systems 15, Cambridge, MA, 2001. MIT Press.
[14] T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
Shape Recipes: Scene Representations that Refer
to the Image
William T. Freeman and Antonio Torralba
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{wtf, torralba}@ai.mit.edu
Abstract
The goal of low-level vision is to estimate an underlying scene, given
an observed image. Real-world scenes (eg, albedos or shapes) can be
very complex, conventionally requiring high dimensional representations
which are hard to estimate and store. We propose a low-dimensional representation, called a scene recipe, that relies on the image itself to describe the complex scene configurations. Shape recipes are an example:
these are the regression coefficients that predict the bandpassed shape
from image data. We describe the benefits of this representation, and
show two uses illustrating their properties: (1) we improve stereo shape
estimates by learning shape recipes at low resolution and applying them
at full resolution; (2) Shape recipes implicitly contain information about
lighting and materials and we use them for material segmentation.
1 Introduction
From images, we want to estimate various low-level scene properties such as shape, material, albedo or motion. For such an estimation task, the representation of the quantities
to be estimated can be critical. Typically, these scene properties might be represented as a
bitmap (eg [14]) or as a series expansion in a basis set of surface deformations (eg [10]).
To represent accurately the details of real-world shapes and textures requires either fullresolution images or very high order series expansions. Estimating such high dimensional
quantities is intrinsically difficult [2]. Strong priors [14] are often needed, which can give
unrealistic shape reconstructions.
Here we propose a new scene representation with appealing qualities for estimation. The
approach we propose is to let the image itself bear as much of the representational burden
as possible. We assume that the image is always available and we describe the underlying
scene in reference to the image. The scene representation is a set of rules for transforming
from the local image information to the desired scene quantities. We call this representation
a scene recipe: a simple function for transforming local image data to local scene data. The
computer doesn't have to represent every curve of an intricate shape; the image does that for
us, the computer just stores the rules for transforming from image to scene. In this paper,
we focus on reconstructing the shapes that created the observed image, deriving shape
recipes. The particular recipes we study here are regression coefficients for transforming
Figure 1: 1-d example: The image (a) is rendered from the shape (b). The shape depends on
the image in a non-local way. Bandpass filtering both signals allows for a local shape recipe.
The dotted line (which agrees closely with the true solid line) in (d) shows shape reconstruction
from 9-parameter linear regression (9-tap convolution) from bandpassed image, (c).
bandpassed image data into bandpassed shape data.
2 Shape Recipes
The shape representation consists in describing, for a particular image, the functional relationship between image and shape. This relationship is not general for all images, but
specific to the particular lighting and material conditions at hand. We call this functional
relationship the shape recipe.
To simplify the computation to obtain shape from image data, we require that the scene
recipes be local: the scene structure in a region should only depend on a local neighborhood of the image. It is easy to show that, without taking special care, the shape-image
relationship is not local. Fig. 1 (a) shows the intensity profile of a 1-d image arising from
the shape profile shown in Fig. 1 (b) under particular rendering conditions (a Phong model
with 10% specularity). Note that the function to recover the shape from the image cannot
be local because the identical local images on the left and right sides of the surface edge
correspond to different shape heights.
In order to obtain locality in the shape-image relationship, we need to preprocess the shape
and image signals. When shape and image are represented in a bandpass pyramid, within
a subband, under generic rendering conditions [4], local shape changes lead to local image
changes. (Representing the image in a Gaussian pyramid also gives a local relationship between image and bandpassed shape, effectively subsuming the image bandpass operation
into the shape recipe. That formulation, explored in [16], can give slightly better performance and allows for simple non-linear extensions.) Figures 1 (c) and (d) are bandpass
filtered versions of (a) and (b), using a second-derivative of a Gaussian filter. In this example, (d) relates to (c) by a simple shape recipe: convolution with a 9-tap filter, learned
by linear regression from rendered random shape data. The solid line shows the true bandpassed shape, while the dotted line is the linear regression estimate from Fig. 1 (c).
For 2-d images, we break the image and shape into subbands using a steerable pyramid
[13], an oriented multi-scale decomposition with non-aliased subbands (Fig. 3 (a) and (b)).
A shape subband can be related to an image intensity subband by a function
Z_k = f_k(I_k)    (1)

where f_k is a local function and Z_k and I_k are the kth subbands of the steerable pyramid
representation of the shape and image, respectively. The simplest functional relationship
between shape and image intensity is via a linear filter with a finite size impulse response:
Z_k ≈ r_k ∗ I_k, where ∗ denotes convolution. The convolution kernel r_k (specific to each scale
and orientation) transforms the image subband I_k into the shape subband Z_k. The recipe
r_k at each subband is learned by minimizing Σ_x |Z_k − I_k ∗ r_k|², regularizing r_k as needed
to avoid overfitting. r_k contains information about the particular lighting conditions and
the surface material. More general functions can be built by using non-linear filters and
combining image information from different orientations and scales [16].
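A minimal sketch (ours; 1-d signals, a short causal filter, plain-Python normal equations) of this least-squares learning step:

```python
import random

def estimate_recipe(I, Z, taps=3):
    """Least-squares FIR filter r (length `taps`, causal for simplicity) minimizing
    sum_x |Z[x] - (I * r)[x]|^2. Solves the normal equations by Gaussian elimination."""
    n = len(I)
    # Design matrix rows: [I[x], I[x-1], ..., I[x-taps+1]]
    rows = [[I[x - t] for t in range(taps)] for x in range(taps - 1, n)]
    y = Z[taps - 1:]
    # Normal equations: (A^T A) r = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(taps)] for i in range(taps)]
    aty = [sum(r[i] * yy for r, yy in zip(rows, y)) for i in range(taps)]
    # Gaussian elimination with partial pivoting
    for col in range(taps):
        piv = max(range(col, taps), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, taps):
            f = ata[r][col] / ata[col][col]
            ata[r] = [a - f * b for a, b in zip(ata[r], ata[col])]
            aty[r] -= f * aty[col]
    r_hat = [0.0] * taps
    for i in reversed(range(taps)):
        r_hat[i] = (aty[i] - sum(ata[i][j] * r_hat[j] for j in range(i + 1, taps))) / ata[i][i]
    return r_hat

# Synthetic check: render Z from I with a known filter, then recover it.
rng = random.Random(0)
I = [rng.gauss(0, 1) for _ in range(500)]
true_r = [0.8, -0.3, 0.1]
Z = [0.0, 0.0] + [sum(true_r[t] * I[x - t] for t in range(3)) for x in range(2, 500)]
r_hat = estimate_recipe(I, Z)
print([round(v, 3) for v in r_hat])   # ≈ [0.8, -0.3, 0.1]
```

In practice a regularization term would be added to the normal equations, as the text notes, to avoid overfitting when the subband has little energy.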
Figure 2: Shape estimate from stereo. (a) is one image of the stereo pair; the stereo reconstruction is depicted as (b) a range map and (c) a surface plot and (d) a re-rendering of the
stereo shape. The stereo shape is noisy and misses fine details.
We conjecture that multiscale shape recipes have various desirable properties for estimation. First, they allow for a compact encoding of shape information, as much of the complexity of the shape is encoded in the image itself. The recipes need only specify how to
translate image into shape. Secondly, regularities in how the shape recipes f_k vary across
scale and space provide a powerful mechanism for regularizing shape estimates. Instead
of regularizing shape estimates by assuming a prior of smoothness of the surface, we can
assume a slow spatial variation of the functional relationship between image and shape,
which should make estimating shape recipes easier. Third, shape recipes implicitly encode
lighting and material information, which can be used for material-based segmentation. In
the next two sections we discuss the properties of smoothness across scale and space and
we show potential applications in improving shape estimates from stereo and in image
segmentation based on material properties.
3 Scaling regularities of shape recipes
Fig. 2 shows one image of a stereo pair and the associated shape estimated from a stereo
algorithm¹. The shape estimate is noisy in the high frequencies (see surface plot and re-rendered shape), but we assume it is accurate in the low spatial frequencies.
Fig. 3 shows the steerable pyramid representations of the image (a) and shape (b) and the
learned shape recipes (c) for each subband (linear convolution kernels that give the shape
subband from the image subband). We exploit the slow variation of shape recipes over scale
and assume that the shape recipes are constant over the top four octaves of the pyramid.²
Thus, from the shape recipes learned at low-resolution we can reconstruct a higher resolution shape estimate than the stereo output, by learning the rendering conditions then taking
advantage of shape details visible in the image but not exploited by the stereo algorithm.
Fig. 4 (a) and (b) show the image and the implicit shape representation: the pyramid?s lowresolution shape and the shape recipes used over the top four scales. Fig. 4 (c) and (d) show
explicitly the reconstructed shape implied by (a) and (b): note the high resolution details,
including the fine structure visible in the bottom left corner of (d). Compare with the stereo
¹ We took our stereo photographs using a 3.3 Megapixel Olympus Camedia C-3040 camera, with
a Pentax stereo adapter. We calibrated the stereo images using the point matching algorithm of Zhang
[18], and rectified the stereo pair (so that epipoles are along scan lines) using the algorithm of [8],
estimating disparity with the Zitnick–Kanade stereo algorithm [19].
² Except for a scale factor. We scale the amplitude of the fixed recipe convolution kernels by 2
for each octave, to account for the differentiation operation in the linear shading approximation to
Lambertian rendering [7].
Figure 3: Learning shape recipes at each subband. (a) and (b) are the steerable pyramid
representations [13] of image and stereo shape. (c) shows the convolution kernels that best
predict (b) from (a). The steerable pyramid isolates information according to scale (the
smaller subband images represent larger spatial scales) and orientation (clockwise among
subbands of one size: vertical, diagonal, horizontal, other diagonal).
Figure 4: Reconstruction from shape recipes. The shape is represented by the information
contained in the image (a), the low-res shape pyramid residual and the shape recipes (b)
estimated at the lowest resolution. The shape can be regenerated by applying the shape
recipes (b) at the 4 highest resolution scales, then reconstructing from the shape pyramid.
(d) shows the image re-rendered under different lighting conditions than (a). The reconstruction is not noisy and shows more detail than the stereo shape, Fig. 2, including the fine
textures visible at the bottom left of the image (a) but not detected by the stereo algorithm.
output in Fig. 2.
4 Segmenting shape recipes
Segmenting an image into regions of uniform color or texture is often an approximation
to an underlying goal of segmenting the image into regions of uniform material. Shape
recipes, by describing how to transform from image to shape, implicitly encode both lighting and material properties. Across unchanging lighting conditions, segmenting by shape
recipes allows us to segment according to a material?s rendering properties, even overcoming changes of intensities or texture of the rendered image. (See [6] for a non-parametric
approach to material segmentation.)
We expect shape recipes to vary smoothly over space except for abrupt boundaries at
changes in material or illumination. Within each subband, we can write the shape Z_k
Figure 5: Segmentation example. Shape (a), with a horizontal orientation discontinuity, is
rendered with two different shading models split vertically, (b). Based on image information alone, it is difficult to find a good segmentation into 2 groups, (c). A segmentation into
2 different shape recipes naturally falls along the vertical material boundary, (d).
as a mixture of recipes:
p(Z_k | I_k) = Σ_{n=1}^N p(Z_k − f_{k,n}(I_k)) p_n    (2)

where N specifies the number of recipes needed to explain the underlying shape Z_k. The
weights pn , which will be a function of location, will specify which recipe has to be used
within each region and, therefore, will provide a segmentation of the image.
To estimate the parameters of the mixture (shape recipes and weights), given known shape
and the associated image, we use the EM algorithm [17]. We encourage spatial continuity
for the weights pn as neighboring pixels are likely to belong to the same material. We
use the mean field approximation to implement the spatial smoothness prior in the E step,
suggested in [17].
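An illustrative sketch (ours) of this EM estimation, simplified to scalar per-pixel gains Z ≈ a_n I in place of full filter recipes, with a fixed noise scale and no spatial prior:

```python
import math
import random

def em_two_recipes(I, Z, iters=50, sigma=0.1):
    """EM for p(Z|I) = sum_n p_n N(Z - a_n I; sigma): two scalar-gain 'recipes'."""
    a = [min(z / i for z, i in zip(Z, I)), max(z / i for z, i in zip(Z, I))]  # crude init
    for _ in range(iters):
        # E-step: per-pixel responsibilities under each recipe
        resp = []
        for iv, zv in zip(I, Z):
            w = [math.exp(-((zv - an * iv) ** 2) / (2 * sigma ** 2)) for an in a]
            s = sum(w) or 1.0
            resp.append([wn / s for wn in w])
        # M-step: weighted least-squares gain for each recipe
        a = [sum(r[n] * iv * zv for r, iv, zv in zip(resp, I, Z)) /
             sum(r[n] * iv * iv for r, iv in zip(resp, I))
             for n in range(2)]
    return sorted(a)

# Synthetic image/shape pair: two "materials", gains 2 and 5, split down the middle.
rng = random.Random(3)
I = [rng.uniform(0.5, 1.5) for _ in range(400)]
Z = [(2.0 if k < 200 else 5.0) * iv + rng.gauss(0, 0.05) for k, iv in enumerate(I)]
print([round(v, 2) for v in em_two_recipes(I, Z)])
```

The responsibilities then play the role of the weights p_n(x), i.e., a per-pixel material segmentation; the mean-field spatial prior of the paper would additionally smooth them.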
Figure 5 shows a segmentation example. (a) is a fractal shape, with diagonal left structure
across the top half, and diagonal right structure across the bottom half. Onto that shape,
we ?painted? two different Phong shading renderings in the two vertical halves, shown
in (b) (the right half is shinier than the left). Thus, texture changes in each of the four
quadrants, but the only material transition is across the vertical centerline. An image-based
segmentation, which makes use of texture and intensity cues, among others, finds the four
quadrants when looking for 4 groups, but can?t segment well when forced to find 2 groups,
(c). (We used the normalized cuts segmentation software, available on-line [11].) The
shape recipes encode the relationship between image and shape when segmenting into 2
groups, and finds the vertical material boundary, (d).
5 Occlusion boundaries
Not all image variations have a direct translation into shape. This is true for paint boundaries and for most occlusion boundaries. These cases need to be treated specially with
shape recipes. To illustrate, in Fig. 6 (c) the occluding boundary in the shape only produces a smooth change in the image, Fig. 6 (a). In that region, a shape recipe will produce
an incorrect shape estimate, however, the stereo algorithm will often succeed at finding
those occlusion edges. On the other hand, stereo often fails to provide the shape of image regions with complex shape details, where the shape recipes succeed. For the special
case of revising the stereo algorithm's output using shape recipes, we propose a statistical
framework to combine both sources of information. We want to estimate the shape Z that
maximizes the likelihood given the shape from stereo S and shape from image intensity I
Figure 6: One way to handle occlusions with shape recipes. Image in full-res (a) and
one steerable pyramid subband (b); stereo depth, full-res (c) and subband (d). (e) shows
subband of shape reconstruction using learned shape recipe. Direct application of shape
recipe across occlusion boundary misses the shape discontinuity. Stereo algorithm catches
that discontinuity, but misses other shape details. Probabilistic combination of the two
shape estimates (f, subband, g, surface), assuming Laplacian shape statistics, captures the
desirable details of both, comparing favorably with laser scanner ground truth, (h, subband,
i, surface, at slight misalignment from photos).
via shape recipes:
p(Z|S, I) = p(S, I|Z) p(Z) / p(S, I)    (3)
(For notational simplicity, we omit the spatial dependency from I, S and Z.) As both stereo
S and image intensity I provide strong constraints for the possible underlying shape Z, the
factor p(Z) can be considered constant in the region of support of p(S, I|Z). p(S, I) is a
normalization factor. Eq. (3) can be simplified by assuming that the shapes from stereo and
from shape recipes are independent. Furthermore, we also assume independence between
the pixels in the image and across subbands:
p(S, I|Z) = Π_k Π_{x,y} p(S_k|Z_k) p(I_k|Z_k)    (4)
S_k, Z_k and I_k refer to the outputs of subband k. Although this is an oversimplification,
it simplifies the analysis and provides good results.
The terms p(S_k|Z_k) and p(I_k|Z_k) will depend on the noise models for the depth from
stereo and for the shape recipes. For the shape estimate from stereo we assume a Gaussian
distribution for the noise. At each subband and spatial location we have:
p(S_k|Z_k) = p_s(Z_k − S_k) = exp( −|Z_k − S_k|² / σ_s² ) / ( (2π)^{1/2} σ_s )    (5)
In the case of the shape recipes, a Gaussian noise model is not adequate. The distribution
of the error Z_k − f_k(I_k) will depend on image noise, but more importantly, on all shape
and image variations that are not functionally related to each other through the recipes.
Fig. 6 illustrates this point: the image data, Fig. 6 (b), does not describe the discontinuity that
exists in the shape, Fig. 6 (h). When trying to estimate shape using the shape recipe f_k(I_k),
it fails to capture the discontinuity, although it captures correctly other texture variations,
Fig. 6 (e). Therefore, Z_k − f_k(I_k) will describe the distribution of occluding edges that
do not produce image variations and paint edges that do not translate into shape variations.
Due to the sparse distribution of edges in images (and range data), we expect Z_k − f_k(I_k)
to have a Laplacian distribution typical of the statistics of wavelet outputs of natural images
[12]:
p(I_k|Z_k) = p(Z_k − f_k(I_k)) = exp( −|Z_k − f_k(I_k)|^p / σ_i^p ) / ( 2σ_i Γ(1/p) / p )    (6)
In order to verify this, we use the stereo information at the low spatial resolutions that we
expect is correct, so that p(Z_k − f_k(I_k)) ≈ p(S_k − f_k(I_k)). We obtain values of p in the
range (0.6, 1.2). We set p = 1 for the results shown here. Note that p = 2 gives a Gaussian
distribution.
The least-squares estimate for the shape subband Z_k, given both stereo and image data, is:

Ẑ_k = ∫ Z_k p(Z_k|S_k, I_k) dZ_k = ∫ Z_k p(S_k|Z_k) p(I_k|Z_k) dZ_k / ∫ p(S_k|Z_k) p(I_k|Z_k) dZ_k.    (7)
This integral can be evaluated numerically and independently at each pixel. When p = 2,
the LSE estimate is a weighted linear combination of the shape from stereo and the
shape recipes. However, with p ≈ 1 the problem is similar to that of image denoising
from wavelet decompositions [12], providing a non-linear combination of stereo and shape
recipes. The basic behavior of Eq. (7) is to take from the stereo everything that cannot
be explained by the recipes, and to take the rest from the recipes. Whenever both stereo
and shape recipes give similar estimates, we prefer the recipes because they are more accurate than the stereo information. Where stereo and shape recipes differ greatly, such as at
occlusions, the shape estimate follows the stereo shape.
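The per-coefficient numerical evaluation of Eq. (7) can be sketched as below, assuming a Gaussian stereo term as in Eq. (5) and a generalized-Gaussian recipe term as in Eq. (6). The function name, the grid limits, and the noise scales (a broader stereo term, a sharper recipe term) are illustrative choices, not values from the paper.

```python
import math

def posterior_mean(s, f_i, sigma_s=1.0, sigma_i=0.2, p=1.0,
                   lo=-5.0, hi=5.0, n=2001):
    """Evaluate Eq. (7) for one subband coefficient on a uniform grid:
    combine the Gaussian stereo likelihood around s with the
    generalized-Gaussian recipe likelihood around f_i = f_k(I_k)."""
    step = (hi - lo) / (n - 1)
    num = 0.0
    den = 0.0
    for j in range(n):
        z = lo + j * step
        w = math.exp(-abs(z - s) ** 2 / sigma_s ** 2
                     - abs(z - f_i) ** p / sigma_i ** p)
        num += z * w
        den += w
    return num / den

# When stereo and recipe agree, both point to the same estimate; when
# they disagree mildly, the sharper recipe term dominates; when they
# disagree greatly, the linear (Laplacian) recipe penalty saturates and
# the estimate moves back toward the stereo value.
print(posterior_mean(0.0, 0.0))
print(posterior_mean(1.0, 0.0))
print(posterior_mean(4.0, 0.0))
```

The last call mimics the occlusion case: the Gaussian stereo term grows quadratically with the discrepancy, so for large disagreements it outweighs the linear recipe penalty.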
6 Discussion and Summary
Unlike shape-from-shading algorithms [5], shape recipes are fast, local procedures for
computing shape from an image. The approximation of linear shading [7] also assumes a
local linear relationship between image and shape subbands. However, learning the regression coefficients allows a linearized fit to more general rendering conditions than the
special case of Lambertian shading for which linear shading was derived.
We have proposed shape recipes as a representation that leaves the burden of describing shape details to the image. Unlike many other shape representations, these are low-dimensional, and should change slowly over time, distance, and spatial scale. We expect
that these properties will prove useful for estimation algorithms using these representations,
including non-linear extensions [16].
We showed that some of these properties are indeed useful in practice. We developed a
shape estimate improver that relies on an initial estimate being accurate at low resolutions.
Assuming that shape recipes change slowly over 4 octaves of spatial scale, we learned the
shape recipes at low resolution and applied them at high resolution to find shape from image
details not exploited by the stereo algorithm. Comparisons with ground truth shapes show
good results. Shape recipes fold in information about both lighting and material properties
and can also be used to estimate material boundaries over regions where the lighting is
assumed to be constant.
Gilchrist and Adelson describe "atmospheres", which are local formulas for converting
image intensities to perceived lightness values [3, 1]. In this framework, atmospheres are
"lightness recipes". A full description of an image in terms of a scene recipe would require both shape recipes and reflectance recipes (for computing reflectance values from
image data), which also requires labelling parts of the image as being caused by shading or
reflectance changes, as in [15].
At a conceptual level, this representation is consistent with a theme in human vision research: that our visual systems use the world as a framebuffer or visual memory, not storing
in the brain what can be obtained by looking [9]. Using shape recipes, we find simple transformation rules that let us convert from image to shape whenever we need to, by examining
the image.
We thank Ray Jones and Leonard McMillan for providing Cyberware scans, and Hao Zhang for code
for rectification of stereo images. This work was funded by the Nippon Telegraph and Telephone
Corporation as part of the NTT/MIT Collaboration Agreement.
References
[1] E. H. Adelson. Lightness perception and lightness illusions. In M. Gazzaniga, editor,
The New Cognitive Neurosciences, pages 339–351. MIT Press, 2000.
[2] C. M. Bishop. Neural networks for pattern recognition. Oxford, 1995.
[3] A. Gilchrist et al. An anchoring theory of lightness. Psychological Review,
106(4):795–834, 1999.
[4] W. T. Freeman. The generic viewpoint assumption in a framework for visual perception. Nature, 368(6471):542–545, April 7 1994.
[5] B. K. P. Horn and M. J. Brooks, editors. Shape from shading. The MIT Press, Cambridge, MA, 1989.
[6] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. Intl. J. Comp. Vis., 43(1):29–44, 2001.
[7] A. P. Pentland. Linear shape from shading. Intl. J. Comp. Vis., 1(4):153–162, 1990.
[8] M. Pollefeys, R. Koch, and L. V. Gool. A simple and efficient rectification method
for general motion. In Intl. Conf. on Computer Vision (ICCV), pages 496–501, 1999.
[9] R. A. Rensink. The dynamic representation of scenes. Vis. Cognition, 7:17–42, 2000.
[10] S. Sclaroff and A. Pentland. Generalized implicit functions for computer graphics.
In Proc. SIGGRAPH 91, volume 25, pages 247–250, 1991. In Computer Graphics,
Annual Conference Series.
[11] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Pattern Analysis
and Machine Intelligence, 22(8):888–905, 2000.
[12] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf. on Sig., Sys. and Computers, Pacific Grove, CA, 1997.
[13] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: a flexible architecture for
multi-scale derivative computation. In 2nd Annual Intl. Conf. on Image Processing,
Washington, DC, 1995. IEEE.
[14] R. Szeliski. Bayesian modeling of uncertainty in low-level vision. Intl. J. Comp. Vis.,
5(3):271–301, 1990.
[15] M. F. Tappen, W. T. Freeman, and E. H. Adelson. Recovering intrinsic images from
a single image. In Adv. in Neural Info. Proc. Systems, volume 15. MIT Press, 2003.
[16] A. Torralba and W. T. Freeman. Properties and applications of shape recipes. Technical Report AIM-2002-019, MIT AI Lab, 2002.
[17] Y. Weiss. Bayesian motion estimation and segmentation. PhD thesis, M.I.T., 1998.
[18] Z. Zhang. Determining the epipolar geometry and its uncertainty: A review.
Technical Report 2927, Sophia-Antipolis Cedex, France, 1996. See http://www-sop.inria.fr/robotvis/demo/f-http/html/.
[19] C. L. Zitnick and T. Kanade. A cooperative algorithm for stereo matching and occlusion detection. IEEE Pattern Analysis and Machine Intelligence, 22(7), July 2000.
Ranking with Large Margin Principle: Two Approaches*
Amnon Shashua
School of CS&E
Hebrew University of Jerusalem
Jerusalem 91904, Israel
email: [email protected]
Anat Levin
School of CS&E
Hebrew University of Jerusalem
Jerusalem 91904, Israel
email: [email protected]
Abstract
We discuss the problem of ranking k instances with the use of a "large
margin" principle. We introduce two main approaches: the first is the
"fixed margin" policy, in which the margin of the closest neighboring
classes is being maximized - which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k − 1
different margins, where the sum of margins is maximized. This approach
is shown to reduce to ν-SVM when the number of classes k = 2. Both
approaches are optimal in size of 2l, where l is the total number of training
examples. Experiments performed on visual classification and "collaborative filtering" show that both approaches outperform existing ordinal
regression algorithms applied for ranking and multi-class SVM applied
to general multi-class classification.
1 Introduction
In this paper we investigate the problem of inductive learning from the point of view of
predicting variables of ordinal scale [3, 7, 5], a setting referred to as ranking learning or
ordinal regression. We consider the problem of applying the large margin principle used
in Support Vector methods [12, 1] to the ordinal regression problem while maintaining an
(optimal) problem size linear in the number of training examples.
Let x_i^j be the set of training examples, where j = 1, …, k denotes the class number and
i = 1, …, i_j is the index within each class. Let l = Σ_j i_j be the total number of training
examples. A straight-forward generalization of the 2-class separating hyperplane problem,
where a single hyperplane determines the classification rule, is to define k − 1 separating
hyperplanes which would separate the training data into k ordered classes by modeling the
ranks as intervals on the real line - an idea whose origins are with the classical cumulative
model [9], see also [7, 5]. The geometric interpretation of this approach is to look for k − 1
parallel hyperplanes represented by a vector w ∈ R^n (the dimension of the input vectors)
and scalars b_1 ≤ … ≤ b_{k−1} defining the hyperplanes (w, b_1), …, (w, b_{k−1}), such that the
*This work was done while A.S. was spending his sabbatical at the computer science department
of Stanford University.
[Figure 1: two panels, "Fixed-margin" and "Sum-of-margins", illustrating the margin 2/‖w‖ between the closest pair of classes and the per-class margins, respectively.]
Figure 1: Left-hand display: fixed-margin policy for ranking learning. The margin to be maximized
is associated with the two closest neighboring classes. As in conventional SVM, the margin is pre-scaled to be equal to 2/‖w‖, thus maximizing the margin is achieved by minimizing w·w. The support
vectors lie on the boundaries between the two closest classes. Right-hand display: sum-of-margins
policy for ranking learning. The objective is to maximize the sum of k − 1 margins. Each class is
sandwiched between two hyperplanes, the norm of w is set to unity as a constraint in the optimization
problem and, as a result, the objective is to maximize Σ_j (b_j − a_j). In this case, the support vectors lie
on the boundaries among all neighboring classes (unlike the fixed-margin policy). When the number
of classes k = 2, the dual functional is equivalent to ν-SVM.
data are separated by dividing the space into equally ranked regions by the decision rule

f(x) = min_{r ∈ {1, …, k}} {r : w·x − b_r < 0}.    (1)

In other words, all input vectors x satisfying b_{r−1} < w·x < b_r are assigned the rank
r (using the convention that b_k = ∞). For instance, recently [5] proposed an "on-line"
algorithm (with similar principles to the classic "perceptron" used for 2-class separation)
for finding the set of parallel hyperplanes which would comply with the separation rule
above.
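Rule (1) is simple to state in code. The sketch below uses illustrative names and toy data (not from the paper); it returns the smallest rank r whose threshold b_r exceeds the projection w·x, with b_k treated as +∞.

```python
def predict_rank(w, thresholds, x):
    """Decision rule (1): the smallest rank r with w.x - b_r < 0.
    thresholds holds b_1 <= ... <= b_{k-1}; b_k is implicitly +inf."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    for r, b in enumerate(thresholds, start=1):
        if score < b:
            return r
    return len(thresholds) + 1

w = [1.0, 0.0]
b = [0.0, 2.0]          # three ordered classes split along the first coordinate
print(predict_rank(w, b, [-1.0, 3.0]))  # 1
print(predict_rank(w, b, [1.0, 0.0]))   # 2
print(predict_rank(w, b, [5.0, 0.0]))   # 3
```

The same rule is reused later with a kernel expansion in place of the explicit dot product w·x.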
To continue the analogy to 2-class learning, in addition to the separability constraints on
the variables α = {w, b_1 ≤ … ≤ b_{k−1}} one would like to control the tradeoff between
lowering the "empirical risk" R_emp(α) (error measure on the training set) and lowering
the "confidence interval" Φ(α, h) controlled by the VC-dimension h of the set of loss
functions. The "structural risk minimization" (SRM) principle [12] minimizes a bound
on the risk over a structure on the set of functions. The geometric interpretation for 2-class
learning is to maximize the margin between the boundaries of the two sets [12, 1].
In our setting of ranking learning, there are k - 1 margins to consider, thus there are two
possible approaches to take on the "large margin" principle for ranking learning:
"fixed margin" strategy: the margin to be maximized is the one defined by the closest
(neighboring) pair of classes. Formally, let (w, b_q) be the hyperplane separating the two
pairs of classes which are the closest among all the neighboring pairs of classes. Let w, b_q
be scaled such that the distance of the boundary points from the hyperplane is 1, i.e., the margin
between the classes q, q + 1 is 2/‖w‖ (see Fig. 1, left-hand display). Thus, the fixed-margin
policy for ranking learning is to find the direction w and the scalars b_1, …, b_{k−1} such that
w·w is minimized (i.e., the margin between classes q, q + 1 is maximized) subject to the
separability constraints (modulo margin errors in the non-separable case).
"sum of margins" strategy: the sum of all k − 1 margins is to be maximized. In this case,
the margins are not necessarily equal (see Fig. 1, right-hand display). Formally, the ranking
rule employs a vector w, ‖w‖ = 1, and a set of 2(k − 1) thresholds a_1 ≤ b_1 ≤ a_2 ≤ b_2 ≤
… ≤ a_{k−1} ≤ b_{k−1} such that w·x_i^j ≤ a_j and w·x_i^{j+1} ≥ b_j for j = 1, …, k − 1. In
other words, all the examples of class 1 ≤ j ≤ k are "sandwiched" between two parallel
hyperplanes (w, a_j) and (w, b_{j−1}), where b_0 = −∞ and a_k = ∞. The k − 1 margins are
therefore (b_j − a_j) and the large margin principle is to maximize Σ_j (b_j − a_j) subject to
the separability constraints above.
It is also fairly straightforward to apply the SRM principle and derive the bounds on the
actual risk functional - see [11] for details.
In the remainder of this paper we will introduce the algorithmic implications of these two
strategies for implementing the large margin principle for ranking learning. The fixed-margin principle will turn out to be a direct generalization of the Support Vector Machine
(SVM) algorithm, in the sense that substituting k = 2 in our proposed algorithm would
produce the dual functional underlying conventional SVM. It is interesting to note that the
sum-of-margins principle reduces to ν-SVM (introduced by [10] and later [2]) when k = 2.
2 Fixed Margin Strategy
Recall that in the fixed-margin policy (w, b_q) is a "canonical" hyperplane normalized such
that the margin between the closest classes q, q + 1 is 2/‖w‖. The index q is of course
unknown. The unknown variables w, b_1 ≤ … ≤ b_{k−1} (and the index q) can be solved for
in a two-stage optimization problem: a Quadratic Linear Programming (QLP) formulation
followed by a Linear Programming (LP) formulation.
The (primal) QLP formulation of the ("soft margin") fixed-margin policy for ranking learning takes the form:

min  ½ w·w + C Σ_j Σ_i (ε_i^j + ε_i^{*j+1})    (2)

subject to

w·x_i^j − b_j ≤ −1 + ε_i^j    (3)
w·x_i^{j+1} − b_j ≥ 1 − ε_i^{*j+1}    (4)
ε_i^j ≥ 0,  ε_i^{*j+1} ≥ 0    (5)

where j = 1, …, k − 1 and i = 1, …, i_j, and C is some predefined constant. The scalars ε_i^j
and ε_i^{*j+1} are positive for data points which are inside the margins or placed on the wrong
side of the respective hyperplane. Since the margin is maximized while maintaining separability, it will be governed by the closest pair of classes, because otherwise the separability
conditions would cease to hold (modulo the choice of the constant C, which would trade off
the margin size against possible margin errors - but that is discussed later).
The solution to this optimization problem is given by the saddle point of the Lagrange
functional (Lagrangian):

L(·) = ½ w·w + C Σ_{i,j} (ε_i^j + ε_i^{*j+1}) + Σ_{i,j} λ_i^j (w·x_i^j − b_j + 1 − ε_i^j)
       + Σ_{i,j} δ_i^j (b_j + 1 − ε_i^{*j+1} − w·x_i^{j+1}) − Σ_{i,j} ζ_i^j ε_i^j − Σ_{i,j} ζ_i^{*j+1} ε_i^{*j+1}

where j = 1, …, k − 1, i = 1, …, i_j, and ζ_i^j, ζ_i^{*j+1}, λ_i^j, δ_i^j are all non-negative Lagrange
multipliers. Since the primal problem is convex, there exists a strong duality between the
primal and dual optimization functions. By first minimizing the Lagrangian with respect
to w, b_j, ε_i^j, ε_i^{*j+1} we obtain the dual optimization function, which then must be maximized
with respect to the Lagrange multipliers. From the minimization of the Lagrangian with
respect to w we obtain:

w = −Σ_{i,j} λ_i^j x_i^j + Σ_{i,j} δ_i^j x_i^{j+1}    (6)
That is, the direction w of the parallel hyperplanes is described by a linear combination
of the support vectors x associated with the non-vanishing Lagrange multipliers. From the
Kuhn-Tucker theorem the support vectors are those vectors for which equality is achieved
in the inequalities (3, 4). These vectors lie on the two boundaries between the adjacent
classes q, q + 1 (and other adjacent classes which have the same margin). From the minimization of the Lagrangian with respect to b_j we obtain the constraint:

Σ_i λ_i^j = Σ_i δ_i^j    (7)

and the minimization with respect to ε_i^j and ε_i^{*j+1} yields the constraints:

C − λ_i^j − ζ_i^j = 0,  C − δ_i^j − ζ_i^{*j+1} = 0    (8)

which in turn give rise to the constraints 0 ≤ λ_i^j ≤ C, where λ_i^j = C if the corresponding
data point is a margin error (ζ_i^j = 0, thus from the Kuhn-Tucker theorem ε_i^j > 0), and
likewise for δ_i^j. Note that a data point can count twice as a margin error - once with
respect to the class on its "left" and once with respect to the class on its "right".
For the sake of presenting the dual functional in a compact form, we will introduce some
new notations. Let X^j be the n × i_j matrix whose columns are the data points x_i^j,
i = 1, …, i_j. Let λ^j = (λ_1^j, …, λ_{i_j}^j)^T be the vector whose components are the Lagrange
multipliers λ_i^j corresponding to class j. Likewise, let δ^j = (δ_1^j, …, δ_{i_{j+1}}^j)^T be the Lagrange
multipliers δ_i^j corresponding to class j + 1. Let μ = (λ^1, …, λ^{k−1}, δ^1, …, δ^{k−1})^T be the
vector holding all the λ_i^j and δ_i^j Lagrange multipliers, and let μ_1 = (μ_1^1, …, μ_1^{k−1})^T =
(λ^1, …, λ^{k−1})^T and μ_2 = (μ_2^1, …, μ_2^{k−1})^T = (δ^1, …, δ^{k−1})^T be the first and second halves of
μ. Note that μ_1^j = λ^j is a vector, and likewise so is μ_2^j = δ^j. Let 1 be the vector of 1's and,
finally, let Q be the matrix holding two copies of the training data:

Q = [−X^1, …, −X^{k−1}, X^2, …, X^k]    (9)

where N = 2l − i_1 − i_k. For example, (6) becomes in the new notations w = Qμ.
By substituting the expression w = Qμ back into the Lagrangian and taking into
account the constraints (7, 8) one obtains the dual functional, which should be maximized
with respect to the Lagrange multipliers μ_i:

max_μ  Σ_{i=1}^N μ_i − ½ μ^T Q^T Q μ    (10)

subject to

0 ≤ μ_i ≤ C,  i = 1, …, N    (11)
1·μ_1^j = 1·μ_2^j,  j = 1, …, k − 1    (12)

Note that when k = 2, i.e., we have only two classes and the ranking learning problem is equivalent to the 2-class classification problem, the dual functional reduces to, and becomes equivalent to, the dual form of conventional SVM. In that case (Q^T Q)_{ij} = y_i y_j x_i·x_j, where
y_i, y_j = ±1 denote the class membership.
Also worth noting is that since the dual functional is a function of the Lagrange multipliers
λ_i^j and δ_i^j alone, the problem size (the number of unknown variables) is equal to twice the
number of training examples - precisely N = 2l − i_1 − i_k, where l is the number of training
examples. This compares favorably to the O(l²) required by the recent SVM approach to
ordinal regression introduced in [7] or the kl required by the general multi-class approach
to SVM [4, 8].

Further note that since the entries of Q^T Q are the inner-products of the training examples,
they can be represented by the kernel inner-product in the input space dimension rather than
by inner-products in the feature space dimension. The decision rule, in this case, given a
new instance vector x, would be the rank r corresponding to the first smallest threshold b_r
for which

Σ_{support vectors} δ_i^j K(x_i^{j+1}, x) − Σ_{support vectors} λ_i^j K(x_i^j, x) < b_r

where K(x, y) = φ(x)·φ(y) replaces the inner-products in the higher-dimensional "feature" space φ(x).
Finally, from the dual form one can solve for the Lagrange multipliers μ_i and in turn obtain
w = Qμ, the direction of the parallel hyperplanes. The scalar b_q (separating the adjacent
classes q, q + 1 which are the closest apart) can be obtained from the support vectors, but
the remaining scalars b_j cannot. Therefore an additional stage is required, which amounts
to a Linear Programming problem on the original primal functional (2), but this time w is
already known (thus making this a linear problem instead of a quadratic one).
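The LP stage itself is not spelled out above. For intuition only, in the separable case each remaining threshold b_j can simply be placed between the projections of adjacent classes along the known direction w. The sketch below is that simplification, not the paper's LP; names and toy data are illustrative.

```python
def midpoint_thresholds(w, classes):
    """Given a direction w and training examples grouped by rank
    (classes[j] is a list of vectors of rank j+1), place each b_j
    midway between adjacent classes along w. This assumes the
    projections are already separable; the paper solves an LP instead."""
    proj = [[sum(wi * xi for wi, xi in zip(w, x)) for x in cls]
            for cls in classes]
    return [(max(proj[j]) + min(proj[j + 1])) / 2.0
            for j in range(len(classes) - 1)]

classes = [[[-2.0], [-1.0]], [[0.5], [1.0]], [[3.0], [4.0]]]
print(midpoint_thresholds([1.0], classes))  # [-0.25, 2.0]
```

With margin errors present the midpoint rule breaks down, which is exactly why the extra LP pass over the primal (2) is needed.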
3 Sum-of-Margins Strategy
In this section we propose an alternative large-margin policy which allows for k − 1 margins, where the criterion function maximizes their sum. The challenge in formulating
the appropriate optimization functional is that one cannot adopt the "pre-scaling" of w approach which is at the center of the conventional SVM formulation and of the fixed-margin
policy for ranking learning described in the previous section.

The approach we take is to represent the primal functional using 2(k − 1) parallel hyperplanes instead of k − 1. Each class would be "sandwiched" between two hyperplanes
(except the first and last classes). Formally, we seek a ranking rule which employs a vector
w and a set of 2(k − 1) thresholds a_1 ≤ b_1 ≤ a_2 ≤ b_2 ≤ … ≤ a_{k−1} ≤ b_{k−1} such
that w·x_i^j ≤ a_j and w·x_i^{j+1} ≥ b_j for j = 1, …, k − 1. In other words, all the examples of class 1 ≤ j ≤ k are "sandwiched" between two parallel hyperplanes (w, a_j) and
(w, b_{j−1}), where b_0 = −∞ and a_k = ∞.
The margin between the two hyperplanes separating class j and j + 1 is (b_j − a_j)/√(w·w).
Thus, by setting the magnitude of w to unit length (as a constraint in the optimization
problem), the margin which we would like to maximize is Σ_j (b_j − a_j) for j = 1, …, k − 1,
which we can formulate in the following primal QLP (see also Fig. 1, right-hand display):

min  Σ_{j=1}^{k−1} (a_j − b_j) + C Σ_j Σ_i (ε_i^j + ε_i^{*j+1})    (13)

subject to

a_j ≤ b_j,  j = 1, …, k − 1;  b_j ≤ a_{j+1},  j = 1, …, k − 2    (14)
w·x_i^j ≤ a_j + ε_i^j    (15)
w·x_i^{j+1} ≥ b_j − ε_i^{*j+1}    (16)
w·w ≤ 1,  ε_i^j ≥ 0,  ε_i^{*j+1} ≥ 0    (17)
where j = 1, …, k − 1 (unless otherwise specified) and i = 1, …, i_j, and C is some predefined constant (whose physical role will be explained later). Note that the (non-convex)
constraint w·w = 1 is replaced by the convex constraint w·w ≤ 1, since it can be shown
that the optimal solution w* would have unit magnitude in order to optimize the objective
function (see [11] for details). We will proceed to derive the dual functional below.
The Lagrangian takes the following form:

L(·) = Σ_{j=1}^{k−1} (a_j − b_j) + C Σ_{i,j} (ε_i^j + ε_i^{*j+1}) + Σ_{j=1}^{k−1} ξ_j (a_j − b_j) + Σ_{j=1}^{k−2} η_j (b_j − a_{j+1})
       + Σ_{i,j} λ_i^j (w·x_i^j − a_j − ε_i^j) + Σ_{i,j} δ_i^j (b_j − ε_i^{*j+1} − w·x_i^{j+1})
       + α(w·w − 1) − Σ_{i,j} ζ_i^j ε_i^j − Σ_{i,j} ζ_i^{*j+1} ε_i^{*j+1}

where j = 1, …, k − 1 (unless otherwise specified), i = 1, …, i_j, and
ζ_i^j, ζ_i^{*j+1}, λ_i^j, δ_i^j, ξ_j, η_j, α are all non-negative Lagrange multipliers. Due to lack of space
we will omit further derivations (those can be found in [11]) and move directly to the dual
functional, which takes the following form:
max_μ  −½ μ^T Q^T Q μ    (18)

subject to

0 ≤ μ_i ≤ C,  i = 1, …, N    (19)
1·μ_1^1 ≥ 1,  1·μ_2^{k−1} ≥ 1    (20)
1·μ_1 = 1·μ_2    (21)
where Q and μ are defined in the previous section. The direction w is represented by the
linear combination of the support vectors: w = Qμ/‖Qμ‖ where, following the Kuhn-Tucker theorem, μ_i > 0 for all vectors on the boundaries between the adjacent pairs of
classes and for margin errors. In other words, the vectors x associated with non-vanishing μ_i
are those which lie on the hyperplanes or are tagged as margin errors. Therefore, all
the thresholds a_j, b_j can be recovered from the support vectors - unlike the fixed-margin
scheme, which required another LP pass.
The dual functional (18) is similar to the dual functional (10), but with some crucial differences: (i) the quadratic criterion functional is homogeneous, and (ii) constraints (20) lead
to the constraint Σ_i μ_i ≥ 2. These two differences are also what distinguishes
conventional SVM from ν-SVM for 2-class learning, proposed recently by [10]. Indeed, if
we set k = 2 in the dual functional (18) we can conclude that the two dual
functionals are identical (by a suitable change of variables). Therefore, the role of the constant C complies with the findings of [10] by controlling the tradeoff between the number
of margin errors and support vectors and the size of the margins: 2/N ≤ C ≤ 2, such that
when C = 2 a single margin error is allowed (otherwise a duality gap would occur) and
when C = 2/N all vectors are allowed to become margin errors and support vectors (see
[11] for a detailed discussion on this point).
In the general case of k > 2 classes (in the context of ranking learning) the role of the
constant C carries the same meaning: C ≤ 2(k − 1)/#m.e., where #m.e. stands for the total
number of margin errors, thus

2(k − 1)/N ≤ C ≤ 2(k − 1).

Since a data point can count twice as a margin error, the total number of margin errors
in the worst case is N = 2l − i_1 − i_k, where l is the total number of data points.
Figure 2: The results of the fixed-margin principle plotted against the results of PRank of [5], which
does not use a large-margin principle. The average error of PRank is about 1.25, compared to 0.7 with
the fixed-margin algorithm.
4 Experiments
Due to lack of space we describe only two sets of experiments, conducted on a "collaborative filtering" problem and on visual data ranking. More details and further experiments are
reported in [11].
In general, the goal in collaborative filtering is to predict a person's rating on new items
such as movies given the person's past ratings on similar items and the ratings of other
people of all the items (including the new item). The ratings are ordered, such as "highly
recommended", "good" ,... , "very bad" thus collaborative filtering falls naturally under the
domain of ordinal regression (rather than general multi-class learning).
The "EachMovie" dataset [6] contains 1628 movies rated by 72,916 people, arranged as a
2D array whose columns represent the movies and whose rows represent the users - about
5% of the entries of this array are filled in with ratings between 1, …, 6, totaling 2,811,983
ratings. Given a new user, the ratings of the user on the 1628 movies (not all movies would
be rated) form the y_i, and the i'th column of the array forms the x_i, which together form the
training data (for that particular user). Given a new movie represented by the vector x of
ratings of all the other 72,916 users (not all the users rated the new movie), the learning
task is to predict the rating f(x) of the new user. Since the array contains empty entries, the
ratings were shifted by −3.5 to give the possible ratings {−2.5, −1.5, −0.5, 0.5, 1.5, 2.5},
which allows the value of zero to be assigned to the empty entries of the array (movies which
were not rated).
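The preprocessing just described can be sketched as follows; `center_ratings` is a hypothetical helper, and `None` stands in for an unrated entry:

```python
def center_ratings(ratings, missing=None):
    """Shift ratings 1..6 to -2.5..2.5 and map unrated entries to 0,
    mirroring the preprocessing described for the EachMovie data."""
    return [[0.0 if r is missing else r - 3.5 for r in row]
            for row in ratings]

raw = [[1, 6, None], [3, None, 4]]
print(center_ratings(raw))
# [[-2.5, 2.5, 0.0], [-0.5, 0.0, 0.5]]
```

The shift puts unrated movies at the neutral midpoint of the rating scale, so empty entries contribute nothing to the inner products used by the ranking algorithms.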
For the training phase we chose users who rated about 450 movies and selected a subset
{50, 100, …, 300} of those movies for training, testing the prediction on the remaining
movies. We compared our results (collected over 100 runs) - the average distance between the correct rating and the predicted rating - to the best "on-line" algorithm of [5],
called "PRank" (which makes no use of the large margin principle). In their work, PRank was compared to other known on-line approaches and was found to be superior, thus we limited our
comparison to PRank alone. Attempts to compare our algorithms to other known ranking
algorithms which use a large-margin principle ([7], for example) were not successful, since
those square the training set size, which made the experiment with the EachMovie dataset
computationally intractable.
The graph in Fig. 2 shows that the large margin principle makes a significant difference in
the results compared to PRank. The results we obtained with PRank are consistent with
the reported results of [5] (best average error of about 1.25), whereas our fixed-margin
algorithm provided an average error of about 0.7.
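The figure of merit used in this comparison, the average absolute distance between the correct and the predicted rating, can be computed as below. This is a generic sketch, not the authors' code; the helper name is illustrative.

```python
import numpy as np

def mean_rank_error(y_true, y_pred):
    """Average absolute distance between correct and predicted
    ratings, the criterion used to compare against PRank."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Predictions off by one rating level on half of the test movies:
err = mean_rank_error([1, 2, 3, 4], [1, 3, 3, 5])
```

Because the ratings are ordered, this metric penalizes a confusion between distant levels more than one between adjacent levels, unlike a plain misclassification count.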
We have applied our algorithms to classification of "vehicle type" to one of three classes:
"small" (passenger cars), "medium" (SUVs, minivans) and "large" (buses, trucks). There
Figure 3: Classification of vehicle type: Small, Medium and Large (see text for details).
is a natural order Small, Medium, Large, since making a mistake between Small and Large
is worse than confusing Small and Medium, for example. We compared the classification
error (counting the number of misclassifications) to general multi-class learning using
pair-wise SVM. The error over a test set of about 14,000 pictures was 20% compared to
25% when using general multi-class SVM. We also compared the error (averaging the
difference between the true rank {1, 2, 3} and the predicted rank using a 2nd-order kernel) to
PRank. The average error was 0.216 compared to 1.408 with PRank. Fig. 3 shows a typical
collection of correctly classified and incorrectly classified pictures from the test set.
References
[1] B.E. Boser, I.M. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the 5th ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.
[2] C.C. Chang and C.J. Lin. Training nu-Support Vector classifiers: Theory and Algorithms. Neural Computation, 14(8), 2002.
[3] W.W. Cohen, R.E. Schapire, and Y. Singer. Learning to order things. Journal of Artificial Intelligence Research (JAIR), 10:243-270, 1999.
[4] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265-292, 2001.
[5] K. Crammer and Y. Singer. Pranking with ranking. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2001.
[6] http://www.research.compaq.com/SRC/eachmovie/
[7] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, pp. 115-132, 2000.
[8] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines. Technical Report 1043, Univ. of Wisconsin, Dept. of Statistics, Sep. 2001.
[9] P. McCullagh and J.A. Nelder. Generalized Linear Models. Chapman and Hall, London, 2nd edition, 1989.
[10] B. Scholkopf, A. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms. Neural Computation, 12:1207-1245, 2000.
[11] A. Shashua and A. Levin. Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems. Technical Report 2002-39, Leibniz Center for Research, School of Computer Science and Eng., the Hebrew University of Jerusalem.
[12] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2nd edition, 1998.
[13] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In Proc. of the 7th European Symposium on Artificial Neural Networks, April 1999.
Meiosis Networks
Stephen Jose Hanson (1)
Learning and Knowledge Acquisition Group
Siemens Research Center
Princeton, NJ 08540
ABSTRACT
A central problem in connectionist modelling is the control of
network and architectural resources during learning. In the present
approach, weights reflect a coarse prediction history as coded by a
distribution of values and parameterized in the mean and standard
deviation of these weight distributions. Weight updates are a
function of both the mean and standard deviation of each
connection in the network and vary as a function of the error signal
("stochastic delta rule"; Hanson, 1990). Consequently, the weights
maintain information on their central tendency and their
"uncertainty" in prediction. Such information is useful in
establishing a policy concerning the size of the nodal complexity of
the network and growth of new nodes. For example, during
problem solving the present network can undergo "meiosis",
producing two nodes where there was one "overtaxed" node as
measured by its coefficient of variation. It is shown in a number of
benchmark problems that meiosis networks can find minimal
architectures, reduce computational complexity, and overall increase
the efficiency of the representation learning interaction.
(1) Also a member of the Cognitive Science Laboratory, Princeton University, Princeton, NJ 08542
1 INTRODUCTION
Search problems which involve high dimensionality, a-priori constraints and
nonlinearities are hard. Unfortunately, learning problems in biological systems
involve just these sorts of properties. Worse, one can characterize the sort of
problem that organisms probably encounter in the real world as those that do not
easily admit solutions that involve simple averaging, optimality, linear
approximation or complete knowledge of data or nature of the problem being solved.
We would contend there are three basic properties of real learning that result in an ill-defined set of problems and a heterogeneous set of solutions:
- Data are continuously available but incomplete; the learner must constantly
update parameter estimates with stingy bits of data which may represent a very
small sample from the possible population.
- Conditional distributions of response categories with respect to given features are
unknown and must be estimated from possibly unrepresentative samples.
- Local (in time) information may be misleading, wrong, or non stationary;
consequently there is a poor tradeoff between the present use of data and waiting
for more and possibly flawed data - consequently updates must be small and
revocable.
These sorts of properties represent only one aspect of the learning problem faced by
real organisms in real environments. Nonetheless, they underscore why "weak"
methods-methods that assume little about the environment in which they are
operating -are so critical.
1.1 LEARNING AND SEARCH
It is possible to precisely characterize the search problem in terms of the resources or
degrees of freedom in the learning model. If the task the learning system is to
perform is classification then the system can be analyzed in terms of its ability to
dichotomize stimulus points in feature space.
Dichotomization Capability: Network Capacity. Using a linear fan-in or hyperplane-type neuron we can characterize the degrees of freedom inherent in a network of
units with thresholded output. For example, with linear boundaries, consider 4
points, well distributed in a 2-dimensional feature space. There are exactly 14
linearly separable dichotomies that can be formed with the 4 target points. However,
there are actually 16 (2^4) possible dichotomies of 4 points in 2 dimensions;
consequently, the number of possible dichotomies or arbitrary categories that are
linearly implementable can be thought of as a capacity of the linear network in k
dimensions with n examples. The general category capacity measure (Cover, 1965)
can be written as:
C(n, k) = 2 \sum_{j=0}^{k} \frac{(n-1)!}{(n-1-j)!\, j!}, \qquad n > k + 1 \qquad (1)
Note the dramatic growth in C as a function of k, the number of feature dimensions,
for example, for 25 stimuli in a 5-dimensional feature space there are 100,670 linear
dichotomies. Underdetermination in these sorts of linear networks is the rule, not the
exception. This makes the search process and the nature of constraints on the search
process critical in finding solutions that may be useful in the given problem domain.
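The capacity measure in equation (1) is straightforward to evaluate numerically. The sketch below (the helper name is illustrative, not from the paper) reproduces the count of 14 linearly separable dichotomies for 4 well-distributed points in 2 dimensions quoted above.

```python
from math import comb

def category_capacity(n, k):
    """Cover's (1965) count of linearly implementable dichotomies of
    n points in general position in k dimensions, valid for n > k + 1:
        C(n, k) = 2 * sum_{j=0}^{k} (n-1)! / ((n-1-j)! j!)
    """
    return 2 * sum(comb(n - 1, j) for j in range(k + 1))

print(category_capacity(4, 2))   # 14 of the 16 possible dichotomies
```

Evaluating the same function for larger n and k shows the rapid growth in the number of implementable categories discussed in the text.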
1.2 THE STOCHASTIC DELTA RULE
Actual mammalian neural systems involve noise. Responses from the same individual
unit in isolated cortex due to cyclically repeated identical stimuli will never result in
identical bursts. Transmission of excitation through neural networks in living systems
is essentially stochastic in nature. The typical activation function used in
connectionist models must be assumed to be an average over many intervals, since
any particular neuronal pulse train appears quite random [in fact, Poisson; for
example see Burns, 1968; Tomko & Crapper, 1974].
This suggests that a particular neural signal in time may be modeled by a
distribution of synaptic values rather than a single value. Further, this sort of
representation provides a natural way to affect the synaptic efficacy in time. In order
to introduce noise adaptively, we require that the synaptic modification be a function
of a random increment or decrement proportional in size to the present error signal.
Consequently, the weight delta or gradient itself becomes a random variable based on
prediction performance. Thus, the noise that seems ubiquitous and apparently
useless throughout the nervous system can be turned to at least three advantages in
that it provides the system with mechanisms for (1) entertaining multiple response
hypotheses given a single input (2) maintaining a coarse prediction history that is
local, recent, and cheap, thus providing punctate credit assignment opportunities and
finally, (3) revoking parameterizations that are easy to reach, locally stable, but
distant from a solution.
Although it is possible to implement the present principle a number of different ways
we chose to consider a connection strength to be represented as a distribution of
weights with a finite mean and variance (see Figure 1).
Figure 1: Weights as Sampling Distributions
A forward activation or recognition pass consists of randomly sampling a weight
from the existing distribution, calculating the dot product, and producing an output
for that pass.
x_i = \sum_j w^*_{ij} y_j \qquad (2)
where the sample is found from,
S(w_{ij} = w^*_{ij}) = \mu_{w_{ij}} + \sigma_{w_{ij}} \, \phi(w_{ij}; 0, 1) \qquad (3)
Consequently S(w_{ij} = w^*_{ij}) is a random variable constructed from a finite mean \mu_{w_{ij}}
and standard deviation \sigma_{w_{ij}}, based on a normal random variate (\phi) with mean zero
and standard deviation one. Forward recognition passes are therefore one-to-many
mappings, each sampling producing a different weight depending on the mean and
mappings, each sampling producing a different weight depending on the mean and
standard deviation of the particular connection while the system remains stochastic.
In the present implementation there are actually three separate equations for
learning. The mean of the weight distribution is modified as a function of the usual
gradient based upon the error, however, note that the random sample point is
retained for this gradient calculation and is used to update the mean of the
distribution for that synapse.
\mu_{w_{ij}}(n+1) = \alpha \left( -\frac{\partial E}{\partial w^*_{ij}} \right) + \mu_{w_{ij}}(n) \qquad (4)
Similarly the standard deviation of the weight distribution is modified as a function
of the gradient, however, the sign of the gradient is ignored and the update can only
increase the variance if an error results. Thus errors immediately increase the
variance of the synapse to which they may be attributed.
\sigma_{w_{ij}}(n+1) = \beta \left| \frac{\partial E}{\partial w^*_{ij}} \right| + \sigma_{w_{ij}}(n) \qquad (5)
A third and final learning rule determines the decay of the variance of synapses in
the network,
\sigma_{w_{ij}}(n+1) = \zeta \, \sigma_{w_{ij}}(n), \qquad \zeta < 1 \qquad (6)
As the system evolves for ζ less than one, the last equation of this set guarantees that
the variances of all synapses approach zero and that the system itself becomes
deterministic prior to solution. For small ζ the system evolves very rapidly to
deterministic, while larger values of ζ allow the system to revisit chaotic states as needed
during convergence. A simpler implementation of this algorithm involves just the
gradient itself as a random variable (hence, the name "stochastic delta rule"),
however this approach confounds the growth in variance of the weight distribution
with the decay and makes parametric studies more complicated to implement.
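A minimal sketch of one stochastic-delta-rule step for a single connection, following equations (2) through (6). The function names, the default parameter values, and the order in which the variance increment (5) and the decay (6) are applied within a step are illustrative assumptions, not specified by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sdr_sample(mu, sigma):
    # Eq. (3): draw the weight used on this forward pass from a normal
    # distribution with the connection's current mean and std. deviation.
    return mu + sigma * rng.standard_normal(size=np.shape(mu))

def sdr_update(mu, sigma, grad, alpha=0.5, beta=0.3, zeta=0.98):
    # Eq. (4): the mean follows the negative error gradient.
    mu = mu - alpha * grad
    # Eq. (5): the std. deviation grows with the gradient magnitude,
    # so errors immediately increase the synapse's variance.
    sigma = sigma + beta * np.abs(grad)
    # Eq. (6): geometric decay drives the system toward determinism
    # once errors stop occurring.
    sigma = zeta * sigma
    return mu, sigma

mu, sigma = 0.0, 1.0
w = sdr_sample(mu, sigma)              # stochastic weight for one pass
mu, sigma = sdr_update(mu, sigma, grad=0.2)
```

Repeating sample and update over many passes implements the local, prediction-history-dependent annealing described next.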
The stochastic delta rule implements a local, adaptive simulated annealing (cf.
Kirkpatrick, S., Gelatt, C. D. & Vecchi, M., 1983) process occurring at different rates
in the network dependent on prediction history. Various benchmark tests of this
basic algorithm are discussed in Hanson (1990).
1.3 MEIOSIS
In the SDR rule discussed above, the standard deviation of the weight distributions
might be seen as an uncertainty measure concerning the weight value and strength.
Consequently, changes in the standard deviation can be taken as a measure of the
"prediction value" of the connection. Hidden units with significant uncertainty have
low prediction value and are performing poorly in reducing errors. If hidden unit
uncertainty increases beyond the cumulative weight value or "signal" to that unit
then the complexity of the architecture can be traded off with the uncertainty per
unit. Consequently, the unit "splits" into two units each copying half the architecture
information to each of the new two units.
Networks are initialized with a random mean and variance values (where the
variance is started in the interval (10,-10)). Number of hidden units in all problems
was initialized at one. The splitting policy is fixed for all problems to occur when
both the C.V. (standard deviation relative to the mean) for the input and output to
the hidden unit exceeds 100%, that is, when the composite variance of the connection
strengths is 100% of the composite mean value of the connection strengths:
\frac{\sum_i \sigma_{ij}}{\sum_i \mu_{ij}} > 1.0 \quad \mathrm{and} \quad \frac{\sum_k \sigma_{jk}}{\sum_k \mu_{jk}} > 1.0
Meiosis then proceeds as follows (see Figure 2):
- A forward stochastic pass is made, producing an output.
- The output is compared to the target, producing errors which are then used to update
the mean and variance of each weight.
- The composite input and output variances and means are computed for each
hidden unit.
- For those hidden units whose composite C.V.s are > 1.0, node splitting occurs;
half the variance is assigned to each new node, with a jittered mean centered at
the old mean.
Figure 2: Meiosis
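The splitting test and the meiosis step above can be sketched as follows. The helper names are hypothetical; taking absolute values of the means in the C.V. denominators, and halving the variance (so each daughter's standard deviation shrinks by a factor of sqrt(2)), are implementation assumptions made here.

```python
import numpy as np

def should_split(mu_in, sigma_in, mu_out, sigma_out):
    # Composite coefficient of variation of the unit's incoming and
    # outgoing weights; absolute means keep negative weights from
    # cancelling in the denominator (an assumption of this sketch).
    cv_in = np.sum(sigma_in) / np.sum(np.abs(mu_in))
    cv_out = np.sum(sigma_out) / np.sum(np.abs(mu_out))
    return bool(cv_in > 1.0 and cv_out > 1.0)

def meiose(mu, sigma, rng, jitter=0.01):
    # Each daughter unit receives half the variance (std. deviation
    # divided by sqrt(2)) and a mean jittered around the old mean.
    def child():
        return (mu + jitter * rng.standard_normal(mu.shape),
                sigma / np.sqrt(2.0))
    return child(), child()

rng = np.random.default_rng(1)
if should_split(np.array([1.0, -1.0]), np.array([1.5, 1.0]),
                np.array([2.0]), np.array([2.5])):
    (m1, s1), (m2, s2) = meiose(np.array([1.0, 2.0]),
                                np.array([0.8, 0.4]), rng)
```

With no separate stopping criterion, splitting simply ceases once every unit's composite C.V. stays below 1.0.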
There is no stopping criterion. The network stops creating nodes based on the
prediction error and noise level (β, ζ).
1.4 EXAMPLES
1.4.1 Parity Benchmark: Finding the Right Number of Units
Small parity problems (Exclusive-or and 3BIT parity) were used to explore sensitivity
of the noise parameters on node splitting and to benchmark the method. All runs
were with fixed learning rate (η = .5) and momentum (α = .75). Low values of
zeta (< .7) produce minimal or no node splitting, while higher values (> .99) seem to
produce continuous node splitting without regard to the problem type. Zeta was fixed
(.98) and beta, the noise-per-step parameter, was varied between values .1 and .5.
The following runs were unaffected by varying beta between these two values.
[Histogram panels; panel annotations: mean = 4.1 and mean = 20.]
Figure 3: Number of Hidden Units at Convergence
Shown in Figure 3 are 50 runs of Exclusive-or and 50 runs of 3 BIT PARITY.
Histograms show for exclusive-or that almost all runs (>95%) ended up with 2
hidden units while for the 3BIT PARITY case most runs produce 3 hidden units,
however with considerably more variance, some ending with 2 while a few runs
ended with as many as 9 hidden units. The next figure (Figure 4) shows histograms for
Figure 4: Convergence Times
the convergence time showing a slight advantage in terms of convergence for the
meiosis networks for both exclusive-or and 3 BIT PARITY.
1.4.2 Blood NMR Data: Nonlinear Separability
In Figure 5, data were taken from 10 different continuous kinds of blood
measurements, including total lipid content, cholesterol (mg/dl), high-density lipids,
low-density lipids, triglycerides, etc., as well as some NMR measures. Subjects were
previously diagnosed for presence (C) or absence (N) of a blood disease.
[Scatter plot of the data projected onto the first two discriminant variables; horizontal axis: first discriminant variable.]
Figure 5: Blood NMR Separability
The data consisted of 238 samples, 146 Ns and 92 Cs. Shown in the adjoining figure
is a Perceptron (linear discriminant analysis) response to the data. Each original
data point is projected onto the first two discriminant variables, showing about 75%
of the data to be linearly separable (k - k/2 jackknife tests indicate about 52% transfer
rates). However, also shown is a rough non-linear envelope around one class of
subjects (N), showing the potentially complex decision region for this data.
1.4.3 Meiosis Learning Curves
Data was split into two groups (118,120) for learning and transfer tests. Learning
curves for both the meiosis network and standard back-propagation are shown in the
Figure 6. Also shown in this display is the splitting rate for the meiosis network,
showing it grow to 7 hidden units and freezing during the first 20 sweeps.
Figure 6: Learning Curves and Splitting Rate
1.4.4 Transfer Rate
Backpropagation was run on the blood data with 0 (perceptron), 2, 3, 4, 5, 6, 7, and
20 hidden units. Shown is the median transfer rate of 3 runs for each hidden unit
network size. The transfer rate seemed to hover near 65% as the number of hidden
units approached 20. A meiosis network was also run 3 times on the data (using
β = .40 and ζ = .98). The transfer rate, shown in Figure 7, was always above 70% at
the 7 hidden unit number.
Figure 7: Transfer Rate as a Function of Hidden Unit Number
1.5 Conclusions
The key property of the present scheme is the integration of representational aspects
that are sensitive to network prediction and at the same time control the
architectural resources of the network. Consequently, with Meiosis networks it is
possible to dynamically and opportunistically control network complexity and
therefore indirectly its learning efficiency and generalization capacity. Meiosis
Networks were defined upon earlier work using local noise injections and noise
related learning rules. As learning proceeds the meiosis network can measure the
prediction history of particular nodes and, if it is found to be poor, can split the node
opportunistically to increase the resources of the network. Further experiments are
required in order to understand different advantages of splitting policies and their
effects on generalization and speed of learning.
References
Burns, B. D. The Uncertain Nervous System. London: Edward Arnold Ltd, 1968.
Cover, T. M. Geometrical and statistical properties of systems of linear inequalities with applications to pattern recognition. IEEE Trans. Electronic Computers, EC-14(3), pp. 326-334, 1965.
Hanson, S. J. A stochastic version of the delta rule. Physica D, 1990.
Hanson, S. J. & Burr, D. J. Minkowski back-propagation: learning in connectionist models with non-Euclidean error signals. Neural Information Processing Systems, American Institute of Physics, 1988.
Hanson, S. J. & Pratt, L. A comparison of different biases for minimal network construction with back-propagation. Advances in Neural Information Processing, D. Touretzky (ed.), Morgan-Kaufmann, 1989.
Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. Optimization by simulated annealing. Science, 220, 671-680, 1983.
Tomko, G. J. & Crapper, D. R. Neural variability: non-stationary responses to identical visual stimuli. Brain Research, 79, pp. 405-418, 1974.
An Estimation-Theoretic Framework for
the Presentation of Multiple Stimuli
Christian W. Eurich*
Institute for Theoretical Neurophysics
University of Bremen
Otto-Hahn-Allee 1
D-28359 Bremen, Germany
[email protected]
Abstract
A framework is introduced for assessing the encoding accuracy and
the discriminational ability of a population of neurons upon simultaneous presentation of multiple stimuli. Minimal square estimation errors are obtained from a Fisher information analysis in an
abstract compound space comprising the features of all stimuli.
Even for the simplest case of linear superposition of responses and
Gaussian tuning, the symmetries in the compound space are very
different from those in the case of a single stimulus. The analysis
allows for a quantitative description of attentional effects and can
be extended to include neural nonlinearities such as nonclassical
receptive fields.
1 Introduction
An important issue in the Neurosciences is the investigation of the encoding properties of neural populations from their electrophysiological properties such as tuning
curves, background noise, and correlations in the firing. Many theoretical studies
have used estimation theory, in particular the measure of Fisher information, to account for the neural encoding accuracy with respect to the presentation of a single
stimulus (e. g., [1, 2, 3, 4, 5]).
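For the single-stimulus setting referenced here, the standard Fisher-information calculation can be sketched as follows, assuming independent Poisson spike counts and Gaussian tuning curves. This is a generic illustration of the cited modeling approach, not the compound-space analysis developed in this paper; all names and parameter values are illustrative.

```python
import numpy as np

def fisher_info(x, centers, width, f_max, T=1.0):
    # Gaussian tuning curves f_i(x) = f_max * exp(-(x - c_i)^2 / (2 w^2)).
    f = f_max * np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))
    # Their derivatives with respect to the stimulus value x.
    df = -(x - centers) / width ** 2 * f
    # Fisher information of independent Poisson neurons, window length T:
    # J(x) = T * sum_i f_i'(x)^2 / f_i(x).
    return T * np.sum(df ** 2 / f)

centers = np.linspace(-10.0, 10.0, 21)   # preferred stimuli of the population
J = fisher_info(0.3, centers, width=1.0, f_max=20.0)
crb = 1.0 / J   # Cramer-Rao lower bound on the estimator variance
```

The reciprocal of J bounds the variance of any unbiased estimator of x, which is what makes Fisher information a convenient, estimator-independent measure of encoding accuracy.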
Most modeling studies, however, neglect the fact that in a natural situation, neural
activity results from multiple objects or even complex sensory scenes. In particular,
attention experiments require the presentation of at least one distractor along with
the attended stimulus. Electrophysiological data are now available demonstrating
effects of selective attention on neural firing behavior in various cortical areas [6,
7, 8]. Such experiments require the development of theoretical tools which deviate
from the usual practice of considering only single stimuli in the analysis. Zemel
et al. [9] employ an extended encoding scheme for stimulus distributions and use
Bayesian decoding to account for the presentation of multiple objects. Similarly,
Bayesian estimation has been used in the context of attentional phenomena [10].
* homepage: http://www-neuro.physik.uni-bremen.de/~eurich
In this paper, a new estimation-theoretic framework for the simultaneous presentation of multiple stimuli is introduced. Fisher information is employed to compute
lower bounds for the encoding error and the discriminational ability of neural populations independent of a particular estimator. Here we focus on the simultaneous
presentation of two objects in the context of attentional phenomena. Furthermore,
we assume a linearity in the neural response for reasons of analytical tractability;
however, the method can be extended to include neural nonlinearities.
2 Estimation Theory for Multiple Stimuli

2.1 Tuning Curves in Compound Space
The tuning curve f(X) of a neuron is defined to be the average neural response to repetitive presentations of stimulus configurations X. In most cases, the response is taken to be the number n(X) of action potentials occurring within some time interval τ after stimulus presentation, or the neural firing rate r(X) = n(X)/τ:

f(X) = ⟨r(X)⟩ = ⟨n(X)⟩/τ.   (1)

Within an estimation-theoretic framework, the variability of the neural response is described by a probability distribution conditioned on the value of X, P(n; X). The average ⟨·⟩ in (1) can be regarded either as an average over multiple presentations of the same stimulus configuration (in an experimental setup), or as an average over n (in a theoretical description).
In most electrophysiological experiments, tuning curves are assessed through the presentation of a single stimulus, X = x, such as a bar or a grating characterized by a single orientation, or a dot of light at a specific position in the animal's visual field (e.g., [11, 12]). Such tuning curves will be denoted by f1(x), where the subscript refers to the single object.
The behavior of a neuron upon presentation of multiple objects, however, cannot be inferred from tuning curves f1(x). Instead, neurons may show nonlinearities such as the so-called non-classical receptive fields in the visual area V1 which have attracted much attention in the recent past (e.g., [13, 14]). For M simultaneously presented stimuli, X = x1, ..., xM, the neuronal tuning curve can be written as a function fM(x1, ..., xM), where the subscript M is not necessarily a parameter of the function but an indicator of the number of stimuli it refers to. The domain of this function will be called the compound space of the stimuli.
In the following, we consider a specific example consisting of two simultaneously presented stimuli, characterized by a single physical property (such as orientation or direction of movement). The resulting tuning function is therefore a function of two scalar variables x1 and x2: f2(x1, x2) = ⟨r(x1, x2)⟩ = ⟨n(x1, x2)⟩/τ. Figure 1 visualizes the concept of the compound space.
In order to obtain analytical access to the encoding properties of a neural population, we will furthermore assume that a neuron's response f2(x1, x2) is a linear superposition of the single-stimulus responses f1(x1) and f1(x2), i.e.,

f2(x1, x2) = k f1(x1) + (1 − k) f1(x2),   (2)

where 0 < k < 1 is a factor which scales the relative importance of the two stimuli.
Such linear behavior has been observed in area 17 of the cat upon presentation
of bi-vectorial transparent motion stimuli [15] and in areas MT and MST of the
macaque monkey upon simultaneous presentation of two moving objects [16]. In general, however, the compound space method is not restricted to linear neural responses.

Figure 1: The concept of compound space. A single-stimulus tuning curve f1(x) (left) yields the average response to the presentation of either x′ or x″; the simultaneous presentation of x′ and x″, however, can be formalized only through a tuning curve f2(x1, x2) (right).
The consideration of a neural population in the compound space yields tuning
properties and symmetries which are very different from those in a D-dimensional
single-stimulus space considered in the literature (e. g., [2, 3, 4]). First, the tuning
curves have a different appearance. Figure 2a shows a tuning curve f2 (x1 , x2 ) given
by (2), where f1 (x) is a Gaussian,
f1(x) = F exp( −(x − c)² / (2σ²) );   (3)
F is a gain factor which can be scaled to be the maximal firing rate of the neuron.
f2(x1, x2) is not radially symmetric but has cross-shaped level curves.

Figure 2: (a) A tuning curve f2(x1, x2) in a 2-dimensional compound space given by (2) and (3) with k = 0.5, c = 5, σ = 0.3, F = 1. (b) Arrangement of tuning curves: the centers of the tuning curves are restricted to the diagonal x1 = x2. The cross is a schematic cross-section of the tuning curve in (a).

Second, a single-stimulus tuning curve f1(x) whose center is located at x = c yields a linear
superposition whose center is given by the vector (c, c) in the compound space.
This is due to the fact that both axes describe the same physical stimulus feature.
Therefore, all tuning curve centers are restricted to the 1-dimensional subspace
x1 = x2 . The tuning curve centers are assumed to have a distribution in the
compound space which can be written as
ρ̃(c1, c2) = 0 if c1 ≠ c2,  and  ρ̃(c1, c2) = ρ(c) if c1 = c2 = c.   (4)
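The linear-superposition tuning curve of Eqs. (2) and (3) is simple to evaluate numerically. The sketch below is not from the paper; the parameter values k = 0.5, c = 5, σ = 0.3, F = 1 are taken from the caption of Fig. 2a. It shows the cross-shaped structure: the response stays at half its peak whenever either stimulus alone sits at the preferred value c.

```python
import math

def f1(x, F=1.0, c=5.0, sigma=0.3):
    # Single-stimulus Gaussian tuning curve, Eq. (3).
    return F * math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def f2(x1, x2, k=0.5):
    # Linear superposition for two simultaneous stimuli, Eq. (2).
    return k * f1(x1) + (1.0 - k) * f1(x2)

peak = f2(5.0, 5.0)   # both stimuli at the preferred value c
arm = f2(5.0, 8.0)    # only stimulus 1 at c: one "arm" of the cross
off = f2(8.0, 8.0)    # both stimuli far from c
print(round(peak, 3), round(arm, 3), round(off, 6))
```

The level sets of f2 are unions of the two Gaussian ridges x1 ≈ c and x2 ≈ c, which is exactly the cross shape visible in Fig. 2a.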
The geometrical features in the compound space suggest that an estimation-theoretic approach will yield encoding properties of neural populations which are different from those obtained from the presentation of a single stimulus.
2.2 Fisher Information
In order to assess the encoding accuracy of a neural population, the stochasticity
of the neural response is taken into account. For N neurons, it is formalized as the
probability of obtaining n^(i) spikes in the i-th neuron (i = 1, ..., N) as a response to the stimulus configuration X, P(n^(1), n^(2), ..., n^(N); X) ≡ P(n; X), where n denotes the vector of spike counts. Here we assume independent spike generation mechanisms in the neurons:

P(n^(1), n^(2), ..., n^(N); X) = ∏_{i=1}^{N} P(n^(i); X).   (5)
These parameter-dependent distributions are obtained either experimentally or
through a noise model; a convenient choice for the latter is a Poisson distribution
with a spike count average given by the tuning curve (1) of each neuron.
In the 2-dimensional compound space discussed in the previous section, P(n; X) ≡ P(n; x1, x2). The Fisher information is a 2 × 2 matrix J(x1, x2) = (Jij(x1, x2)), i, j ∈ {1, 2}, whose entries are given by

Jij(x1, x2) = ⟨ (∂/∂xi ln P(n; x1, x2)) (∂/∂xj ln P(n; x1, x2)) ⟩,  i, j ∈ {1, 2}.   (6)

The Cramér–Rao inequality states that a lower bound on the expected square estimation error of the i-th feature, ε²_{i,min} (i = 1, 2), is given by (J⁻¹)_{ii} provided that the estimator is unbiased. In the following, this lower bound is studied in the 2-dimensional compound space.
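Definition (6) can be checked numerically for a single neuron: for Poisson spike counts with mean λ(x) = τ f(x), the score is (n/λ − 1) λ′(x), and its expected square equals the analytic Fisher information λ′(x)²/λ(x). The sketch below is not from the paper; the Knuth-style Poisson sampler and all parameter values are illustrative assumptions.

```python
import math, random

rng = random.Random(42)
tau, F, c, sigma = 5.0, 1.0, 0.0, 1.0

def rate(x):
    # Mean spike count tau * f(x) for Gaussian tuning, cf. Eqs. (1) and (3).
    return tau * F * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))

def d_rate(x):
    return -(x - c) / sigma ** 2 * rate(x)

def poisson(lam):
    # Knuth-style Poisson sampler (fine for small means).
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p < L:
            return n
        n += 1

x = 0.7
lam, dlam = rate(x), d_rate(x)
# Monte Carlo estimate of Eq. (6): J = E[(d/dx ln P(n; x))^2], where for
# Poisson counts d/dx ln P = (n/lam - 1) * dlam.
total, N = 0.0, 100000
for _ in range(N):
    n = poisson(lam)
    total += ((n / lam - 1.0) * dlam) ** 2
J_mc = total / N
J_analytic = dlam ** 2 / lam   # analytic Fisher information for Poisson noise
print(abs(J_mc - J_analytic) / J_analytic < 0.05)
```

With 10^5 samples the Monte Carlo estimate agrees with the analytic value to well under a percent, which is the self-consistency the Cramér–Rao analysis below relies on.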
3 Results
Single-neuron Fisher Information. The single-neuron Fisher information in the compound space can be written down for an arbitrary noise model. Here we choose a Poissonian spike distribution,

P(n; x1, x2) = (τ f2(x1, x2))^n exp{−τ f2(x1, x2)} / n!,   (7)
whereby the tuning is assumed to be linear according to (2), and the single-stimulus
tuning curve f1 (x) is a Gaussian given by (3). A straightforward calculation yields
whereby the tuning is assumed to be linear according to (2), and the single-stimulus tuning curve f1(x) is a Gaussian given by (3). A straightforward calculation yields the single-neuron Fisher information matrix J^c(x1, x2) = (J^c_ij(x1, x2)), i, j ∈ {1, 2}, given by

J^c(x1, x2) = (τ F / σ⁴) · [ k e^{−(x1−c)²/(2σ²)} + (1 − k) e^{−(x2−c)²/(2σ²)} ]⁻¹ ·

  ( k² (x1 − c)² e^{−(x1−c)²/σ²}                                k(1 − k)(x1 − c)(x2 − c) e^{−[(x1−c)² + (x2−c)²]/(2σ²)} )
  ( k(1 − k)(x1 − c)(x2 − c) e^{−[(x1−c)² + (x2−c)²]/(2σ²)}    (1 − k)² (x2 − c)² e^{−(x2−c)²/σ²} );   (8)

the index c refers to the center (c, c) of the tuning curve.
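For Poisson noise the matrix (8) is an instance of the identity J_ij = τ (∂_i f2)(∂_j f2)/f2, which gives a quick numerical cross-check of the closed form. The sketch below is not from the paper, and the parameter values are illustrative assumptions.

```python
import math

F, c, sigma, k, tau = 1.0, 5.0, 0.4, 0.5, 1.0

def f2(x1, x2):
    g = lambda x: F * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    return k * g(x1) + (1 - k) * g(x2)

def fisher_numeric(x1, x2, h=1e-5):
    # Poisson identity: J_ij = tau * (d_i f2)(d_j f2) / f2, with central
    # finite differences for the partial derivatives.
    d1 = (f2(x1 + h, x2) - f2(x1 - h, x2)) / (2 * h)
    d2 = (f2(x1, x2 + h) - f2(x1, x2 - h)) / (2 * h)
    f = f2(x1, x2)
    return [[tau * d1 * d1 / f, tau * d1 * d2 / f],
            [tau * d2 * d1 / f, tau * d2 * d2 / f]]

def fisher_closed(x1, x2):
    # The closed form of Eq. (8).
    e1 = math.exp(-(x1 - c) ** 2 / (2 * sigma ** 2))
    e2 = math.exp(-(x2 - c) ** 2 / (2 * sigma ** 2))
    pre = tau * F / (sigma ** 4 * (k * e1 + (1 - k) * e2))
    a, b = k * (x1 - c) * e1, (1 - k) * (x2 - c) * e2
    return [[pre * a * a, pre * a * b],
            [pre * a * b, pre * b * b]]

Jn, Jc = fisher_numeric(5.3, 4.8), fisher_closed(5.3, 4.8)
match = all(abs(Jn[i][j] - Jc[i][j]) < 1e-4 for i in range(2) for j in range(2))
print(match)
```

The two matrices agree to the accuracy of the finite differences, confirming that (8) is just the rank-one outer product of the tuning-curve gradient, rescaled by the Poisson variance.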
Population Fisher Information. For independently spiking neurons (5), the population Fisher information is the sum of the single-neuron Fisher information values. Assuming some density ρ(c) of tuning curve centers on the diagonal x1 = x2, the population Fisher information is therefore obtained by an integration of (8). Here we consider the simple case of a constant density, ρ(c) ≡ ρ0, resulting in elements Jij(x1, x2), i, j ∈ {1, 2}, of the Fisher information matrix given by

Jij(x1, x2) = ρ0 ∫_{−∞}^{+∞} J^c_ij(x1, x2) dc.   (9)
A symmetry with respect to the diagonal x1 = x2 allows the replacement of the two variables x1, x2 by a single variable ρ, visualized in Fig. 3.

Figure 3: Transformation to the variable ρ, which is proportional to the distance of the point (x1, x2) to the diagonal. ρ therefore quantifies the similarity of the stimuli x1 and x2.

It is straightforward to obtain two additional symmetries, J12(ρ) = J21(ρ) and J11(ρ) = J11(−ρ). The
final population Fisher information is given by

J(ρ) = ( J11(ρ)              J12(ρ)
         J12(ρ)   ((1 − k)²/k²) J11(ρ) ),   (10)

whereby

J11(ρ) = (k² ρ0 F τ / σ) ∫_{−∞}^{+∞} dγ  (γ + ρ̃)² exp{−(γ + ρ̃)²} / [ k exp{−(γ + ρ̃)²/2} + (1 − k) exp{−(γ − ρ̃)²/2} ],

J12(ρ) = (k(1 − k) ρ0 F τ / σ) ∫_{−∞}^{+∞} dγ  (γ + ρ̃)(γ − ρ̃) exp{−[(γ + ρ̃)² + (γ − ρ̃)²]/2} / [ k exp{−(γ + ρ̃)²/2} + (1 − k) exp{−(γ − ρ̃)²/2} ],

where γ denotes the rescaled integration variable over tuning-curve centers and ρ̃ the correspondingly rescaled distance ρ.
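The center integral in (9) is easy to approximate with a Riemann sum. The sketch below is not from the paper: it uses unit parameters, a truncated integration range standing in for the infinite one, and finite differences in place of the closed-form integrand, but it reproduces the qualitative result of Sec. 3.1: the bound on the error of the stimulus difference blows up as the two stimuli merge.

```python
import math

F, sigma, tau, k, rho0 = 1.0, 1.0, 1.0, 0.5, 1.0

def f2(x1, x2, c):
    g = lambda x: F * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    return k * g(x1) + (1 - k) * g(x2)

def J_population(x1, x2, lo=-10.0, hi=10.0, n=2000):
    # Riemann-sum version of Eq. (9): integrate the single-neuron Fisher
    # matrix (Poisson identity J_ij = tau d_i f d_j f / f) over a uniform
    # density rho0 of tuning-curve centres c on the diagonal.
    h, dc = 1e-5, (hi - lo) / n
    J = [[0.0, 0.0], [0.0, 0.0]]
    for m in range(n):
        c = lo + (m + 0.5) * dc
        d1 = (f2(x1 + h, x2, c) - f2(x1 - h, x2, c)) / (2 * h)
        d2 = (f2(x1, x2 + h, c) - f2(x1, x2 - h, c)) / (2 * h)
        f = f2(x1, x2, c)
        if f > 1e-12:
            J[0][0] += rho0 * tau * d1 * d1 / f * dc
            J[0][1] += rho0 * tau * d1 * d2 / f * dc
            J[1][1] += rho0 * tau * d2 * d2 / f * dc
    J[1][0] = J[0][1]
    return J

def diff_error_bound(J):
    # Cramer-Rao bound v^T J^{-1} v for the difference direction
    # v = (-1, 1)/sqrt(2), i.e. for estimating (x2 - x1)/sqrt(2).
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    Jinv = [[J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det, J[0][0] / det]]
    return 0.5 * (Jinv[0][0] + Jinv[1][1]) - Jinv[0][1]

far = diff_error_bound(J_population(4.0, 6.0))      # well-separated stimuli
near = diff_error_bound(J_population(4.99, 5.01))   # nearly merged stimuli
print(far > 0 and near > far)   # discrimination degrades as stimuli merge
```

As the stimuli merge, the population Fisher matrix becomes nearly rank one along the sum direction, its determinant shrinks, and the inverse blows up in the difference direction, which is the divergence discussed in Example 1.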
In the following, three examples will be discussed.
3.1 Example 1: Symmetrical Tuning
First we study the symmetrical case k = 1/2, the receptive fields of which are given in Fig. 2a. Fig. 4 shows the minimal square estimation error for x1, ε²_{1,min}(ρ), as obtained from the first diagonal element of the inverse Fisher information matrix. Due to the symmetry, it is identical to the minimal square error for x2, ε²_{2,min}(ρ).
The estimation error diverges as ρ → 0. This can be understood as follows: for k = 1/2, the matrix (10) is symmetric and can be diagonalized. The eigenvector directions are

v1 = (1/√2) (1, 1)ᵀ,    v2 = (1/√2) (−1, 1)ᵀ.   (11)
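The change of basis in (11) can be written out explicitly. The following sketch is not from the paper and the matrix entries are illustrative, not computed from the model; it rotates a symmetric 2 × 2 Fisher matrix into the sum/difference coordinates, where it becomes diagonal.

```python
import math

def rotate_to_sum_diff(J):
    # Change of basis to v1 = (1, 1)/sqrt(2), v2 = (-1, 1)/sqrt(2),
    # Eq. (11): J' = R^T J R with R = [v1 v2].
    r = 1 / math.sqrt(2)
    R = [[r, -r], [r, r]]
    RT_J = [[sum(R[kk][i] * J[kk][j] for kk in range(2)) for j in range(2)]
            for i in range(2)]
    return [[sum(RT_J[i][kk] * R[kk][j] for kk in range(2)) for j in range(2)]
            for i in range(2)]

# An illustrative symmetric population Fisher matrix of the form (10), k = 1/2:
J = [[2.0, -1.5], [-1.5, 2.0]]
Jp = rotate_to_sum_diff(J)
# Off-diagonals vanish, so the diagonal entries directly give the Fisher
# information for (x1 + x2)/sqrt(2) and (x2 - x1)/sqrt(2).
print([round(Jp[i][j], 6) for i in range(2) for j in range(2)])
```

In the rotated frame the reciprocals of the diagonal entries are the Cramér–Rao bounds for the sum and difference of the stimuli, which is how Fig. 5 is obtained.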
Correspondingly, the diagonal Fisher information matrix yields a lower bound for the estimation errors of (x1 + x2)/√2 and (x2 − x1)/√2, respectively. The results are shown in Fig. 5. The estimation error for (x1 + x2)/√2 takes a finite value for all ρ. However, the estimation error for (x2 − x1)/√2 diverges as ρ → 0. This error corresponds to an estimation of the difference of the two presented stimuli. As expected, a discrimination becomes impossible as the stimuli merge. The Fisher information for (x2 − x1)/√2 can be regarded as a discrimination measure which takes the simultaneous presentation of stimuli into account.

Figure 4: Minimal square estimation error for stimulus x1 or x2. Solid line: F = 1; dotted line: F = 1.5. In both cases, k = 0.5, σ = 1, τ = 1, ρ0 = 1.

Figure 5: Minimal square estimation error for (a) (x1 + x2)/√2 and (b) (x2 − x1)/√2. Solid lines: F = 1; dotted lines: F = 1.5. Same parameters as in Fig. 4.
3.2 Example 2: Attention on Both Stimuli
Electrophysiological studies in V1 and V4 [7] and MT [8] of macaque monkeys
suggest that the gain but not the width of tuning curves is increased as stimuli in
a cell's receptive field are attended. This can easily be incorporated in the current
model: The gain corresponds to the factor F in the tuning curve (3). Figures 4
and 5 compare the results obtained in the previous section (F = 1) with a maximal
firing rate F = 1.5. As expected, the minimal square errors are smaller for higher
F in all cases (dotted lines); a higher firing rate yields a better stimulus estimation.
This suggests that attention increases localization accuracy of x1 and x2 as well
as their discrimination if both stimuli are attended. The former is consistent with
psychophysical results on attentional enhancement of spatial resolution in human
subjects [17].
3.3 Example 3: Attending One Stimulus
The situation changes if only one of the two stimuli is attended. Electrophysiological recordings in monkey area V4 suggest that upon presentation of two stimuli
inside a neuron's receptive field, the influence of the attended stimulus increases
as compared to the unattended one [6]. In our framework, this situation can be
considered by increasing the weight factor of the attended stimulus in the linear
superposition (2). Here we study the case k = 0.75 corresponding to attending
stimulus x1 . The resulting tuning curve shows characteristic distortions as compared to the symmetrical case k = 0.5 (Fig. 6a). The Fisher information analysis
Figure 6: Neural encoding for one attended stimulus. (a) Tuning curve (2), (3) for k = 0.75, i.e., stimulus x1 is attended; all other parameters as in Fig. 2a. (b) Minimal square estimation errors for the direction (x2 − x1)/√2 resulting from a rotated Fisher information matrix. Solid line: k = 0.5 as in Fig. 5b; dotted line: k = 0.75. F = 1, all other parameters as in Fig. 4.
reveals that the attended stimulus x1 yields a smaller minimal square estimation
error than it does in the non-attention case k = 0.5 whereas the minimal square
error for the unattended stimulus x2 is increased (data not shown). Figure 6b shows the minimal square error for the difference of the stimuli, (x2 − x1)/√2. The minimal estimation error becomes larger as compared to k = 0.5. This result can be interpreted as follows: attending stimulus x1 yields a better encoding of x1 but a worse encoding of x2. The latter results in the larger estimation error for the difference (x2 − x1)/√2 of the stimulus values. This can be interpreted as a worse discriminational ability: in a psychophysical experiment, subjects attending stimulus x1 will have only a crude representation of the unattended stimulus x2 and will therefore yield a performance which is worse as compared to the situation where both stimuli are processed in the same way. This is a prediction resulting from the presented framework.
4 Summary and Discussion
A method was introduced to account for the encoding of multiple stimuli by populations of neurons. Estimation theory was performed in a compound space whose
axes are defined by the features of each stimulus. Here we studied a specific example of linear neurons with Gaussian tuning and Poissonian spike statistics to gain
insight into the symmetries in the compound space and the interpretation of the
resulting estimation errors. The approach allows for a detailed consideration of attention effects on the neural level [7, 8, 6]. The method can be extended to include
nonlinear neural behavior as multiple stimuli are presented; see e. g. [13, 14], where
the response of single neurons to two orientation stimuli cannot be easily inferred
from the neural behavior in the case of only one stimulus. More experimental and
theoretical work has to be done in order to account for the psychophysical performance under the influence of attention as it has been measured, for example, in [17].
For this purpose, the presented approach has to be related to classical measures in
discrimination and same-different tasks. From theoretical considerations in the case
of a single stimulus [2, 3, 4, 5] it is well known that the encoding accuracy of a neural population may depend on various properties such as the number of encoded
features, the noise model, and the correlations in the neural activity. The influence
of such factors within the presented framework is currently under investigation.
Acknowledgments
I wish to thank Shun-ichi Amari, Hiroyuki Nakahara, Anthony Marley and Stefan
Wilke for stimulating discussions. Part of this paper was written during my stay at
the RIKEN institute. I also acknowledge support from SFB 517, Neurocognition.
References

[1] M. A. Paradiso, A theory for the use of visual orientation information which exploits the columnar structure of striate cortex, Biol. Cybern. 58 (1988) 35–49.
[2] K. Zhang and T. J. Sejnowski, Neuronal tuning: to sharpen or broaden? Neural Comp. 11 (1999) 75–84.
[3] C. W. Eurich and S. D. Wilke, Multidimensional encoding strategy of spiking neurons, Neural Comp. 12 (2000) 1519–1529.
[4] S. D. Wilke and C. W. Eurich, Representational accuracy of stochastic neural populations, Neural Comp. 14 (2001) 155–189.
[5] H. Nakahara, S. Wu and S.-i. Amari, Attention modulation of neural tuning through peak and base rate, Neural Comp. 13 (2001) 2031–2047.
[6] J. Moran and R. Desimone, Selective attention gates visual processing in the extrastriate cortex, Science 229 (1985) 782–784.
[7] C. J. McAdams and J. H. R. Maunsell, Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4, J. Neurosci. 19 (1999) 431–441.
[8] S. Treue and J. C. Martínez Trujillo, Feature-based attention influences motion processing gain in macaque visual cortex, Nature 399 (1999) 575–579.
[9] R. S. Zemel, P. Dayan and A. Pouget, Probabilistic interpretation of population codes, Neural Comp. 10 (1998) 403–430.
[10] P. Dayan and R. S. Zemel, Statistical models and sensory attention, in: D. Willshaw and A. Murray (eds), Proceedings of the Ninth International Conference on Artificial Neural Networks, ICANN 99, University of Edinburgh (1999) 1017–1022.
[11] D. H. Hubel and T. Wiesel, Receptive fields and functional architecture of monkey striate cortex, J. Physiol. 195 (1968) 215–244.
[12] N. V. Swindale, Orientation tuning curves: empirical description and estimation of parameters, Biol. Cybern. 78 (1998) 45–56.
[13] J. J. Knierim and D. van Essen, Neuronal responses to static texture patterns in area V1 of the alert macaque monkey, J. Neurophysiol. 67 (1992) 961–979.
[14] A. M. Sillito, K. Grieve, H. Jones, J. Cudeiro and J. Davis, Visual cortical mechanisms detecting focal orientation discontinuities, Nature 378 (1995) 492–496.
[15] R. J. A. van Wezel, M. J. M. Lankheet, F. A. J. Verstraten, A. F. M. Marée and W. A. van de Grind, Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion, Vis. Res. 36 (1996) 2805–2813.
[16] G. H. Recanzone, R. H. Wurtz and U. Schwarz, Responses of MT and MST neurons to one and two moving objects in the receptive field, J. Neurophysiol. 78 (1997) 2904–2915.
[17] Y. Yeshurun and M. Carrasco, Attention improves or impairs visual performance by enhancing spatial resolution, Nature 396 (1998) 72–75.
Amos J Storkey
Institute of Adaptive and Neural Computation
Division of Informatics and Institute of Astronomy
University of Edinburgh
5 Forrest Hill, Edinburgh UK
[email protected]
Abstract
The problem of super-resolution involves generating feasible higher
resolution images, which are pleasing to the eye and realistic, from
a given low resolution image. This might be attempted by using simple filters for smoothing out the high resolution blocks or
through applications where substantial prior information is used
to imply the textures and shapes which will occur in the images.
In this paper we describe an approach which lies between the two
extremes. It is a generic unsupervised method which is usable in
all domains, but goes beyond simple smoothing methods in what it
achieves. We use a dynamic tree-like architecture to model the high
resolution data. Approximate conditioning on the low resolution
image is achieved through a mean field approach.
1 Introduction
Good techniques for super-resolution are especially useful where physical limitations
exist preventing higher resolution images from being obtained. For example, in
astronomy, where public presentation of images is of significant importance, super-resolution techniques have been suggested. Whenever dynamic image enlargement
is needed, such as on some web pages, super-resolution techniques can be utilised.
This paper focuses on the issue of how to increase the resolution of a single image
using only prior information about images in general, and not relying on a specific
training set or the use of multiple images.
The methods for achieving super-resolution are as varied as the applications. They
range from simple use of Gaussian or preferably median filtering, to supervised
learning methods based on learning image patches corresponding to low resolution
regions from training data, and effectively sewing these patches together in a consistent manner. What method is appropriate depends on how easy it is to get suitable
training data, how fast the method needs to be and so on. There is a demand for
methods which are reasonably fast, which are generic in that they do not rely on
having suitable training data, but which do better than standard linear filters or
interpolation methods.
This paper describes an approach to resolution doubling which achieves this. The
method is structurally related to one layer of the dynamic tree model [9, 8, 1] except
that it uses real valued variables.
2 Related work
Simple approaches to resolution enhancement have been around for some time.
Gaussian and Wiener filters (and a host of other linear filters) have been used for
smoothing the blockiness created by the low resolution image. Median filters tend
to fare better, producing less blurry images. Interpolation methods such as cubic-spline interpolation tend to be the most common image enhancement approach.
In the super-resolution literature there are many papers which do not deal with the
simple case of reconstruction based on a single image. Many authors are interested
in reconstruction based on multiple slightly perturbed subsamples from an image [3,
2]. This is useful for photographic scanners, for example. In a similar manner other
authors utilise the information from a number of frames in a temporal sequence [4].
In other situations highly substantial prior information is given, such as the ground
truth for a part of the image. Sometimes restrictions on the type of processing
might be made in order to keep calculations in real time or deal with sequential
transmission.
One important paper which deals specifically with the problem tackled here is by
Freeman, Jones and Pasztor [5]. They follow a supervised approach, learning a
low to high resolution patch model (or rather storing examples of such maps),
and utilising a Markov random field for combining them and loopy propagation
for inference. Later work [6] simplifies and improves on this approach. Earlier
work tackling the same problem includes that of Schultz and Stevenson [7], which
performed an MAP estimation using a Gibbs prior.
There are two primary difficulties with smoothing (eg Gaussian, Wiener, Median
filters) or interpolation (bicubic, cubic spline) methods. First smoothing is indiscriminate. It occurs both within the gradual change in colour of the sky, say, as well
as across the horizon, producing blurring problems. Second these approaches are
inconsistent: subsampling the super-resolution image will not return the original
low-resolution one. Hence we need a model which maintains consistency but also
tries to ensure that smoothing does not occur across region boundaries (except as
much is as needed for anti-aliasing).
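The consistency requirement can be stated operationally: block-averaging the super-resolved image must return the low-resolution input exactly. A small sketch of the check follows; the "pixel replication" up-sampler here is only a trivially consistent stand-in used to illustrate the test, not the paper's model.

```python
def downsample(hr, factor=4):
    # Block-average a high-resolution image (a list of rows of floats)
    # down to low resolution.
    lr = []
    for i in range(0, len(hr), factor):
        row = []
        for j in range(0, len(hr[0]), factor):
            block = [hr[i + a][j + b] for a in range(factor) for b in range(factor)]
            row.append(sum(block) / len(block))
        lr.append(row)
    return lr

# A trivially consistent "super-resolution": replicate each pixel 4 x 4.
lr = [[0.25, 0.75], [0.5, 0.125]]
hr = [[lr[i // 4][j // 4] for j in range(8)] for i in range(8)]
recovered = downsample(hr)
ok = all(abs(recovered[i][j] - lr[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)
```

A Gaussian-smoothed or interpolated high-resolution image generally fails this test, which is exactly the inconsistency criticized above; the model of the next section builds the block-average constraint in directly.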
3 The model
Here the high-resolution image is described by a series of very small patches with
varying shapes. Pixel values within these patches can vary, but will have a common
mean value. Pixel values across patches are independent. A priori, exactly where
these patches should be is uncertain, and so the pixel to patch mapping is allowed
to be a dynamic one.
The model is best represented by a belief network. It consists of three layers. The
lowest layer consists of the visible low-resolution pixels. The intermediate layer is a
high-resolution image (4 × 4 the size of the low-resolution image). The top layer is a latent layer which is a little more than 2 × 2 the size of the low resolution image. The latent variables are 'positioned' at the corners, centres and edge centres of
the pixels of the low resolution image. The values of the pixel colour of the high
resolution nodes are each a single sample from a Gaussian mixture (in colour space),
where each mixture centre is given by the pixel colour of a particular parent latent
Figure 1: The three layers of the model (latent, high-resolution, low-resolution). The small boxes in the left figure (64 of them) give the position of the high resolution pixels relative to the low resolution pixels (the 4 boxes with a thick outline). The positions of the latent variable nodes are given by the black circles. The colour of each high resolution pixel is generated from a mixture of Gaussians (right figure), each Gaussian centred at its latent parent pixel value. The closer the parent is, the higher the prior probability of being generated by that mixture is.
variable node. The prior mixing coefficients decay with distance in image space
between the high-resolution node and the corresponding latent node.
Another way of viewing this is that a further indicator variable can be introduced
which selects which mixture is responsible for a given high-resolution node. We say
a high resolution node 'chooses' to connect to the parent that is responsible for it,
with a connection probability given by the corresponding mixing coefficient. These
connection probabilities can be specified in terms of positions (see figure 2).
The motivation for this model comes from the possibility of explaining away. In
linear filtering methods each high-resolution node is determined by a fixed relationship to its neighbouring low-resolution nodes. Here if one of the latent variables
provides an explanation for a high-resolution node which fits well with it neighbours
to form the low-resolution data, then the posterior responsibility of the other latent
nodes for that high-resolution pixel is reduced, and they are free to be used to model
other nearby pixels. The high-resolution pixels corresponding to a visible node can
be separated into two (or more) independent regions, corresponding to pixels on
different sides of an edge (or edges). A different latent variable is responsible for
each region. In other words each mixture component effectively corresponds to a
small image patch which can vary in size depending on what pixels it is responsible
for.
Let vj ∈ L denote a latent variable at site j in the latent space L. Let xi ∈ S denote the value of pixel i in high resolution image space S, and let yk denote the value of the visible pixel k. Each of these is a 3-vector representing colour. Let V denote the ordered set of all vj. Likewise X denotes the ordered set of all xi and Y the set of all yi. In all the work described here a transformed colorspace of (gray, red-green, blue-yellow) is used. In other words the data is a linear transformation on the RGB colour values using the matrix
( 0.66   1    0.5 )
( 0.66  −1    0.5 )
( 0.66   0   −1   )
The remaining component is the connectivity (i.e. the indicator for the responsibility) between the high-resolution nodes and the nodes in the latent layer. Let zij
denote this connectivity with zij an indicator variable taking value 1 when vj is a
parent of xi in the belief network. Every high resolution pixel has one and only one
parent in the latent layer. Let Z denote the ordered set of all zij .
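One pass of ancestral sampling through the three layers can be sketched as follows. This is an illustrative simplification, not the paper's implementation: it uses grayscale instead of the 3-channel colourspace, and a point-evaluated Gaussian decay in place of the Gaussian-integral connection probabilities; all sizes and parameter values are assumptions.

```python
import math, random

rng = random.Random(1)

# Illustrative sizes: a 2 x 2 low-res image, an 8 x 8 high-res image, and
# latent nodes on a 3 x 3 grid at the corners/centres of the low-res pixels.
HR, STEP = 8, 4
latents = {(r, c): rng.random() for r in range(0, HR + 1, STEP)
           for c in range(0, HR + 1, STEP)}

def parent_probs(i, j, width=2.0):
    # Distance-decay connection probabilities: nearer latents are more
    # likely parents (a point-Gaussian stand-in for the integral form).
    w = {p: math.exp(-((p[0] - i) ** 2 + (p[1] - j) ** 2) / (2 * width ** 2))
         for p in latents}
    s = sum(w.values())
    return {p: w[p] / s for p in w}

def sample_high_res(noise_sd=0.02):
    hr = [[0.0] * HR for _ in range(HR)]
    for i in range(HR):
        for j in range(HR):
            probs = parent_probs(i + 0.5, j + 0.5)
            parent = rng.choices(list(probs), weights=list(probs.values()))[0]
            # Pixel value: Gaussian around the chosen parent's value.
            hr[i][j] = rng.gauss(latents[parent], noise_sd)
    return hr

hr = sample_high_res()
# Each low-res pixel is the mean of its 4 x 4 block of high-res pixels.
lr = [[sum(hr[i + a][j + b] for a in range(STEP) for b in range(STEP)) / STEP ** 2
       for j in range(0, HR, STEP)] for i in range(0, HR, STEP)]
print(len(lr), len(lr[0]))
```

Because a high-resolution pixel near a region boundary can pick whichever parent explains it best, the sampled patches have data-dependent shapes, which is the explaining-away behaviour the model relies on.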
3.1 Distributions
A uniform distribution over the range of pixel values is presumed for the latent
variables. The high resolution pixels are given by Gaussian distributions centred
on the pixel values of the parental latent variable. This Gaussian is presumed
independent in each pixel component. Finally the low resolution pixels are given
by the average of the sixteen high resolution pixels covering the site of the low
resolution pixel. This pixel value can also be subject to some additional Gaussian
noise if necessary (zero noise is assumed in this paper).
It is presumed that each high resolution pixel is allowed to 'choose' its parent from
the set of latent variables in an independent manner. A pixel has a higher probability
of choosing a nearby parent than a far away one.
For this we use a Gaussian integral form, so that

P(Z) = ∏_ij p_ij^{z_ij},   where   p_ij ∝ ∫_{B_i} dr exp( −(r_j − r)² / (2γ) ),          (1)
The equations for the other distributions are given here. First we have

P(X|Z,V) = ∏_{ijm} (2πΣ_m)^{−1/2} exp( −z_ij (x_i^m − v_j^m)² / (2Σ_m) ).          (2)
In (1), r is a position in the high resolution picture space and r_j is the position of the
jth latent variable in the high resolution image space (these are located at
the corners of every second pixel in each direction, as described above). The integral
is over B_i, defined as the region in image space corresponding to pixel x_i. γ gives
the width (squared) over which the probability decays: the larger γ, the more
possible parents have non-negligible probability. The connection probabilities are
illustrated in figure 2.
In (2), Σ_m is a variance which determines how much each pixel must resemble its
latent parent. Here the indicator z_ij ensures that the only contribution for each i comes
from the parent j of i. Second,
P(Y|X) = ∏_{km} (2πΨ)^{−1/2} exp( −( y_k^m − (1/d) Σ_{i∈Pa(k)} x_i^m )² / (2Ψ) )          (3)
Figure 2: An illustration of the connection probabilities from a high resolution pixel
in the position of the smaller checkered square to the latent variables centred at each
of the larger squares. The probability is proportional to the intensity of the shading:
darker is higher probability.
with Pa(k) denoting the set of all d = 16 high resolution pixels which make up
the low resolution pixel y_k. In this work we let the variance Ψ → 0;
Ψ determines the additive Gaussian noise in the low resolution image.
Last, P (V ) is simply uniform over the whole of the possible values of V . Hence
P (V ) = 1/C for C the volume of V space being considered.
3.2 Inference
The belief network defined above is not tree structured (rather it is a mixture of tree
structures) and so we have to resort to approximation methods for inference. In this
paper a variational approach is followed. The posterior distribution is approximated
using a factorised distribution over the latent space and over the connectivity. Only
in the high resolution space X do we consider joint distributions: we use a joint
Gaussian for all the nodes corresponding to one low resolution pixel. The full
distribution can be written as Q(Z, V, X) = Q(Z)Q(V)Q(X), where

Q(Z) = ∏_ij q_ij^{z_ij},   Q(V) = ∏_{jm} (2πΛ_j^m)^{−1/2} exp( −(v_j^m − ρ_j^m)² / (2Λ_j^m) ),   and          (4)

Q(X) = ∏_{km} (2π)^{−d/2} |Σ_k^m|^{−1/2} exp( −(1/2) [ (x̄)_k^m − (μ̄)_k^m ]ᵀ (Σ_k^m)^{−1} [ (x̄)_k^m − (μ̄)_k^m ] )          (5)

where (x̄)_k^m is the vector (x_i^m | i ∈ Pa(k)), the joint of all d high resolution pixel
values corresponding to a given low resolution pixel k (for a given colour component
m). Here q_ij, μ_i^m, ρ_j^m, Λ_j^m and Σ_k^m are variational parameters to be optimised.
As usual, a local minimum of the KL divergence between the approximate distribution
and the true posterior distribution is computed. This is equivalent to maximising
the negative variational free energy (or variational log likelihood)

L(Q||P) = ⟨ log [ Q(Z, V, X) / P(Z, V, X, Y) ] ⟩_{Q(Z,V,X)}          (6)

where Y is given by the low resolution image. In this case we obtain

L(Q||P) = ⟨log Q(Z) − log P(Z)⟩_{Q(Z)} + ⟨log Q(V) − log P(V)⟩_{Q(V)}
        + ⟨log Q(X)⟩_{Q(X)} − ⟨log P(X|Z,V)⟩_{Q(X,Z,V)} − ⟨log P(Y|X)⟩_{Q(Y,X)}.          (7)
Taking expectations and derivatives with respect to each of the parameters in the
approximation gives a set of self-consistent mean field equations which we can solve
by repeated iteration. Here, for simplicity, we only solve for q_ij and for the means μ_i^m
and ρ_j^m, which turn out to be independent of the variational variance parameters.
We obtain

ρ_j^m = ( Σ_i q_ij x_i^m ) / ( Σ_i q_ij )   and   μ_i^m = η_i^m + D_{c(i)},   where   η_i^m = Σ_j q_ij v_j^m          (8)
where c(i) is the child of i, i.e. the low level pixel which i is part of. D_k is
a Lagrange multiplier, and is obtained through constraining the high level pixel
values to average to the low level pixels:

(1/d) Σ_{i∈Pa(k)} μ_i^m = y_k^m   ⇒   D_k^* = y_k^m − (1/d) Σ_{i∈Pa(k)} η_i^m.          (9)
In the case where Ψ is non-zero, this constraint is softened and D_k is given by
D_k = Σ D_k^* / (Σ + Ψ). The update for the q_ij is given by

q_ij ∝ p_ij exp( −Σ_m (x_i^m − v_j^m)² / (2Σ_m) )          (10)

where the constant of proportionality is given by normalisation: Σ_j q_ij = 1.
Optimising the KL divergence involves iterating these equations. For each Q(Z)
optimisation (10), equations (8a) and (8b) are iterated a number of times. Each
optimisation loop is either done a preset number of times, or until a suitable convergence criterion is met. The former approach is generally used, as the basic criterion
is a limit on the time available for the optimisation to be done.
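To make the update order concrete, here is a toy single-channel sketch of one sweep through equations (8)–(10). All array names are ours, and we make one simplification not spelled out above: inside the updates, x_i and v_j are replaced by their current variational means. This is an illustration of the update structure, not the paper's implementation.

```python
import numpy as np

def mean_field_sweep(q, mu, y, p, pa, child_of, sigma):
    # q: (I, J) responsibilities, mu: (I,) high-res pixel means,
    # y: (K,) low-res pixels, p: (I, J) prior connection probabilities,
    # pa[k]: indices of the d high-res pixels under low-res pixel k,
    # child_of[i] = k (the low-res pixel i belongs to), sigma: variance Sigma.
    rho = (q * mu[:, None]).sum(0) / q.sum(0)      # eq. (8): latent means rho_j
    eta = q @ rho                                  # eta_i = sum_j q_ij rho_j
    # eq. (9): Lagrange multipliers enforcing the averaging constraint
    D = np.array([y[k] - eta[pa[k]].mean() for k in range(len(y))])
    mu_new = eta + D[child_of]                     # mu_i = eta_i + D_c(i)
    # eq. (10): responsibility update, renormalised over parents j
    q_new = p * np.exp(-(mu_new[:, None] - rho[None, :]) ** 2 / (2 * sigma))
    q_new /= q_new.sum(1, keepdims=True)
    return q_new, mu_new, rho
```

By construction the averaging constraint holds exactly after the sweep: the mean of mu_new over each pa(k) equals y_k.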
4 Setting parameters
The prior variance parameters need to be set. The variance Ψ corresponds to the
additive noise. If this is not known to be zero, then it will vary from image to image
and needs to be found for each image. This can be done using variational maximum
likelihood, where Ψ is set to maximise the variational log likelihood. γ is presumed
to be independent of the images presented, and is set by hand by visualising changes
on a test set. The Σ_m might depend on the intensity levels in the image: very dark
images will need a smaller value of Σ_1, for example. However, for simplicity, Σ_m = Σ
is treated as global and set by hand. Because the primary criterion for optimal
parameters is subjective, this is the most sensible approach, and is reasonable when
there are only two parameters to determine.
the variational log likelihood is possible but does not produce as good results due to
the complicated nature of a true prior or error-measure for images. For example, a
highly elaborate texture offset by one pixel will give a large mean square error, but
look almost identical, whereas a blurred version of the texture would give a smaller
mean square error, but look much worse.
5 Implementation
The basic implementation involves setting the parameters, running the mean field
optimisation and then looking at the result. The final result is a downsampled
version of the 4 ? 4 image to 2 ? 2 size: the larger image is used to get reasonable
anti-aliasing.
To initialise the mean field optimisation, X is set equal to the bi-cubic interpolated
image with added Gaussian noise, and Q(Z) is initialised to P(Z). Although in
the examples here we used 25 optimisations of Q(Z), each of which involves 10 cycles
through the mean field equations for Q(X) and Q(V), it is possible to get reasonable
results with only three Q(Z) optimisation cycles each doing 2 iterations through
the mean field equations. In the runs shown here, Ψ is set to zero, the variance Σ
is set to 0.008, and γ is set to 3.3.
6 Demonstrations and assessment
The method described in this paper is compared with a number of simple filtering and interpolation methods, and also with the methods of Freeman et al.
The image from Freeman's website is used for comparison with that work (figure 3). Full colour comparisons for these and other images can be found at
http://www.anc.ed.ac.uk/~amos/superresolution.html. First, two linear filtering approaches are considered: the Wiener filter and a Gaussian filter. The third
method is a median filter. Bi-cubic interpolation is also given.
Quantitative assessment of the quality of super-resolution results is always something of a difficulty because the basic criterion is human subjectivity. Even so we
Figure 3: Comparison with the approach of Freeman et al. (a) the 70 × 70 low resolution image, (b) the true image, (c) bi-cubic interpolation, (d) the Freeman et al result
(taken from their website and downsampled), (e) dynamic structure super-resolution, (f)
median filter.
compare the results of this approach with standard filtering methods using a root
mean squared pixel error on a set of 8 colour images of size 128 × 96. This gives errors of 0.0486, 0.0467,
0.0510 and 0.0452 for the original low resolution image, bicubic interpolation, the
median filter and dynamic structure super-resolution respectively. Unfortunately,
the unavailability of code prevents representative calculations for the Freeman et al
approach. Dynamic structure super-resolution requires approximately 30–60 flops per
2 × 2 high resolution pixel per optimisation cycle, compared with, say, 16 flops for
a linear filter, so it is more costly. Trials have also been done working directly with
2 × 2 grids rather than with 4 × 4 grids followed by averaging; this is much faster, and
the results, though not quite as good, were still an improvement on the simpler
methods.
Qualitatively, the results for dynamic structure super-resolution are significantly
better than most standard filtering approaches. The texture is better represented
because it maintains consistency, and the edges are sharper, although there is still
some significant difference from the true image. The method of Freeman et al
is perhaps comparable at this resolution, although it should be noted that their
result has been downsampled here to half the size of their enhanced image. Their
method can produce 4 × 4 times the resolution of the original, and so this does not
accurately represent the full power of their technique. Furthermore, this image is
representative of early results from their work. However their approach does require
learning large numbers of patches from a training set. Fundamentally the dynamic
structure super-resolution approach does a good job at resolution doubling without
the need for representative training data. The edges are not blurred and much of
the blockiness is removed.
Dynamic structure super-resolution provides a technique for resolution enhancement, and an interesting starting model which differs from Markov
random field approaches. Future directions could incorporate hierarchical frequency
information at each node rather than just a single value.
References
[1] N. J. Adams. Dynamic Trees: A Hierarchical Probabilistic Approach to Image Modelling. PhD thesis, Division of Informatics, University of Edinburgh, 5 Forrest Hill,
Edinburgh, EH1 2QL, UK, 2001.
[2] S. Baker and T. Kanade. Limits on super-resolution and how to break them. In
Proceedings of CVPR 00, pages 372–379, 2000.
[3] P. Cheeseman, B. Kanefsky, R. Kraft, and J. Stutz. Super-resolved surface reconstruction from multiple images. Technical Report FIA-94-12, NASA Ames, 1994.
[4] M. Elad and A. Feuer. Super-resolution reconstruction of image sequences. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 21(9):817–834, 1999.
[5] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Markov networks for super-resolution.
Technical Report TR-2000-08, MERL, 2000.
[6] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution.
IEEE Computer Graphics and Applications, 2002.
[7] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for
improved definition. IEEE Transactions on Image Processing, 3:233–242, 1994.
[8] A. J. Storkey. Dynamic trees: A structured variational method giving efficient propagation rules. In C. Boutilier and M. Goldszmidt, editors, Uncertainty in Artificial
Intelligence, pages 566–573. Morgan Kaufmann, 2000.
[9] C. K. I. Williams and N. J. Adams. DTs: Dynamic trees. In M. J. Kearns, S. A. Solla,
and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT
Press, 1999.
Fast Kernels for String and Tree Matching
S. V. N. Vishwanathan
Dept. of Compo Sci. & Automation
Indian Institute of Science
Bangalore, 560012, India
vishy@csa.iisc.ernet.in
Alexander J. Smola
Machine Learning Group, RSISE
Australian National University
Canberra, ACT 0200, Australia
Alex.Smola@anu.edu.au
Abstract
In this paper we present a new algorithm suitable for matching discrete
objects such as strings and trees in linear time, thus obviating dynamic
programming with quadratic time complexity. Furthermore, prediction
cost in many cases can be reduced to linear cost in the length of the sequence to be classified, regardless of the number of support vectors. This
improvement on the currently available algorithms makes string kernels
a viable alternative for the practitioner.
1 Introduction
Many problems in machine learning require the classifier to work with a set of discrete examples. Common examples include biological sequence analysis where data is represented
as strings [4] and Natural Language Processing (NLP) where the data is in the form of a parse
tree [3]. In order to apply kernel methods one defines a measure of similarity between
discrete structures via a feature map Φ : X → ℋ.
Here X is the set of discrete structures (e.g. the set of all parse trees of a language) and ℋ
is a Hilbert space. Furthermore, dot products then lead to kernels

k(x, x') = ⟨Φ(x), Φ(x')⟩          (1)
where x, x' ∈ X. The success of a kernel method employing k depends both on the faithful
representation of discrete data and an efficient means of computing k.
This paper presents a means of computing kernels on strings [15, 7, 12] and trees [3] in
linear time in the size of the arguments, regardless of the weighting that is associated with
any of the terms, plus linear time complexity for prediction, regardless of the number of
support vectors. This is a significant improvement, since the so-far fastest methods [8, 3]
rely on dynarrtic programming which incurs a quadratic cost in the length of the argument.
Note that the method we present here is far more general than strings and trees, and it can
be applied to finite state machines, formal languages, automata, etc. to define new kernels
[14]. However, for the scope of the current paper we limit ourselves to a fast means of
computing extensions of the kernels of [15, 3, 12].
In a nutshell our idea works as follows: assume we have a kernel k(x, x') =
Σ_{i∈I} Φ_i(x) Φ_i(x'), where the index set I may be large, yet the number of nonzero entries is small in comparison to |I|. Then an efficient way of computing k is to sort the sets
of nonzero entries of Φ(x) and Φ(x') beforehand and count only matching non-zeros. This
is similar to the dot product of sparse vectors in numerical mathematics. As long as the
sorting is done in an intelligent manner, the cost of computing k is linear in the total number of
non-zero entries. In order to use this idea for matching strings (which have a
quadratically increasing number of substrings) and trees (which can be transformed into
strings) efficient sorting is realized by the compression of the set of all substrings into a
suffix tree. Moreover, dictionary keeping allows us to use arbitrary weightings for each of
the substrings and still compute the kernels in linear time.
2 String Kernels
We begin by introducing some notation. Let A be a finite set which we call the alphabet.
The elements of A are characters. Let $ be a sentinel character such that $ ∉ A. Any
x ∈ A^k for k = 0, 1, 2, ... is called a string. The empty string is denoted by ε and A*
represents the set of all non-empty strings defined over the alphabet A.

In the following we will use s, t, u, v, w, x, y, z ∈ A* to denote strings and a, b, c ∈ A to
denote characters. |x| denotes the length of x, uv ∈ A* the concatenation of two strings
u, v, and au the concatenation of a character and a string. We use x[i : j] with 1 ≤ i ≤ j ≤
|x| to denote the substring of x between locations i and j (both inclusive). If x = uvw for
some (possibly empty) u, v, w, then u is called a prefix of x, while v is called a substring
(also denoted by v ⊑ x) and w is called a suffix of x. Finally, num_y(x) denotes the number
of occurrences of y in x. The type of kernels we will be studying is defined by
k(x, x') := Σ_{s⊑x, s'⊑x'} w_s δ_{s,s'} = Σ_{s∈A*} num_s(x) num_s(x') w_s.          (2)
That is, we count the number of occurrences of every string s in both x and x' and weight
it by w_s, where the latter may be a weight chosen a priori or after seeing data, e.g., for
inverse document frequency counting [11]. This includes a large number of special cases:
• Setting w_s = 0 for all |s| > 1 yields the bag-of-characters kernel, counting simply
single characters.
• The bag-of-words kernel is generated by requiring s to be bounded by whitespace.
• Setting w_s = 0 for all |s| > n yields limited range correlations of length n.
• The k-spectrum kernel takes into account substrings of length k [12]. It is achieved
by setting w_s = 0 for all |s| ≠ k.
• TFIDF weights are achieved by first creating a (compressed) list of all s including
frequencies of occurrence, and subsequently rescaling w_s accordingly.
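As a sanity check on the definition, equation (2) and several of the special cases above can be computed by brute-force substring counting. This sketch (function names ours) is quadratic in the string lengths; the linear-time computation via suffix trees is the subject of the following sections.

```python
from collections import Counter

def num_substrings(x):
    # Multiset of all non-empty substrings s of x, i.e. num_s(x) for every s.
    return Counter(x[i:j] for i in range(len(x))
                          for j in range(i + 1, len(x) + 1))

def string_kernel(x, y, w):
    # k(x, y) = sum_s num_s(x) * num_s(y) * w(s), cf. equation (2).
    cx, cy = num_substrings(x), num_substrings(y)
    return sum(cx[s] * cy[s] * w(s) for s in cx.keys() & cy.keys())

# Two of the special cases: w_s = 0 for |s| > 1 (bag of characters),
# and w_s = 0 for |s| != k (the k-spectrum kernel, here with k = 2).
bag_of_characters = lambda s: 1.0 if len(s) == 1 else 0.0
two_spectrum = lambda s: 1.0 if len(s) == 2 else 0.0
```

Swapping in a different weight function w changes the kernel without touching the counting code, which mirrors how the suffix-tree algorithm later keeps the weighting w_s arbitrary.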
All these kernels can be computed efficiently via the construction of suffix-trees, as we will
see in the following sections. However, before we do so, let us turn to trees. The latter are
important for two reasons: first since the suffix tree representation of a string will be used to
compute kernels efficiently, and secondly, since we may wish to compute kernels on trees,
which will be carried out by reducing trees to strings and then applying a string-kernel.
3 Tree Kernels
A tree is defined as a connected directed graph with no cycles. A node with no children
is referred to as a leaf. A subtree rooted at node n is denoted by T_n, and t ⊑ T is used to
indicate that t is a subtree of T. If a set of nodes in the tree, along with the corresponding
edges, forms a tree, then we define it to be a subset tree. If every node n of the tree contains
a label, denoted by label(n), then the tree is called a labeled tree. If only the leaf nodes
contain labels then the tree is called a leaf-labeled tree. Kernels on trees can be defined
by defining kernels on matching subset trees as proposed by [3], or (more restrictively) by
defining kernels on matching subtrees. In the latter case we have
k(T, T') = Σ_{t⊑T, t'⊑T'} w_t δ_{t,t'}.          (3)
Ordering Trees  An ordered tree is one in which the child nodes of every node are ordered
as per the ordering defined on the node labels. Unless there is a specific inherent order on
the trees we are given (which is, e.g., the case for parse trees), the representation of trees is
not unique. For instance, the two unlabeled trees shown in Figure 1 are equivalent and can be obtained
from each other by reordering the nodes.

Figure 1: Two equivalent trees

To order trees we assume that a lexicographic order is associated with the labels if they exist. Furthermore, we assume that the additional symbols
'[' and ']' satisfy '[' < ']', and that ']', '[' < label(n) for
all labels. We will use these symbols to define tags for each node as follows:
• For an unlabeled leaf n define tag(n) := [].
• For a labeled leaf n define tag(n) := [label(n)].
• For an unlabeled node n with children n_1, ..., n_c, sort the tags of the children in
lexicographical order such that tag(n_i) ≤ tag(n_j) if i < j, and define
tag(n) = [tag(n_1) tag(n_2) ... tag(n_c)].
• For a labeled node perform the same operations as above and set
tag(n) = [label(n) tag(n_1) tag(n_2) ... tag(n_c)].
For instance, the root nodes of both trees depicted above would be encoded as [[][[][]]]. We
now prove that the tag of the root node is, indeed, a unique identifier and that it can be
constructed in log linear time.
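The tag construction is easy to state recursively. The sketch below (our own illustration) represents a node as a (label, children) pair and uses Python's default string ordering, under which '[' < ']'; the canonical form it fixes may therefore order siblings differently from the example above, but it is still invariant under permutations of the children, which is the property that matters.

```python
def tag(node):
    # node is a (label, children) pair; label may be None for unlabeled nodes.
    label, children = node
    child_tags = sorted(tag(c) for c in children)   # lexicographic order
    return "[" + (label or "") + "".join(child_tags) + "]"
```

Permuting the children of any node leaves the tag unchanged, so two trees that differ only by reordering receive identical tags.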
Theorem 1  Denote by T a binary tree with l nodes and let λ be the maximum length of a
label. Then the following properties hold for the tag of the root node:

1. tag(root) can be computed in (λ + 2)(l log₂ l) time and linear storage in l.
2. Substrings s of tag(root) starting with '[' and ending with a balanced ']' correspond to subtrees T' of T, where s is the tag on T'.
3. Arbitrary substrings s of tag(root) correspond to subset trees T' of T.
4. tag(root) is invariant under permutations of the leaves and allows the reconstruction of a unique element of the equivalence class (under permutation).
Proof  We prove claim 1 by induction. The tag of a leaf can be constructed in constant time
by storing '[', ']', and a pointer to the label of the leaf (if it exists), that is, in 3 operations. Next
assume that we are at node n with children n_1, n_2. Let T_n contain l_n nodes and let T_{n_1} and
T_{n_2} contain l_1 and l_2 nodes respectively. By our induction assumption we can construct the tags
for n_1 and n_2 in (λ + 2)(l_1 log₂ l_1) and (λ + 2)(l_2 log₂ l_2) time respectively. Comparing
the tags of n_1 and n_2 costs at most (λ + 2) min(l_1, l_2) operations, and the tag itself can
be constructed in constant time and linear space by manipulating pointers. Without loss of
generality we assume that l_1 ≤ l_2. Thus, the time required to construct tag(n) (normalized
by λ + 2) is

l_1 (log₂ l_1 + 1) + l_2 log₂ l_2 = l_1 log₂(2 l_1) + l_2 log₂ l_2 ≤ l_n log₂ l_n.          (4)
One way of visualizing our ordering is to imagine that we perform a DFS (depth first
search) on the tree T and emit a '[' followed by the label on the node when we visit a node
for the first time, and a ']' when we leave a node for the last time. It is clear that a balanced
substring s of tag(root) is emitted only when the corresponding DFS on T' is completed.
This proves claim 2.
We can emit a substring of tag( root) only if we can perform a DFS on the corresponding
set of nodes. This implies that these nodes constitute a tree and hence by definition are
subset trees of T. This proves claim 3.
Since leaf nodes do not have children, their tag is clearly invariant under permutation. For an
internal node we perform lexicographic sorting on the tags of its children, which removes any
dependence on permutations. This proves the invariance of tag(root) under permutations
of the leaves. Concerning the reconstruction, we proceed as follows: each tag of a subtree
starts with '[' and ends in a balanced ']', hence we can strip the first [] pair from the tag,
take whatever is left outside brackets as the label of the root node, and repeat the procedure
with the balanced [...] entries for the children of the root node. This constructs a tree
with the same tag as tag(root), thus proving claim 4.  ∎
An extension to trees whose nodes have up to d children is straightforward (the cost increases to d log₂ d times the
original cost), yet the proof, in particular (4), becomes more technical without providing
additional insight; hence we omit this generalization for brevity.
Corollary 2  Kernels on trees T, T' can be computed via string kernels if we use
tag(T), tag(T') as strings. If we require that only balanced [...] substrings have nonzero
weight w_s, then we obtain the subtree matching kernel defined in (3).
This reduces the problem of tree kernels to string kernels and all we need to show in the following is how the latter can be computed efficiently. For this purpose we need to introduce
suffix trees.
4 Suffix Trees and Matching Statistics
Definition  The suffix tree is a compacted trie that stores all suffixes of a given text string.
We denote the suffix tree of the string x by S(x). Moreover, let nodes(S(x)) be the set of
all nodes of S(x) and let root(S(x)) be the root of S(x). For a node w, father(w) denotes
its parent, T(w) the subtree rooted at the node, lvs(w) the number of
leaves in the subtree, and path(w) := w the path from the root to the node. That is, we
use the path w from root to node as the label of the node w.
We denote by words(S(x)) the set of all strings w such that wu ∈ nodes(S(x)) for
some (possibly empty) string u, which means that words(S(x)) is the set of all possible
substrings of x. For every t ∈ words(S(x)) we define ceil(t) as the node w such that
w = tu and u is the shortest (possibly empty) substring such that w ∈ nodes(S(x)).
Similarly, for every t ∈ words(S(x)) we define floor(t) as the node w such that t = wu
and u is the shortest (possibly empty) substring such that w ∈ nodes(S(x)). Given a
string t and a suffix tree S(x), we can decide whether t ∈ words(S(x)) in O(|t|) time by
walking down the corresponding edges of S(x).

Figure 2: Suffix tree of ababc
If the sentinel character $ is added to the string x, then it can be shown that for any t ∈
words(S(x)), lvs(ceil(t)) gives us the number of occurrences of t in x [5]. The idea works
as follows: all suffixes of x starting with t have to pass through ceil(t), hence we simply
have to count the occurrences of the sentinel character, which can be found only in the
leaves. Note that a simple depth first search (DFS) of S(x) will enable us to calculate
lvs(w) for each node in S(x) in O(|x|) time and space.
Let aw be a node in S(x), and let v be the longest suffix of w such that v ∈ nodes(S(x)).
An unlabeled edge aw → v is called a suffix link in S(x). A suffix link of the form
aw → w is called atomic. It can be shown that all the suffix links in a suffix tree are atomic
[5, Proposition 2.9]. We add suffix links to S(x) to allow us to perform efficient string
matching: suppose we found that aw is a substring of x by parsing the suffix tree S (x ).
It is clear that w is also a substring of x. If we want to locate the node corresponding to
w, it would be wasteful to parse the tree again. Suffix links can help us locate this node in
constant time. The suffix tree building algorithms make use of this property of suffix links
to perform the construction in linear time. The suffix tree construction algorithm of [13]
constructs the suffix tree and all such suffix links in linear time.
Matching Statistics  Given strings x, y with |x| = n and |y| = m, the matching statistics
of x with respect to y are defined by vectors v, V and pointer arrays c, c̄ of length n, where v_i is
the length of the longest substring of y matching a prefix of x[i : n], V_i := i + v_i − 1, c_i is a pointer to ceil(x[i : V_i])
and c̄_i is a pointer to floor(x[i : V_i]) in S(y). For an example see Table 1.
For a given y one can construct v and c corresponding to x in linear time. The key observation is that
v_{i+1} ≥ v_i − 1, since if x[i : V_i] is a substring of
y then certainly x[i + 1 : V_i] is also a substring of
Table 1: Matching statistics of abba with respect to S(ababc).
y. Besides this, the matching substring in y that we
find must have x[i + 1 : V_i] as a prefix. The Matching Statistics algorithm [2] exploits this observation and uses it to cleverly walk down the
suffix links of S(y) in order to compute the matching statistics in O(|x|) time.
More specifically, the algorithm works by maintaining a pointer Pi = floor( x [i : Vi ]). It
then finds P H I = floor( x[i + 1 : Vi ]) by first walking down the suffix link of Pi and then
walking down the edges corresponding to the remaining portion of xli + 1 : Vi] until it
reaches floor( x[i + 1 : Vi]) . Now VH I can be found easily by walking from P H I along the
edges of S(y) that match the string x li + l : n], until we can go no further. The value of
VI is found by simply walking down S(y) to find the longest prefix of x which matches a
substring of y.
String
a
2
ab
b
1
b
b
2
babeS
a
1
ab
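The suffix-tree machinery above is only needed for speed; the definition of the matching statistics itself can be checked with a naive quadratic scan. The sketch below (function name ours, not from the paper) reproduces the v of Table 1.

```python
def matching_stats(x: str, y: str) -> list[int]:
    """Naive matching statistics: v[i] is the length of the longest
    prefix of x[i:] occurring as a substring of y.  Quadratic-or-worse;
    the suffix-link walk of [2] computes the same v in O(|x|)."""
    v = []
    for i in range(len(x)):
        length = 0
        while i + length < len(x) and x[i:i + length + 1] in y:
            length += 1
        v.append(length)
    return v

v = matching_stats("abba", "ababc")   # Table 1 gives v = (2, 1, 2, 1)
assert v == [2, 1, 2, 1]
# The key property exploited by the linear-time algorithm:
assert all(v[i + 1] >= v[i] - 1 for i in range(len(v) - 1))
```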
Matching substrings Using v and c we can read off the number of matching substrings
in x and y. The useful observation here is that the only substrings which occur in both x
and y are those which are prefixes of x[i : v_i]. The number of occurrences of a substring in
y can be found by lvs(ceil(w)) (see Section 4). The two lemmas below formalize this.

Lemma 3 w is a substring of x iff there is an i such that w is a prefix of x[i : n]. The
number of occurrences of w in x can be calculated by finding all such i.

Lemma 4 The set of matching substrings of x and y is the set of all prefixes of the x[i : v_i].

Proof Let w be a substring of both x and y. By the above lemma there is an i such that w
is a prefix of x[i : n]. Since v_i is the length of the maximal prefix of x[i : n] which is a
substring of y, it follows that v_i ≥ |w|. Hence w must be a prefix of x[i : v_i]. ∎
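Lemma 4 can be checked by brute force on small strings: the set of common substrings, computed directly, coincides with the set of all prefixes of the longest matching prefixes x[i : v_i]. This is only an illustrative sketch; both function names are ours.

```python
def common_substrings(x: str, y: str) -> set[str]:
    """All nonempty substrings occurring in both x and y, by brute force."""
    subs_x = {x[i:j] for i in range(len(x)) for j in range(i + 1, len(x) + 1)}
    return {s for s in subs_x if s in y}

def prefixes_of_matches(x: str, y: str) -> set[str]:
    """All nonempty prefixes of the longest matching prefixes x[i : v_i]."""
    out = set()
    for i in range(len(x)):
        v = 0
        while i + v < len(x) and x[i:i + v + 1] in y:
            v += 1
        out |= {x[i:i + k] for k in range(1, v + 1)}
    return out

# Lemma 4 on the running example of Table 1:
assert common_substrings("abba", "ababc") == prefixes_of_matches("abba", "ababc")
```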
5 Weights and Kernels
From the previous sections we know how to determine the set of all longest prefixes x[i : v_i]
of x[i : n] in y in linear time. The following theorem uses this information to compute
kernels efficiently.
Theorem 5 Let x and y be strings and let c and v be the matching statistics of x with respect
to y. Assume that

    W(y, t) = Σ_{s ∈ prefix(v)} w_{us} − w_u,  where u = floor(t) and t = uv,   (5)

can be computed in constant time for any t. Then k(x, y) can be computed in O(|x| + |y|)
time as

    k(x, y) = Σ_{i=1}^{|x|} val(x[i : v_i]) = Σ_{i=1}^{|x|} ( val(c̄_i) + lvs(ceil(x[i : v_i])) · W(y, x[i : v_i]) ),   (6)

where val(t) := lvs(ceil(t)) · W(y, t) + val(floor(t)) and val(root) := 0.
Proof We first show that (6) can indeed be computed in linear time. We know that for S(y)
the number of leaves can be computed in linear time, and likewise c, v. By the assumption on
W(y, t), and by exploiting the recursive nature of val(t), we can compute val(t) for all the
nodes of S(y) by a simple top-down procedure in O(|y|) time.

Also, due to the recursion, the second equality of (6) holds, and we may compute each term in
constant time by a simple lookup for val(c̄_i) and computation of W(y, x[i : v_i]). Since we
have |x| terms, the whole procedure takes O(|x|) time, which proves the O(|x| + |y|) time
complexity.
Now we prove that (6) really computes the kernel. We know from Lemma 4 that the sum
in (2) can be decomposed into the sum over matches between y and each of the prefixes
of x[i : v_i] (this takes care of all the substrings in x matching with y). This reduces the
problem to showing that each term in the sum of (6) corresponds to the contribution of all
prefixes of x[i : v_i].

Assume we descend down the path x[i : v_i] in S(y) (e.g., for the string bab with respect
to the tree of Figure 2 this would correspond to (root, b, bab)); then each of the prefixes t
along the path (e.g., (ε, b, ba, bab) for the example tree) occurs exactly as many times
as lvs(ceil(t)) does. In particular, prefixes ending on the same edge occur the same number
of times. This allows us to bracket the sums efficiently, and W(y, x) is simply the sum
along an edge, starting from the ceiling of x down to x. Unwrapping val(x) shows that this is
simply the sum over the occurrences on the path of x, which proves our claim. ∎
So far, our claim hinges on the fact that W(y, t) can be computed in constant time, which
is far from obvious at first glance. We now show that this is a reasonable assumption in all
practical cases.
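For reference, the quantity that Theorem 5 evaluates in linear time is the weighted count of common substrings, which can be written down directly at quadratic-or-worse cost. The sketch below uses length-dependent weights w_s = λ^{|s|}, as in the experiments later in the paper; the function name is ours.

```python
from collections import Counter

def substring_kernel_bruteforce(x: str, y: str, lam: float = 0.75) -> float:
    """k(x, y) = sum over all common substrings s of
    occ_x(s) * occ_y(s) * lam**len(s).  Quadratic in the string lengths;
    the point of the suffix-tree algorithm is to avoid this enumeration."""
    occ_x = Counter(x[i:j] for i in range(len(x))
                    for j in range(i + 1, len(x) + 1))
    occ_y = Counter(y[i:j] for i in range(len(y))
                    for j in range(i + 1, len(y) + 1))
    return sum(occ_x[s] * occ_y[s] * lam ** len(s)
               for s in occ_x.keys() & occ_y.keys())

# The kernel is symmetric by construction.
k_xy = substring_kernel_bruteforce("abba", "ababc")
k_yx = substring_kernel_bruteforce("ababc", "abba")
assert abs(k_xy - k_yx) < 1e-12
```

On the running example the common substrings are a, b, ab, ba, so k = 8λ + 3λ².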
Length Dependent Weights If the weights w_s depend only on |s| we have w_s = w_{|s|}.
Define W_j := Σ_{i=1}^{j} w_i and compute its values beforehand up to W_J, where J ≥ |x| for all
x. Then it follows that

    W(y, t) = Σ_{j=|floor(t)|+1}^{|t|} w_j = W_{|t|} − W_{|floor(t)|},   (7)

which can be computed in constant time. Examples of such weighting schemes are the
kernels suggested by [15], where w_i = λ^i, [7], where w_i = 1, and [10], where w_i = δ_{i1}.
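Equation (7) can be sketched directly: precompute the cumulative sums W_j once, and every W(y, t) query becomes a single subtraction. A minimal illustration with w_i = λ^i (all names ours):

```python
import itertools

lam = 0.75
J = 32  # assumed upper bound on string lengths in this sketch
w = [lam ** i for i in range(1, J + 1)]        # w_i = lam^i, i = 1..J
W = [0.0] + list(itertools.accumulate(w))      # W_j = w_1 + ... + w_j

def W_interval(len_t: int, len_floor_t: int) -> float:
    """W(y, t) for length-dependent weights: the sum of w_j over
    j = |floor(t)|+1 .. |t|, evaluated in O(1) via the cumulative sums."""
    return W[len_t] - W[len_floor_t]

# The constant-time result agrees with direct summation.
direct = sum(lam ** j for j in range(3, 8))    # |floor(t)| = 2, |t| = 7
assert abs(W_interval(7, 2) - direct) < 1e-12
```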
Generic Weights In the case of generic weights we have several options. Recall that one
often will want to compute m² kernels k(x, x'), given m strings x ∈ X. Hence we could
build the suffix trees for the x_i beforehand and annotate each of the nodes and characters on
the edges explicitly (at super-linear cost per string), which means that later, for the dot
products, we will only need to perform a table lookup of W(x, x'[i : v_i]).

However, there is an even more efficient mechanism, which can even deal with dynamic
weights depending on the relative frequency of occurrence of the substrings in all x. We
can build a suffix tree Σ of all strings in X. Again, this can be done in time linear in the
total length of all the strings (simply consider the concatenation of all strings). It can be
shown that for all x and all i, x[i : v_i] will be a node in this tree. Leaves-counting allows
us to compute these dynamic weights efficiently, since Σ contains all the substrings.
For W(x, x'[i : v_i]) we make the simplifying assumption that w_s = φ(|s|) · ψ(freq(s)),
that is, w_s depends on length and frequency only. Now note that all the strings ending on
the same edge in Σ will have the same weights assigned to them. Hence, we can rewrite (5) as

    W(y, t) = Σ_{s ∈ prefix(t)} w_s − Σ_{s ∈ prefix(floor(t))} w_s = ψ(freq(t)) Σ_{i=|u|+1}^{|t|} φ(i),   (8)

where u = floor(t). By precomputing the partial sums Σ_i φ(i) we can evaluate
(8) in constant time.

The benefit of (8) is twofold: first, we can compute the weights of all the nodes of Σ in time
linear in the total length of the strings in X. Secondly, for arbitrary x' we can compute W(y, t)
in constant time, thus allowing us to compute k(x_i, x') in O(|x_i| + |x'|) time.
Linear Time Prediction Let X_s = {x_1, x_2, ..., x_m} be the set of support vectors.
Recall that, for prediction in a Support Vector Machine, we need to compute f(x) =
Σ_{i=1}^{m} α_i k(x_i, x), which implies that we need to combine the contributions due to matching
substrings from each one of the support vectors. We first construct S(X_s) in linear time by
using the algorithm of [1]. In S(X_s), we associate the weight α_i with each leaf associated
with the support vector x_i. For a node v ∈ nodes(S(X_s)) we modify the definition of lvs(v)
to be the sum of the weights associated with the leaves of the subtree rooted at node v. A
straightforward application of the matching statistics algorithm of [2] shows that we can find
the matching statistics of x with respect to all strings in X_s in O(|x|) time. Now Theorem 5
can be applied unchanged to compute f(x). A detailed account and proof can be found in [14].

In summary, we can classify texts in linear time regardless of the size of the training set.
This makes SVMs practically feasible for large-scale text categorization. Similar
modifications can also be applied for training SMO-like algorithms on strings.
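The quantity being accelerated here is just the SVM decision function. A direct (slow) version makes the role of the weights α_i explicit; the brute-force kernel below is a stand-in for the linear-time suffix-tree evaluation, and all names are ours.

```python
from collections import Counter

def substring_kernel(x: str, y: str, lam: float = 0.75) -> float:
    """Brute-force weighted substring kernel (stand-in for the
    linear-time suffix-tree evaluation)."""
    cx = Counter(x[i:j] for i in range(len(x)) for j in range(i + 1, len(x) + 1))
    cy = Counter(y[i:j] for i in range(len(y)) for j in range(i + 1, len(y) + 1))
    return sum(cx[s] * cy[s] * lam ** len(s) for s in cx.keys() & cy.keys())

def decision_function(x, support_vectors, alphas, bias=0.0):
    """f(x) = sum_i alpha_i k(x_i, x) + b.  The annotated suffix tree over
    all support vectors computes this same sum in O(|x|) per query."""
    return sum(a * substring_kernel(sv, x)
               for a, sv in zip(alphas, support_vectors)) + bias

# Opposite weights on symmetric support vectors cancel for "abba".
f = decision_function("abba", ["ab", "ba"], [1.0, -1.0])
```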
6 Experimental Results

For a proof of concept we tested our approach on a remote homology detection problem¹
[9], using Stafford Noble's SVM package² as the training algorithm. A length-weighted
kernel was used, and we assigned weights w_s = λ^{|s|} to all substring matches of length
greater than 3, regardless of triplet boundaries. To evaluate performance we computed the
ROC50 scores.³
Being a proof of concept, we did not try to tune the soft margin SVM parameters (the
main point of the paper being the introduction of a novel means of evaluating string
kernels efficiently rather than applications; a separate paper focusing on applications
is in preparation).

Figure 3 shows the ROC50 scores for the spectrum kernel with k = 3 [12] and our string
kernel with λ = 0.75. We tested with λ ∈ {0.25, 0.5, 0.75, 0.9} and report the best results
here. As can be seen, our kernel outperforms the spectrum kernel on nearly every family in
the dataset.

Figure 3: Total number of families for which an SVM classifier exceeds a ROC50 score
threshold. [Plot residue omitted; the curves compare our kernel (λ = 0.75) with the
spectrum kernel.]
It should be noted that this is the first method that allows users to specify weights rather
arbitrarily for all possible lengths of matching sequences and still compute kernels in
O(|x| + |x'|) time and, moreover, predict on new sequences in O(|x|) time once the set of
support vectors is established.⁴
7 Conclusion
We have shown that string kernels need not come at a super-linear cost in SVMs and that
prediction can be carried out at cost linear only in the length of the argument, thus providing
optimal run-time behaviour. Furthermore, the same algorithm can be applied to trees.

The methodology pointed out in our paper has several immediate extensions: for instance,
we may consider coarsening levels for trees by removing some of the leaves. For not
too-unbalanced trees (we assume that the tree shrinks at least by a constant factor at each
coarsening), computation of the kernel over all coarsening levels can then be carried out at
cost still linear in the overall size of the tree. The idea of coarsening can be extended to
approximate string matching: if we remove characters, this amounts to the use of wildcards.
Likewise, we can consider the strings generated by finite state machines and thereby compare
the finite state machines themselves. This leads to kernels on automata and other
dynamical systems. More details and extensions can be found in [14].
¹Details and data available at www.cse.ucsc.edu/research/compbio/discriminative.
²Available at www.cs.columbia.edu/compbio/svm.
³The ROC50 score [6, 12] is the area under the receiver operating characteristic curve (the plot of
true positives as a function of false positives) up to the first 50 false positives. A score of 1 indicates
perfect separation of positives from negatives, whereas a score of 0 indicates that none of the top 50
sequences selected by the algorithm were positives.
⁴[12] obtain an O(k|x|) algorithm in the (somewhat more restrictive) case of w_s = δ_k(|s|).
Acknowledgments We would like to thank Patrick Haffner, Daniela Pucci de Farias, and
Bob Williamson for comments and suggestions. This research was supported by a grant of
the Australian Research Council. SVNV thanks Trivium India Software and Netscaler Inc.
for their support.
References
[1] A. Amir, M. Farach, Z. Galil, R. Giancarlo, and K. Park. Dynamic dictionary matching. Journal of Computer and System Science, 49(2):208-222, October 1994.
[2] W. I. Chang and E. L. Lawler. Sublinear approximate string matching and biological
applications. Algorithmica, 12(4/5):327-344, 1994.
[3] M. Collins and N. Duffy. Convolution kernels for natural language. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2001. MIT Press.
[4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis:
Probabilistic models ofproteins and nucleic acids. Cambridge University Press, 1998.
[5] R. Giegerich and S. Kurtz. From Ukkonen to McCreight and Weiner: A unifying
view of linear-time suffix tree construction. Algorithmica, 19(3):331-353, 1997.
[6] M. Gribskov and N. L. Robinson. Use of receiver operating characteristic (ROC)
analysis to evaluate sequence matching. Computers and Chemistry, 20(1):25-33,
1996.
[7] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, Computer Science Department, UC Santa Cruz, 1999.
[8] R. Herbrich. Learning Kernel Classifiers: Theory and Algorithms. MIT Press, 2002.
[9] T. S. Jaakkola, M. Diekhans, and D. Haussler. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 7:95-114, 2000.
[10] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C.
Burges, and A. J. Smola, editors, Advances in Kernel Methods-Support Vector
Learning, pages 169-184, Cambridge, MA, 1999. MIT Press.
[11] E. Leopold and J. Kindermann. Text categorization with support vector machines:
How to represent text in input space? Machine Learning, 46(3):423-444, March
2002.
[12] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM
protein classification. In Proceedings of the Pacific Symposium on Biocomputing,
pages 564-575, 2002.
[13] E. Ukkonen. On-line construction of suffix trees. Algorithmica, 14(3):249-260, 1995.
[14] S. V. N. Vishwanathan. Kernel Methods: Fast Algorithms and Real Life Applications.
PhD thesis, Indian Institute of Science, Bangalore, India, November 2002.
[15] C. Watkins. Dynamic alignment kernels. In A. J. Smola, P. L. Bartlett, B. Schölkopf,
and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 39-50,
Cambridge, MA, 2000. MIT Press.
Graph-Driven Features Extraction from
Microarray Data using Diffusion Kernels and
Kernel CCA
Jean-Philippe Vert
Ecole des Mines de Paris
[email protected]
Minoru Kanehisa
Bioinformatics Center, Kyoto University
[email protected]
Abstract
We present an algorithm to extract features from high-dimensional gene
expression profiles, based on the knowledge of a graph which links together genes known to participate to successive reactions in metabolic
pathways. Motivated by the intuition that biologically relevant features
are likely to exhibit smoothness with respect to the graph topology, the
algorithm involves encoding the graph and the set of expression profiles into kernel functions, and performing a generalized form of canonical correlation analysis in the corresponding reproducing kernel Hilbert
spaces.
Function prediction experiments for the genes of the yeast S. Cerevisiae
validate this approach by showing a consistent increase in performance
when a state-of-the-art classifier uses the vector of features instead of the
original expression profile to predict the functional class of a gene.
1 Introduction
Microarray technology (DNA chips) is quickly becoming a major data provider in the postgenomics era, enabling the monitoring of the quantity of messenger RNA present in a cell
for several thousands genes simultaneously. By submitting cells to various experimental
conditions and comparing the expression profiles of different genes, a better understanding of the regulation mechanisms and functions of each gene is expected. As a matter of
fact, early experiments confirmed that many genes with similar function yield similar expression patterns [4], and systematic use of state-of-the-art machine learning classification
algorithms highlighted the possibility of gene function prediction from microarray data, at
least for some functional categories [2].
Independently of microarray technology, decades of research in molecular biology have
characterized the roles played by many genes as catalyzing chemical reactions in the cell.
This information has now been integrated into databases such as KEGG [8], where series
of successive chemical reactions arranged into pathways are represented, together with the
genes catalyzing them. In particular one can extract from such a database a graph of genes,
where two genes are linked whenever they catalyze two successive reactions.
The question motivating this report is whether the knowledge of this graph can help improve the performance of gene function prediction algorithms based on microarray data
only. To this end we propose a graph-driven feature extraction process, based on the idea
that expression patterns which correspond to actual biological events, such as the activation
or inhibition of a particular pathway, are more likely to be shared by genes close to each
other in the graph than non-relevant patterns. Our approach consists in translating this intuition into a regularized version of canonical correlation analysis between the genes mapped
to two reproducing kernel Hilbert spaces, defined respectively by a diffusion kernel [9]
on the graph and a linear kernel on the expression profiles. This formulation leads to a
well-posed problem equivalent to a generalized eigenvector problem [1].
2 Problem formulation
The set of genes is represented by a discrete set X = {x₁, ..., x_n} of cardinality n. The set of
expression profiles is a mapping e: X → ℝ^p, where p is the number of measurements and
e(x) is the expression profile of gene x. In the sequel we assume that the set of profiles has
been centered, i.e., Σ_{x∈X} e(x) = 0.
The graph of genes extracted from the pathway database is represented by a simple graph
G = (X, E), with the genes as vertices. Our goal is to use this graph to extract features
from the expression profiles. To this end, we formally define a feature to be a real-valued
mapping on the set of genes f: X → ℝ, and we denote by F = ℝ^X the set of possible
features. The set of centered features is denoted by F₀ = {f ∈ F : Σ_{x∈X} f(x) = 0}.
In particular, linear features extracted from the expression profiles are defined, for any
v ∈ ℝ^p, by f_{e,v}(x) = vᵀe(x) for any x ∈ X (here and often in the sequel we use matrix
notations, where v is a column vector and vᵀ its transpose). We call L ⊂ F₀ the set of linear
features. The normalized variance of a linear feature is defined by:

    ∀ f_{e,v} ∈ L,  V(f_{e,v}) := f_{e,v}ᵀ f_{e,v} / ‖v‖²,   (1)

where f_{e,v} is identified with the vector of its values (f_{e,v}(x))_{x∈X}.
It is a first indicator of the possible relevance of a linear feature. Indeed biological events
such as the synthesis of new molecules usually require the coordinated actions of many
proteins: they are therefore likely to have characteristic patterns in terms of gene expression
which capture variation between the genes involved and the others, and should therefore
have large variance. Linear features with a large normalized variance (1) are called relevant
in the sequel, as opposed to irrelevant features. Relevant features can be extracted by PCA.
While the normalized variance (1) is an intrinsic property of the set of profiles, the
knowledge of the graph G suggests another criterion to judge "good" features. As genes linked
together in the graph are supposed to participate in successive reactions in the cell, it is
likely that the activation/inhibition of a biochemical pathway has a characteristic
expression pattern shared by clusters of genes in the graph. More globally, the graph defines a
structure on the set of genes, and therefore a notion of smoothness for any feature f ∈ F.
A feature is called smooth if it varies slowly between adjacent nodes in the graph, and
rugged otherwise. As just stated, features of interest are more likely to be smooth than
other features.

We therefore end up with two criteria for extracting "good" features: they should
simultaneously be relevant and smooth, the latter being defined with respect to the gene graph.
One way to extract such features is to look for pairs of features (f₁, f₂) ∈ F × L such
that f₁ is smooth, f₂ is a relevant linear feature, and the correlation between f₁ and f₂
is as large as possible. The decoupling of the two criteria enables us to state the problem
mathematically as follows.
Suppose we can define a smoothness functional Ω₁: F → ℝ⁺ for any feature, and a
relevance functional Ω₂: L → ℝ⁺ for linear features, in such a way that lower values of
the functional Ω₁ (resp. Ω₂) correspond to smoother (resp. more relevant) features. Then
the following optimization problem:

    max_{(f₁, f₂) ∈ F₀ × L}  f₁ᵀf₂ / √( (f₁ᵀf₁ + δ Ω₁(f₁)) (f₂ᵀf₂ + δ Ω₂(f₂)) ),   (2)

where δ ≥ 0 is a regularization parameter, is a way to extract smooth and relevant features.
Irrelevance and ruggedness penalize any candidate pair through the functionals Ω₁ and
Ω₂, and δ controls the trade-off between relevance and smoothness on the one hand, and
correlation on the other hand. δ = 0 amounts to finding f₁ and f₂ as correlated as possible
(which is obtained by taking f₁ = f₂), while δ → +∞ forces f₁ to be smooth and f₂ to be
relevant.
In order to turn (2) into an algorithm, we remark that if Ω₁ and Ω₂ can be expressed as norms
in reproducing kernel Hilbert spaces (RKHS, see Section 3), then (2) takes the form of a
generalization of canonical correlation analysis (CCA) known as kernel-CCA [1], which is
equivalent to a generalized eigenvector problem. Let us therefore show how to build two
RKHS on the set of genes whose norms are smoothness (Section 4) and relevance (Section
5) functionals, respectively.
3 Reproducing kernel Hilbert spaces and smoothness functionals
Let us briefly review basic properties of RKHS relevant for the sequel. The reader is referred to [12, 14] for more details.

Let K: X × X → ℝ be a Mercer kernel, in the sense that the n × n matrix K = (K(x_i, x_j))_{1≤i,j≤n}
is symmetric positive semidefinite. Let H ⊂ F be the linear span of {K(x, ·) : x ∈ X}, and
consider a decomposition of K as:

    K = Σ_{i=1}^{n} λ_i φ_i φ_iᵀ,   (3)

where λ₁ ≥ ... ≥ λ_n ≥ 0 are the eigenvalues of K and (φ₁, ..., φ_n) is an
associated orthonormal basis of eigenvectors. Any f ∈ H decomposes on this basis
as f = Σ_{i: λ_i > 0} a_i φ_i (its components along the null space of K vanish). An inner
product can be defined in H as follows:

    ⟨ Σ_{i: λ_i > 0} a_i φ_i , Σ_{i: λ_i > 0} b_i φ_i ⟩_H := Σ_{i: λ_i > 0} a_i b_i / λ_i.   (4)

The resulting Hilbert space H is called a reproducing kernel Hilbert space, due to the
following reproducing property:

    ∀(x, f) ∈ X × H,  ⟨K(x, ·), f⟩_H = f(x).   (5)

The inner product in H can be easily expressed in a dual form as follows. Each f ∈ H can
be decomposed as f = Σ_{i=1}^{n} α_i K(x_i, ·), where α ∈ ℝⁿ is unique up to the addition of an
element of the null space of K and is called the dual coordinate of f. In matrix form
this reads f = Kα, and using (5) one can easily check that the inner product between two
features f₁, f₂ ∈ H with dual coordinates α, β ∈ ℝⁿ respectively is given by:

    ⟨f₁, f₂⟩_H = αᵀKβ.   (6)

In particular the H-norm of a feature f ∈ H with dual coordinate α ∈ ℝⁿ is given by:

    ‖f‖²_H = αᵀKα,   (7)

and the inner product between two features (f₁, f₂) ∈ H² with dual coordinates (α, β)
in the original space ℝ^X can also be expressed in dual form:

    f₁ᵀf₂ = αᵀK²β.   (8)

When X is a subset of ℝ^d, it is known that the norm in the RKHS defined by
several popular kernels, such as the Gaussian radial basis kernel, is a smoothing functional,
in the sense that larger values of ‖f‖_H correspond to functions f with more energy at
high frequency in their Fourier decomposition. This fact has been much exploited e.g. in
regularization theory [14, 5], and we now adapt it to the discrete setting.
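The dual-coordinate identities (6)-(8) are easy to check numerically on a small positive semidefinite Gram matrix. This is an illustrative sketch with arbitrary random data, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.normal(size=(n, n))
K = B @ B.T                        # a symmetric positive semidefinite Gram matrix
alpha, beta = rng.normal(size=n), rng.normal(size=n)   # dual coordinates
f1, f2 = K @ alpha, K @ beta       # feature vectors, f = K alpha

norm_H = alpha @ K @ alpha         # ||f1||_H^2 = alpha' K alpha     (7)
inner_H = alpha @ K @ beta         # <f1, f2>_H = alpha' K beta      (6)
inner_orig = alpha @ K @ K @ beta  # f1' f2     = alpha' K^2 beta    (8)

assert np.isclose(f1 @ f2, inner_orig)
assert norm_H >= 0
```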
4 Smoothness functional on a graph
A natural way to quantify the smoothness of a feature on a graph is by its energy at high
frequency, as computed from its Fourier transform. Fourier transforms on graphs are a
classical tool of spectral graph analysis [3, 11], which we briefly recall now. Let A be the n × n
adjacency matrix of the graph G (A_{i,j} = 1 if there is an edge between x_i and x_j, 0 otherwise)
and D the diagonal matrix of vertex degrees. Then the n × n matrix L = D − A is called the
Laplacian of G, and is known to share many properties with the continuous Laplacian Δ [11].
It is symmetric, positive semidefinite, and singular. The constant vector (1, ..., 1)ᵀ is an
eigenvector belonging to the eigenvalue μ₁ = 0, whose multiplicity is equal to the number of
connected components of G.

Let us denote by 0 = μ₁ ≤ μ₂ ≤ ... ≤ μ_n the eigenvalues of L and by (ν₁, ..., ν_n) an
orthonormal set of associated eigenvectors. This basis is a discrete Fourier basis [3], and
it is known that ν_i oscillates more and more on the graph as i increases. The Fourier
decomposition of any feature f ∈ F is its expansion in terms of this basis:

    f = Σ_{i=1}^{n} f̂_i ν_i,   (9)

where f̂_i = ν_iᵀ f, and f̂ = (f̂₁, ..., f̂_n)ᵀ is called the discrete Fourier transform of f.

For any monotonically decreasing mapping φ: ℝ⁺ → ℝ⁺∗, let us now consider the function
K_φ: X² → ℝ defined by:

    ∀(x, y) ∈ X²,  K_φ(x, y) = Σ_{i=1}^{n} φ(μ_i) ν_i(x) ν_i(y).   (10)

The mapping φ being assumed to take only positive values, the matrix K_φ is positive
definite and is therefore a Mercer kernel on the set X. The corresponding RKHS is the set of
features F, with norm given by:

    ∀f ∈ F,  ‖f‖²_{K_φ} = Σ_{i=1}^{n} f̂_i² / φ(μ_i).   (11)

As i increases, μ_i increases, so φ(μ_i) decreases. As a result the norm (11) takes higher
values on features which have a lot of energy at high frequency, and is therefore a natural
smoothing functional.

An example of a valid function φ with rapid decay is the exponential φ(μ) = exp(−τμ), where
τ > 0 is a parameter. In that case we recover the diffusion kernel introduced and discussed in
[9]. Considering other mappings φ would be beyond the scope of this report, so we restrict
ourselves to this diffusion kernel in the sequel. Observe that it can be expressed using the
matrix exponential as K₁ = exp(−τL).
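The diffusion kernel exp(−τL) can be computed by eigendecomposition of the Laplacian, which also makes the Fourier picture of (9)-(11) concrete. A small sketch on an assumed path graph, using only numpy:

```python
import numpy as np

# Path graph on 5 vertices: adjacency A, degrees D, Laplacian L = D - A.
n, tau = 5, 1.0
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Diffusion kernel via the spectral decomposition L = V diag(mu) V'.
mu, V = np.linalg.eigh(L)                    # mu[0] = 0 (connected graph)
K1 = V @ np.diag(np.exp(-tau * mu)) @ V.T    # K1 = exp(-tau * L)

# K1 is a valid Mercer kernel: symmetric with strictly positive spectrum.
assert np.allclose(K1, K1.T)
assert np.all(np.linalg.eigvalsh(K1) > 0)
```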
5 Relevance functional

If $v \in \mathbb{R}^p$ has a projection $v'$ onto the linear span of the expression profiles $\{e(x),\ x \in \mathcal{X}\}$, then $f_v = f_{v'}$. As a result the set of linear features $\mathcal{F}_E$ can be parametrized by directions of the form $v = \sum_{x \in \mathcal{X}} \alpha(x)\, e(x)$, where $\alpha \in \mathcal{F}$ is called the dual coordinate of $v$, and is defined up to the addition of an element of the null space of the Gram matrix $K_E(x, y) = e(x)^\top e(y)$. The RKHS associated with this positive semidefinite matrix consists of the set of features of the form $f_v(x) = \sum_{y \in \mathcal{X}} \alpha(y)\, K_E(x, y)$, where $v = \sum_{y \in \mathcal{X}} \alpha(y)\, e(y)$. In other words, this is exactly the set of linear features $\mathcal{F}_E$.

The variance of a feature $f_v \in \mathcal{F}_E$ can be expressed by (1), (6) and (8) as $\operatorname{var}(f_v) = \alpha^\top K_E^2 \alpha$, up to normalization by the number of samples. As a result, a natural relevance functional to balance the variance term in (2) is the norm in the RKHS: $\|f_v\|_E^2 = \alpha^\top K_E \alpha$, where $\|\cdot\|_E$ is the norm of the RKHS associated with the linear kernel $K_E$.
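The dual parametrization can be made concrete with a small randomly generated expression matrix. The matrix, the dual coordinate alpha, and the normalization by the number of genes are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_measurements = 6, 4
E = rng.standard_normal((n_genes, n_measurements))
E -= E.mean(axis=0)                  # center each measurement across genes

K_E = E @ E.T                        # linear (Gram) kernel between genes

alpha = rng.standard_normal(n_genes) # dual coordinate of the direction v
v = E.T @ alpha                      # primal direction in profile space

# Feature values v . e(x) match the dual expression K_E @ alpha.
assert np.allclose(E @ v, K_E @ alpha)

# RKHS norm of the feature: alpha^T K_E alpha = ||v||^2 when v lies
# in the span of the profiles (here it does by construction).
assert np.allclose(alpha @ K_E @ alpha, v @ v)

# With centered profiles, the feature variance is alpha^T K_E^2 alpha / n.
h = K_E @ alpha
assert np.allclose(h.var(), alpha @ K_E @ K_E @ alpha / n_genes)
```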
6 Extracting smooth correlations

Let $K_1 = e^{-\tau L}$ denote the diffusion kernel and $K_2(x, y) = e(x)^\top e(y)$ denote the linear kernel, with associated RKHS $\mathcal{H}_1$ and $\mathcal{H}_2$ respectively. Taking $\|f_1\|_1^2$ as a smoothness functional for any $f_1 \in \mathcal{F}$, and $\|f_2\|_2^2$ as a relevance functional for any linear feature $f_2 \in \mathcal{F}_E$, we can express the maximization Problem (2) in a dual form as:
$$\max_{\alpha, \beta} \; \frac{\alpha^\top K_1 K_2 \beta}{\left(\alpha^\top K_1^2 \alpha + \delta\, \alpha^\top K_1 \alpha\right)^{1/2} \left(\beta^\top K_2^2 \beta + \delta\, \beta^\top K_2 \beta\right)^{1/2}} , \qquad (12)$$
where $\delta > 0$ balances the correlation against the smoothness and relevance functionals.
At first sight it seems that (12) is the dual formulation of an optimization over $(f_1, f_2) \in \mathcal{F} \times \mathcal{F}_E$, and not $\mathcal{F}_0 \times \mathcal{F}_E$ as in (2). However it can be checked that any solution of (12) is in fact in $\mathcal{F}_0 \times \mathcal{F}_E$. Indeed the numerator remains unchanged when a constant function is added to $f_1 \in \mathcal{F}$, while both $\alpha^\top K_1^2 \alpha$ and $\alpha^\top K_1 \alpha$ are minimized when $f_1$ has mean $0$ (for the latter case, this results from the fact that the constant vector is an eigenvector of the diffusion kernel, so the norm defined by (4) is minimized when the corresponding projection of $f_1$, namely its average, is null).
analysis (CCA)
as kernel-CCA, discussed in [1]. In particular Bach and Jordan
% known
show that
is a solution of (12) if and only if it satisfies the following generalized
eigenvalue problem:
E J
R
EJ
5RL R
!
EJ
(13)
L 6R ! L
!
5L L L
( (
( (
with %BJ the%
largest
possible.
Moreover, solving
(13) provides a series of pairs of features
%
%
$
%
"8 E P P P , where
, with decreasing values of E ( %BJ for(
= is null, equivalent to the extraction of successive canonical diwhich the
( gradient
(
rections with decreasing
correlation in classical CCA. The resulting features ) R = R E
J are therefore
and ) =
a set of features likely to have decreasing biological releL
L
!
vance when increases, and are the features we propose to extract in this report.
As discussed in [1], we regularize the problem (13) by adding $\delta^2/4$ on the diagonal of the matrix on the right-hand side, to be able to perform the Cholesky decomposition necessary to solve this problem. Hence we end up with the following problem:
$$\begin{pmatrix} 0 & K_1 K_2 \\ K_2 K_1 & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \rho \begin{pmatrix} \left(K_1 + \tfrac{\delta}{2} I\right)^2 & 0 \\ 0 & \left(K_2 + \tfrac{\delta}{2} I\right)^2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} , \qquad (14)$$
where we used $(K + \frac{\delta}{2} I)^2 = K^2 + \delta K + \frac{\delta^2}{4} I$. If $(\alpha, \beta)$ is a generalized eigenvector solution of (14) belonging to the generalized eigenvalue $\rho$, then $(\alpha, -\beta)$ belongs to $-\rho$. As a result the generalized eigenvalue spectrum of (14) is symmetric: $\rho_1 \ge \rho_2 \ge \dots \ge \rho_n \ge 0 \ge -\rho_n \ge \dots \ge -\rho_1$.
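Solving a regularized generalized eigenvalue problem of this kind can be sketched with SciPy's symmetric generalized eigensolver. The kernels below are random toy matrices standing in for the diffusion and linear kernels, delta is arbitrary, and the right-hand side uses the Bach-Jordan style regularization (K + delta/2 I)^2:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 8
# Toy positive semidefinite kernels standing in for K1 and K2.
X1, X2 = rng.standard_normal((n, n)), rng.standard_normal((n, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T
delta = 1e-1                          # regularization parameter (arbitrary)

Z, I = np.zeros((n, n)), np.eye(n)
R1, R2 = K1 + 0.5 * delta * I, K2 + 0.5 * delta * I
# Left- and right-hand matrices of the generalized eigenproblem.
Amat = np.block([[Z, K1 @ K2],
                 [K2 @ K1, Z]])
Bmat = np.block([[R1 @ R1, Z],
                 [Z, R2 @ R2]])       # positive definite, so Cholesky works

rho, U = eigh(Amat, Bmat)             # generalized eigenvalues, ascending

# The spectrum is symmetric around zero: if (alpha, beta) has eigenvalue
# rho, then (alpha, -beta) has eigenvalue -rho.
assert np.allclose(rho, -rho[::-1])

# Leading pair of dual coordinates, giving the features K1 @ alpha, K2 @ beta.
alpha, beta = U[:n, -1], U[n:, -1]
f1, f2 = K1 @ alpha, K2 @ beta
```

Successive columns of `U` (taken from the top of the spectrum) then give the successive pairs of canonical directions with decreasing correlation.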
7 Experiments

We extracted from the LIGAND database of chemical compounds and reactions in biological pathways [6] a graph made of 774 genes of the budding yeast S. cerevisiae, linked through 16,650 edges, where two genes are linked when they have the possibility to catalyze two successive reactions in the LIGAND database (i.e., two reactions such that the main product of the first one is the main substrate of the second one). Expression data were collected from the Stanford Microarray Database [13]. Concatenating several publicly available data sets, we ended up with 330 measurements for 6,075 genes of the yeast, i.e., almost all of its known or predicted genes. Following [4, 2] we work with the normalized logarithm of the ratio of expression levels of the genes between two experimental conditions. The functional classes of the yeast genes we consider are those defined by the January 10, 2002 version of the Comprehensive Yeast Genome Database (CYGD) [10], a comprehensive classification of 3,936 genes into 259 categories.
The 669 genes in the gene graph with known expression profiles were first used to perform the feature extraction process described in this report. The resulting linear features were then extracted from the expression profiles of the disjoint set of 2,688 genes which are in the CYGD functional catalogue but not in the pathway database. We then performed functional classification experiments on this set of 2,688 genes, using either the profiles themselves or the extracted features. All functional classes with more than 20 members in this set were tested, which amounts to 115 categories.
Experiments were carried out with SVM Light [7], a public and free implementation of SVM. All vectors were scaled to unit length before being sent to the SVM, and all SVMs use a radial basis kernel with unit width, i.e., $K(x, y) = \exp(-\|x - y\|^2 / 2)$. The trade-off parameter between training error and margin was set to its default value, and the costs of errors on positive and negative examples were adjusted to have the same total.
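This preprocessing and kernel can be written out explicitly. Because every vector is scaled to unit length, the unit-width RBF kernel reduces to a monotone function of the inner product between profiles (the helper name rbf_unit_width below is mine):

```python
import numpy as np

def rbf_unit_width(x, y):
    """Radial basis kernel with unit width: exp(-||x - y||^2 / 2)."""
    d = x - y
    return np.exp(-(d @ d) / 2.0)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(5), rng.standard_normal(5)
x /= np.linalg.norm(x)               # scale each vector to unit length,
y /= np.linalg.norm(y)               # as done before feeding the SVM

# For unit vectors, ||x - y||^2 = 2 - 2 x.y, so the kernel reduces to
# exp(x.y - 1), a monotone function of the correlation between profiles.
assert np.isclose(rbf_unit_width(x, y), np.exp(x @ y - 1.0))
assert np.isclose(rbf_unit_width(x, x), 1.0)
```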
Preliminary experiments to tune the two parameters of the algorithm, namely the width $\tau$ of the diffusion kernel and the regularization parameter $\delta$, yielded values providing good performances. For these values we first tested whether there exists an optimal number of features to be extracted for optimal gene function prediction. Figure 1 shows the performance of the SVM using different numbers of features, in terms of the ROC index averaged over all 115 classes. The ROC index is the area under the curve of true positives versus false positives, normalized to 100 for a perfect classifier and 50 for a random classifier. For each category the ROC index was averaged over random splittings of the data into training and test sets. It appears that the more features are included, the better the performance averaged over all categories. A more precise analysis of the different classes shows however that some classes don't follow the average trend and are better predicted by a smaller number of features, as shown in Figure 2 for the categories best predicted by fewer features. Finally Figure 3 compares, for each of the 115 categories, the ROC
index for a SVM using the original expression profiles with a SVM using the vectors of
330 features. It demonstrates that the representation of genes as vectors of features helps
improve the performance of SVM (the ROC index averaged over all categories increases
Figure 1: ROC index averaged over 115 categories, for various numbers of features
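The ROC index used throughout, i.e., the area under the ROC curve scaled so that a perfect classifier scores 100 and a random one 50, can be computed by pairwise rank comparison. The function below is an illustrative implementation, not the code used for the experiments:

```python
import numpy as np

def roc_index(scores, labels):
    """Area under the ROC curve, scaled to 100 (perfect) / 50 (random).

    Uses the rank-sum formulation: the AUC equals the probability that a
    randomly chosen positive example is scored above a random negative one.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score with every negative one; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (wins + 0.5 * ties) / (len(pos) * len(neg))
    return 100.0 * auc

# A classifier ranking every positive above every negative scores 100;
# reversing the ranking gives 0, and constant scores give chance level 50.
assert roc_index([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 100.0
assert roc_index([0.1, 0.2, 0.8, 0.9], [1, 1, 0, 0]) == 0.0
assert roc_index([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]) == 50.0
```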
[Figure 2 plots the ROC index against the number of features for the classes "fermentation", "ionic_homeostasis", "protein_complexes", "vacuolar_transport" and "nucleus_organization".]
Figure 2: ROC index for 5 functional categories, for various numbers of features
). The difference is especially important for classes such as heavy metal ion transporters, ribosome biogenesis, protein synthesis, and morphogenesis.
8 Discussion and Conclusion

The results reported in the previous section are encouraging for at least two reasons. First of all, the performance reached for some classes such as heavy metal ion transporters shows that a ROC index above 80 can be expected for several classes. Second, while many classes are apparently not learned by the SVM based on expression profiles (ROC index around 50), the ROC index of the same classes based on extracted features is around 60. This shows that there is hope to be able to predict more functional classes than previously thought [2] from microarray data, which is good news since the amount of microarray data is expected to explode in the coming years.
The method presented in this paper can be seen as an attempt to explore the possibilities of data mining and analysis provided by kernel methods. Few studies have used kernel methods other than SVMs, or kernels other than Gaussian or polynomial kernels. In this report we tried to show how "exotic" kernels such as the diffusion kernel, and "exotic" methods such as kernel-CCA, can be adapted to particular problems, graph-driven feature extraction in our case. Exploring other possibilities of kernel methods in the data-rich field of computational biology is among our future plans.
Figure 3: ROC index of a SVM classifier based on expression profiles (y axis) or extracted
features (x axis). Each point represents one functional category.
References
[1] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine
Learning Research, 3:1?48, 2002.
[2] Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Walsh Sugnet, Terence S. Furey, Manuel Ares Jr., and David Haussler. Knowledge-based analysis of microarray gene expression data by using support vector machines. Proc. Natl. Acad. Sci. USA, 97:262?267, 2000.
[3] Fan R.K. Chung. Spectral graph theory, volume 92 of CBMS Regional Conference Series.
American Mathematical Society, Providence, 1997.
[4] Michael B. Eisen, Paul T. Spellman, Patrick O. Brown, and David Botstein. Cluster analysis
and display of genome-wide expression patterns. Proc. Natl. Acad. Sci. USA, 95:14863?14868,
Dec 1998.
[5] Frederico Girosi, Michael Jones, and Tomaso Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219?269, 1995.
[6] S. Goto, Y. Okuno, M. Hattori, T. Nishioka, and M. Kanehisa. LIGAND: database of chemical
compounds and reactions in biological pathways. Nucleic Acid Research, 30:402?404, 2002.
[7] Thorsten Joachims. Making large-scale svm learning practical. In B. Sch?olkopf, C. Burges,
and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169?184.
MIT Press, 1999.
[8] M. Kanehisa, S. Goto, S. Kawashima, and A. Nakaya. The KEGG databases at GenomeNet.
Nucleic Acid Research, 30:42?46, 2002.
[9] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input. In ICML
2002, 2002.
[10] H.W. Mewes, D. Frishman, U. G?uldener, G. Mannhaupt, K. Mayer, M. Mokrejs, B. Morgenstern, M. M?unsterkoetter, S. Rudd, and B. Weil. MIPS: a database for genomes and protein
sequences. Nucleic Acid Research, 30(1):31?34, 2002.
[11] B. Mohar. Some applications of Laplace eigenvalues of graphs. In G. Hahn and G. Sabidussi, editors, Graph Symmetry: Algebraic Methods and Applications, volume 497 of NATO ASI Series C, pages 227?275. Kluwer, Dordrecht, 1997.
[12] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman Scientific & Technical, Harlow, UK, 1988.
[13] G. Sherlock, T. Hernandez-Boussard, A. Kasarskis, G. Binkley, J.C. Matese, S.S. Dwight,
M. Kaloper, S. Weng, H. Jin, C.A. Ball, M.B. Eisen, and P.T. Spellman. The stanford microarray database. Nucleic Acid Research, 29(1):152?155, Jan 2001.
[14] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.